
Intro 0:01

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels 0:21

Hi, Jodi Daniels here. I'm the founder and CEO of Red Clover Advisors, a certified women-owned privacy consultancy. I'm a privacy consultant and Certified Information Privacy Professional, providing practical privacy advice to overwhelmed companies.

Justin Daniels 0:35

Hi, I'm Justin Daniels. I am a shareholder and corporate M&A and tech transactions lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.

Jodi Daniels 0:58

And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we're creating a future where there's greater trust between companies and consumers. To learn more, and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. Well, hello, hello. You're very excited for a Friday recording time.

Justin Daniels 1:36

I’m just getting the vibe from your positiveness.

Jodi Daniels 1:43

That's because it was fall weather this morning. After it was 85 here the other day, to wake up and have it be 55 was...

Justin Daniels 1:50

awesome. Yes, I noticed you’re not complaining about the HVAC today.

Jodi Daniels 1:55

That's... we'll have an entire conversation if anyone here has to replace their HVAC; I have a lot of thoughts about the options on the market. But we probably should talk about privacy instead, and security, and AI. So today we have Eric Greenberg, who is the Executive Vice President, General Counsel, and Corporate Secretary of Cox Media Group, a multi-platform media company based in Atlanta that serves major US media markets. CMG is a portfolio company of the private equity firm Apollo Global Management. Eric, we are so excited that you are here with us today.

Eric Greenberg 2:30

Thanks for having me. I’m excited. I know both of you separately. I’m excited to be here with both of you.

Jodi Daniels 2:36

Let’s see which one he likes better.

Justin Daniels 2:38

And Eric, I already... if you don't know the answer to that,

Eric Greenberg 2:44

I'm gonna... we'll edit that. You know

Justin Daniels 2:48

exactly who the better podcaster is. Just like when I speak with her, I tell them up front: when you fill out the survey form, you know who the better speaker was.

Eric Greenberg 2:57

I will keep that note to self. Yes.

Justin Daniels 3:00

Thank you. That's... thank you for that. How many of these have we done? 500?

Jodi Daniels 3:07

We're at, like, 250

Justin Daniels 3:09

or something. Okay, Eric, can you tell us a little bit about your career journey?

Eric Greenberg 3:14

Sure. So I began my career in private practice. I practiced in Washington, DC, really focusing on deal work. And because I was in Washington, I focused on deal work in regulated spaces, particularly media and technology. Then about four years ago, I left private practice to become GC of CMG. It's my first stint as a GC, but I knew the industry and the regulatory regime well, so it was a good fit, but also a lot of new things. I've loved it, and I'm glad to be here.

Justin Daniels 3:51

So I want to ask you a follow-up question, because I am also a fellow deal lawyer, and given that our topic today is AI, kind of from a legal and legal-industry standpoint: could you talk to us a little bit about why being an M&A or deal lawyer is so helpful in understanding AI, which is such a connect-the-dots type of issue across so many disciplines in the law?

Eric Greenberg 4:21

M&A is really one of the entry points for legal AI tools, because of its ability to streamline due diligence: to translate vast stores of documents into due diligence reports and schedules, to match them up against the reps. There's a tremendous amount of efficiency there. But I do think the broader issue is in understanding what the risk profile in a deal is, what the negotiating posture of the other side is. I think there's a whole other level where you're using the pattern recognition skills not just to understand the universe of documents, but to understand the universe in which the deal is happening. And I think people aren't quite there yet. I've dipped my toe into that in a couple of deals, to understand what the mentality of the other side was, or to think about strategies. And I think that's one of the really exciting and more far-reaching areas.

Jodi Daniels 5:26

Eric, you recently put out the last article in a multi-part series on Bloomberg Law about general counsels and AI, and its impact on the legal profession and the future. I was hoping you could start by sharing a little bit about the series and the articles, and your thoughts on this future shift.

Eric Greenberg 5:48

So it's hard. Particularly this year, it's been hard to avoid AI, whether you're looking at LinkedIn or you're reading more broadly, and in our own work. We're all, you know, hearing about demos for this technology and that. And I was on a long flight and really just started to think about different elements of the topic and what the ramifications were. And that kept turning into yet another issue, and another issue. And I began to just write, and wrote something very dense, very multi-perspective, sent it to an editor I know at Bloomberg, and basically said, this is too long, but if there's a there there, let's talk about it. And she came back and said, there is, and it is too long, but maybe we can turn it into a series. And that also became a really fascinating challenge: how do you break this topic up so that you're meaningfully covering the waterfront, but also in ways that are digestible? I think the fact that we put this out one article at a time, a week at a time, I hope created an opportunity for people to see different issues from different perspectives, digest that, and then approach it another way. So it was a really fascinating project; the feedback, meeting folks like you in the process, and engaging with it has been really fascinating. And as I mentioned when we spoke earlier, I really ended up having to rewrite some of the content towards the end, because I was myself learning and evolving in my own thinking. I wrote something on AI earlier in the year on Bloomberg, and I have to say it feels quite quaint now that we're in October. My own thinking has evolved a lot, and I expect it will for all of us. But I hope it provides a good snapshot in time of where we are now and what the issues are. I put myself out there making some predictions, and who knows, but I think it's a useful exercise to think about this.

Jodi Daniels 8:10

I also love that you can recognize and appreciate that views might change and shift. I see that in our work, and when I'm talking with people as well: what we know now and what we might know in the future is going to change, so that continuous learning and being open-minded, I think, is really important.

Eric Greenberg 8:29

And ironically, I think one of the great attributes of AI is as a thought partner, to think incrementally and iteratively. And if you're doing that, you're going to end up in a place different than where you started, or even where you thought you were going. And that's very much been the case for me in writing about AI.

Justin Daniels 8:54

So I know in your series, of which I've read several of the articles, you explore how AI is transforming legal practice. Maybe share with us a little, from your perspective, about the ethical or professional risks you foresee as more legal work becomes automated. In short, how should GCs and legal teams prepare for this sea change in how lawyers deliver value?

Eric Greenberg 9:21

I think at the end of the day, our bread and butter as lawyers is our judgment, and there are a lot of components to that: not only our legal training, but our ability to identify material issues, to ask the right questions, to explore an area we're unfamiliar with, and, through logic and material questioning, to figure out what the material issue is and take apart an issue. That's really what we do as lawyers, and I think we're going to have to continue to do that. What AI gives us is, yes, some efficiency in doing some tasks faster, and I think that's a huge emphasis right now. I do not think, in the long run, that's really the game changer. I think the game changer is the ability to use it as an iterative thought partner. But in either of those cases, we're going to have to apply our judgment to the outputs. And so, just as, you know, when I was coming up, you could go online and search for a case using keywords that were within so many words of one another and think you'd found a case very much on point: you still had to read it, still had to think about whether it really applied. You had to Shepardize it, make sure it was still good law. I think hallucinations are simply just another variation on that. It's going to get us to interesting material faster. It's going to get us to potential answers faster. But we've still got to apply our judgment to that, and even on the thought-partnership piece, I don't think this is like, you know, the Batcomputer, where you put in a question and it spits out on a computer tape what the answer is. It's going to spit out things, and those outputs need to be inputs for us, and we need to treat them that way. Not that the answer has been issued, but that an input for us to analyze and discern and think about, potentially discard, and, I think, often build on, is really the task before us.
And I think that's going to be the distinction between irresponsible use of the technology and not just responsible use, but really strategic use of it. And I think the irresponsible use is also going to be really non-strategic as well. So I'm hopeful that that divide leads people to be on, I think, the more responsible, and what I think is the more creative, side of the equation.

Jodi Daniels 12:00

I have a follow-up, but Justin, you have your pensive thinking look on.

Justin Daniels 12:05

I'm going to pass the conn to you, okay?

Jodi Daniels 12:09

Then, Eric, I agree with what you've said, especially in terms of the value of asking those really strategic questions. What comes to mind for me is, I think so many non-legal executives see the efficiency piece, and their view is, well, if I can get that from these tools, let me move fast and forward there. What are your thoughts to help legal professionals and teams educate and remind: here's why just that alone won't be enough?

Eric Greenberg 12:46

I think there are always business executives who don't appreciate what lawyers do, who see us sort of as a passive reservoir of expertise: you know, open the floodgate when needed, and only then. And I think that framework will also lead people to say, well, why can't I just look this up? Or, let's hire a first-year who has good prompt-engineering skills; that's the whole legal department. And look, I think there will always be that mindset in certain quarters. I do think not only is that naive, I think it's quickly becoming antiquated. I think the real opportunity for GCs, and for the business community and for corporate America, is that the GC's role is becoming increasingly strategic. And the real substantive view is not that AI can produce the right answers quickly and dispense with lawyers, but that it actually allows lawyers to be even more strategic. I had thought earlier in the year that the strategic aspect was that AI would free up more time for us: by doing a lot of tasks more efficiently and quickly, we would now have more time to be strategic. My thought has evolved. AI is part of the strategy. It's not that it frees us up so that we're able to make dinner faster and read a book; it's that it's actually going to accelerate our own ability to be strategic, and I think ultimately it will elevate general counsel. If you look at the finance department, the technology is already there and well developed. They've got their Excel spreadsheets; they're doing complicated financial structuring. I think the real innovation now, in 2025 and forward, is that large language models are really addressable to lawyers, for whom language is the coin of our realm. We now have a really potent technology to help us be interdisciplinary, to be strategic, to pull together diffuse and sometimes contradictory information. We do that better than anybody, and I'm hopeful that AI is actually going to enable us and elevate us in the C-suite, rather than the reverse.

Justin Daniels 15:11

So Eric, on that point: I've done about, I don't know, five or six workshops with in-house counsel around prompting, because that's how you interact with the LLM. And what I've seen is you have in-house attorneys who are all over the place. Some of them are very savvy with it; many are dipping a toe in the water or trying to figure it out. Because the key to really unlocking the power of AI is the prompting and understanding the nuance. And so I guess my question is: it sounds to me that what's underpinning your thought around how this evolves into a really strategic use is that most in-house attorneys will really develop a strength in the skill of prompting, or whatever it evolves into next, in how you engage with the AI. Because prompting is an art. I've had to learn a lot about how to prompt well. And, you know, AI likes to be friendly and give you what it thinks you want; you have to be hard on it, a lot of times, to get what you actually want and parse through the hallucinations. So I'd love to get your take around that.

Eric Greenberg 16:20

Well, I think it's a really great and multi-layered question, because one of the things we reflexively think is it's all about speed and doing things faster, and part of what's embedded in your question is that you have to put a lot of work into the prompt for it to be really effective and to get the most out of it. It can't just be, and I do this sometimes, you know, if I'm personally using a tool, I'm just lazy and say, what about X? And who knows what answer you get. But when we're using it to be really strategic, the more information the tool has, the more useful it's going to be; we also have to really guide it. The second piece of it, when you focus on prompt engineering, I fully agree, and I think that's frankly an opportunity for younger lawyers. By dint of their own facility with technology, and perhaps a certain fearlessness that older people, particularly older lawyers, may not have, I think they are going to get far and fast in prompt engineering. But what they may lack, and what's essential, is the judgment, seasoning, and experience to do something interesting with that output. And what I envision, perhaps with more hope than prognostication, but I definitely see it in my mind's eye, is the potential for young lawyers and more seasoned lawyers to have very rich partnerships. Rather than a younger lawyer being a passive observer of a litigation or of a deal, where I'm doing the schedules, I read the documents, this is my introduction to the deal process, which, you know, is how you and I learned to do M&A, I think there's an opportunity for younger lawyers to be even more engaged in the strategy. But they're going to need to be partners with more senior lawyers, and I suspect senior lawyers are going to need younger lawyers who are really fearless and facile with AI to be partners with them. And I think that creates a great opportunity for truly strategic and substantive mentorships.
Obviously, the conventional wisdom is that it could displace training, and that people are going to, you know, be sitting in a remote corner with an AI tool and not getting training. I think that's a risk, but I also think there's a huge and very rich opportunity for prompt engineering to be a real tool to make lawyers, on an intergenerational basis, collaborators.

Justin Daniels 18:57

Well, kind of in a similar vein, and now I'm talking business models: I use AI on deals in all the places where I think it makes sense, and I try to do it proactively, because I don't want someone such as yourself to come to me and say, hey, Justin, you know, what are you doing as my outside counsel to integrate AI into your practice? And so, just from my own outside-lawyer's perspective: what is your thought around the expectations you have of the outside lawyers that you use, in terms of how they bring AI to bear, and how that may impact what you're willing to pay for outside legal services?

Eric Greenberg 19:37

So I think the threshold issue is about efficiency and cost, and I think everybody is there: AI ought to enable people to do a lot of things faster, and it ought to change the value proposition. I certainly am intrigued by AI-enabled firms. Eudia just announced that it had a law firm offering that, in all likelihood, can do due diligence in a deal and generate due diligence reports faster and at a much lower cost. And I certainly am seeing fellow GCs looking to outside counsel saying, I expect to pay less for this. And I think law firms that are touting their AI capabilities without translating that into savings or some value proposition are on the wrong side of history, if you will. Having said all of that, I do think the longer-term potential is for AI to be a tool for collaboration. I think the real opportunity is for the AI tools being used by the law firms to be interoperable with the AI tools that we're using in-house. And frankly, whoever figures that out, rather than focusing on Big Law or on the in-house market, but sees collaboration between the two as the market, I think that's a huge opportunity. That's where I'm looking in the long run: if I'm doing a deal or I'm doing a litigation, can we be collaborating and creating a universe of knowledge that we both have access to, that we're both contributing to, and that we together can take those outputs from? Because, as I was saying before, you want to treat the output from the AI tool as an input to your judgment, and all the better to be doing that with your outside counsel, with their experience. You know, they may have vaster deal experience and know the market; we, as in-house counsel, know our companies and our industries better. I think AI becomes an incredible tool for collaboration, and that's really where I would like to see it go.
What that value proposition looks like in terms of fees, I think, is a lot less clear in that context, but I think we do start to approach a more value-based model, rather than saying however long it took to do it is a crude approximation of value, and so that's what we'll do. I think the billable model has been highly resilient; it's sort of like the dinosaurs: everybody, you know, announces their extinction, and they hang on for a very long time. But I do think the billable hour, in a world where the collaboration is more dynamic and the efficiencies are more pronounced, is going to be under higher pressure, and I think we're going to move closer to something that is perhaps still influenced by the hourly rate, but is going to be much more grounded in notions of value.

Jodi Daniels 22:55

One of the pieces: Justin, you talked about prompting; Eric, you're talking about value and the ability to jointly collaborate. Making any of that work is all about the data that is there. So if we think about, and you talk a lot about this in the series, embedding institutional knowledge in AI systems, this is especially true in the collaboration: when someone moves on, what happens? That information is still there. But again, that input, plus a quality prompt, really helps dictate what kind of output you're going to get. Can you talk to us a little bit about the risks and trade-offs of this concept of, like, a corporate legal memory, and your thoughts there?

Eric Greenberg 23:38

I think one of the opportunities, as with many things, is also the risk. The opportunity is having, I think, increased uniformity, cogency really, in terms of how we think about deal-making, risk, our advice, by having everybody drawing on a common body of knowledge. In the series, I quoted another GC saying, you know, AI has a lot of risks; there's also a risk in having a lot of lawyers in your department all doing their own thing. And I think this kind of corporate mind, this dynamic brain, creates the potential for a lot more coherence in the way we attack issues, execute deals, prosecute a litigation or defend it. I think it also creates a lot of opportunity for the GC to have influence. I mean, when I became GC, I put my forms online and encouraged people to use them. This takes that to the nth degree: to be able to really influence this dynamic system of knowledge that is going to influence how we do these various tasks. The risk, Jodi, I think, is people relying on that in a very passive way, rather than seeing it as dynamic and engaging with it. Particularly in the deal world, we're very precedent-driven. Everybody wants to start with a form: somebody must have thought about this, and I'm going to start with that form. And look, we all see in the world people who are over-reliant on their forms. Young lawyers are often overly reliant on their forms. In-house counsel at corporations: I can't change this word, because the semicolon was approved by corporate. And people who use it as, well, my form is market, I'm not going to move off of it. That's still going to be a risk. And there's a risk, I think, that people will say, I'm going to use this brain as a forms file, and I don't have to think a lot about this, because I have the answer. I think that misses the point, because the real value is that this thing is dynamic and continues to evolve.
And so, just as, you know, we talk about our living, breathing Constitution, it's only as good as our ability to apply it to new circumstances. I think that's what's going to behoove us as users of this technology: to be really engaged with it as a starting point, knowing we're going to have to adapt a risk profile from another deal into a different context. And I think there's a risk that it makes people lazy and people's thought processes start to atrophy. That's always been a risk. In the 1950s, Ray Bradbury wrote about, you know, people not reading and becoming incredibly superficial in their thought processes. Seventy-five years later, that risk is still here, just based on new technology and a new medium. And I think that's true for lawyers as much as anybody else. But it also, I think, becomes an incredible tool for thinking strategically and seeing patterns, risks, theories of a case that might not have been readily discernible to us. Now, we still have to apply our judgment, and frankly, our judgment may not even be better than the tool's, but at the end of the day, we have to be the ones controlling that output. To the degree that people hand over the keys to, you know, HAL, to control who closes the pod bay doors in 2001: that was a risk then, and it's a risk now.

Jodi Daniels 27:28

That makes a lot of sense. I have lots of other thoughts that come to mind. I know we were talking about legal forms, but the only thing I could think of were the health forms, where they still have the information from 20 years ago that you don't even need anymore, and they're just still stuck on their form because someone approved the form at that time.

Eric Greenberg 27:42

And I'll take it one step further, Jodi: you then have to fax it. It drives me crazy. You have to fill out these forms, and then you have to fax them to them. I don't know why; doctors' offices have, like, MRI machines and a fax machine. I don't get that. But yes, that's a different

Jodi Daniels 27:59

project, a different conversation. You're like, are you sure you don't have a law degree? I don't have a law degree. I did like the law library a lot; I studied there. It was right across from where I lived, and across from the business school, and it was the best library on campus.

Justin Daniels 28:17

So Eric, we've touched on this, and I've talked about this a lot in writing and on this show: in my personal opinion, the number one cybersecurity threat with AI right now is deepfakes. And I'd love to get your perspective, because I've come around to the view, and I recommend this to clients, particularly publicly traded ones, that you need to have a very specific response plan for a deepfake, because your response is going to be in minutes and hours, not days. I'd love to get your thoughts: well, now what if, you know, this deepfake appears of our CEO saying things in the media, no doubt, that are completely false, but yet it goes viral, and now we're flat-footed?

Eric Greenberg 29:06

So I think, you know, your question reflects two really fundamental sides of the same coin. One is the security piece of how do we manage this, and the other is a communications piece and a reputation piece, and the two are certainly integrated. The thing that I worry about is that we're just always playing catch-up. In the way we use AI, I've talked about it as a strategic thought partner; I think we can be really proactive, and that's really forward-facing. I do think, on the threat piece, it always feels like we're at least a half step behind, because the people who use it for bad purposes are highly motivated and highly talented in how to exploit the technology. And by the way, you can imagine a world where the AI advises bad actors on, you know, what are the ways to get somebody to turn over their credit card information as quickly as possible, or a CFO or a treasurer to wire money. And I don't have a good answer for that, other than we continue to be vigilant and just, you know, make sure that we're never more than a half step behind on that. But I think the magnitude and the scope of the risk have certainly changed geometrically based on these technologies.

Jodi Daniels 30:39

So Eric, I would love to just ask: we always ask for our personal privacy and security tip, and we're still going to ask that, but we have a lot of attorneys who listen to this. If you could tell them one action item today, not a personal privacy or security tip, but just based on our conversation, what would you recommend that they do after listening to this conversation?

Eric Greenberg 31:04

I would say: so, a lot of people will take an email they've drafted, or a memo, and put it into a tool and say, help polish this up. I would take it a step further and say, take an issue you're wrestling with and put that into an AI tool. And to the point we were discussing before about prompt engineering, give it a lot of context: share your thoughts, share what you're worried about, what you're trying to figure out, and see what you get back. I think that could be a really engaging and exciting first step. I think people don't know how to use it, or are afraid to use it, or have overestimated the complexity of prompt engineering. I would say: start. And I write about this in one of the articles. We had a mediation we were preparing for, and I said to our outside counsel, let's just put everything into Harvey and see what it says. And, you know, the motions, the mediation memo, and then just stream of consciousness: here's what we think is going to happen, here's what we're worried they're going to say, here's what we're trying to get out of it. And we didn't get some eureka moment out of that, I just want to be clear; we didn't slap our foreheads in amazement. But what we got was an extremely thoughtful set of insights that shaped our strategy and how we thought about it. And I think that's really the potential of AI. People think that, you know, it's going to become managing partner of the firm and there won't be any lawyers anymore; I'm not worried about that. But I think what we should worry about is missing out on a really valuable endpoint, or rather, input.
So one of the articles is called "The Third Voice in the Room," and I think that's the way we ought to be seeing AI. We constantly talk about the value of having diverse perspectives around the table when we have a meeting, when we're strategizing. I always say, if one of the people you could invite to the meeting had access to all human knowledge, wouldn't you want them in the meeting and be curious what they had to say? Doesn't mean you'd listen to them. Doesn't mean you wouldn't say, that's an interesting point, that's not quite it, which, by the way, we all say now. But there's always that person in the meeting who's got some wild idea, and you're saying, okay, turns out there are dumb ideas. I think engaging with AI as a thought partner, as a third voice in the room, is a great place to start. And once you start to see the richness of the inputs and the patterns it recognizes that you may not, it becomes really quite exciting as a tool that amplifies our decision-making process. It doesn't replace it, but it amplifies it.
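Eric's advice here, state the issue, give background, name your worries, and say what you're trying to figure out, can be sketched as a simple prompt template. This is an illustrative sketch only; the structure, field names, and example deal facts are assumptions, not any particular tool's API or Eric's actual workflow:

```python
# Illustrative sketch of a context-rich prompt, per the "give it a lot of
# context" advice: state the issue, background, concerns, and goal up front,
# rather than a lazy one-liner like "What about X?".
# The structure and field names here are assumptions, not a vendor's API.

def build_prompt(issue: str, context: str, concerns: list[str], goal: str) -> str:
    """Assemble a structured prompt from the pieces described above."""
    concern_lines = "\n".join(f"- {c}" for c in concerns)
    return (
        f"Issue I'm wrestling with:\n{issue}\n\n"
        f"Background and context:\n{context}\n\n"
        f"What I'm worried about:\n{concern_lines}\n\n"
        f"What I'm trying to figure out:\n{goal}\n\n"
        "Treat your answer as an input to my judgment: flag assumptions, "
        "note counterarguments, and cite nothing you cannot support."
    )

# Hypothetical example, with made-up deal facts:
prompt = build_prompt(
    issue="Whether to accept the other side's indemnity cap in a pending deal",
    context="Mid-market asset purchase; seller is a carve-out of a larger parent.",
    concerns=["Cap is below market for deals this size",
              "Survival period is unusually short"],
    goal="A negotiating posture and two fallback positions",
)
print(prompt)
```

The closing instruction in the template echoes the point made throughout the conversation: the model's output is an input to the lawyer's judgment, not an issued answer.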

Jodi Daniels 33:57

Now you have to answer what your best personal privacy or security tip is. You have to give two tips on our show today.

Eric Greenberg 34:03

So it is the ultimate analog solution: on our team, we have a code word. It's like pomegranate. It's not pomegranate, but it is like pomegranate; it is a fruit. And we use that as a way of validating our communications. It's a way of me signaling that something's really important and I need everybody to read it right away. It's also for when there's a security threat, because who doesn't wonder whether the email saying there's a security threat is itself a security threat? So when I, or anybody else on the team, send an email with that word in the subject line, we know that it's legitimate. Somebody might one day learn what that word is, and we're screwed, but for the moment, it is a really, really terrific analog way of validating our communications. And there's something even just kind of fun about it; in some ways, at least, it feels like maybe you've outsmarted technology by going as far in the other direction as possible. So it works. I like it. So what do you like to do for fun when you're not thinking and writing about AI? I love reading. I am an irresponsible and promiscuous buyer of books; they're scattered around my house, many unopened. I just really enjoy reading, and I enjoy reading online. I enjoy magazines a lot, but I'm really just, like I said, an irresponsible book buyer. And my other thing is I love going out to eat, and I believe that a tasting menu, a multi-course tasting menu with wine pairings, is the height of civilization. That's the pinnacle of our civilized world: to have some fabulous multi-course meal prepared by a chef, with a sommelier who comes and tells you what wines have been perfectly paired with each course. And if you're sitting with your spouse, your partner, your friends, having a great conversation, to me, that's the whole thing.
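Returning to the code-word tip for a moment: the check itself is simple to sketch. This is purely illustrative; "pomegranate" is a stand-in (the real word is shared out of band and never written down), and the matching rule is an assumption:

```python
import hmac

# Minimal sketch of the shared code-word check described above: a message is
# treated as validated only if the agreed word appears as a word in the
# subject line. hmac.compare_digest gives a timing-safe comparison of each
# candidate token, though for a human-run scheme plain equality would do.

CODE_WORD = "pomegranate"  # placeholder; the real word is only shared out of band

def subject_is_validated(subject: str, code_word: str = CODE_WORD) -> bool:
    """True if any whitespace-separated token of the subject is the code word."""
    target = code_word.casefold().encode()
    return any(
        hmac.compare_digest(token.strip(".,!?").casefold().encode(), target)
        for token in subject.split()
    )

print(subject_is_validated("Urgent: wire instructions changed"))      # False
print(subject_is_validated("Pomegranate - please read immediately"))  # True
```

As Eric notes, the scheme's real strength is social rather than technical: the word travels by voice, so a phisher who controls the email channel still doesn't know it.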

Jodi Daniels 36:33

That sounds amazing. And I love magazines too. Actually, having worked at Cox for a very, very, very long time, I have a strong appreciation for the printed concept. There's a space for digital, but I actually much prefer a magazine, so I can flip through it.

Eric Greenberg 36:49

I see. There is something about the glossy magazine. When I started out in New York, and you had a lunch hour, I would go to a newsstand and read magazines, particularly foreign magazines where, you know, I often didn't know the language, but the photographs were fabulous and glossy. And, of course, in a classic New York moment, I was doing that and the guy who ran the shop said, hey, we're not a library. But some of those magazines are expensive, so, you know, that was the workaround.

Jodi Daniels 37:27

Amazing. Well, Eric, if people would like to connect with you, where is the best place for them to go?

Eric Greenberg 37:32

Eric.Greenberg@cmg.com. We're on LinkedIn; I'm certainly active on LinkedIn, and you can DM me, and I look forward to connecting with people. One of the great things about this series, including the two of you, has been engaging with people in really rich ways.

Jodi Daniels 37:57

So, I welcome it. Well, thank you so much. We are delighted that you were able to join us today.

Eric Greenberg 38:02

I was delighted to do it, and thanks for having me.

Outro 38:08

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.