Intro 0:01

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels 0:21

Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and certified information privacy professional providing practical privacy advice to overwhelmed companies.

Justin Daniels 0:36

Hi, I’m Justin Daniels. I’m a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.

Jodi Daniels 0:59

And this episode is brought to you by Red Clover Advisors. Thanks. I was about to actually compliment you for not interrupting me again, and then so much for that. In case you were wondering, we read these live. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. Well, hello, hello, interrupting beeper.

Justin Daniels 1:43

I find it more entertaining that way.

Jodi Daniels 1:47

You know that phrase works because at our dinner table with our kids, we’ve come up with the phrase interrupting chicken, and then we insert all the other animals that we can come up with. So now I can just say interrupting beeper. Okay, okay, then. Well, today we have Mason Clutter, who is a partner and privacy lead at Frost Brown Todd and previously served as chief privacy officer of the US Department of Homeland Security. Mason’s practice is at the intersection of privacy, security, and technology, and she works with clients to operationalize privacy, a favorite word, and security to help them achieve their goals and build and maintain trust with their clients, also a favorite of mine. So welcome to the show.

Mason Clutter 2:26

Thank you, Jodi and Justin, so excited to be here. As they say, long-time listener, first-time caller, so to speak. So excited to be here.

Jodi Daniels 2:34

Oh well, we’re so excited. Thank you so much for supporting the show. It’s hard to believe how many years we’ve been doing this. It’s over four, is it not? Yes, we’re closing in on five years this fall. Five years of fun. Enjoy.

Mason Clutter 2:49

Congratulations.

Justin Daniels 2:50

I want our listeners, when they comment, to come in and say, Justin should get a pay raise.

Jodi Daniels 2:56

I don’t get paid for this podcast, so I’ll buy you lunch. Okay, off you go.

Justin Daniels 3:02

Well, let’s, let’s focus on me.

Jodi Daniels 3:05

Yes, I know. Off you go. Move along.

Justin Daniels 3:07

So Mason, can you share with us your really interesting career journey?

Mason Clutter 3:13

Absolutely. So my career path has been anything but a straight line. I have really zigged and zagged all over the place. So I started out, of course, as a traditional lawyer in Orlando, Florida, doing insurance defense work. I came to DC to get my LLM to specialize in civil rights, civil liberties, and constitutional rights, and then found myself doing national security work. So I was supposed to be in DC for one year; 17 years later, I’m coming to you from DC, because I was operating in that unique national security space, really focused on detention and trial of terrorism suspects at Guantanamo Bay. So for a few years, I went down and observed military commission trials there and supported defense lawyers. But then I found my way into the federal government, serving at the Privacy and Civil Liberties Oversight Board, so really doing some of the same work on the inside of the government with a security clearance, trying to effect change there, and then ultimately found myself, very fortunately, at the Department of Homeland Security in the last administration, serving as the chief privacy officer. So my government work is really where I’ve learned to advise executives on how to operationalize privacy, which is now what I do every day in the private sector.

Jodi Daniels 4:34

Well, so let’s talk a little bit about what you do every day in helping all these different companies. What are you seeing as the privacy and AI challenges that you’re coming across most often these days?

Mason Clutter 4:48

So I really am enjoying working with clients to help them achieve their goals. When I left government and joined the private sector, I, quite frankly, was a little worried that I would miss the mission-oriented work of federal government life, but the transition has been seamless, and the work has really been meaningful. So I love meeting our clients and being a small part of their world. You know, people who are starting businesses and designing products really are so brave, innovative, and creative, and so I’m really enjoying being a small part of it. But at Frost Brown Todd, we’re more of a mid-market, mid-size firm, and so our clients really do range from startups to some of the household names that you would know. And I’ve been very fortunate in working with many clients across various industries, whether it’s automotive or manufacturing or healthcare technologies, including our startup clients, and those are the clients who really are facing significant resource, and therefore compliance, challenges. And for these clients, privacy issues, as you know, can range from product design to consumer compliance issues and really even issues on the back end when it’s time to sell their companies. So I’m sure, Justin, you encounter some of that from a security perspective in M&A transaction work. So what I’m seeing most frequently is actually not that different from what I saw in government. It’s really how to work with limited resources while ensuring privacy and security by design frameworks are incorporated into product design and compliance programs, and that they don’t become insurmountable issues before a client launches a product or a new service, or, again, when it’s time to sell their business. The patchwork of privacy laws in the United States, plus the international compliance challenges, obviously does not make this easy, but I personally see a lot of opportunity in the law. One of the ways that I advise is that there’s often no one way to comply with the law, so I see this as an opportunity for businesses to design compliance frameworks that are unique to their own business models and the way in which they engage with their clients. All that said, you know, I mentioned to you offline earlier, Jodi, that I participated in and listened to your presentation on cookie compliance. That’s certainly something all of our clients, big and small, are facing challenges with: cookie compliance and the use of automated collection technologies, as well as what’s coming with respect to Maryland’s new privacy law that comes into effect in October, which is one of the most significant changes we’ve seen in the privacy landscape in a while. So we’re really setting up clients to accomplish their goals in the face of some of these new challenges and restrictions. And then, of course, those sneaky CIPA claims, those California Invasion of Privacy Act litigation claims, which can really significantly impact a small business. And when it comes to AI, again, it’s really not that different from what I was seeing when I was at the Department of Homeland Security, when we were working to develop a framework to implement the responsible use of generative AI.
So I’ve been speaking a lot about this issue, and it’s really the same questions: it’s how to use it responsibly, given that it’s evolving so quickly and that the technology is really outpacing specific regulations. So clients are really struggling with this desire to use AI to enhance their own efficiencies and those for their consumers. But often we need to slow down, take a step back, and really try to analyze what they’re trying to achieve and whether AI is the appropriate vehicle to achieve those goals, because we know as AI practitioners that AI is not going to fix your problems; it’s really only going to enhance them. So just being very clear about what your particular use case is, and then also being mindful of those sneaky AI-use provisions that vendors slip into contracts, which really can implicate confidentiality, privacy, and security issues. So again, much like compliance with state privacy laws, AI compliance has really become a game of whack-a-mole for us and our clients.

Jodi Daniels 9:18

I know Justin has all kinds of good AI-related questions. I want to go ahead and ask a question around Maryland. Are there one or two tips, maybe, or kind of common conversations that you feel you’re having, that you might be able to share here to help companies prepare?

Mason Clutter 9:37

So I think what’s interesting about Maryland is they have a very strict data minimization collection requirement. That is really just a way of saying that companies are now going to be restricted to collecting only the information that is strictly necessary to provide the services that they’re offering, right? And Maryland even says that even if an individual grants you consent to collect additional information, under their law, you can’t collect it. So it’s really being clear with clients about what type of service they’re offering and making sure that they’re describing that very coherently and clearly, to align their data collection practices with the services that they’re providing. Also, Maryland is strictly prohibiting secondary uses of sensitive personal information, including information about minors. That’s individuals under the age of 18, as Maryland defines it, not just children under the age of 13 as we know it in COPPA, the Children’s Online Privacy Protection Act. So again, even if consent would be given by an individual, if you are collecting their sensitive personal information and are covered by the Maryland law, you would now be prohibited, or at least starting in October you would be prohibited, from selling that information, as that term is defined by the law, for marketing purposes or other third-party uses. So this is a little bit of a different dynamic from other states, where as long as you are disclosing the purposes for which you are using information and giving individuals the opportunity to opt out, then often you’re in a compliant space. Now you have to be very careful in aligning those minimization practices and those use practices to those that are strictly necessary to provide your services.

Jodi Daniels 11:29

That’s very helpful. Thank you so much. I know people listening will be happy to hear that information distilled so clearly, and probably less happy to know that they need to go update their data inventories.

Mason Clutter 11:44

It can be challenging.

Justin Daniels 11:47

So Mason, I wanted to touch on something that you briefly discussed before we went live, which is, when it comes to AI, you generally said, yeah, I’ve got some concerns from a privacy standpoint, or, more importantly, a confidentiality standpoint, be it, you know, your clients’ customer information that they’re entrusted with, or a law firm’s ethical obligations for confidentiality. And I was wondering if you could be a little more specific about the concerns that you have that you’d like to share with our audience.

Mason Clutter 12:20

Yeah, so thank you for this question, Justin, because it really is something I discuss quite a bit with clients. You know, there is this big desire for everyone to demonstrate that they are using AI, again, whether it’s for internal efficiencies or efficiencies for their own customers. But it’s really important that we stop and we ask questions about input data and output data, right, and some of the potential risks that align with that particular use case. So when we’re talking about input data, as you know, it’s: what prompts are you using? What type of information are you including in those prompts to then receive a helpful output that you can use to do your business? And we have to be mindful, of course, about things, as you said, like confidential information, privileged information, information that might have intellectual property protections, right, and also personal information, private information. And the reasons we ask those questions are manifold. Let’s assume you accidentally include confidential information that you, as the holder of the information, have agreed to handle very conservatively and safely. What if you accidentally put that information into a public tool? Are you able to get it back? Often we’re not. Once it goes into the tool, especially those public tools, we’re not able to control where it goes and who has access to it, and whether they’re using it to train their own models or systems. And the same goes for personal information or attorney-client privileged information. That’s why that conversation about what type of model you are using is a very important first step to ensure that you are safeguarding information appropriately. So perhaps you need to use an enterprise model that is designed with safeguards in place to facilitate your own particular use. As lawyers, you know, we use AI, of course, in our practice, but we use models that are really designed to safeguard attorney-client privileged information and those other types of confidential information. Similarly, that’s how I advise clients: be very careful and ensure that the information you’re inputting into a tool is information that can go in there, and that you actually know where it’s going after you press the enter button, and whether you have any additional control of it from that point forward.

Justin Daniels 14:45

So Mason, I think another thing that maybe people who use AI don’t always appreciate: you have a lot of tools out there that may use one or more LLMs. And so maybe you could talk a little bit about how you literally have to understand the data flow. It may go into the AI tool, but then what is the AI tool doing, and who is it sharing the data with? A lot of times I hear Jodi talk about second, third, and fourth parties, but I’d love it if you could elaborate a little bit on that. It may not just be the company that you’re dealing with when you license their AI tool; it could be whoever is one or two or three steps downstream where that data could be going, and you need to understand that in the process to make sure that you are maintaining confidentiality if a use case is going to use customer data. So I was hoping you could articulate that for our audience.

Jodi Daniels 15:34

Yeah, that’s exactly right.

Mason Clutter 15:35

So let’s say you’re using a tool where the information you input can be used to train that underlying model. That means that for an individual who is now using that LLM tool, your information can be available to them and show up as output data in their output, right? And so this raises not only privacy concerns but security concerns as well, right? Questions about how AI can be used to re-identify data. Perhaps you input data that you thought was de-identified or anonymized, but now, later on down the line, it can be output, and tools can be used to re-identify that data. Or even the use of AI by bad actors to enhance their fraud activities. So certainly, from the perspective that you raised, Justin, of second and third parties, it really is very important to know exactly what the company you are working with is doing with your data, who they may be working with, where that data is being stored, how it’s being used, et cetera, because these other second, third, and fourth parties come into play. And as you intimated, ultimately, as the individual who was responsible for the data on the front end, you can be held responsible for those secondary, third, and fourth uses down the road. So you really do have to do your homework and understand how the technology is working and where your data is going.

Jodi Daniels

A lot of our listeners are inside companies and are being given multiple different features and use cases for AI in their organization, and they’re trying to balance the business’s operations and interests and innovation. And there are a lot of privacy and security and legal practitioners listening. How are you working with companies? How do you recommend these folks try and manage this delicate balance?

Mason Clutter

So I’m really glad you asked this question, Jodi, because I feel very strongly about this. In my opinion, innovation and responsible use are not mutually exclusive. This question often reminds me of the paradigm of, you know, the post-9/11 discussion about privacy and security and how far the pendulum should swing. But I really do think it’s a false proposition. I think that responsible design and use of AI does not prohibit innovation. In fact, I think it can enhance it. I think that AI carries potential, very real risks and potential, very real harms to individuals, and so we really do have to design it and use it responsibly to ensure that the developers, the deployers, and the impacted individuals receive all of the benefits of AI without significant harms. And I also love this question because it brings up the concept of trust, right? I think responsible use can be compliant and legal use as well as trustworthy use when it comes to a business’s engagement with their own customers. This is something I was particularly mindful about when I was managing privacy for the Department of Homeland Security, and with AI today in the private sector. I think that trust, privacy, and security really can be part of a business’s business model to help set them apart from competitors in their respective market. Each of us at this point has been a victim of or experienced a data security breach, for instance, and I think that consumers are becoming much more savvy about the value of their data and the risks to them if their data is not handled responsibly. And I think they are now seeking out businesses who do the right thing, right? And I think this is often very appealing to the clients with whom I work. I’ve not encountered any one person, in the government or in the private sector...

So really stepping back and having this conversation about how it can not only benefit their business efficiencies and their own practices, but also help build and maintain trust with their clients, I think that helps them innovate in a responsible way and in a way that can help distinguish themselves in the marketplace as well. Yeah.

Justin Daniels 20:00

Yeah, so Mason, you just said something interesting about the intersection of privacy and security and your work at the Department of Homeland Security. So I wanted to bring up, at least in my view, what I’m most worried about when we start talking about AI, and that’s deepfakes. And you know, as we sit here today, I don’t know how to advise clients; you know, just calling someone to verify wire instructions, deepfakes are quickly making that obsolete. And I’d just like to get your perspective, having worked on privacy and security at the Department of Homeland Security: what are we going to do to try to combat this proliferation of deepfakes? Particularly if they’re put out on, you know, TikTok and other places, it weaponizes all kinds of disinformation that could be targeted at false statements by a publicly traded CEO, or during a political campaign, or pick anywhere you could sow some mayhem. What are your thoughts?

Mason Clutter 20:59

Oh, absolutely, Justin. I think that is one of the most significant potential harms, both to individuals as well as to national security, for the reasons you just described. So this is something we were certainly thinking about when I was at the Department of Homeland Security, and I’m starting to notice a lot of thinking at the legislative level, right, whether at the federal level or the state level, as it relates to this question as well. So we’re seeing it in the context of regulation around LLMs, right, generative AI in particular. But some states are starting to speak to this: issues like watermarking and other ways of indicating that information has been generated or created by AI, and some of the restrictions that states are putting on some of the very big generative AI developers in the way that they now train their models and being more transparent about that, the way in which they are now required to provide some kind of indicator, if their tool has been used to generate an image or an audio clip or a photograph, whatever it might be, that that content carries with it some indicator that it was created by artificial intelligence. But in the same way there are bad actors when it comes to, you know, compliance with and application of any law, there are going to be individuals who find their way around this. And so I think it also comes down to education, in the way that we’ve had to educate individuals about cybersecurity, for instance, and basic cyber hygiene, good practices that still today kind of stand the test of time no matter what type of technology we’re talking about. In the same way, we have to start educating ourselves and our children about these very real risks: how to ask questions and how to seek out information to confirm the accuracy of what we’re seeing and hearing and what we’re relying on. So I think, you know, in addition to law, it’s that educational component. Again, much like in the cybersecurity context 20 years ago, these same types of themes we’re now familiar with, I think, are going to become commonplace in our conversations at dinner tables with our children, as well as in the way we approach online interactions in the future.

Jodi Daniels 20:59

We’ve talked a lot about AI and risks and challenges. What are some of the common misconceptions that companies have when it comes to privacy today?

Mason Clutter 20:59

Yeah, so this is a tough one, and I’m really interested, Jodi, in what you’ve experienced as well from the operational side. But for me, a common misconception is that privacy is one-size-fits-all and that it’s one-and-done, right? A lot of times, clients come and say, do you have a privacy policy template? We’ll just throw it up there, and then we’re done with that, right? And, you know, we’re going to check all the boxes so we never have to think about it again. And this is when we have to kind of discuss the complexity of the US and international privacy compliance framework, and the challenges, and again, the benefits, that it presents. Because we are seeing enforcement not just at the federal level by the FTC these days, but at the state level. We now have 19 state comprehensive privacy laws on the books, four of which still haven’t come into effect, and state attorneys general are starting to flex their muscles and really start enforcement actions where they’re not seeing the federal government act. And so, you know, clients understand that data is incredibly valuable and important to their business, and I think it helps to explain that not only is there value there, but there’s risk to their individuals, right, and that can increase reputational risk and erode the trust that they’re building and trying to maintain with their clients. So we want to make sure that they can collect the information they need and use the data that they need to provide their services while, again, maintaining this trust with their clients and, of course, avoiding enforcement fines. So it’s really not difficult to have that conversation with clients and help them make decisions to kind of do things the right way. I think there’s also a misconception, both in government and, I’m learning, in the private sector, and I was actually really surprised to learn this, that privacy is a checkbox exercise, that you can really wait till the end, before you roll out a new product or service or policy, to ask those privacy-related questions. And I think that that practice unfortunately gives privacy and privacy professionals a bad rap. And I think it’s similar, you know, in the security community, because we’re often seen as slowing down the process or costing more time or resources, when, in fact, as we know as privacy professionals, if we had been included from the beginning, products, services, policies, what have you, would have been designed more efficiently and compliantly. And so this is something I really have been evangelizing with clients and with my colleagues at the firm. I really can’t tell you the number of times, for instance, that I have a colleague come to me and say, I have this tech transaction, whether it’s for a software-as-a-service or a cloud provider. Please don’t touch the underlying master service agreement, it’s already been signed, and we needed a data processing agreement yesterday, right? So I need this tomorrow. I see Justin laughing; I imagine this happens to him a lot too. So I’m really trying to explain to my own colleagues as well that it’s very important that we be included from the front end of an idea, a design, you know, a new product, a new service, to ensure that we can efficiently provide those resources and guidance.

Jodi Daniels 27:00

Well, I’ll let giggle monster over here share his comments on the contract part, because I want to address, Mason, what you were talking about earlier in the common misconceptions that we’re seeing. No, go ahead. Yep. No, you go.

Justin Daniels 27:16

Giggles is laughing because I understand where she’s coming from. I think part of that challenge is they either get them in on the front end, or they do what I’ve done, which is be familiar with the issues enough that you can issue-spot and then know when to bring the right people in. There are still too many commercial attorneys who either wall themselves off or don’t get involved in that, and then you end up in the situation that Mason is elaborating, where they come to you at the end. But I’ll be honest with you, Mason: where I think this needs to head, where I’m thinking this is going, is looking at the entire vendor procurement process and saying, okay, how can we start to leverage AI? So when we’re looking at a privacy impact assessment, when they fill out their security questionnaire, we start flagging what those issues might be at that point, so it never gets to the end of the contract process. Because I think, and this is just my personal opinion, that AI presents an opportunity for these in-house and other attorneys to completely revamp and reimagine what that vendor procurement process looks like and how to start to eliminate friction in that process, and then people look at legal differently.

Mason Clutter 28:32

I think that’s a great way to think of it. And I actually think you’re on to something; I wouldn’t be surprised if that’s the way we start doing things in the near future. I also think this is related to what you said to me, Justin, prior to recording: that we as lawyers have got to continue to educate ourselves. As you said, we cannot be walled off. And when it comes to privacy and security, you know, so many of our clients are in the data business and don’t understand it. So many are using technology, whether they’re a technology company or not, that can impact privacy and security. And you’re absolutely right that our legal colleagues have to understand we’re a critical part of the analysis pretty much across the board; it should always be on that checklist of issues you’re considering and things that you’re asking about. So I really couldn’t agree more.

Jodi Daniels 29:24

Well, what we often see is very much the one-and-done and the lack of appreciation for how hard some of these things are. You can’t even write a privacy notice if you don’t know the data, and what it really takes to get a data inventory up and off the ground, it just feels like it should be easier. It’s complicated for a variety of different reasons, which then ties to the software part. There are some great tools that are out there today. A lot of times, though, people will think, oh, it’s automated, it’s just software, it’s just going to poof, magically work, and then I don’t have to do anything. Or that it’s really easy to implement. Right? And you have to have the right process and design behind implementing whichever software you’re going to use. And then automation doesn’t actually always happen. Some of these are pieces of automation, and you still have to do the rest. For us, that’s what we see time and again: thinking it’s just going to be easier than it is.

Mason Clutter 30:23

Yeah, and we saw recently, right, that even with reliance on some of these automated tools, it’s a tool; you still have to stay on top of your privacy and security practices. You cannot just point to the software and say, well, I put software in place. You actually have to make sure it’s doing what it’s designed to do, and that you’re still complying with what you’re saying in your privacy policy, right? We saw a big enforcement case out of California directed, I think, as a shot across the bow at one of these, you know, tech providers, saying it’s not enough to just implement the software. So I can certainly understand where that would be a big challenge.

Justin Daniels 31:04

Yep, for sure. Alrighty then. So Mason, with all of your years of experience, when you’re kind of hanging out and the topic of privacy or security comes up, what is the best tip that you might have for our audience? Pick a privacy or security tip.

Mason Clutter 31:21

This is a great question. I don’t know about you as a security practitioner, but I often come across privacy practitioners who don’t practice what we preach, right? It’s so interesting to me, where maybe I’m shopping with them or, you know, getting a coffee, and they’re handing over their data, and I’m like, what are you doing? So for me, it’s really basic data and security hygiene. Again, I don’t think it’s all that unique. We’ve learned how valuable our data is and how to safeguard it, so I only provide the information that’s necessary to receive the service I’m requesting. Right? If I’m buying something and somebody asks me for my zip code or my phone number, I just say, no thank you, because they don’t really need that for me to buy shampoo or whatever it is I’m doing. I also will, though, assess the value of what I’m receiving in exchange for my data and determine if it’s worth it to me. And I think that is kind of a privilege and a luxury of being a privacy attorney: I kind of know what the risks are and whether it’s worth it to me, and sometimes it might be. I also don’t click on links that I don’t know, or respond to texts or emails from people I don’t know. In fact, this came up for me the other day. I got a random email from an insurance company, or somebody who works for an insurance company, saying, we need to come to your condo and assess its value for full replacement value. I had never heard of these people. I had never heard from my insurance company. So before responding to them, I called my insurance company and told them what I’d received, and they said, actually, that is real. And the woman on the other end of the line said, you are the first person who’s ever called and asked me about this, and she’d been with the company for 10 years. So are people just letting random people into their homes to come in and inspect them? You know, it’s just odd to me. This happens also, right, when pharmacies call you and doctors’ offices call you: they’ll ask if it’s you and then ask for your birthday. And I just say, respectfully, I’m going to have to call you back, and then I just use the number I have for my doctor’s office or the pharmacy and go through it that way. That way, I know I’m actually talking to the right people. I think some people are worried about hurting people’s feelings or seeming rude. You know, it can be a little annoying to the person on the other end of the phone, but ultimately, I think it’s a really easy way of safeguarding your information. And I don’t really think it’s, you know, that complex. It’s not buying security insurance or subscribing to a monitoring service. It’s really just kind of taking action on my own.

Justin Daniels 33:49

Well, Mason, it’s interesting what you said, because people feeling bad or not wanting to seem difficult, that’s a reason why social engineering is so effective.

Mason Clutter 34:01

Exactly, exactly.

Jodi Daniels 34:04

When you are not advising on privacy and AI, what do you like to do for fun?

Mason Clutter 34:11

So, being a new partner in a law firm, I kind of forget what that’s like. No, actually, I really love living in DC and experiencing all the city has to offer. You know, just being able to pop in and out of a museum and see an exhibit is a really nice thing to do on the weekend. Food and wine are also my form of entertainment, so I love eating out and trying new restaurants in the city. I also really love the theater. And interestingly, my mother now lives in the city; we live in the same condo complex. My partner actually lives in the same condo complex too, and the three of us each have dogs, so we kind of just like getting together and walking the dogs or taking them out to eat. And then lastly, I have a 13-year-old nephew who really is just the funniest, smartest, and best person I know. He’s my favorite person, so any time I can spend with him, you know, nothing makes me happier. I just had a visit with him recently. And then, obviously, of course, all things privacy, like participating in a, you know, friend’s privacy podcast. So thank you. Thank you again for having me. This has been really fun.

Jodi Daniels 35:21

Well, we’re so glad that you joined us. Now, if people would like to connect with you and learn more, where should they go?

Mason Clutter 35:27

So you can find me on LinkedIn. It is one of the few social media sites that I use. I’m also featured on the Frost Brown Todd website. And I’ll go ahead and give you my work email address too, if that’s easier for you: I’m at Mclutter@fbtlaw.com. So please don’t ever hesitate to reach out.

Jodi Daniels 35:50

Well, thank you again for joining us. We really appreciate it.

Mason Clutter 35:53

Thank you.

Intro 35:58

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes, and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.