Andrew Clearwater is a Partner on Dentons’ Privacy and Cybersecurity Team and a recognized authority in privacy and AI governance. Formerly a founding leader at OneTrust, he oversaw privacy and AI initiatives, contributed to key data protection standards, and holds over 20 patents. Andrew advises businesses on responsible tech implementation, helping them navigate global regulations in AI, data privacy, and cybersecurity. A frequent speaker, he offers insight into emerging compliance challenges and ethical technology use.
Here’s a glimpse of what you’ll learn:
- Andrew Clearwater’s career journey in tech, privacy, and AI governance
- The importance of setting goals, aligning business needs, and establishing leadership when building an AI governance program
- How standards like ISO 42001 help companies build consistent AI governance programs
- Why starting with scope and context is essential to understanding AI risk
- Ways trust officers can align privacy, security, and ethics within an organization
- Steps companies can take to structure their AI governance programs
- Andrew’s personal privacy tip
In this episode…
Many companies are diving into AI without first putting governance in place. They often move forward without defined goals, leadership, or alignment across privacy, security, and legal teams. This leads to confusion about how AI is being used, what risks it creates, and how to manage those risks. Without coordination and structure, programs lose momentum, transactions are delayed, and expectations become harder to meet. So how can companies build a responsible AI governance program?
Building an effective AI governance program starts with knowing what’s in use, why it’s in use, what data AI tools and systems collect, what risks they create, and how to manage those risks. Standards like ISO 42001 and the NIST AI Risk Management Framework help guide this process. ISO 42001 offers the benefit of certification and supports cross-functional consistency, while NIST may be better suited for organizations already using it in related areas. Both frameworks help companies define the scope of AI use cases, understand the risks, and inform policies before jumping into controls. Conducting data inventories and utilizing existing risk management processes are also essential for identifying shadow AI introduced by employees or third-party vendors.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Andrew Clearwater, Partner at Dentons, about how companies can build responsible AI governance programs. Andrew explains how standards and legal frameworks support consistent AI governance implementation and how to encourage alignment between privacy, security, legal, and ethics teams. He also outlines the importance of monitoring shadow AI across third-party vendors and practical steps companies can take to effectively structure their AI governance programs.
Resources Mentioned in this episode
- Jodi Daniels on LinkedIn
- Justin Daniels on LinkedIn
- Red Clover Advisors’ website
- Red Clover Advisors on LinkedIn
- Red Clover Advisors on Facebook
- Red Clover Advisors’ email: info@redcloveradvisors.com
- Data Reimagined: Building Trust One Byte at a Time by Jodi and Justin Daniels
- Andrew Clearwater on LinkedIn
- Dentons
Sponsor for this episode…
This episode is brought to you by Red Clover Advisors.
Red Clover Advisors uses data privacy to transform the way that companies do business together and create a future where there is greater trust between companies and consumers.
Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media.
To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit www.redcloveradvisors.com.
Intro 0:00
Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Jodi Daniels 0:21
Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women-owned privacy consultancy. I’m a privacy consultant and certified information privacy professional, providing practical privacy advice to overwhelmed companies.
Justin Daniels 0:38
Hi, I’m Justin Daniels. I’m a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.
Jodi Daniels 0:57
And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business together. We’re creating a future where there’s greater trust between companies and consumers. To learn more, and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. Well, so today on our podcast, it is apparently wear blue day. Yeah, everyone here is wearing blue.
Andrew Clearwater 1:38
Very observant.
Jodi Daniels 1:41
Ah, you know the details matter. We have a lot of conversations in our house about details, don’t we?
Justin Daniels 1:46
Or someone’s lack of attention to the details? Yes, that’s... so it’s not you.
Jodi Daniels 1:52
I am very detail-oriented. That might be a very accurate statement. All right, so today is going to be super fun. We have Andrew Clearwater, who is a Partner on Dentons’ privacy and cybersecurity team and a recognized authority in privacy and AI governance. Formerly a founding leader at OneTrust, he oversaw privacy and AI initiatives, contributed to key data protection standards, and holds over 20 patents. Andrew advises businesses on responsible tech implementation, helping navigate global regulations in AI, data privacy, and cybersecurity. A frequent speaker, he offers insight into emerging compliance challenges and ethical technology use. And we are so excited that you are here with us today.
Andrew Clearwater 2:36
Thanks for having me. I think you’ve had some coffee today.
Jodi Daniels 2:40
Haven’t you been listening, though? You should know that it actually is a half-caf. It is not fully caffeinated.
Justin Daniels 2:49
Wow, I can only imagine what you are like on full caf if this is half-caf.
Jodi Daniels 2:52
You know what? Full caffeine gives me a huge headache.
Justin Daniels 2:54
It’s why I don’t do that. Okay, well, so Andrew, tell us a little bit about your career journey.
Andrew Clearwater 3:03
Yeah, happy to do it. I feel like I can break it down into just three parts. You know, I kind of started in tech, in server rooms and research computing groups, moved into data policy, eventually went in-house, and now advise companies at a law firm. And just thinking back on those early days, one thing I’ll call out about my background that I think is interesting now is that I worked in an archeology lab on some of their technology. And you wouldn’t think that that’s, you know, the bridge to privacy, so to speak, but when you look at it, it’s reconstructing cultures in the past, examining what people left behind. And there I was, creating databases and inventories and making connections about how someone lived. And now, looking back, I’m like, well, that actually is good preparation, right? These little data points, these little threads that you can pull together and turn into stories that reveal something about individuals and how they lived: that’s an important part of where I’ve come from and what I think about today. So yeah, kind of an interesting way to get to where I am.
Jodi Daniels 4:13
I don’t think I’ve ever met an archeologist who moved into privacy and patents before. That one is very fascinating.
Andrew Clearwater 4:19
Expanding your podcast range. I love it.
Jodi Daniels 4:26
So, I mean, let’s take that journey, because you’ve built and led AI governance programs, both in-house and now as an advisor. And so we’re kind of curious: what are some of the big lessons that you’ve learned in those programs, really from implementation to strategy? Actually, I guess it goes the other way, strategy to implementation.
Andrew Clearwater 4:48
Yeah, there’s a lot. I mean, one, I think, is: know why you’re doing the work. It’s really easy to lose momentum if there’s a goal without a reason. Sometimes it’s because there’s, like, a customer or consumer expectation that you’re going to meet, and you can kind of work backward to how that’s going to be met. Or you can think about: are we trying to protect our reputation? Well, maybe we’ll emphasize certain things we’re going to be transparent about. And in B2B settings, I’ve found that AI governance has become something of a drag on the transaction process if you don’t have the right materials to communicate, so maybe that’s the endpoint that you’re trying to get to. So one of the things is getting clarity on that. You know, another thing that I feel is a theme is that AI is largely a technology and not a discipline, and I think that’s making it one of the more challenging things. I mean, we don’t have, like, a chief internet officer, right, or a, you know, crypto king, or something like that. We have people who have jobs in privacy, security, ethics, and other roles within the company, and they’re picking up their piece of how they can interact with this technology and make the company successful. So I think picking a leader and choosing how to work together actually is a significant portion of becoming successful in this, because you have to have a clear story, and you have to have the right group. And we often see privacy, you know, taking a leading role, but it’s not always the case, because this is quite technical work.
Jodi Daniels 6:32
Yeah, I always get asked, you just talked about a leader, I’m always asked: who should be the leader? What do you think?
Andrew Clearwater 6:42
Yeah, well, it might come down to who has the time and the money, instead of backing into the perfect skill set. It could be the person with the bandwidth and the ability to push this and to get the attention and tooling that they need. But I think, depending on your company, you should really consider, based on your goal: is there going to be a lot of technical knowledge that’s necessary to be successful, or are you trying to navigate standards and possibly the law, in which case a legal background tends to be the one that makes you successful? That’s a good way of looking at it.
Jodi Daniels 7:20
Thanks for sharing.
Andrew Clearwater 7:22
Sure.
Justin Daniels 7:23
So you’re an ISO 42001 lead auditor. How do you see the role of formal standards shaping the future of AI governance in practice, not just what we put on paper?
Andrew Clearwater 7:36
Yeah, we’re in an interesting place with the law right now, because, yes, there are AI-specific laws and there are privacy laws that have implications for AI, but we’re living in this space where the technology is going a lot faster than the policy. And so this is where I think it makes a lot of sense for companies to kind of seek refuge in the principles and the standards that have relative stability compared to those legal items. So 42001 in particular I like because of its compatibility with other standards that companies may already be leveraging. If you’re in the security space, or you just review vendors, you frequently see 27001, and that is a management standard for the security program that 65,000 companies around the world have certified to. And what we see with 42001 is that it’s a great starting point that gets your program ready for the work that needs to be done. And so when we think about what practically happens when you focus on this standard, or standards in general, I think you focus on the things that matter, which is more about figuring out what you have for technology in place. It emphasizes the inventory, for sure: figure out your scope and context. And it brings you into the work of the risk assessment and the gap analysis and the policy making, all of which maybe the law doesn’t dictate in detail, right? I mean, some of them will, and some of them won’t, but this is practically the work that you should be doing that will make your program successful, regardless of the legal obligations that you’re going to work with. And for me, I like to help with mapping to the law after the fact, but I like to think of the standard as sort of the foundation, and maybe the front end for the business. There’s a certain amount of consistency the business can experience if you say: this is how we’ll do an inventory, this is how we’re going to assess risk. And if that stays relatively stable while you address the edge cases of how you account for certain requirements, the business doesn’t have to be retrained constantly.
Jodi Daniels 9:49
Justin, you work with lots of companies doing kind of similar things. What might you offer as well?
Justin Daniels 9:58
Kind of what we talked about when we were prepping for this show: I find too many companies have gone right past thinking about governance to looking at use cases. And even if they have an in-house department, they don’t know a lot about AI. They haven’t used it. And so you have that gap. And so when I talk to them, and I’d like to get your thoughts on this, Andrew, they might be using, you know, some internal use cases, some customer-facing use cases, and they haven’t really thought about, well, what does AI risk and management look like to our organization? They’re already moving toward use cases and then trying to jerry-rig or make some assumption about what those organizational risks are, but they really haven’t thought about it. They just went right to, we’ve got to do this yesterday, and they start implementing use cases and, yeah, you know, don’t even look at that step. What do you see, Andrew?
Andrew Clearwater 10:59
I do see that, and it’s the benefit of the standard. When you read the standard, 42001, it starts with the clauses and it ends at the appendix. The appendix is where the controls are. Everyone reads it backwards. If you’re in IT, or you’re used to implementing things, you like to know the controls, and so they jump to that. But that’s your point: it’s like getting to the use cases before you know the context of your organization. The clauses are meant to make you zoom out and say, let’s look at the scope, let’s look at the context, let’s figure out what it is that we’re doing here. And so I do find a lot of value in making sure that you understand that before you get going. And there are ways of even standardizing the way that you look at risk. MIT, for example, has a nice sort of inventory, which they’ve broken down into categories that can help you really think through that if you’re not familiar with it. And another thing to think about, as you pointed out with people thinking about the risk to their organization: something that happens with AI frequently is you have to look at the risk to the organization and the impact to others outside of it. That’s what brings you into ethics; that’s what brings you into some other things. And 42001 in particular does risk and impact, which kind of expands what you look at beyond your company. And it’s easy to skip over if you, you know, get right to the controls.
Justin Daniels 12:28
So is it fair to say, if you compare ISO 42001 with, say, the NIST AI RMF, it sounds like you favor ISO 42001? Any reasons or things that you like about it over NIST?
Andrew Clearwater 12:44
Yeah, I mean, there’s a reason to choose either, depending on your context. NIST in particular, if you are already using it in other contexts. Let’s say you’re in the security space and you’re using it to measure and manage things. Great. You know, maybe there’s a privacy program in place that’s got a maturity measurement that’s associated with NIST. That’s a reason to stick with NIST and continue to use the compatible management programs, because the name of the game with AI is working together, where I’ve seen it. And also, NIST came out before ISO, so there’s sort of a getting-out-ahead advantage that they have. But where I see ISO having an advantage is that you can certify to it. And since I work with a lot of companies that are really trying to show each other, you know, I’ve been reviewed, I have a certain level of competence in this space, that can go a long way. Now, we’re still at the very beginnings of companies taking advantage of that certification ability under 42001, so it remains to be seen whether it could grow like it did in the security space or something like that. But I do like the ability to have that.
Jodi Daniels 14:04
One of the things I find so fascinating, especially around AI governance, is that it blends privacy and security and trust. We’re seeing a lot more companies have trust in their titles, and you served as a trust officer. Can you share how that experience shaped your approach when you’re advising companies on AI governance? And for someone listening, maybe they don’t have a trust officer quite yet, but they can still probably infuse the same concept into what they’re trying to do.
Andrew Clearwater 14:37
Yeah, yeah. That was definitely a unique role. I do see it in other technology companies, but not a lot of them. And when I was interviewed about that role, I often said, it doesn’t have to be a role, and people are like, what? But it doesn’t, because the idea that trust is a role really comes down to it being a point of coordination. What you do want to look for is a way to have collective goals across at least a few parts of your organization. And for me, that was looking at privacy, security, and ethics, and making sure that when we did objective setting for the year, we were trying to bring those together in a way that said: hey, I can leverage your controls; hey, that kind of lines up with why I was doing this; do you want to go in together to make that more successful? And, you know, from an ISO point of view, I was even looking at creating an integrated management system, which says, let’s tie together these topics in a way that reduces duplication and improves overall performance. So whether you’re following a standard or not, or whether you can put someone in charge of the coordination, I do favor this move toward these functions doing some goal setting together. You do get benefits from alignment. You can share controls. You can think about who’s got budget for what; often the security team’s got a bit more budget. So if you’re in one of these different roles, you can think about, hey, how could I leverage something that they might be working on? It might be a 5% change to what they’re doing, but for you, it’s everything. Another benefit I see in coordinating, whether it has a role at the top or not, is you can increase your influence. Sometimes these functions are not, by themselves, that influential, because it’s hard to see how they’re increasing revenue or having an influence on the organization in a way that allows them to move the direction of things. And I think when you put together your goals in a way that says, hey, this is how we’re going to increase the speed with which people contract with us, because we’ve got x, y, and z in terms of transparency now, and these certifications are going to be useful for this and that, people will react to that in a pretty positive way.
Jodi Daniels 17:02
I really like what you suggested about the budget allocation: a small portion for one group could mean a significant impact for what could help you. I see that at a lot of companies, and I’m really glad you highlighted it. That’s supposed to be your turn.
Justin Daniels 17:23
Was it my chat? No one can compare to your chat in this.
Jodi Daniels 17:27
That was probably not a compliment. Off you go.
Justin Daniels 17:31
Our audience can decide. So, Andrew, as you and I talked about a little bit earlier, for the companies who are saying, hey, before I start getting into the minutiae of a use case, I need to think about my AI governance journey: where should they focus first? And how do you recommend they build internal alignment for that journey?
Andrew Clearwater 17:56
Yeah, yeah. There are a couple of things that are kind of competing for first place there, but I suppose you need to agree on who’s involved. So forming that governance team in a way that meets your needs from a diverse-skills point of view is important, but I do caution people not to go too big too fast. You know, it is great to imagine everyone that could be helpful, but it’s very hard to go quickly when there are a lot of opinions. So you want to look at legal, compliance, IT, data science, cybersecurity, maybe some key business units, but just don’t go full out into the business looking for every little thing. You always have the ability to bring someone in temporarily, or have a rotating member; there’s lots of flexibility. Nobody told you how to do this, so keep that in mind. And then, ideally, a group means a leader. You can’t just be a group. And so hopefully somebody wants to be that leader and has expressed a vision for what it is that you want to come together on. Some of the initial tasks should have clarity of ownership. Don’t go full RACI on day one; you’ll never get anything done. But you can get there later. You can say, all right, this is what all of our roles are and how we’re going to do it. Choosing a framework kind of helps the group: what laws do we care about, what standards do we care about, all of that. But once you’ve solidified a bit of the choices around the governance team, getting a handle on current usage, I think, is another one of the big tasks. I know you guys have a big role in helping people with inventories. That’s key. There’s already AI in your organization if you’re setting this up, for sure, and as you’re setting it up, even more will come in the door. And you know, you think about the concept of shadow IT; well, there’s shadow AI, right? People are using it whether you put it through a program or not, so get a handle on that. And, you know, work on policy development. But I do see people kind of get frozen on this. It’s not like you’re the first one to ever think of these principles, right? It’s good to have a good policy, but it’s not good to sit for months and months to get the perfect policy. I come from a background of building products, so you have that concept of an MVP, a minimum viable product. Create something that works today. Make it better in three months. Make it better in six months. It’s just this sort of iterative process, which I feel like lawyers are a little less comfortable with. You’re like, why would we create something that’s not good enough? Well, it’s good enough for now; let’s make it incrementally better and use what we learn. Once we have a bit of an inventory, maybe we learn, hey, we’re higher risk than we thought, or we’re not. Take that into account. And then, often, I think once you have that, you get into the risk and impact assessments and other mechanisms of automatic discovery of new issues in technology, which the security team is quite good at helping you with. And then, long term, you’ve got to continue to monitor. Are you going to be the type of company that’s going to technically discover these things? Are you going to look to training and administrative controls to discover this? Is it going to be a combination of both?
Jodi Daniels 21:27
I think that all makes a lot of sense. I loved how you talked about the shadow AI situation. I recently had a conversation with a company, and their question was, well, how do you do that? It’s exactly what you just said: it’s around the data inventory. So for companies who are building, and I think that’s why there’s so much intersection between AI and privacy, hopefully privacy teams are doing some type of data inventory on an ongoing basis. That’s the right time to start being able to flag what might not have come up along the way for a new project, or something that might have come in from a privacy impact assessment or security risk assessment kind of process. So those privacy and security roles should be great starting places for adding in the different AI uses and tools that are happening.
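To make the inventory idea concrete, here is a minimal sketch of what one record in an AI inventory might capture, in Python. The field names, example values, and risk labels are illustrative assumptions for this write-up, not a schema prescribed by ISO 42001, the NIST AI RMF, or the speakers.

```python
from dataclasses import dataclass, field

# A minimal, illustrative AI inventory record, along the lines discussed
# above. Fields and values are assumptions for illustration only.

@dataclass
class AIInventoryEntry:
    system_name: str                  # e.g., a vendor's embedded AI assistant
    owner: str                        # business unit accountable for the use case
    vendor: str | None                # None for internally built systems
    use_case: str                     # why the system is in use
    data_categories: list[str] = field(default_factory=list)  # what it collects
    reviewed: bool = False            # has privacy/security/third-party risk seen it?
    risk_level: str = "unassessed"    # e.g., "low" / "medium" / "high"

# Example: a vendor feature that quietly added AI, the "shadow AI" case.
# "DocTool AI Assistant" is a hypothetical product name.
entry = AIInventoryEntry(
    system_name="DocTool AI Assistant",
    owner="Marketing",
    vendor="DocTool Inc.",
    use_case="Summarizing customer-facing documents",
    data_categories=["customer names", "contract terms"],
)

# Anything not yet reviewed feeds the risk and impact assessment queue.
if not entry.reviewed:
    print(f"Needs review: {entry.system_name} ({entry.owner}), risk={entry.risk_level}")
```

The point of even a lightweight structure like this is that new vendor AI features or employee-adopted tools get flagged and routed into the existing assessment process rather than discovered by accident.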
Justin Daniels 22:13
It’s interesting you say that, Jodi, because the area I see it the most is when employees download onto their personal device the free version of ChatGPT or Anthropic, pick one. But to me, the real area where companies aren’t thinking about it is their existing vendors. Like, go use Adobe right now: it now has an AI assistant, which is pretty good. But a lot of companies aren’t thinking about the shadow AI that’s going on because the existing vendors they have are now adding AI features that nobody is really thinking about. Oh, employees now have access to that? How does that work? What will it do? And, to your point, what are we uploading? That, to me, is the area where I see it the most.
Jodi Daniels 22:53
I agree, and I feel like that’s where, I mentioned privacy and security, but one of them should also be a third-party risk program. If I review a vendor today and never review it ever again, even if we set AI aside, they all have new product features that might have collected new data. Or maybe there were three products, and originally we bought product one, but now we’ve bought products two and three; somehow that should come back up. So I feel like a good program that’s going to capture those changes might come through privacy, might come through security, or it should come through third-party risk.
Andrew Clearwater 23:32
Do you think there’s a chance that this becomes like data transfers, where there’s an expectation of alerting the company, the customer, that, hey, we’re going to be making this change in the next 30 days, and you can do what you want with that information? Or do you think we’re going to be in a position of constant discovery, right, like having to kind of re-inventory everything?
Jodi Daniels 24:01
I personally think it’s a little bit of both, and I see that today. I see some of the companies that believe in transparency, that believe in doing the right things. I just got one yesterday: here’s what we’re rolling out, here’s the new AI features, here’s what’s on by default, here’s what you can control. Very clear and easy for a company to make decisions on. Contrast that with when I go and evaluate some other AI tool and can barely figure out what they’re doing from a privacy and security point of view, right?
Andrew Clearwater 24:34
Yeah, that’s my view.
Jodi Daniels 24:38
Alright, Andrew, we ask everyone: with all of your privacy and security knowledge, when you’re hanging out with non-privacy people, what is the personal privacy or security tip you would offer?
Andrew Clearwater 24:50
Yeah, I don’t know if you’ve had this one before, but I really like using email aliases. So when you’re signing up for things, don’t use your personal email, but use something that routes to your personal email. You know, if you use Apple iCloud, they have that as a feature. If you’re using Gmail, you can add the plus sign and a word after your username. There are several ways to do it, depending on the technology, but I like it because it puts you in the driver’s seat of being able to cut off these connections, regardless of whether the unsubscribe function or any of that is going to work. And it also gives you visibility into who’s selling your data. You know, if you start getting email to an alias that you’ve only given to one company from another company, you start to have a sense of which ones may or may not be honoring the terms you understood they were going to be using your data under. So it’s not, you know, a huge change in risk, but it’s a sense of control that you get. And yeah, I think there’s enough benefit to try it.
Justin Daniels 25:58
Great idea. So Andrew, if our listeners want to do that, using Gmail as an example, is that something you go into the settings of your Gmail account and you can toggle it on. Is that how that works?
Andrew Clearwater 26:10
I don’t even think you have to do that. I think as you’re putting in your address, you just finish what you normally would put before the at sign, and then you put the plus sign, and then you put a word. So, you know, if you’re signing up for something, sometimes you might use the name of that company, or whatever it is that you want to use as your methodology there.
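For anyone who wants to see the plus-addressing pattern spelled out, here is a minimal sketch. Gmail delivers mail sent to user+tag@gmail.com to user@gmail.com; the helper functions and example addresses below are our own illustration, not part of any Gmail API.

```python
# A small sketch of the Gmail "plus addressing" pattern described above:
# mail to user+tag@gmail.com still lands in user@gmail.com's inbox, and the
# tag records which signup the address was given to. Helper names and the
# example addresses are illustrative assumptions.

def make_alias(address: str, tag: str) -> str:
    """Build a plus-addressed alias, e.g. jane@gmail.com -> jane+acme@gmail.com."""
    local, _, domain = address.partition("@")
    return f"{local}+{tag}@{domain}"

def alias_tag(address: str) -> str | None:
    """Recover the tag from an incoming address, or None if it has no tag."""
    local = address.partition("@")[0]
    _, plus, tag = local.partition("+")
    return tag if plus else None

print(make_alias("jane.doe@gmail.com", "acmestore"))  # jane.doe+acmestore@gmail.com
# If unrelated mail later arrives at this alias, the tag reveals which
# signup shared or sold the address.
print(alias_tag("jane.doe+acmestore@gmail.com"))      # acmestore
```

One caveat worth noting: the plus convention is easy for senders to strip programmatically, so it is a visibility tool more than a hard control; provider-level alias features (like the Apple iCloud one mentioned above) are harder to reverse.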
Justin Daniels 26:30
Oh, so you’re saying just putting in the plus sign changes it to an alias.
Andrew Clearwater 26:34
Yeah, okay, got it. You’re giving me a look?
Justin Daniels 26:42
No, I feel like I’m being given a look of admonishment.
Jodi Daniels 26:47
We are not Gmail users in our house, if anyone is wondering.
Andrew Clearwater 26:50
But actually, it’s not just, you know, built into these systems. There are third parties that will do it as well; for example, Mozilla has one. But then, you know, think about the third party and whether you trust them or not, and whether that adds an additional layer into the mix.
Justin Daniels 27:08
Got it. Well, thank you for that. I appreciate it. So, Andrew, when you’re not working on all of your privacy expertise and, yeah, auditing, what do you like to do for fun?
Andrew Clearwater 27:22
Well, you know, something I picked up in the last couple of years is that I started to ride my bike again. And each year I’ve set some, you know, challenges for myself. I live in the Northeast, so I have the advantage of being able to get out to back roads, away from the cars. So I’ve been doing that more, where it feels safer and it can be a bit more fun. And a couple weekends ago, I went to a ride called the Ranger, and, you know, stayed on the fairgrounds, sort of camped out for the weekend, and went out for a loop with a bunch of people. It’s the smallest town you can imagine. It was, you know, just a library, a post office, a fire station, and yet they had 90 volunteers out there with T-shirts, pointing the way and blocking off roads and everything. So it’s a cool community to be a part of, and yeah, something that I plan to keep doing.
Jodi Daniels 28:16
You used to bike, Mr. Justin.
Justin Daniels 28:22
Yes, I still have my mountain bike. I just haven’t ridden it very often, because between pickleball and squash, I don’t have enough time to have that hobby too.
Jodi Daniels 28:32
Work really gets in the way of your racket activities.
Justin Daniels 28:37
There’s something freeing about being on a bike, the wind hitting your face, riding around.
Jodi Daniels 28:41
It feels good. Yeah, Andrew, we are so grateful that you came to join us today. If people would like to connect, where should they go?
Andrew Clearwater 28:50
I use LinkedIn pretty frequently, so head on over there. Or if you’re just looking for my email, the Dentons site has that.
Jodi Daniels 28:57
Amazing. Well, Andrew, thank you again for joining us. We really appreciate it. Thank you so much.
Outro 29:06
Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.
Privacy doesn’t have to be complicated.
As privacy experts passionate about trust, we help you define your goals and achieve them. We consider every factor of privacy that impacts your business so you can focus on what you do best.