Jodi and Justin’s Top 5 Must-Haves in Your Company’s AI Policy

Jodi Daniels is the Founder and CEO of Red Clover Advisors, a boutique data privacy consultancy and one of the few certified Women’s Business Enterprises focused solely on privacy. Since its launch, Red Clover Advisors has helped hundreds of companies create privacy programs, achieve GDPR, CCPA, and US privacy law compliance, and establish a secure online data strategy their customers can count on.

Jodi is a Certified Information Privacy Professional (CIPP/US) with over 20 years of experience helping a range of businesses — from solopreneurs to multinational companies — in privacy, marketing, strategy, and finance roles. She has worked with numerous companies throughout her corporate career, including Deloitte, The Home Depot, Cox Enterprises, Bank of America, and many more. Jodi is also a national keynote speaker, a member of the Forbes Business Council, and co-host of the She Said Privacy/He Said Security podcast.

Justin Daniels is a cybersecurity subject matter expert and business attorney who helps his clients implement strategies to better manage and recover from data breaches. As an outsourced general counsel at Baker Donelson, Justin advises executives on how to successfully navigate cyber business and legal concerns related to operations, M&A, incident response, and more.

In 2017, Justin founded and led the inaugural Atlanta Cyber Week, where multiple organizations held events that attracted more than 1,000 attendees. Justin is also a TEDx and keynote speaker and co-host of the She Said Privacy/He Said Security podcast with his wife, Jodi.

Here’s a glimpse of what you’ll learn:

  • Why AI is becoming more prevalent and impactful in our lives and businesses
  • How AI can affect human rights, privacy, fairness, and accountability
  • The top AI policies companies should have in place
  • How to ensure the use of AI is accurate, unbiased, and ethical
  • The benefits and challenges of implementing AI policies
  • How Jodi and Justin can help you create and deploy AI policies for your company

In this episode…

Artificial intelligence is transforming our world in many ways, raising ethical questions about its impact on human rights, privacy, fairness, and accountability. How can we ensure that AI respects our values and principles and does not harm or discriminate against anyone?

AI can be a remarkable tool that can enhance our lives in various domains. However, it also requires responsible and ethical use. Companies that create and deploy AI systems must adopt policies that guarantee that these systems are reliable, transparent, fair, and secure.

In this episode of the She Said Privacy/He Said Security podcast, join Jodi and Justin Daniels as they discuss the key aspects of AI systems. They reveal the essential AI policies companies need to implement to address data collection and use, transparency and accountability, and fairness and bias.

Resources Mentioned in this episode

Sponsor for this episode…

This episode is brought to you by Red Clover Advisors.

Red Clover Advisors uses data privacy to transform the way companies do business and to create a future where there is greater trust between companies and consumers.

Founded by Jodi Daniels, Red Clover Advisors helps companies comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, ecommerce, professional services, and digital media.

To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit www.redcloveradvisors.com.

Episode Transcript

Intro 0:01

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels 0:22

Hi, Jodi Daniels here. I’m the Founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and certified information privacy professional, providing practical privacy advice to overwhelmed companies.

Justin Daniels 0:37

Hello, Justin Daniels here. I am a corporate M&A and cybersecurity guru at my law firm, Baker Donelson. I am passionate about helping companies solve complex cyber and privacy challenges during the lifecycle of their business. I am the cyber quarterback, helping clients design and implement cyber plans as well as helping them manage and recover from data breaches.

Jodi Daniels 1:02

And this episode is brought to you by Red Clover Advisors. We help companies comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, ecommerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more, and to check out our new best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. So recently, we put out an episode, just you and I, Justin, on ChatGPT, and it was really, really popular. We’ve both been having a number of different conversations lately on AI and ChatGPT, and specifically on what type of AI policy companies need to put in place. So what we thought we would do today is come back and talk a little bit more specifically about what kind of AI policy companies need to have. So it is the Jodi and Justin show again, talking about all things AI, but specifically AI policies.

Justin Daniels 2:21

And for our listeners out there, I would be happy to talk to you about the development of my ChatGPT tool that will interact with my wife instead of me.

Jodi Daniels 2:30

Huh, I can’t wait. Well, darling, shall we get started?

Justin Daniels 2:36

Ah, yes, we should. So let’s talk about point number one for a good AI policy, and that is: what dataset are you using, and how have you received the legal right to use that dataset? The idea here, as you think through this, is, as you’ve heard Jodi say many times, garbage in, garbage out. So the first place to look is the dataset you provide, because the AI is going to train itself on it. And we know that we have a litany of sectoral and state privacy laws, among other things, that you need to think about when you are collecting this data, to make sure you aren’t violating them. One of the things I have been talking to clients about is that maybe the place to start is with datasets internal to your company, because those are the kinds of datasets that are either your own data or data you have a contractual agreement to use for internal business purposes. The wider the net you cast, the more questions come into play: Does the dataset you’re using for the artificial intelligence algorithm comply with relevant privacy laws? Could it be in breach of the GDPR, or HIPAA, or a litany of other laws? It’s not too dissimilar from a lot of the work that Red Clover does with its clients to ensure privacy compliance.

Jodi Daniels 4:08

Oh, thank you. That was a fun little plug. Well, the other item we think you need to include in an AI policy is what controls you will have in place to ensure the data is accurate, because these tools are not perfect, and the likelihood of getting something wrong is very high. So we need to be thinking about fairness, bias, and ethical considerations, and having specific guidelines that will help address that bias. What type of review needs to take place? Is there any kind of AI compliance tool or review? I know you’ve mentioned clients who are looking into this, and some software tools that are being created and considered. So it’s all about making sure that, for the data getting into the model and all the data getting used, the output will address the issues of discrimination, fairness, and accuracy. We want to think about that and put in the policy what that review will look like.

Justin Daniels 5:15

So to emphasize Jodi’s point, think about a use case where you say, you know what, we’re a large organization, and we hire a lot of people every year. Wouldn’t it be great to use AI to help us sift through all kinds of resumes for that first pass, and then whoever the AI says matches really well with the job requirements gets passed along to someone who might actually do the interview? Well, that should automatically get your attention because of what Jodi said. Meaning, if you’re going to use an AI tool to ingest all these resumes and try to match up who sounds the best on paper with your job responsibilities, you could get into discrimination issues under federal law and under some state laws. And so you really want to think through, on the front end, what strategies or mitigations you are putting in place to make sure that using AI to expedite how you screen and hire people doesn’t end up, on the back end, giving you a lovely discrimination lawsuit.
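One widely cited heuristic for the screening risk described here is the EEOC’s “four-fifths” rule, which compares selection rates across demographic groups. The sketch below is a minimal, hypothetical illustration of how a company might monitor AI screening outcomes against that rule; it assumes you log a self-reported group label and a pass/fail outcome per applicant, and it is not legal advice or a compliance determination.

```python
# Minimal, hypothetical sketch: flag potential adverse impact in AI resume
# screening using the EEOC "four-fifths" heuristic. Assumes you log, per
# applicant, a group label and whether the AI passed them along.
from collections import Counter

def adverse_impact_check(outcomes, threshold=0.8):
    """outcomes: iterable of (group, passed) pairs. Returns the groups
    whose selection rate is below `threshold` of the best group's rate."""
    totals, passed = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            passed[group] += 1
    rates = {g: passed[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: round(r, 2) for g, r in rates.items() if r < threshold * best}

# Example: group B's pass rate (1/3) is under 80% of group A's (2/3),
# so it gets flagged for human and legal review.
flagged = adverse_impact_check(
    [("A", True), ("A", True), ("A", False),
     ("B", True), ("B", False), ("B", False)]
)
print(flagged)  # {'B': 0.33}
```

A real program would also account for sample sizes and statistical significance; the point is simply that a policy’s fairness controls can be made measurable rather than aspirational.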

Jodi Daniels 6:19

Well, thank you. What’s number three, Mr. Justin?

Justin Daniels 6:23

Will you have defined business use cases? What Jodi and I mean by that is, for example, I might use ChatGPT or some kind of AI to do some research on a topic. That is pretty well defined. However, what if you’re going to use ChatGPT, as we saw earlier this month, to write a screenplay so that you don’t have to hire screenwriters in Hollywood? That’s getting a lot of press, because they’re saying, hey, how do you know that’s not violating some copyright, or whatnot, that we talked about before. That type of use case has really interesting implications. So how would you go about writing that type of content, which could displace other people’s jobs? It could bring you into the firing line of someone saying, hey, you just wrote a screenplay that sounds a lot like what I did, so we’re going to send you a cease and desist letter. So you really have to think through, on the front end, what some of the ramifications are, depending upon how broad or how narrow your use case might be.

Jodi Daniels 7:30

I would add to that: think about the use cases in the organization. Are there some that people can use on their own, and others they might be able to use only if some type of review takes place? AI policies are really intended to be a little more self-serve, but at the same time, you might have some use cases that are really easy for people to use, because maybe it’s simple content, and others that are a little more complex, and you want the approval process built into what that policy looks like. Once you have those business use cases, the other part you want to think about is that not all data is the same, and those use cases are not all the same either. Think about the kind of data you’re willing to put into the tool; some you might be more comfortable with. For example, is it a general question you’re asking, like, here are 10 themes I’m thinking about, help me create a presentation on them? That’s very different from putting in client information, or customer information, or employee information. We have some companies that are mandating that no client, company, or customer information can go in. Now, think about whether you have adult or minor information, and whether you’re okay with certain information going in. Maybe you’re okay with names, but you’re not okay with dates of birth. You really want to think through the types of data elements you have and set the parameters you’re comfortable with, versus other situations that would require an additional review. The more no-gos you can identify up front, and the clearer you can make it, the more successful your policy will be.
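As a concrete illustration of the kind of data-element rules described here, below is a minimal sketch of a pre-submission gate that blocks prompts containing restricted elements before they reach an external AI tool. The client names and the date pattern are hypothetical placeholders; a real deployment would rely on a vetted PII or DLP detection service rather than hand-rolled regexes.

```python
# Minimal sketch of a prompt gate enforcing a hypothetical AI policy:
# general questions are allowed, but client names and dates of birth are
# blocked. Names and patterns below are illustrative placeholders only.
import re

RESTRICTED_NAMES = {"Acme Corp", "Jane Example"}  # hypothetical client list
DOB_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")  # naive date match

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Blocks prompts that appear to contain
    restricted data elements; anything else passes through."""
    reasons = []
    if DOB_PATTERN.search(prompt):
        reasons.append("possible date of birth")
    lowered = prompt.lower()
    for name in RESTRICTED_NAMES:
        if name.lower() in lowered:
            reasons.append(f"restricted client reference: {name}")
    return (not reasons, reasons)

print(screen_prompt("Here are 10 themes; help me outline a presentation"))
# (True, [])  -> a general request passes through
print(screen_prompt("Summarize Jane Example's file, DOB 4/12/1990"))
# (False, [...]) -> blocked; route to the review process instead
```

Blocked prompts would route to whatever approval process the policy defines, which keeps the “self-serve” cases fast while the sensitive ones get a human look.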

Justin Daniels 9:20

I think a great example to reinforce Jodi’s point is the news report that Samsung had an issue with proprietary code. What the employee did was say, hey, I want to debug this code, and ChatGPT or another AI can help me do this. So what did they do? They uploaded the proprietary code to facilitate debugging it. Well, the problem is, when they used that particular AI, its terms of use said, hey, when you upload this stuff, we’re allowed to use it and learn from it. And that was something that was intended to be proprietary and not disclosed. So that would be an example of a use case where somebody was thinking AI would be helpful to them but didn’t realize the broader implication: once you upload proprietary code, it’s not so proprietary.

Jodi Daniels 10:07

And Justin, I think that brings up an interesting point about approval as well: what type of tool is okay to use and what is not okay to use. The policy could also include that this type of AI tool is approved, while that type of AI tool needs additional review from our vendor team, or however vendors and software are approved in your organization.

Justin Daniels 10:29

I think you make a good point there, because now, if I’m representing, say, a large company whose vendors may start to use AI, I’m going to start thinking about putting requirements both in my contracts and in my due diligence list to say, hey, is your work product using AI? And if so, what is that AI? Have you gotten the necessary compliance? So I think, to your point, you’re going to start to see this make its way into contracts, because you’re going to want a level of attribution. You’re going to want to know that the report or intellectual property you got meets all the requisite legal requirements, so that you own what you expect to own. So one of the things that comes up is: in your policy, what types of things can you do that will ultimately require some level of legal review? As I said earlier, the resume example, where discrimination is the risk, might go through legal. Or take an example where, hey, we might want to create content that we’re putting into a deliverable for our customer; maybe that content has to go through legal review to make sure you’re not violating intellectual property rights. So it is very important to be thinking through how legal is going to be involved, because AI brings up so many novel issues across a variety of legal disciplines, including privacy, security, employment law, and intellectual property. So it’s a process.

Jodi Daniels 12:06

Indeed. Justin, what is our final tip for today on what should go in an AI policy?

Justin Daniels 12:13

What is the final tip for today?

Jodi Daniels 12:15

I know, our final tip for today. Yes, sir, our final tip for today. Remember, when we prepared, we came up with so many tips that we had to narrow them down? You were given the final one.

Justin Daniels 12:27

Yes. You were quite vocal about how that process was going to happen. Yes, you always are.

Jodi Daniels 12:31

99.

Justin Daniels 12:32

So the question becomes, what kind of content needs a legal review?

Jodi Daniels 12:41

Can you share a little bit more about that? You’re always so good at storytelling.

Justin Daniels 12:45

Well, it’s funny you say that, because I’m preparing for a presentation that’s TED-style, and it’s all about the storytelling. And I feel like the best way to get people to get their heads around AI is to tell stories about how this stuff can happen. So, content that needs a legal review: I would think anything that is going to be really customer facing. So if you’re creating reports or other things where you are telling the customer, hey, once we give this to you, you’re going to own it and all the intellectual property related to it, that should probably pass through legal review. If you’re contemplating any type of software development, I’m now seeing clients who say, hey, ChatGPT is a great software developer, we can do that. There again, that probably has to pass through legal review, because if it’s based on open source or other things, you may not have the rights to provide that to a customer in the way they wish to commercialize it according to a software development agreement.

Jodi Daniels 13:47

And I think any policy, an AI policy included, is only as good as the training the person gets once you’ve created your beautiful policy and off it goes. You will want to make sure that it is well communicated and that people really understand it, not just a checkbox that says, yes, I technically read it, but I wanted to sign off so I can move on to the next email I have. You really have to think about it: will a video explanation from an executive in your company help with this? Is it bullet points in an email? Is it all of those things? Think about the culture in your company and how a policy is best adopted, because here you have both privacy and security risks, for personal information and also for company confidential information. AI is here to stay. It’s not going anywhere, and telling people they can’t use it is probably not the best option. Instead, we need to work with the software and figure out how to make the best use of it for your company. That was well said. Oh, that was fun; that doesn’t happen very often. Well, we hope that you like these tips. I’m sure there are other items that you might have. We could talk all day long about all the things we’d like in AI policies, but these are some short snippets that we think definitely should make it in, and we would love to hear from you. So please follow us on LinkedIn, visit redcloveradvisors.com for more information, and don’t forget to subscribe to our podcast. You can make sure you get the weekly email alerts, or visit YouTube or your favorite podcasting platform. We are likely there.

Outro 15:18

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.