ChatGPT and AI: Crucial Considerations for Businesses

Jodi Daniels is the Founder and CEO of Red Clover Advisors, a boutique data privacy consultancy and one of the few certified Women’s Business Enterprises focused solely on privacy. Since its launch, Red Clover Advisors has helped hundreds of companies create privacy programs, achieve GDPR, CCPA, and US privacy law compliance, and establish a secure online data strategy that their customers can count on.

Jodi is a Certified Information Privacy Professional (CIPP/US) with over 20 years of experience helping businesses — from solopreneurs to multinational companies — in privacy, marketing, strategy, and finance roles. She has worked with numerous companies throughout her corporate career, including Deloitte, The Home Depot, Cox Enterprises, Bank of America, and many more. Jodi is also a national keynote speaker, a member of the Forbes Business Council, and the co-host of the She Said Privacy/He Said Security podcast.

Justin Daniels is a cybersecurity subject matter expert and business attorney who helps his clients implement strategies to better manage and recover from data breaches. As outsourced general counsel at Baker Donelson, Justin advises executives on how to successfully navigate cyber business and legal concerns related to operations, M&A, incident response, and more.

In 2017, Justin founded and led the inaugural Atlanta Cyber Week, where multiple organizations held events that attracted more than 1,000 attendees. Justin is also a TEDx and keynote speaker and the co-host of the She Said Privacy/He Said Security podcast with his wife, Jodi.

Here’s a glimpse of what you’ll learn:

  • What can companies learn from Samsung’s ChatGPT data leak?
  • How businesses can protect personal information when using ChatGPT
  • The importance of ChatGPT regulations in the absence of federal privacy law
  • Justin Daniels’ tips for analyzing and mitigating source data bias
  • Jodi Daniels addresses the current and future state of AI ethics

In this episode…

ChatGPT is an international sensation, with businesses utilizing it for content creation, debugging, translation, and writing code. But this AI tool is still unregulated, raising privacy and security concerns regarding data input. Since ChatGPT is easily accessible to the public, what should you consider before implementing it, and how can you mitigate the associated risks?

When adopting ChatGPT for your company, Certified Privacy Professional Jodi Daniels says you should perform due diligence on potential use cases before evaluating the tool. For instance, a marketing department may want to acquire consumer insights involving personal information. Developing a policy to assess data types and functions, educating employees about risks, and regulating information sharing helps mitigate bias and privacy infringements.

On this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels share their thoughts on ChatGPT’s privacy and security implications. Together, they address the current and future state of AI ethics, the importance of ChatGPT regulations in the absence of federal privacy law, and how businesses can protect sensitive data when employing ChatGPT.

Resources Mentioned in this episode

Sponsor for this episode…

This episode is brought to you by Red Clover Advisors.

Red Clover Advisors uses data privacy to transform the way companies do business and to create a future where there is greater trust between companies and consumers.

Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, ecommerce, professional services, and digital media.

To learn more, and to check out their Wall Street Journal bestselling book, Data Reimagined: Building Trust One Byte at a Time, visit www.redcloveradvisors.com.

Episode Transcript

Intro  0:01  

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels  0:22  

Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and Certified Information Privacy Professional, providing practical privacy advice to overwhelmed companies.

Justin Daniels  0:36  

Hello, Justin Daniels here. I do not work for Red Clover Advisors. I am an equity partner at the law firm Baker Donelson, and I am passionate about helping companies solve complex cyber and privacy challenges during the lifecycle of their business. I am the cyber quarterback, helping clients design and implement cyber plans as well as manage and recover from data breaches.

Jodi Daniels  1:01  

And this episode is brought to you by, cue the terrible drum cymbal, Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, ecommerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more, and to check out our new bestselling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. So today, we don’t have a fancy guest.

Justin Daniels  1:46  

But we have your fancy Red Clover shirt.

Jodi Daniels  1:48  

Wear it, I can. It’s true, we have fancy Red Clover shirts.

Justin Daniels  1:51  

Yes, where can listeners go and get their Red Clover swag that might have a privacy message?

Jodi Daniels  1:57  

Ah, they can email me at info@RedCloveradvisors.com, and we’ll hook you up with a shirt if you really want a Red Clover shirt. Today, we’re going back to our roots and how our podcast got started, with the Jodi and Justin show. We’ve had a lot of conversations, and we’ve been working with so many different companies on AI, many times on ChatGPT, and we thought our listeners are probably interested in the same thing. I just came back from the privacy conference, and the sessions on AI were standing room only, so we thought we would have a conversation today on AI.

Justin Daniels  2:38  

It’s not standing room only here.

Jodi Daniels  2:41  

Well, it could be. Many people can listen, because it’s endless; there’s no finite audience for our podcast, and they can listen to it while they’re walking their dog. Okay. So Justin, are you going to kick us off?

Justin Daniels  2:59  

I certainly can. And I think we’ll start in the area that’s near and dear to Jodi’s heart, which is talking a little bit about how companies should protect their confidential and personal information in their AI requests, and how you do that. I can report that Samsung has already been in the news for source code that employees put into ChatGPT to debug. And obviously that is a bit of a problem, because ChatGPT wants to learn from all the information you post to it. So Jodi, what are your thoughts?

Jodi Daniels  3:38  

Well, I’m actually going to pass it back to you, because you shared such an interesting story; you can’t just leave us dangling. So tell us a little bit more about the Samsung situation and what we can learn from it.

Justin Daniels  3:50  

In essence, what happened was, they had some code for a product they’re working on, and an employee put the code into ChatGPT to say, hey, ChatGPT, can you help me debug this code? The problem with that is, if you put your source code into ChatGPT, it’s going to turn around and use that code to help it learn. And if someone can get into your chat query box, which is not hard to do, they can see that, oh, this is proprietary code of a particular company. So that’s what happened there. And I thought I would bring it up because it brings to the forefront some of the issues companies need to be considering when they use ChatGPT. Because even if a company doesn’t have a policy around it, or goes the other way and completely bans employee access, who’s to say employees won’t do it on their own computers, on their own time, to help their own efficiency?
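
One practical takeaway from the Samsung story is to scrub obvious secrets and personal data out of prompts before they ever reach a hosted AI tool. Below is a minimal, hypothetical sketch in Python; the pattern list is an illustrative assumption, and a real deployment would rely on a vetted DLP or secret-scanning library rather than a hand-rolled one:

```python
import re

# Illustrative patterns only; a production scrubber would use a vetted
# DLP or secret-scanning library rather than a hand-rolled list.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything matching a known pattern with a placeholder
    before the text leaves the company's hands."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(scrub("Debug this call: jodi@example.com, key sk-abc123def456ghi789"))
# Debug this call: [REDACTED-EMAIL], key [REDACTED-API_KEY]
```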

Jodi Daniels  4:53  

So Justin, what are some of the things that companies could do now so that they are not the next company in the news?

Justin Daniels  5:03  

Well, I think if I’m a company, I first need to figure out if I want to use this at all, and if so, how I want to use it. My view is, I’d find some pretty low-level internal function that you could run a pilot on, meaning if something went wrong, it’s not that big a deal. Maybe you create your own phishing simulation for your own company, or write a phishing email, or something pretty straightforward, because I think companies need to learn and use this in the wild. And I would be real hesitant right now before I would take ChatGPT output and use it for a customer, because, as we can talk about, there are issues around the information ChatGPT ingested to get to your output. Did it infringe someone else’s intellectual property rights? Did it get to the output using information that was actually personal information that might be regulated under the California Consumer Privacy Act, which requires consent? A lot of companies that are wading into this, talking about all the efficiencies, aren’t thinking through some of these really interesting privacy and security issues. And now you’re seeing Microsoft incorporate it into Bing. So if I’m a company, I may need to worry that some of the data I might be getting from vendors, if they’re using ChatGPT, may not be correct, or could create an infringement problem for me. There’s a lot to unpack here.

Jodi Daniels  6:33  

Well, there is, and you did ask me a question, and I sent you like 400 questions back. So I’m going to actually answer your question now, which was what companies need to be thinking about from a privacy point of view. And you actually hinted at it: the very first piece is what kind of data, in which function, is in scope here, because not all data is created equal. We could have a marketing team that potentially doesn’t want to enter any personal information; maybe they’re just looking for content ideas, and that might be a great place to start. You pointed out the challenge there, and it’s a big one: the copyright and IP situation, and how the organization can use what it gets as the output. At the same time, that same marketing department might want to take its file and share it into an AI tool to better understand who its customer base is and what the patterns are. Well, now we start sharing personal information. And I know we’re going to talk about this, but I feel really strongly, and I think people are forgetting, that they need to treat any of these AI tools, whether it’s ChatGPT or any of the other ones out there, just like any other vendor. So just like you might go and evaluate a new payroll provider, or a new agency, or a new software tool, you want to do the same thing here with an AI tool, and put it through the entire vendor sequence that you would ordinarily do.

Justin Daniels  8:05  

And I think that means it’s going to impact your due diligence process, because now if I’m going to interview vendor A, and they’re going to be providing me a software-as-a-service product, for example, maybe it’s a privacy product, but they’re using AI, I may want to start asking: does your tool incorporate AI? Where’s the data set coming from? Where do you get the legal right to actually collect and use that data? So even if you’re not seeing the AI directly as part of your vendor management process, if companies are going to start to use it, I think you’re going to have to include questions to your vendor specifically about AI, to know if they are using it, and if so, where they are getting their data set. Because, Jodi, to your issue of the privacy concerns, it really goes to what data the AI is ingesting to come up with its outputs. You’re seeing the debate now where Elon Musk and several others wrote a letter saying we need to put a moratorium on AI, and when he became head of Twitter, he put the kibosh on a bunch of data collection from Twitter. So I think it brings up the issue, from a privacy professional’s standpoint: where are they collecting these data sets, and how are they getting the legal right to do it? Because, I betcha, Jodi, did they talk a little bit at the conference about what happened in Italy and GDPR this week?

Jodi Daniels  9:33  

They did. It was a topic of conversation. And when I was in the session covering all the things you want to know about Canada and why Canada matters, one of my favorite sessions, the Canadian Privacy Commissioner shared that they, too, were launching an investigation, and they did so just that same day. So it’s certainly something that companies need to be thinking about. And I like your idea of exploring test cases and really making sure that there’s also a policy. I’m starting to hear a lot of conversations of, what should we do? Should we just ban it? Should we block it? Or do we educate? And most privacy and security professionals, I think, agree that banning and blocking isn’t really the way to go. Employees are creative; they’ll just find other ways to do it. And it doesn’t really help train and educate, which is what we’re looking to do. Because whether it’s this current tool or a future tool and technology that’s coming down the road, what we want is to help employees understand the risks, and either learn how to evaluate them on their own or know who to raise the question to. I think a policy is going to be a really great place to say: here’s the kind of data that can be shared without approval, here’s the kind of data that needs this type of approval, these are the types of tools, and this is the process a tool has to go through from a vendor standpoint. There’s a variety of other pieces that should be involved, but I think a policy is going to be a really great place to start at the moment.
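
To make the “which data needs which approval” idea concrete, here is a small, hypothetical sketch of a rule-based gate in Python. The data classes and approval routes are assumptions for illustration, not a prescribed policy:

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"          # e.g., brainstorming content ideas
    INTERNAL = "internal"      # non-public business information
    PERSONAL = "personal"      # customer or employee data
    RESTRICTED = "restricted"  # source code, credentials, trade secrets

# Hypothetical approval routes per data class.
APPROVAL = {
    DataClass.PUBLIC: None,             # fine under the AI use policy
    DataClass.INTERNAL: "manager",      # needs a manager's sign-off
    DataClass.PERSONAL: "privacy team", # needs privacy review
    DataClass.RESTRICTED: "blocked",    # never goes to an external tool
}

def check_submission(data_class: DataClass) -> str:
    route = APPROVAL[data_class]
    if route == "blocked":
        return "Do not submit this data to an external AI tool."
    if route is None:
        return "OK to submit under the AI use policy."
    return f"Hold for sign-off from: {route}."

print(check_submission(DataClass.PERSONAL))
# Hold for sign-off from: privacy team.
```

The point is less the code than the shape: classify the data first, then route it to a named approver, with restricted data blocked outright.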

Justin Daniels  11:02  

I think that’s true, along with figuring out what you’re going to put in it and what your use case is going to be. And the other thing I think is important to mention is, I admit, I splurged the 20 bucks, so I got the commercial version. I’ve been using it a little bit, but I’ve been back-checking whatever answers I get; I’ll just do some research to see what it has to say. And in my assessment, I don’t think ChatGPT is ready for prime time, meaning relying on it to make a deliverable for a customer or someone else, because of the issues around the data sets having bias and the issues around potential IP infringement. I think you really need to be careful and start small. I know entrepreneurs out there will probably start bigger, but understand, I’ve been talking to enterprise-type customers with security and privacy teams, and if you’re using AI and going through their vendor management process, they’re going to be asking some very tough questions. But I kind of want to take us in a bit of a different direction, because this Italy thing brings up an interesting issue. For me, it’s this: the whole basis for why Italy took issue with ChatGPT was the General Data Protection Regulation. And as you know, I don’t see any US federal privacy law getting passed in the near future. I’m just wondering, Jodi, from your perspective, don’t you think some of the legislative inaction we’ve had in Congress vis-à-vis privacy and security is really going to come home to roost when we’re talking about artificial intelligence, because there’s really nothing out there to regulate it directly?

Jodi Daniels  12:42  

That is what makes being a consumer really scary: that we don’t have any of this regulation. I think we could honestly have an entire episode on just what we should be thinking about with the outputs of these different generative AI tools. The idea from Italy is that there has to be a legal basis to collect the information in the first place, and that’s one of the problems: you have an organization that just went and collected everything it could see, without necessarily the right to do so. The other piece is that if Jodi’s information was in there, what rights do I really have? How can I go and get that information, or correct it? If it’s spewing incorrect information, how can I actually correct it when it’s kind of already out there? Here in the US, we don’t have those types of laws, and even the state laws that we have are only putting some principles around AI; we’re not there yet. Which means it’s incumbent on companies to regulate for us, as humans, as individuals. And I think that means this conversation about policy, evaluation, and ethics: what type of ethics policies or considerations need to be in place, what type of data is being used, is it sensitive data, and what kind of decision making is actually taking place. And one of the questions, Justin, I wanted to ask you is about source data. If I have all this information, I’m putting it into a big pot, and I have a machine that’s trying to determine how I might use that data or make some evaluations for me, how can a company understand the answers? How can it try to reduce the bias that’s in there? And what’s important to consider with source data?

Justin Daniels  14:28  

So I think where we start to unpack your question is, I think you’re going to see AI evolutions where, let’s say, Jodi, I’ll use your company as an example: maybe you decide to use AI to ingest all of the Red Clover knowledge that goes into doing a privacy policy or a data inventory, and it starts to learn the different industries and whatnot. But at least there you are ingesting a smaller subset that is your information. Shouldn’t you have some control over it? And that might be a way companies look to deal with this, because remember, the wider the net, or vacuum, I’ll say, that you put out there to collect information, the more you bring in the issues of bias, because we all know there’s a lot of information on the internet that’s either misleading or just outright false. So the wider the net you cast, the more you’re bringing that into play. So one of the things I’m thinking about is, okay, with the policy, what kind of certifications are we putting around the data? What kind of requirements are we putting around the data that the AI is getting to learn from? And the second part of that is, in the due diligence process, how does the company explain how their AI actually works? ChatGPT is using this large language model; it’s a model, not the only model. And that’s where I think regulation comes into play, about having requirements or standards, much like for a car: how it works, and having certain standards around the seat belts and whatnot. I think we’re going to need the same thing around AI, or it’s likely to run amok. And I guess, Jodi, I want to throw a question back your way, which is: given what you’ve seen across different industries and the evolution of privacy laws, what is your level of comfort that companies are going to turn around all of a sudden and get it right when it comes to AI ethics?

Jodi Daniels  16:29  

Hey, we have a long way to go. If you asked me that question right this moment, I think most companies would not get it right. I do think there are some organizations who have ethics principles and committees in place and are really trying to tackle the opportunity that AI offers in a smart and methodical way. Everybody else who’s trying to play really needs to realize that the data being used in the model, and the decisions that will come out of the model, can have significant consequences: favorable if the right data is put into it, and really disastrous from a privacy perspective if there isn’t thought and testing. Also, if you put all this data in, one of the ways to determine what the bias is is to test it: put additional information in and see how it’s going to answer, time and time again.
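
Jodi’s “test it time and time again” suggestion can be sketched as a repeat-and-compare probe. The outline below is hypothetical; ask_model is a stand-in for whichever AI endpoint is being evaluated, and the example attribute is illustrative only:

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    # Stand-in for a call to whichever AI tool is under evaluation.
    raise NotImplementedError("wire this to the model being tested")

def bias_probe(template: str, variants: list[str], runs: int = 20) -> dict:
    """Swap one attribute at a time into the same prompt, repeat the query,
    and tally the answers; materially different answer distributions across
    variants are a signal of source-data bias worth investigating."""
    return {
        v: Counter(ask_model(template.format(v)) for _ in range(runs))
        for v in variants
    }

# Illustrative use: does changing only the neighborhood shift the answers?
# bias_probe("Assess the loan risk for an applicant from {}.",
#            ["Neighborhood A", "Neighborhood B"])
```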

Justin Daniels  17:26  

And to bolt the security issue onto the privacy issue: now think of it from a security standpoint. What if I’m the threat actor, and I can get into the source data the AI is using to make decisions, and I put crappy data in it such that the AI makes a bad decision that gets relied on? Let’s say you use artificial intelligence for flight avionics, or health care. I’ve already read about instances where AI is sitting alongside a doctor, and maybe it’s a hybrid approach where the AI is helping to figure out certain things, but combined with the experience and wisdom of a doctor, the collective is better. But remember, any new technology, AI or whatever your flavor of the month is, comes with a handmaiden, and that’s new cyber risk. The same issues you have with other technology and cyber, you’re going to have with AI. And I’m just concerned that, you know, we both feel that privacy and security continue to be an afterthought, and for AI, they really need to be core design features. Given the initial reaction with Italy and some of the other stuff, Jodi, are you seeing cyber and privacy being a core design feature of AI?

Jodi Daniels  18:34  

Again, I’m going to go with: a lot of companies are not including that. They’re rushing to the great, cool technology, saying, I’m going to test it out. And the same is true for the IP, privacy, and security issues; from an IP perspective, many organizations are just going and using it and taking whatever comes out without vetting it. There are some organizations, the larger ones, who really are thinking and trying to put a methodical approach in place, and I think the smaller organizations can really learn from that. Everyone who’s listening here cares about privacy and security, so take this back to your clients and your organization to help make sure that whichever tool it’s going to be, they’re putting it through the right vendor process, approaching it the way they would approach any type of new project, using small subsets of data, and that there’s a plan in place to train and educate employees.
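
Justin’s data-poisoning scenario points at one concrete safeguard: fingerprint the corpus a model learns from at ingest time, then re-verify before each training run. Here is a minimal sketch, with the file layout and formats assumed for illustration:

```python
import hashlib
import json
import pathlib

# Record a SHA-256 fingerprint per corpus file at ingest time.
def fingerprint(corpus_dir: str) -> dict[str, str]:
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(pathlib.Path(corpus_dir).rglob("*"))
        if path.is_file()
    }

# Re-verify against the saved baseline before each training run; any
# drift means the data changed outside the normal pipeline and should
# be investigated before the model is allowed to learn from it.
def changed_files(corpus_dir: str, baseline_file: str) -> list[str]:
    baseline = json.loads(pathlib.Path(baseline_file).read_text())
    current = fingerprint(corpus_dir)
    return sorted(
        path for path in set(baseline) | set(current)
        if baseline.get(path) != current.get(path)
    )
```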

Justin Daniels  19:29  

I think that’s a great way to summarize what we’ve done. I think you can expect that we’ll get a few AI experts to come on the show and talk about this more, because there’s just a lot to unpack and more to come.

Jodi Daniels  19:43  

If you’d like to follow Jodi or Justin, come check us out on LinkedIn, where we’re always posting all kinds of fun, interesting, and sometimes entertaining topics. Thank you so much for listening, and have a great day.

Outro  20:01  

Thanks for listening to the She Said Privacy/He Said Security podcast. If you haven’t already, be sure to click Subscribe to get future episodes, and check us out on LinkedIn. See you next time.