
Intro  0:01  

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

 

Jodi Daniels  0:21  

Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and Certified Information Privacy Professional providing practical privacy advice to overwhelmed companies.

 

Justin Daniels  0:35  

Hi, I’m Justin Daniels. I’m a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk, and when needed, I lead the legal cyber data breach response brigade.

 

Jodi Daniels  0:56  

And this episode is brought to you by — that was really wimpy — Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com.

 

Justin Daniels  1:33  

Yes, nothing. You’re very smirky today.

 

Jodi Daniels  1:34  

Okay, you’re like giddy and smirky all in one.

 

Justin Daniels  1:37  

But you’re the one who drank all the coffee today?

 

Jodi Daniels  1:41  

No, well, we actually sometimes batch this, and so we’ve discovered that I actually only drink a regular cup of coffee, and it’s half a cup. I’m just a happy person, okay.

 

Justin Daniels  1:54  

All right, can we talk about today’s episode?

 

Jodi Daniels  1:56  

Yes, that’s your job. All right, maybe you need some coffee, which you’ve never had, ever.

 

Justin Daniels  2:02  

Well, today is going to be a really interesting episode, because we’re going to take a bit of a different tack about privacy and security and AI, because we’re going to talk a little bit about how human training is AI’s greatest weakness and strength. And today I’m pleased to bring to our program Nick Oldham, who is the Chief Operations Officer and Global Chief Risk and Compliance Officer for Equifax. He is a forward-thinking legal and operations executive, and he has a proven track record of driving large-scale transformations by integrating legal expertise with strategic operational leadership. He oversees all enterprise-wide second line functions, leading initiatives to embed AI, enable data-driven decision-making, and deliver innovative compliance solutions across a $1.9 billion, with a B, business unit. His focus is on building efficient, scalable systems that align with both compliance standards and long-term strategic goals. Hello, Nick, welcome to our

 

Nick Oldham  3:04  

Hello. Happy to be here.

 

Jodi Daniels  3:07  

Well, Nick, we always start every episode trying to understand a little bit about your career journey. So help us understand more about Nick.

 

Nick Oldham  3:15  

Yeah, so I am a lawyer by training, which is probably obvious from the bio, and now I’m largely an operational and privacy leader. While it’s been a long journey, I think the commonality is that early on in my career, I became interested in how data is used. As a prosecutor, I used it to build cases. As a compliance professional, I used it to drive insights and operational efficiencies. And now, as an operations executive, I use the same type of data-use mindset, and that includes privacy: how do we use it responsibly in order to propel my business forward? So I would boil it down to being really interested in how data is used.

 

Jodi Daniels  3:55  

Data, data, data, data, data. I started my career.

 

Nick Oldham  3:59  

I just hope I kept that in 90 seconds.

 

Jodi Daniels  4:00  

You did an amazing job. I mean, I started my career doing financial data. Now I’m just in personal data, so it is interesting, compliance and how people’s roles evolve.

 

Nick Oldham  4:12  

And I wanted to be a tax lawyer, and here I am.

 

Justin Daniels  4:14  

Yes, but I think the interesting part of what Nick is saying is, similar to me, he’s worked in different areas that may touch data, and it’s also an opportunity to bring things you learned in different areas, and that expertise, to this area where you may have to find creative solutions. And you’re able to do that because you’re able to pull on these different areas of expertise that you’ve gathered over your career. I imagine that’s very helpful. Yes, it’s been helpful to me. So Nick, let’s jump right in: talk to us a little bit about why you think AI literacy for people is just so important. And what do you mean by AI literacy?

 

Nick Oldham  4:55  

Yeah, so AI right now, AI is such a buzzword, and you hear it in every conversation that you have in the business. At home, I’ve got four kids, and all of them talk about AI. Every day at work, it’s all about AI. And I think there’s still a misconception of what AI can do for you and what the challenges with AI are. And so when I think about AI literacy, what it means to me is twofold. Number one is an understanding for everybody, a commonality of an understanding of what the technology is. And I think this is critically important, for one, for empowerment: it’s hard to make decisions about what you’re going to do with technology without having some basic understanding of it. And number two is that all these conversations reflect the fact that this is today’s main societal debate about how we interact with each other, and it’s hard to participate in that debate and drive forward responsible, ethical, whatever word you want to use, use of data in the AI context without having an understanding of it. So that’s one aspect of the literacy to me. The other aspect of it is I want to use technology to do something better, and that is a constant journey of learning. For folks of my generation, I grew up evolving with the PC revolution, and then cybersecurity was a buzzword, and so on. AI is developing so fast that I want to have a baseline of literacy as I’m doing my own learning and growing within the AI space.

 

Jodi Daniels  6:31  

That makes a lot of sense. I’m curious. You mentioned that people’s understanding can also vary a little bit. Can you share an example?

 

Nick Oldham  6:42  

Absolutely. So I think of the difference, looking at this in a business context, between machine learning, which could be classified as AI, where you’re gathering some understanding, some extraction and categorization of data, and maybe some insights with it, and something like generative AI, where you’re using the most powerful tool you have, your language, to ask a question and get an answer that may or may not be right. And what I hear in conversations, as the population in this conversation broadens, is the conflation of the risks and challenges with machine learning versus the risks and challenges with generative AI, or the risks and challenges if we get to agentic AI, where we think about replacing human tasks with a virtual assistant. So that literacy helps you have those dialogues, and in a privacy context or a guardrails context, it allows you to have a thoughtful discussion about appropriate guardrails, because they may be different in each of those contexts.

 

Jodi Daniels  7:47  

I really appreciate those explanations, and I love that you brought up privacy, which is right, my favorite part. Why is the privacy component so important in training and understanding?

 

Nick Oldham  8:05  

Oh, well, that’s — to me, you know, I kind of laugh. I said this at the beginning, that my career journey really boils down to an interest in data use. And AI is all about training something to do something, training a machine to do some activity. And oftentimes what that training is, is data, and that data could be personal information, it could be other data that’s out there. Privacy is one of those key components. If I just think about how I want to use AI in a corporate context, in a personal context, whatever it is, I have to have some awareness of privacy. Otherwise, I feel like it’s one of those situations where it goes into the old black hole of digitization. And not to detour too much, but there was a dialogue in my industry back in the 1960s about digitization of the credit file, and that, you know, led to congressional hearings. There was a lot of angst: how can you compile all that data into a repository? Before, something like a credit report was just a paper file. And then we see the internet age, where you just get the proliferation of data. Now, with AI, you get the proliferation of data and maybe not a lot of transparency on how it’s used, especially if it’s a black-box model. In order to be informed, to make the right decisions, to feel empowered as a human, I need to be aware of privacy with the machines. And so I feel like privacy is one of the two or three main topics that have to be understood and discussed as we think about the AI revolution that we’re in right now.

 

Jodi Daniels  9:33  

That is just music to my ears. My privacy heart is so happy, fluttering. It is fluttering.

 

Justin Daniels  9:42  

So, kind of building on what you were saying, Nick, can we talk a little bit about some specific examples of how you think about including privacy in the context of AI literacy and in training? I think you alluded to it, because the first step in building a generative AI type of model is that you have to train on a whole bunch of data.

 

Nick Oldham  10:03  

Yeah. So this is, I’m gonna wax and wane just a little bit here, and this goes off a conversation you and I have had before. You know, I believe that the greatest strength and the greatest weakness in the AI revolution is the human aspect. And cornerstone to that is privacy. So when I think about the human, you use human in the loop or whatever buzz phrase, and I think human in the loop is what we’re kind of standardizing around, the AI is only as effective as the humans that are training it, correcting it, and leveraging it. And I go back to that discussion we just had a second ago about why the privacy piece of this is important. I think of an easy example. I did this recently on ChatGPT. I said, create a picture, and I gave it two things that I wanted. One was a space theme, and one was a physical car type theme, and it creates this great picture. I then thought, well, can I do this with my family? Like, I’d like to look at what my kids are going to look like in 10 years. And I uploaded the picture, and then I paused, because I realized, well, wait a second, I don’t know what this actual tool is going to do with this data. I’m using an open source AI machine. And so I just queried the machine, or the AI tool, and said, hey, what are you going to do with this picture when I upload it? And it’s funny, it gives me an output that says, well, we use it to train models, and it’s possible that this could get, I can’t remember the exact words, but commingled with other data. And as a privacy practitioner, I immediately thought, oh my goodness, I’m about to send my child’s picture, and really an aged version of my child’s picture, out into the wild, where it could show up anywhere that somebody is using this tool. And then I thought better of it. That is an awareness aspect of privacy, and I mean that in the capital P sense, not the legal regulatory sense, but the idea that there are attributes about me, there are attributes about my family, that are personal to us. And in order for us to be effective with AI, we have to be aware and have empowerment and control over that data. And so I feel like, as we think about the humans in the loop, they’re the critical gatekeepers, and therefore they have to have a critical understanding of privacy as they make decisions about training, implementing, and using AI.

 

Justin Daniels  12:17  

Did it also refer you to OpenAI’s privacy policy? Because when I did something similar, they were like, You should take a look at our privacy policy. And then I was like, Well, tell me a summary of what it says.

 

Nick Oldham  12:29  

Well, you know, one of my favorite things to do now, and it tells you how bad it is with AI: I use it every day. I use my AI tooling to do my weekly meal plan. I do it to make my vacation plan. I also do it to cheat on reading privacy policies: tell me the five terms that are important in this privacy policy in under 50 words. Can you compare it to other privacy policies? Or I say, you know, I’m comfortable with Company A’s privacy policy, which I read in detail; compare that privacy policy to Company A’s and tell me the differences. Long ago in my privacy practice, when I was a lawyer, we used to get paid to write people’s privacy policies, very expensive writing of this lengthy document where it’s really hard to capture the legal requirements in a commonly understandable way. AI has changed the game for me, because now, in 100 words or 50 words or less, I can just get the key points. So no, it didn’t refer me to the privacy policy, but as a privacy practitioner, I’m aware that I should look at it, and I just use the tool to tell me what I should focus on.
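For anyone who wants to try the policy-summarization prompt Nick describes, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and file paths are illustrative assumptions, not the specific tool or workflow Nick uses.

```python
# Minimal sketch: ask a chat model to summarize a privacy policy in a few terms.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set in the
# environment, and "gpt-4o-mini" stands in for whatever model you prefer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_policy(policy_text: str, word_limit: int = 50) -> str:
    """Return the handful of terms that matter most in a privacy policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"In under {word_limit} words, list the five most important "
                f"terms in this privacy policy:\n\n{policy_text}"
            ),
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical local files holding two companies' policy text.
    policy_a = open("company_a_policy.txt").read()
    policy_b = open("company_b_policy.txt").read()
    print(summarize_policy(policy_a))
    print(summarize_policy(policy_b))
```

As with any generative output, the summary can be wrong or incomplete, so, as Nick notes, it points you at what to read in the policy rather than replacing it.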

 

Jodi Daniels  13:35  

Well, knowing how important privacy is to an AI governance program, how do you recommend, especially large enterprises, make those connections?

 

Nick Oldham  13:46  

Yeah, so when I think about AI and the risks, the human piece has always been a big part of my focus. What I am worried about, and this is going to answer that governance question, is that adoption is so fast that we forget that the human piece of this is an infrastructure component of AI. So as I think about building the program governance at a large enterprise, we have to make sure that the humans that are implementing and running the tool, their overseers, the humans in the loop, are adequately trained. So privacy and security are both critical components to that from an awareness perspective. But I think, more fundamentally, there’s a call to action. It’s for companies, it’s for practitioners, as we think about the next generation coming up. If I just think about lawyers: when I came up through law school and started practicing law, there were armies of associates who learned at the feet of proverbial masters. Some of them may not have quite been masters, but we thought they were masters at the time. We were trained in an apprenticeship concept. Today, most of that work is commoditized. The key use of AI is something that takes a human a long time to do and a short time to validate; that’s a frame that you hear throughout the space, and frankly, that’s what young lawyers and many young people do as they enter their careers. And so we have to change the way we educate, from high school, maybe even before, all the way to the workforce, and have an apprenticeship-type program where we’re leveraging training and critical thinking, combined with awareness of key domains, privacy, security, how to spot things that are incorrect, red flags. You know, anybody who gets an email today and sees the em dash, your first red flag is, well, somebody wrote that with AI. So the smart people go in and they change it to commas. I’ve found when I do that, my tool starts writing with commas, and then we’ll have to do something else. But how do you spot the fakes? That sort of training program, which includes privacy as a core component, I think has to start early. And I think companies have to think about how they transition the workforce to be AI native. And to me, that means an apprenticeship in human in the loop, working with the machines.

 

Justin Daniels  16:04  

So Nick, you brought something up that I know you and I have talked about, which is critical thinking. And one of the concerns we’ve talked about with AI, at least that I have, is, if you’re not careful, people will let the AI substitute for their own judgment. And perversely, the proliferation of AI may dull our critical thinking skills, which, for lawyers, are incredibly important, but also for business people being able to assess risk or assess the situation. You know, how do you think about how we need to be mindful of not allowing AI to replace our own critical thinking skills?

 

Nick Oldham  16:39  

So I think you’ve got to invert the paradigm. I think about this in an education context. Many, many kids will read, what was that book, Fahrenheit 451, whatever the degree is at which you burn paper, and they learn, like, basic privacy, or 1984 and the state. In the educational system, we are still stuck with the idea that we need people to create things from scratch, and then we turn it into a program like Turnitin or Grammarly or one of those that will check, is this potential plagiarism? Invert the paradigm, which is: you use AI, everybody’s gonna use AI, that’s a productive use case; your job now is to spot where the information is incorrect. That’s also how I would set it up in a company for somebody to learn the right way to do something, in an apprenticeship program, in that human in the loop role, where they are forced to find out what’s wrong, as opposed to going the reverse direction. Why I used that book example is, we still need to educate them on parameters. Like, there’s an intuitive, natural reaction about privacy. That’s one of the things: I think security is relatively intuitive, and privacy is very intuitive. There is that emotion, what do you mean you have my personal information? What do you mean you’re using it? I think all of us who’ve been in the privacy world know that there is that natural reaction. Our curriculums in schools and our entry into the workforce have to come with training that’s more horizontal about these sorts of guardrails and enable people to put words to these emotions, so it can help form those critical reasoning skills and then reverse that paradigm. And that’s the way that I see this working long term. Otherwise, we’re going to end up in a place where all of the SMEs and all of the people who kind of have this North Star, built up over time, go away, and we’re left with humans in the loop who are actually not adequate to oversee the AI systems.

 

Jodi Daniels  18:38  

In your opinion, what kind of investments or systems do you think are missing, as we’re thinking about AI literacy and training? So you just kind of talked about, right, the apprenticeship piece and the people component. What about the systems and tools? How do we bridge that gap?

 

Nick Oldham  18:57  

So I think the technology is there; there’s not a system that’s missing. To me, it’s a process. In a funny way, when I look back at, you know, other major transformations in history, there is an adaptation. If we just think about the industrial age and people moving to cities and their scaling work, there’s an adaptation: how do people live? How do they get food? What does the family structure look like? We’re in the same place here. So what has to change is not the system so much as the processes and the engagement models that we have. That’s why I’m so focused on inverting the paradigm in education. Because I think if our education system is built on a legacy model, call it a paper model, a legacy paper model, we are not educating the workforce of the future. Somebody is going to solve that problem. Whether it’s our society or a different society, somebody’s going to solve that problem. I’m focused on the fact that we don’t have the tools (not the system, but the tools, the structure, the curriculum) to recognize that AI is going to replace a lot of the routine, manual tasks that humans are doing, and so we have to find an alternative way to educate that workforce. So it is really process-based, curriculum-based; that is the biggest thing missing, not the technology, to me.

 

Jodi Daniels  20:12  

If someone’s listening, in your opinion, what might be a starting point for someone who agrees and thinks, gosh, yes, I really want to make sure that I’m helping our younger workforce? Where might be a place for them to start?

 

Nick Oldham  20:27  

So I think it is three steps to me. One is, and this is going to sound funny and trite, but you go to your own GPT, ChatGPT, Gemini, whatever you want to use, and say, how can I train human development? You’re going to find that many of those will come back with something not as specific as maybe what I’m giving, but it’ll give you a kind of construct for how to think about this. It’s very, very hard for humans to think right to left; they always think left to right. I see this in operations, I saw this in litigation: you look at your current state of how things are today, and you incrementally improve. The visionaries in the world, and this is what we need right now in AI, have to think right to left: what’s the future state? And sometimes something like ChatGPT or Gemini can help you think right to left, because it gets rid of all of the human biases that we have about how things are. So I would start there. Just get a framework. The second is to start forcing yourself to think with the spot-the-challenge, spot-the-problem mentality instead of the content-creation mentality. In a business world, I think of value creation as a new product, a new service, something that you’re delivering. The value creation in AI is the ability to spot when the machine that’s doing something faster than you is actually wrong, so spotting the hallucination. So force yourself to train, just like, I don’t know, 25 years ago, when I became a lawyer in a trial practice, I was handed a book, it’s called Envisioning Information, about how to present data to a jury in a graphic way that resonates. Today, the same thing would be, as you enter the workforce, just start practicing how to spot things that are wrong, hallucinations. I did this recently, actually with ChatGPT, where I’m taking a family trip from where I live today to another city, and I was looking for activities for the kids, and I switched context between the cities, and the machine got all confused. So I got a bunch of lists of activities in the city where I live instead of the city where I was going to. I was able to spot that because I didn’t recognize any of the locations; I had to google some of them. That’s just ongoing learning. I no longer need to learn from a book about how to write nice emails in the business community to grow; what I need to do is teach myself to be an issue spotter, something natural to lawyers, instead of an original content creator. That’s number two. And then number three: in my career, I’ve always wanted to be more engaged with mentoring, whatever it is. I think it’s just one of those New Year’s resolutions for people in professional environments: I want to be a better mentor, I want to give back to the community. And then by, like, February, you realize you’ve got a lot of billable hours, you’ve got a lot of projects, you’ve got a lot of family stuff; it’s really hard. I think the most important place to engage for people today, if they have that motive and actually can execute, is engaging at the high school level, maybe the middle school level, where we are actually explaining to students what the world is going to be like, because we see where it’s headed. So I think of my own children: many of them have learned to code from a very young age. That’s a vocational skill; today, everybody knows that at a certain age. What they don’t have is horizontal thinking, and that’s part of that critical judgment. If you train them with something like privacy, that can be emotional, like, how do you feel about your data? That’s critical thinking. That’s humanities. It’s not the vocational, science-type thing. For us in the practice, this is the time we actually need to give back to the education system, so that the workforce of the future is actually hearing what it’s really like, and we need to value the skills, like humanities, that over the years have been undervalued as we went to a much more tech-based society, because we actually need to go back a little bit in order to make sure we have those critical thinking skills.

 

Jodi Daniels  24:15  

I think the critical thinking skills are so essential, and I really like how you laid out those three points. So thank you so much for sharing.

 

Justin Daniels  24:21  

Yes, you see, it’s interesting, because liberal arts educations are not as valued. And what Nick is saying, which I thought was interesting, is we need more of that type of thought process, because some of these technical skills are being commoditized. But how do you think about, and how do you feel about, some of these issues as we try to figure out policy, what all this looks like in a big framework? I didn’t really think about that until Nick brought it up when we had a conversation about it. So I’m glad you brought it up today. Yeah, so Nick, we like to ask all of our guests, based on all of your considerable experience, do you have a favorite privacy tip that you’d like to share with our listeners?

 

Nick Oldham  25:03  

Yeah, I alluded to this earlier. Actually, I think I stated it directly, which is that I use AI to help me understand where I should be concerned about my privacy issues, right? It just totally accelerates it. So at one point, like most people, I’ve got way too many streaming services. I don’t watch half of them, but I pay my monthly subscription. I was interested in understanding, you know, I’ve got a smart TV like everybody else, it listens to things, I search for certain things, I also have a bunch of people in my house who search for things: what does it do with the data? And it took, you know, my AI tool, I don’t know, a minute to search the web across all of the streaming services I had and tell me the similarities and differences in the privacy policies. That’s something I would have never done before. I would have been concerned, but I never would have had the time, no matter how well-meaning. So my favorite privacy tip is actually: use AI to become more privacy informed.

 

Jodi Daniels  25:57  

I like it. All right, when you are not doing anything related to privacy, security, AI, or data, what do you like to do for fun?

 

Nick Oldham  26:06  

Yes, it’s terrible. I play a lot of video games.

 

Jodi Daniels  26:10  

So that’s terrible.

 

Nick Oldham  26:14  

Yeah, I mean, it’s — well, I consume a lot of media. I have a busy life like you two do, and like many of the folks who may be listening to this, but the ability to consume mass quantities of media has never changed, and I’m not sure where the time comes from. Whether it’s video games, I like TV shows, I like movies, I like going to the movie theater. That’s one thing in the AI world I don’t actually want to leave; I want the movie theater to remain. I don’t know if that will be the case, but it is pop culture, it’s media, it’s video games, it’s those kinds of things.

 

Jodi Daniels  26:46  

It’ll be really interesting, I think, to see what happens with movies, because so many movie theaters closed, kind of like how bookstores closed, and now we have a reversal where there are more physical bookstores opening. I think we might see a reversal in some of these situations, where people are going to want the human interaction in person, because so much will be digital. That’s my — that’s just Jodi’s prediction.

 

Nick Oldham  27:09  

Well, I like the atmosphere of that coffee shop. I will say, I sometimes have AI give me, you know, I want a funny story before I go to bed in 1,000 words or less, and I don’t need to go find a book in the library. I’ve got ChatGPT to give me all of the written content that I need. I even had it write a rhyme for my 15-year-old.

 

Jodi Daniels  27:30  

Well, I should get very good at rhymes and songs. But I’m excited that the bookstores are making a comeback; that actually makes me happy. I think we need books and things like that. Well, Nick, we are so delighted that you came to join us today. If people would like to connect, where is a good place for them to do so?

 

Nick Oldham  27:46  

LinkedIn is always the best place, and it’s very easy. It’s just noldham, N, O, L, D, H, A, M, on LinkedIn.

 

Jodi Daniels  27:52  

Amazing. Well, Justin, any closing thoughts? No. Nick, thank you for joining us today. Awesome. Thank you again, Nick.

 

Nick Oldham  28:00  

Thanks for having me — take care.

 

Outro  28:05  

Thanks for listening to She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.