Click for Full Transcript

Intro 0:01

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels 0:22

Hi, Jodi Daniels here. I'm the Founder and CEO of Red Clover Advisors, a certified women-owned privacy consultancy. I'm a privacy consultant and Certified Information Privacy Professional, providing practical privacy advice to overwhelmed companies.

Justin Daniels 0:36

Hello, Justin Daniels here. I am a corporate M&A and tech transaction partner at the law firm Baker Donelson, and I am passionate about helping companies solve complex cyber and privacy challenges during the lifecycle of their business. I am the cyber quarterback, helping clients design and implement cyber plans as well as helping them manage and recover from data breaches.

Jodi Daniels 0:58

And this episode is brought to you by — let's say it very loud today — Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we're creating a future where there's greater trust between companies and consumers. To learn more, and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit Red Clover Advisors online. Ready to chat a little AI governance today?

Justin Daniels 1:41

I was thinking maybe we could talk about how we could use ChatGPT to help you select a dress, since you've got one up on the screen here.

Jodi Daniels 1:50

Well, I have a formal wedding to go to. And of course, nothing in my closet fits that I like — well, that's not true. They fit, but I don't like them because I've worn them. So I want a new one. I have a closet full of nothing to wear, that's what I meant to say. They all fit and they're all lovely, but they're all boring, because I've used them already.

Justin Daniels 2:08

But still, you could wear them. It's just —

Jodi Daniels 2:11

This is why we don't discuss fashion. This is not a fashion podcast. Let's get to AI, which you do know something about.

Justin Daniels 2:21

Indeed, well, why don’t we introduce our guests today? So today we have Dr. Emre Kazim, who is the Co-CEO and Co-founder of Holistic AI, an AI governance risk and compliance startup focusing on software for auditing and risk management of AI systems.

Dr. Emre Kazim 2:41

Hello. Wonderful to be here. Really appreciate the invitation and really keen to engage.

Jodi Daniels 2:48

Absolutely. So the very first question is, tell us a little bit about how you got to where you are today, including residing in London.

Dr. Emre Kazim 2:58

Awesome, yeah. So I was actually born in London, and I've always been based in London. I went to university in London too, at a university called University College London, which is right in the center. I did my academic studies there — my undergraduate in the sciences, where I actually studied chemistry and then physics. And then I completely switched disciplines when I ended up doing a PhD in philosophy, more specifically in ethics — the work of a very old German philosopher called Kant. After I had done that, because I had a background in both the sciences and in ethics and philosophy, there were lots and lots of interesting things taking place in the computer science department of UCL, the university I'd also done my undergraduate at. I had a friend there, and they said, look, we're really interested in building out multidisciplinary teams and bringing in someone with different expertise from the engineers — do you want to come and join some of these projects? So that's how I ended up working in the University College London computer science department without an engineering background, and we would just do lots of interesting stuff. The area of trustworthy AI, or AI ethics, started to emerge as we saw more and more high-profile cases of harm — you know, this algorithm is horribly biased, or this algorithm is not safe, and things like that. And working with the engineers, one of them was Adriano Koshiyama, who's my co-founder. We formed a friendship, and out of that relationship, as well as the work we were doing together, we said, look, this is way too important to remain some kind of research program — we've got to spin a company up. So really, it was curiosity, exploration, and experimentation that led to this space.
I'm really glad to have met Adriano, and the company came out of that research. From there it just really grew and developed off the back of that.

Jodi Daniels 5:01

Very interesting to see how you started in the academic world, took the problems you were studying, and were able to identify that there's significant application in — as I like to say — the real world, in the corporate world.

Dr. Emre Kazim 5:15

Yeah, exactly — in the real world. Where can you study these things? If you studied medicine, would you do it in a library, or would you do it in a hospital?

Jodi Daniels 5:26

Right? You probably need a little bit of both.

Justin Daniels 5:37

So to kind of get the ball rolling in our AI discussion: what are the challenges you see today in how companies are starting to use AI?

Dr. Emre Kazim 5:46

So I think really, at the core, lots and lots of companies are aware of the opportunity. The way we see it is that companies are aware that if they don't automate, they won't remain competitive and just won't be able to stay at the cutting edge. At the same time, if they do automate, they're introducing lots and lots of risks to the business: business risks, reputational damage, and lots of compliance coming along. So really, I think it's the tension between the desire to be innovative — a first mover, dynamic and creative in your business practices — and the risk-averse side, which says, okay, that's great, but what the hell are these language models doing? How can we be confident about what these systems are producing as a result of their actions? So really, it's a problem of confidence, I think.

Jodi Daniels 6:40

You said before how, in the research environment, you were running up against situations where you could see that there was bias. Can you share maybe an example? We don't need the specifics, but just the types of bias that you were seeing.

Dr. Emre Kazim 6:58

Sure — let me give you some high-profile cases; there are lots and lots of examples of this. One was that Amazon was using an algorithm to sift CVs. You can imagine thousands of people must apply for jobs at Amazon, and the algorithm was being trained on historical data. It was being shown — or claimed, allegedly, if you will — that the system was systematically excluding female candidates, as well as candidates without Anglo-Saxon names, among various other claims of bias. The exclusion of female candidates was one of the most high-profile examples. People were saying, that's just not acceptable — you can't have an algorithm that's excluding women. And the reason for it was probably that, historically, those were the candidates that had been hired; the algorithm was probably just replicating that experience. But yeah, a horrible case of bias there. Another example is the use of algorithms in insurance. In America, health and life insurance is a sensitive area, and if you're using algorithms to underwrite or score and things like that, people really do want to know that those systems work. Currently, we're actually seeing some examples of people filing lawsuits regarding this. Other areas include things like criminal justice: algorithms were developed that could be used to help judges determine how long a sentence to give a criminal who had been prosecuted and found guilty, and they were found to be biased. So you've got example after example of high-profile cases of bias. But there are other examples that have nothing to do with bias and are still important — things like concerns about manipulation.
Now, it depends on where you are in politics, but there are all these discussions about manipulation of the democratic process via nudging and other kinds of officious algorithms. So there's a litany of examples of algorithms running amok, or claims that algorithms have run amok. I was actually just presenting to some lawyers about this: when we got going in 2018, it was the headline of the month. As time passed, it became the headline of the week. And now, even in the building I'm speaking to you from — when I walked back into the office today from a meeting, I could see on all the news some discussion about AI, and so on. It's gone from a drip-drip to everyday news, to dinner-table conversations about both the opportunities and, more generally, the potential harms of algorithms. These high-profile cases have really accelerated the concern in this space.
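The hiring example above can be made concrete with the kind of check auditors actually run. Below is a minimal sketch of the "four-fifths rule," a common rule of thumb under which a selection-rate ratio below 0.8 between groups flags potential adverse impact. The numbers are hypothetical, invented for illustration — not data from the Amazon case:

```python
def selection_rate(decisions):
    """Fraction of candidates who received a positive decision."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of selection rates between two groups; a value below 0.8
    flags potential adverse impact under the four-fifths rule of thumb."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical screening outcomes (1 = advanced to interview, 0 = rejected)
female = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
male   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 60% selected

ratio = disparate_impact_ratio(female, male)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8
if ratio < 0.8:
    print("Potential adverse impact: investigate the model and training data")
```

A model trained on skewed historical hires would reproduce exactly this kind of gap, which is why the check is run on model outputs rather than intentions.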

Jodi Daniels 9:57

Those are great examples. We appreciate you sharing them.

Justin Daniels 10:01

So, as you talked about, companies are trying to figure out how to be cutting edge without introducing systemic risks to their business. Can you talk to us a little bit more about how you envision AI governance working with your product? Because at least when I'm dealing with it now, people are putting together cross-functional committees and trying to figure out what to do when it hallucinates. Can you elaborate on how you're approaching AI governance with what we know right now?

Dr. Emre Kazim 10:33

So I think the first thing — and I guess you've probably encountered this — is that a lot of it is discovery. Really simple questions: have you got an inventory, or registry, of systems? Do you actually have visibility across the business into the use of these algorithms? Depending on the size of the company, there are lots of different business units doing lots of different things — sometimes experimenting with the technology, sometimes procuring it, sometimes developing it themselves. So I think the most important first step is visibility. I've probably shifted away from the idea of having a kind of ethics committee or ethics group — a little bit of trolling here: the council of the virtuous that decides this is good or this is bad — toward looking at processes, thinking about visibility and policies that can be applied in a systematic and scalable way. So probably the single point of departure is visibility and discovery: what's actually taking place across the business?
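That discovery step can be sketched in code. The following is a hypothetical illustration of what a minimal AI inventory might look like — the field names and example systems are assumptions for the sake of the sketch, not Holistic AI's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str                    # business unit responsible
    origin: str                   # "built", "procured", or "experimental"
    use_case: str
    processes_personal_data: bool

class AIInventory:
    """A registry giving a single view of AI use across the business."""

    def __init__(self):
        self._systems = []

    def register(self, system):
        self._systems.append(system)

    def by_owner(self, owner):
        return [s for s in self._systems if s.owner == owner]

    def personal_data_systems(self):
        """Systems the privacy team will want visibility of."""
        return [s for s in self._systems if s.processes_personal_data]

inventory = AIInventory()
inventory.register(AISystem("cv-screener", "HR", "procured",
                            "shortlist job applicants", True))
inventory.register(AISystem("demand-forecaster", "Ops", "built",
                            "forecast warehouse demand", False))
print(len(inventory.personal_data_systems()))  # 1
```

The point of even a toy registry like this is the query side: once systems are registered, the C-suite, privacy, and security teams can each ask their own questions of one shared record.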

Jodi Daniels 11:45

AI is happening — or AI discussions are happening — across the board in businesses, and they're also popping up on privacy and security teams. Where do you believe AI governance fits alongside privacy, security, and overall risk?

Dr. Emre Kazim 12:04

So the first thing to think about — and I'm absolutely convinced of this — is that we had cybersecurity, which started off as something peripheral, became a broader concern, and then, with mass adoption, became the cybersecurity industry. The second wave was data protection: again, ubiquitous use of data, huge data lakes, the collection of personal data, then the harms that came with the adoption and proliferation of that — and we got the data protection ecosystem, market, and regimes. With AI, now that we're past the early adopters, we're starting to see the proliferation and ubiquitous adoption of these systems, and we're going to see AI governance, or AI risk management, become another standard process alongside privacy and security. So I think the best way of thinking about it is as analogous to those. Just an interesting point of reflection: in Europe, it feels to me that a lot of the community interested in AI come from data protection, whereas what I experience quite a bit in the States is that a lot of the community come from cybersecurity, which is really interesting to me. But I guess data in the States is also not just privacy in the sense of personally identifiable data; there's also a closer relationship with security, so the difference may not be so stark. Still, I've noticed a path dependency from the place someone started to their interest in AI. Another thing to consider, just quickly, is that AI can introduce novel security risks and novel privacy risks. So even if you've traditionally had a good data protection regime and robust cybersecurity regimes, AI is a new form of risk for those verticals as well.

Jodi Daniels 14:10

I think your observation about Europe and the US is really fascinating. I'm just thinking: perhaps it's because privacy is truly a fundamental right in the EU, with security as an ingrained component of it, whereas here in the States we've been much more focused on the security side and a lot slower to move on the privacy side — though we're catching up, with California and a variety of other state laws. But it's interesting you say that, because I do see privacy professionals in my community who are focused on it, yet so many of the articles are really focused on the security side. And I was going to ask: you shared that AI introduces some novel risks. Can you offer a couple of examples of each?

Dr. Emre Kazim 14:57

Yeah. On the security side, it was interesting for me to see a lot of the discussions about how the language models can be used to strategize ways of compromising the security of a particular system. So it's not just that you can use algorithms to compromise security directly; it's also the strategizing, the learning, and the access to that kind of knowledge. Note that OpenAI, for example, have deliberately done a lot to restrict the use of their systems for that kind of strategizing and processing. On the privacy side, what I'm really getting at is that you can have a system which is well governed from a data protection perspective — it satisfies the European GDPR, which, to the best of my knowledge, the California version is very close to — and it's like, great, we've got a good, defensible data protection regime. But then, what do you do with that data? If you use particular kinds of machine learning on that system, it can reproduce certain results, or we can see leakage or exposure of data from the otherwise well-governed system that we wouldn't have seen before. So using particular kinds of systems, even on data that is generally well governed from a data protection perspective, can introduce novel privacy risks. That's really what I meant.

Jodi Daniels 16:29

We appreciate you sharing — thank you, always very helpful.

Justin Daniels 16:35

I guess, turning the discussion a little bit: one of the things I've been trying to grapple with with clients is, if we use AI to create some novel expression — say, a novel poem — how are we able to come up with a regime to trace the provenance, or origin, of that expression, so that it's not infringing on some third party? Because the AI has been learning from information and could be infringing. Have you given that thought, and what is your approach?

Dr. Emre Kazim 17:15

You know, honestly, I don't know. I've seen two different camps: the fair use camp, and then the counterarguments to it. The former says, look, the data is readily available, it's been used in a particular way, and the thing you're generating is genuinely different, or creative, in a way that can't just be reduced to the source data it was trained on. So I've seen different arguments around it. What I will say, in the areas we probably do know better, is that provenance — not just in terms of the data itself, but generally in terms of how a system comes to its conclusions — is absolutely necessary, both from a legal perspective and just as good business practice, in particular use cases. The poem is a curious one, because it's almost an aesthetic judgment, so it's difficult to say whether we need to provide an explanation there. Aren't all poets in some way learning from their peers? If we write some kind of sonnet, do we have to credit Shakespeare, or a specific Shakespearean play — Hamlet or something? That's a different kind of question from, say, having used an algorithm to credit-score, where you need to be able to explain why Jodi was rejected or given a poor credit score while Justin was given a certain amount, or whatever it is. There are examples where an algorithm produces a result that has a direct, tangible impact on a person's life prospects and their rights, both civil and fundamental, and then you have to be using systems that you can explain. And AI is just an umbrella term: there are some systems which are simply inexplicable —
they're just so complicated, such as the large language models — and there are other systems which we can readily investigate, to say this is how it's come to its conclusions. So I would contend that if you've got a situation where you've got a moral and legal obligation to provide explanations, you should not use systems that are inexplicable.
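The credit-scoring contrast can be illustrated with a sketch. Here is a deliberately transparent, rule-based decision — the thresholds and factors are invented for illustration — where every adverse factor can be reported back to the applicant, which is the property being argued for in high-stakes uses:

```python
def credit_decision(income, debt, missed_payments):
    """Approve or reject, returning human-readable reasons either way."""
    reasons = []
    if income < 25_000:
        reasons.append("income below threshold")
    if debt / max(income, 1) > 0.5:
        reasons.append("debt-to-income ratio above 50%")
    if missed_payments > 2:
        reasons.append("more than two missed payments")
    approved = not reasons
    return approved, (reasons if reasons else ["all criteria met"])

approved, reasons = credit_decision(income=30_000, debt=20_000, missed_payments=0)
print(approved, reasons)  # False ['debt-to-income ratio above 50%']
```

A large language model cannot produce this kind of factor-by-factor account of its output, which is precisely why the argument runs that inexplicable systems don't belong in decisions of this sort.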

Jodi Daniels 19:49

Where do you recommend companies start?

Dr. Emre Kazim 19:53

Come and pay us money! No — really, it's just going back to that discovery: take stock. What one doesn't want to do is introduce a new process every single time; just take stock and be aware. For example, it may be the case that the data privacy individuals actually have a good grasp of the use of machine learning within the business — a lot of the time these are all sister processes. So it's really the discovery phase that gets things going, by asking: okay, do we know what's actually happening in this business?

Jodi Daniels 20:31

Now, once they get past the discovery phase, tell us a little bit about what your company does — and when they should come and pay you money.

Dr. Emre Kazim 20:38

So we basically do AI governance and risk, and we prepare companies for compliance. On the governance side, we're a platform solution, so you can do this in a systematic and scalable fashion — companies often have hundreds of systems across the business, so it's not really about looking at one or two systems, but about having a good governance, command-and-control regime across the whole business. The first thing we do is really just AI inventory and registry: being able to say, okay, here's your state of play, giving the C-suite a bird's-eye view of what's actually taking place. The second thing, the risk component, is to say: okay, given a company's risk appetite, or risk posture, let's risk-assess the algorithms. Let's actually look at how they work and determine the inherent risk of these systems. Then, consequently, when we've identified risks, let's go and investigate those risks and look for mitigation strategies — which we don't carry out ourselves; the company does that. Acting off the back of those mitigation strategies, you can start telling a coherent story: here's our state of play, here's how we risk-assessed the algorithms, here are the mitigations we took, and here's the assurance. You can do all of that with the Holistic AI platform, and that's really how we're building it out: enabling companies to have a single pane of view, an audit trail with documentation, and a robust, best-in-class risk management regime.
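The inventory → risk-assess → mitigate sequence described above can be sketched as a simple triage step. The factors, weights, and risk-appetite threshold below are invented for illustration — a real platform would use far richer assessments:

```python
# Hypothetical risk factors and weights for tiering AI systems.
RISK_FACTORS = {
    "affects_individuals": 3,   # decisions with tangible impact on people
    "uses_personal_data": 2,
    "limited_explainability": 2,
    "externally_procured": 1,   # less visibility into how it was built
}

def inherent_risk(system_flags):
    """Sum the weights of the factors that apply to a system."""
    return sum(w for f, w in RISK_FACTORS.items() if system_flags.get(f))

def triage(systems, risk_appetite=4):
    """Return (score, name) for systems above appetite, highest risk first."""
    scored = [(inherent_risk(flags), name) for name, flags in systems.items()]
    return sorted([(s, n) for s, n in scored if s > risk_appetite], reverse=True)

systems = {
    "cv-screener": {"affects_individuals": True, "uses_personal_data": True,
                    "limited_explainability": True},
    "warehouse-forecaster": {"externally_procured": True},
}
print(triage(systems))  # [(7, 'cv-screener')]
```

The flagged systems are then the ones that go on to deeper investigation and mitigation, giving the audit trail its starting point.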

Jodi Daniels 22:18

Thank you — companies should explore and learn more. Why are you laughing? You're in a very laughing mood.

Justin Daniels 22:28

So now I want to talk a little bit about regulations. Once again, the EU is leading the way. As you build your product, to what extent are you paying attention to the EU regulations — to come up with, as you said, assessing and assigning a risk score and mitigations based on this EU law that's pretty far along?

Dr. Emre Kazim 22:56

So we've been tracking it very closely. I'm calling you from London, and London is a two-hour train ride from Brussels, so I'm constantly going over and meeting that community; we're very active and moving forward in this space. We've also been regularly publishing comments and other updates on that regulation, so it's a core concern for us, and we've been structuring our solution around the EU AI Act. One of the assumptions we have is that the EU AI Act will become for AI what GDPR was for data — we think it's going to become the global de facto regulation in AI. And we anticipate that even in non-EU countries like the US, big enterprises will still be looking to meet the gold standard, which we expect will be that legislation. Now, allow me to bash my European brothers and sisters a little bit: you can always rely on European inefficiency. Even though it was supposed to be passed by now, it's going to take a bit longer than people think — but it will be passed eventually. Then I think there'll probably be a grace period of about two years, so we're looking at 2025 for when it really lands. But there's going to be lots and lots of activity beforehand, and it takes a long time to work out — as you guys probably know — what the relationship is between GDPR, privacy regimes, and these new AI regulations.

Justin Daniels 24:38

Didn't GDPR just have a birthday?

Jodi Daniels 24:40

It did have a birthday. It's entering kindergarten next year — it just turned five.

Dr. Emre Kazim 24:48

Look how that impacted the market. Yeah.

Jodi Daniels 24:51

It's interesting, though, because if you think about 2025 and what we're experiencing now, two years feels like 20 years' worth of innovation that can happen in that period of time.

Dr. Emre Kazim 25:04

I couldn't agree more. What's really interesting from an analysis perspective is to think about how, yes, we've got some regulatory and other policy standards in cybersecurity, but the field is so fast-moving that you're constantly seeing updates and best practice evolving. It's going to be interesting to see what happens with the EU and other AI regimes, where you're going to have this static regulation while practice and innovation are just exploding around us — and to think about what that means for regulation, which is, by definition, static in that way.

Jodi Daniels 25:45

Knowing what you know about privacy and security in the communities that you’ve been participating in, we always like to ask our guests, what is your best privacy or security tip?

Dr. Emre Kazim 25:58

I was going to make a joke, because I close my curtains at night. I live in a flat — an apartment — with a perfect view of the whole building in front of me. And I realized, as I sit there with my belly out watching the television, it's probably a good idea to get used to closing your curtains. So in a very analog sense, that's the advice I'd give: it's always best practice to pull your blinds or draw your curtains.

Jodi Daniels 26:26

It works — I've actually used that analogy many times from a privacy and security point of view. You can have the best alarm systems, but if all the curtains are open, you don't have privacy. And if all the curtains are closed but the door's wide open — no alarm — anyone can just walk on in. You need both. I know, it is exciting. I'm so glad you agree.

Justin Daniels 26:54

So, when you're not working on AI or taking the train to Brussels, what do you like to do for fun?

Dr. Emre Kazim 27:04

You know, I love swimming — but I love swimming in the sea. And I was shocked at how cold the Pacific is next to you guys; it's just like, Jesus Christ, you know? So there's a real tension when I'm out there: the desperation to jump in, but then also that moment where you're just like, oh my lord. So I love swimming, I really enjoy swimming. And, funnily enough, I miss reading. When we were academics, we had that inclination to read and just indulge, and when you're really building a business — we're fifty-something people now, and the company continues to grow — you take a step back, spend a weekend just reading articles, and realize, wow, I actually miss reading. So I guess, in my fun time, however bookish this may sound, I enjoy reading.

Jodi Daniels 28:13

You are talking to a very happy reader over here. I had to get rid of my books so that there was enough room on the bookshelf for all of Justin's books. But what are you reading, Justin? What are you into?

Justin Daniels 28:25

Part of it is not fun and part of it is. I'm reading a lot to understand AI and the EU regulations — that's one part. Other parts: I just read a sports book, and I've been reading some techno-thrillers, so I mix it up. But, like you, the last 20 minutes before bed I pick up something and read, because, to your point, it helps me absorb all different kinds of information from fields that are either fun to learn about or may help me in my work.

Jodi Daniels 29:05

That doesn't mean you get to get rid of the couple hundred history books, you know.

Justin Daniels 29:11

No, you can't get rid of any of those.

Dr. Emre Kazim 29:15

I thought you were going to say 18th- or 19th-century Russian literature?

Justin Daniels 29:20

No — I've read about Napoleon and his little trip, but I haven't read that lately. Honestly, if you ask me about that part of the world: I'm fascinated by the complete revolution in drone technology because of what's gone on in Ukraine.

Dr. Emre Kazim 29:38

I mean, yeah — a revolution, absolutely.

Jodi Daniels 29:41

Well, we're going to have our book podcast as a special bonus episode sometime, I'm sure. If people would like to learn more and connect with you, where should they go?

Dr. Emre Kazim 29:50

Holistic AI — spelt with an H. Also, Emre Kazim on LinkedIn. Just reach out; I'm more than happy to engage. And just get onto our website: you'll see our blogs and our papers — even our research papers are there, with lots of notes. We've also got an open-source library for basic debiasing algorithms. So that's the center of communication for us.

Jodi Daniels 30:20

Wonderful. Well, we’ll be sure to include that in the show notes. We’re so glad that you could join us today. Thank you so much for your time.

Dr. Emre Kazim 30:26

Really appreciate it. Thank you so much.

Outro 30:34

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click subscribe to get future episodes and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.