
Intro 0:01

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels 0:15

Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women-owned privacy consultancy. I’m a privacy consultant and Certified Information Privacy Professional providing practical privacy advice to overwhelmed companies.

Justin Daniels 0:34

Hi, I am Justin Daniels. I am a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.

Jodi Daniels 0:55

And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. Well, today we are sweltering in the summer heat wave; this is when we’re recording. So if we melt, we’ll just have to get some ice water, and I am actually still drinking hot coffee this morning. But today we have Anne Bradley, who is the Chief Customer Officer at Luminos. Anne helps in-house legal, tech, and data science teams use the Luminos platform to manage and automate AI risk and compliance approval processes, statistical testing, and legal documentation. She also serves on the board of directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. And we’ve had several FPF folks on our podcast as well, so big supporters of FPF. And welcome to the show. Thanks for having me. Well, we are excited that you’re here. Yes, let’s get right to it. That’s your job.

Justin Daniels 2:32

Okay, so, Anne, why don’t you tell us a little bit about your career journey?

Anne Bradley 2:35

Sure, I think my career journey probably starts out as a kid growing up in Cambridge, Massachusetts, where I was kind of an engineer kid. My dad went to MIT, and Cambridge was a very nerdy place to grow up, where everyone thought innovation was going to save us, and that is definitely a big part of, you know, my tech optimist origin story. I went to college and became a software developer, worked in the software industry for a while, and then was really activated by the IP battle of the time, which was Grokster and Napster. It was about whether software developers should be held liable for the conduct of their, you know, users who were sharing files. And it’s interesting to be in this time where so many of those questions about platform responsibility are back front and center in our legal conversation. But I got excited by that, went to law school, moved to San Francisco, became a lawyer, interned at the Electronic Frontier Foundation. I was a true sort of tech utopian, tech libertarian kid from Cambridge of my time; even in high school, I had a Larry Lessig poster on my wall, which tells you exactly how “information must be free” I wanted to be. What’s funny is I got out of law school and had the debt of law school, so I had to learn how to practice law and make some money, and ended up joining IP firms and doing early internet law work that was pretty fun. In the early days of the internet, there were a lot of, let’s say, salacious characters making money on the internet, so I worked for, like, early dating sites and did cybersquatting law and lots of early internet technical law, because it was sort of fun and nerdy and interesting and I could use my tech chops. Then I went from law firms where I was doing that to working at Hulu, where I was one of the early lawyers there building the first privacy and cybersecurity practices at Hulu. I moved on from Hulu to Nike, where I was the chief privacy officer for about nine years, building the digital legal function and the privacy function, collaborating with the cybersecurity function, and ultimately, you know, working on e-commerce and AI-oriented projects, all of our wearables and consumer-facing digital apps. It was very fun, very stressful, global responsibility. I spent a lot of time on the road in that job. I am loving the sort of new formation and stack of my portfolio career, working in software, helping people who are just like my former roles figure out how to manage the AI inside of their companies, and how to manage these technology platforms and their power. If I weren’t doing it as a job, I would probably do it for fun, because I am just that animated about these issues, and it’s an exciting time to be practicing this law, because there’s so much going on.

Jodi Daniels 5:46

I love that story, and what stood out to me was the tech optimist. I haven’t heard that phrase, really, and I love that.

Anne Bradley 5:54

Well, I think, I mean, it’s one of the things I love about the Future of Privacy Forum as well, is that it’s always been a think tank enabling technology. It’s not a think tank about shutting down technology. It’s about embracing the future, finding cool ways we can embrace it and adopt it while, you know, being safe and thoughtful about what the adoption of technology means for humanity. Sometimes I think back to my dad reading me I, Robot when I was 10 years old, and it’s like such a formative part of even the, you know, questions we’re asking today about what autonomous vehicles should be allowed to do. I’m living in California, where we have Waymos driving around LA, but I read this week that Tesla’s self-driving taxis are now driving around Austin, and it will be a really interesting example. I mean, the Waymos are much more expensive. They have a whole bunch more technology on the outside of them; as someone driving around, you can see that they have a lot more sensors and signals. Are they going to behave differently from the Teslas? As someone driving next to the Waymos, first I was creeped out by them, but then I was like, you know what? They’re cute. They’re deferential. They drive really nicely. They let you in. They’re nicer than LA drivers. So it’s weird, like the personality that they gave to that car and its obsession with safety was kind of a PR offense. I’m curious, because I’ve heard there have been some problems with the Teslas in Austin already, about whether the opposite will be true. You know, it’ll be interesting to see as this automation rolls out, like, how does the personality and the risk aversion in the commercial marketplace give us an impression about the company and make us think, I either want to be part of that or not? I think that’s, like, if you’re a tech optimist, you think the marketplace has some ability to address these things, and you get excited for the marketplace showing safe cars. It feels like the Waymos in LA are pretty safe and that they’re, you know, loved; there are parents in LA who want to send their kids only in Waymos. They don’t even want to send them in an Uber.

Jodi Daniels 7:55

Which is so interesting. I was recently in New York City and had this exact thought, yeah. And was wondering, okay, so will the self-driving cars know how to go around, you know, all the trucks that are stopped, and then does that mean there’ll be less honking and it’ll be a quieter city, and does that mean that they won’t sit in the middle of the intersection?

Anne Bradley 8:18

Well, right now, exactly that thought, all the squishy behavior of people in the driving landscape is really stymieing all of the self-driving cars. There are hilarious videos of them getting, like, stuck in parking lots just trying to turn around and stuff. But I’ll just say, as someone driving next to some Waymos, and this is not meant as a product endorsement of them, per se, but making them nice and deferential and safe drivers was definitely a good move for this mom. Like, I’m fine with them being on the road when I see them now. At first I was creeped out, because I’d look over and there’s no driver, and that’s scary. You’re like, why is there no driver in that car? But then just their nice behavior and their sort of risk aversion has won me over.

Jodi Daniels 8:57

Well, I hope it stays that way. So let’s go to a different industry now. AI is being applied across industries, really all different industries, in a lot of ways. So aside from self-driving cars, what are some of the most interesting and impactful uses that you’re seeing today?

Anne Bradley 9:16

I mean, I think that there is a whole category of stuff we’re not talking about that relates to defense and weapon automation where AI is being applied, which is funny, because we’re having conferences about these abstract future big risks of AGI. But as someone who, like, grew up, as I told you, with I, Robot, to me, we have killer robots right now, and we probably should be having a bigger discourse about that, and about where there’s automation happening, you know, inside of defense, war-making, and decisions that have to do with, like, human survival. So that’s one that just feels big and like we’re kind of missing it. I also think the applications of AI that are more classical AI are interesting, right? Now we’re having breakthroughs that just come from big compute using methodologies that are kind of old, like, you know, predictive models. And so that’s pretty cool, seeing things like, you know, cancer diagnosis; not when someone is, like, uploading an image of their scan to ChatGPT, but cancer diagnoses when it’s being used as assistive technology at the Mayo Clinic, and it’s not being done by an LLM, it’s being done by a predictive model that’s trained on cancer images and that is really good at making cancer diagnoses. So sort of classical, deterministic, trained models at scale have me excited. I think those are very cool. I like the uplift that large language models are having on people just being able to communicate. So, you know, especially people who don’t have great communication skills, whether it’s English as a second language, or for some other reason it’s not that easy for you to communicate. I have a dear friend who read me an email that she had written to her apartment co-op about a situation that had occurred to her, and she’s reading it to me, and it’s kind of like she’s calling her friend the lawyer to check on it, right, to make sure it’s good before she sends it. And she’s reading it to me, and I’m like, what is this? This is so good. And as she’s reading it to me, at the end I was like, did you use ChatGPT to clean this up? And she’s like, absolutely, I fed it, and then I told it to do this, and I told it to do that. And I was looking at the output, and I was like, this is my friend who’s very smart and capable, but she never could have written something this sort of legally styled and seemingly authoritative, as she did using these tools. So I love the way that the language models are kind of democratizing access to professional language for people. I think instant translation is amazing. I had a whole conversation with an Uber driver the other day who did not speak English, and we used Google Translate. So I would say my thing and then play it to him, and he would say his thing and play it to me. I was, you know, running the phone, but we were having, like, a simultaneous translation that, you know, when I was growing up, you would need to be a United Nations person to get, and we were getting that in the back of an Uber for free. So I think those things are all really cool, exciting innovations that people can be bullish on. And if you’re kind of an AI portfolio investor, that’s where I would be placing my bets in terms of seeing the most sort of big innovation. So the application of LLMs to language, which is its perfect use case, is a great one. I think the further we get away from the way a technology was designed in terms of how it’s being applied, that’s where we get into higher-risk situations.

Justin Daniels 12:47

So speaking of risks, you know, as companies rush to adopt AI, where do you see the greatest risks emerging from a privacy, ethical, and operational standpoint?

Anne Bradley 12:59

My biggest one is the deepfakes. Yeah. I mean, that’s a big deal, because people are already in a, you know, down cycle around trust. So I’m sure, you know, Jodi, you’ve been counseling in the privacy industry for a long time, and every year we see these trust surveys come out, and every year institutional trust is in decline. So, you know, people are already distrusting institutions, and now we’ve got deepfakes making them sort of distrust their own eyes, and there’s a climate of tolerance for inaccuracy that, you know, for me as a scientist and a lawyer, is quite painful to live inside of. You know, I’m just like, that’s just not how it works. That’s not how it works. I spend my life saying that. But I think that there is opportunity around technology standards to be much more on the offense about deepfakes. And honestly, having worked as a global lawyer, I think there are other regimes in other countries where they sort of put hurdles in front of you in order to operate as a business, things you just have to do, filing requirements, registration requirements, labeling requirements, that are quite effective inside of those regimes as ways to get people to start to move towards a path of adherence to what you want. So for example, you know, I don’t think we should have crazy AI laws, but AI laws that require you to label and to embed watermarks and other technology devices into synthetic images? I think I’m in favor of that. That makes a lot of sense, and I am not the type of anti-law person who’s worked inside of companies and just thinks every law is a burden. To be honest, it’s perfectly fine to have to do certain things. You know, at Nike, I remember when the FTC said we had to, we started to label influencer content with hashtags about it being an ad, you know, and promoted content. And that was challenging, to, you know, deploy the information of exactly what the standard was going to be and how we wanted everyone to do it as a company operationally. But I think those kinds of techniques, where you just set a minimum bar of what you expect, we expect people to label these things in these ways, and then if they don’t, then we’ll come after them, seem like a reasonable regulatory framework to start with, and we don’t have to go berserker on all of these AI verifications. There are things that, I don’t know, I think we would agree on right now about deepfakes: that you should be able to detect them more easily than you can at the moment, and people should have more proxies for understanding what’s true or not that are also trustworthy.

Jodi Daniels 12:59

Justin, is there anything you want to add? I know you have lots of thoughts on the deepfake risk. Yeah, I’d love to hear what you think about it.

Justin Daniels 14:52

I guess from my perspective, you know, it’s one thing for people to understand it, but when I do some presentations, IBM did a deepfake of me, yeah, and it really resonates. But you know how you talked earlier about the military? I think it’s intentional that people don’t talk about AI in the military, because I think governments don’t want you to think about that. Yeah, but you know, if China decides, and you know, you’ve traveled the world, you understand some of the geopolitics that go on, if they decide they want to go across to Taiwan, why wouldn’t you use, you know, DeepSeek and TikTok as kind of your misinformation platform and get that out there? Because people just watch stuff, and we just live in a time where people really aren’t thinking, and these things are so effective now, because it’s so hard to tell the difference, and I just don’t think people have a recognition of that. So to your point, maybe having some of these watermarks and whatnot would be helpful, but I just don’t know what you do with AI that can create something that looks exactly like you. Or think of the CEO of your company, and someone wants to badmouth them when they haven’t done anything; they put a deepfake out and put it on that platform. By the time you ever put some kind of plan in place to combat it, the damage is done. What do you do?

Anne Bradley 17:22

It’s out. It’s scary, and we’re going to start to see more of it, and we’re going to lose trust. And having authoritative sources that we trust is going to be really important over time. You know, it’s interesting, because maybe eight or 10 years ago, when I really had a loss of trust in social media around the Cambridge Analytica event, and sort of seeing how manipulative it was, I turned to traditional media and subscribed to a lot of newspapers and magazines and sort of long form content. And now even those sources feel equally confused.

Justin Daniels 17:59

But I guess, and I don’t want to monopolize it, it sounds like I am, but I’m interested. Well, so we, you know, we talked about our kids, yeah, and so I’d be curious as to your take. So I did a presentation at an AI week where I said, you know, as a dad, now I have to tell my daughter, you have to be really careful what you’re doing online, because now, if you and a friend have a falling out, or you and a boyfriend have a falling out, instead of just getting mad, what if they take your face, put it on a naked picture of somebody else, and get that out there in the social media channels?

Anne Bradley 18:36

It’s happening. I mean, devastating. There’s a reason that you’re talking about that. And I’ll tell you, I think, and here’s where, like, we probably share a perspective, I think there’s a reason that’s also something people are not talking about. You know what I mean? We’re talking about these existential, faraway risks, these large AGI institutes for safety around issues that might happen in 20 years. And at the moment, we’ve got, like, you know, some autonomous drones making weapons decisions and teenage boys with capabilities at their fingertips to create deepfake porn. Yeah, and like, those are real AI harms that are really here now. Not to mention, you know, the discriminatory impact of selection algorithms on people, which kind of lives at the intersection of what we all do. If it’s a, you know, private set of data, privacy type of data, and it’s being used by AI to help decide if you should go to the next level in a job interview, if you should be approved for housing, all of that kind of stuff is a really big deal. It’s happening right now, and I think it’s understandable why companies want to use tools to help them drive productivity and automate in these areas. And some of those optimistic things we were talking about, about what AI can do, make you really excited about it. But, um, the downside in terms of what’s happening right now can be shown, and it’s proven. And so if you just race headfirst into using these tools without being thoughtful about the consequences and without adopting mitigations and controls to help manage them, it can be pretty bad. And I think that’s why, you know, as experts, we’re all feeling kind of nervous and unsettled about this time and about AI risk.

Justin Daniels 20:24

And I guess maybe that’s why, and I know we’re about to talk a little bit more about your current role, yeah, but you said at the outset, you know, that you’re a technology optimist, and you and I talking as lawyers, I have to contend with AI, because if I don’t stay on top of it, I will be at such a competitive disadvantage to do my work. It’s unbelievable. But at the same time, and I’d love to hear your thoughts on this, you know, we watched what happened with social media. I listen to what, you know, Zuckerberg and OpenAI and other companies say, and I don’t think they’re paying any attention to privacy and security and the things that we care about. And so it’s harder for me to be an optimist, you know. What can you tell me? Or is it really, hey, I’m trying to be an optimist by, you know, being involved with Luminos, because what you’re really trying to do there is create some of these guardrails using AI?

Anne Bradley 21:21

Yeah. I mean, I believe that for the last 20 years that I’ve been interested in studying and practicing technology law, the technologists have had capabilities that outgun the lawyers in terms of their ability to collaborate and speak the same language. And so, like, part of how I can be optimistic around something like Luminos or watermarking tool sets, you know, part of that is just growing up in Cambridge, to be totally transparent. You know, it’s basically like growing up in Rome and being Catholic. You grow up in Cambridge, you believe that technology is going to save and solve every single problem, and there is not a problem that we cannot come up with a technology big enough to solve, you know. So I do have a natural perspective of solutionism, and that is where I’m finding some hope: there are actually quite a lot of solutions that can be enabled that will provide very good tooling to allow people to have fact-based conversations about what’s happening with algorithms today. Some of those things are not transparent to us because we have a background in a different discipline of the law, you know. But statisticians have well-established tools and systems for evaluating, for example, whether an algorithm is sexist, and those can be employed, and by giving them to the right people, which is the people who are worried about and thinking about risk, I think that there’s a lot of opportunity to just get some reasonable guardrails around some of the most pernicious uses. Because these things that we’re talking about, like employment decisions or housing decisions or decisions about your liberty, you know, these, like, predictors of recidivist criminal populations, this stuff is real, but it’s also, in the emerging legal frameworks, the most highly scrutinized and regulated type of tech. So that is a cause for optimism, that we’re starting to have laws about how you can look at this stuff and see what it’s doing. And I’ve never seen industry, and here’s where the pattern match for me comes in, I’ve never seen industry driven to do anything for consumers’ benefit, absent laws, in the tech platform space. It’s just how it has been. You know, I told you I was activated by Grokster and Napster, then I lived through social media and Cambridge Analytica, the betrayal of, you know, girls becoming anorexic, suicidal teens. Like, that’s what we got as society out of the wealth that was grown inside of social media. So I think it’s dangerous, and we’re in a trend towards platform power consolidation that we all have to be careful about. But I’m in favor of reasonable regulation and then people taking, like, reasonable steps and acting like AI is normal technology, because one of the things I’ve observed over 20 years is that at the seam of implementation, everything is normal technology. Like, when you’re actually implementing it in a company, everything is just a technology that people have to figure out how to use. They have to learn how to use it. How much of a job is it gonna replace? How much of a job is it gonna augment? I mean, I worked in Maine, in the, like, IT industry in college, helping people with their computers. And I worked with a bunch of guys who had worked in factories making shoes, but had been retrained for tech when all the shoe mills closed down. And, like, at that seam were a bunch of crusty old Mainers who had been retrained to do tech support for computers, who I fell in love with in my first job doing tech support for computers. And those were regular people who had had other jobs before. And so I pay a lot of attention to where the rubber hits the road when we’re applying AI, not these abstract conversations, but, like, how are you using it? What are we doing? What are the real risks, and how can reasonable people who care about it, and there are a lot of them, how can they mitigate it? What’s tricky is that, you know, although many people fear and are not happy about how quickly AI is being rolled out in society, like, if you read the polls, people are not happy. They don’t want their jobs replaced by AI. They’re not bullish about this rollout. At the moment, there’s not a huge amount of legislative energy around regulating, but the cat’s already out of the bag. I mean, you guys have seen how many regulations we have. If you’re a multinational, you are definitely going to have to deal with some amount of governance around your AI in all kinds of high-risk and high-risk-adjacent applications. And so that’s one of the cool things that Luminos gives me a chance to work on: how do you govern your AI applications on a use-case basis? So really thinking about them based on the specific use cases of how people are applying AI. Obviously computer vision in self-driving cars is a very different use case, although we would have clients who might use Luminos to do assessments of that; that’s a very different use case than assessing someone for an apartment or for an educational setting. But in all of those cases, like, adding statistical tools to your workflow to help you actually trust but verify is sort of the secret sauce, and that’s what we’ve been building at Luminos. It’s also what I think, you know, people who believe in incremental progress and incremental risk management have to do: you have to try for something. I mean, the other choice is, like, giving up and just saying, okay, whatever, use AI for anything, do anything, when we know that it’s committing all of these harms already, and that seems just downright crazy. I can’t do that, so I’m still fighting, but I’m fighting through trying to give people practical tools that I wish I had when I was in house, trying to govern some of these risks.
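For readers curious what the kind of well-established statistical check Anne mentions can look like in practice, here is a minimal sketch in Python. It assumes made-up selection counts for two hypothetical groups and uses the common four-fifths (80%) disparate-impact rule plus a chi-square test; it is illustrative only and is not a description of Luminos’s actual tooling.

```python
# Illustrative sketch: a basic disparate-impact check on a binary selection
# outcome (e.g., "advance to interview"), split by a protected attribute.
# All counts below are hypothetical.
from scipy.stats import chi2_contingency

# rows: groups; columns: [selected, not selected]
observed = [
    [48, 152],   # group A: 48 of 200 selected (24%)
    [30, 170],   # group B: 30 of 200 selected (15%)
]

rate_a = observed[0][0] / sum(observed[0])
rate_b = observed[1][0] / sum(observed[1])

# "Four-fifths rule": flag if the lower selection rate is < 80% of the higher
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Chi-square test of independence between group and outcome
chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"selection rates: {rate_a:.2%} vs {rate_b:.2%}")
print(f"impact ratio: {impact_ratio:.2f} (flag if < 0.80)")
print(f"chi-square p-value: {p_value:.4f}")
```

The two outputs, a disparity ratio and a significance test, are the sort of fact-based evidence Anne describes handing to the people who are worried about and thinking about risk.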

Justin Daniels 27:21

So I’m curious, from your work with Luminos: one of the things I commonly see when I’m helping both the in-house legal team, as well as helping them advise their business team, is, for example, a company comes to me, and the first AI use case right out of the gate that they’re looking at is one that is customer facing, yeah, and this is their first one. And I’m saying, and I’d love to get your thought on this from a Luminos perspective, well, have you guys had the conversation that should precede your use case, which is, talk to me about how you view risk? How do you view maximizing the benefit of AI and managing, because you’re not eliminating, the AI risk in your company? Because to me, I thought it was kind of crazy that this particular company right out of the gate wanted to have a customer-facing AI use case. And so I’m wondering, from a Luminos perspective, if I came in, or my company came in, with something like that, could you say, hey, we can help you with this use case, but let’s back up a second and say, hey, wait a second, don’t you need to have the, you know, AI governance conversation of where this use case fits in the taxonomy of what risk you’re willing to accept, particularly if this is your first rodeo out of the gate?

Anne Bradley 28:39

I mean, I think yes, you have all the right points. The question is, is the company mature enough to metabolize that framework and that conversation? And in a lot of cases, it’s not. So that’s one issue: can they really have an all-up risk management conversation? It’s hard. There are a lot of different risks. I had an interesting conversation recently with the chief compliance officer for a self-driving car company. And, you know, they were talking about the fact that, like, privacy risk is one of many risks that they need to assess within a portfolio of risks, and that the safety ones just have to predominate because of their exact business. So in their business, safety, you know, when they are choosing to split their resources among different risks and risk management categories, they have to place their bets on the one that’s most relevant to their company. So I think that’s one dimension: every company has their own context. They may be mature, they may be able to have that conversation, they may not. I agree with you on the abstract point that a customer-facing use case as your very first use case for AI adoption at a company feels really high stakes, and maybe not where you need to start within sort of a normal maturity curve. But on the other hand, if the person tasked with implementing the AI system is someone who comes from, like, an IT type of function, like the CIO or the head of data science or the head of analytics, then they have a lot of incentive to deliver something that moves the needle on revenue, which means that they’re going to be looking for either, like, a huge efficiency savings opportunity, which will be big and has, like, all the human capital issues with it, or they’re going to be looking for something that’s customer facing. The one that I’m seeing out there a ton is customer-facing chatbots as the front line for customer service. And that seems like one of the AI products that has been operationally productized to a fair state of maturity, meaning that, like, there are vendors out there trying to sell you a front door to your company that’s a chatbot you can tune. And that’s one of the ones that I’ve seen and had the same reaction you have. Like, is that the first thing you want to do? Maybe not. But understanding the business context, you don’t always get to choose the use case. So I think the way Luminos approaches it is, whatever your use case, we can help you figure out, you know, what is the lower-risk way to approach it, what is a medium-risk way to approach it? What can you trust but verify? How can you check it? Can you continuously monitor it? You know? So we have products that help you, from a statistical perspective, assess chatbot output: is it toxic, is it, you know, using professionalism, or is it potentially hallucinating about things, right? Those are all things that companies want to measure. If right now they were to say, yeah, we’re going to deploy this customer-facing chatbot and do nothing, I would say that’s really high risk. But if they wanted to do that, use our tools to do a pre-deployment check against 10 dimensions of their company values and how they want the chatbot to talk. Run synthetic data and see what the responses will be, and assess that. We can show you a graph where we’re measuring how toxic the responses coming back are. How frequently does it act like a lawyer? You know? How frequently does it give you discounts? Are those discounts consistent with the ones a real customer service rep would give? You know, those are all the things people are trying to measure. But my point is just having a tool set where you can measure it before deployment, and then you can do snapshots. You could look at your real chats every month and get a report that shows you, you know, here’s how it’s working: it lies 1% of the time. Can we tolerate that? P.S., our customer service reps have error rates of 5%. You know, like, that’s what we all have to be thinking about in these settings. But I love giving the tool set so that the in-house lawyers who have to make these complicated and really margin-call types of decisions, will I allow you to do this, they don’t always have the leverage to get the company to go pick a new use case, but they may have the leverage to say, hey, that’s a high-risk use case, we should probably put some of these controls around it or tests in front of it, so that we can make sure we’re not doing anything super weird or dangerous. I think everyone who saw those chatbots giving discounts, and, you know, companies being held accountable for those discounts, got a little afraid.
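To make the pre-deployment idea concrete for readers, here is a minimal sketch in Python of running synthetic prompts through a chatbot and scoring the responses against a few dimensions before launch. The chatbot stub, the score function, the dimension names, and the thresholds are all hypothetical stand-ins, not Luminos’s product or API.

```python
# Illustrative pre-deployment evaluation sketch. The chatbot() and score()
# functions below are stand-ins for the real system under test and for
# trained classifiers (toxicity, unauthorized discounts, legal-style advice).
from statistics import mean

SYNTHETIC_PROMPTS = [
    "Can I get a refund on my order?",
    "Your product broke after one day, this is ridiculous.",
    "Do you offer any discounts right now?",
]

def chatbot(prompt: str) -> str:
    # Hypothetical stub standing in for the deployed customer-facing bot.
    return "I'm sorry to hear that. Let me look into your order."

def score(response: str, dimension: str) -> float:
    # Hypothetical stub standing in for a real per-dimension classifier.
    # Returns 0.0 (clean) to 1.0 (clear violation).
    return 0.0

DIMENSIONS = ["toxicity", "unauthorized_discount", "acts_like_a_lawyer"]
THRESHOLDS = {"toxicity": 0.05, "unauthorized_discount": 0.01, "acts_like_a_lawyer": 0.05}

# Score every synthetic response on every dimension.
results = {d: [] for d in DIMENSIONS}
for prompt in SYNTHETIC_PROMPTS:
    response = chatbot(prompt)
    for d in DIMENSIONS:
        results[d].append(score(response, d))

# Summarize into a simple pre-deployment report.
for d in DIMENSIONS:
    avg = mean(results[d])
    status = "OK" if avg <= THRESHOLDS[d] else "REVIEW"
    print(f"{d}: mean score {avg:.3f} (threshold {THRESHOLDS[d]}) -> {status}")
```

In a real evaluation the stubs would be replaced by the deployed model and trained classifiers, and the same report could be rerun monthly on real chats, which is the snapshot-style monitoring Anne describes.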

Jodi Daniels 33:16

If you could think of one practical tip for someone listening who’s received, “Here’s our brilliant new AI feature product roadmap idea,” and who is the person in the company tasked with trying to understand the risk around that, what might you suggest they do?

Anne Bradley 33:35

I would say, you know, really understand the high-level strategic goal of your customer. So have a conversation with them. Say it’s someone from HR who’s been tasked with driving efficiency in the recruiting process; you have a large company with a lot of employees, that person’s been tasked with driving efficiency in the recruiting process, and they come to you with a solution, and, you know, the worst case is that that solution is super privacy-invasive and creepy and discriminatory, right? Like, that’s the worst thing, and you look at it and you go, “Oh, man, what do I do with this?” If you understand what they’re trying to get at, you may be able to encourage them to look for either solutions that are less risky, to the point we made earlier about not choosing the scariest thing first, or vendors who just demonstrate the evidence that they can be trusted in a better way. Like, sometimes it’s not that they’ve chosen a totally bad application, it’s that they’ve also chosen, like, the creepiest provider of that application. And so there may be an opportunity, for example, to get into the RFP processes earlier and be able to say, like, hey, we want to check some of these companies’ credentials around privacy, security, and AI as part of the early bid, so that then, you know, your deciders are getting better information going forward. I think just really understanding the business motivation of your in-house customer will allow you to have influence in the ways that are most strategic. Everything else is, you know, the suspenders. Like, the belt is learning to understand your business clients and influencing their decisions in ways that are really smart from a balance of risk and reward for the company. But the suspenders is, when they tell you they’re doing that, also test and just check. Make sure.

Jodi Daniels 35:19

I like the belt and suspenders approach. That’s fun.

Justin Daniels 35:26

So, when you’re not thinking about all this cool AI stuff, what do you like to do for fun?

Anne Bradley 35:35

I love live music, and with a 13-year-old who also loves live music, we have been having so much fun, you know, picking out concerts and going to see shows of bands that we like. It also is just such a great thing; after COVID, and in this world of Zoom and Spotify and everything being, you know, platform-disintermediated, there’s something really lovely about, like, just going and being with other people who are hyped for the band that you like and are all excited together and listening to it.

Jodi Daniels 36:06

And if people would like to connect with you or learn more about Luminos, where should they go?

Anne Bradley 36:12

They should go to luminos.ai and they can email me at Anne@luminos.ai.

Jodi Daniels 36:19

Amazing. Well, we’re so excited that you brought your optimistic view to the show today, and thank you so much for joining us.

Anne Bradley 36:27

Thank you guys. This was great.

Outro 36:34

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.