
Intro 0:01  

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

 

Jodi Daniels  0:21  

Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and certified information privacy professional providing practical privacy advice to overwhelmed companies.

 

Justin Daniels  0:36  

Hi, I am Justin Daniels. I am a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.

 

Jodi Daniels  0:57  

And this episode is brought to you by... no one can hear you tap my headband, okay? Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. So we are recording on Halloween today, and when we scheduled this, some people said that they might bring their cool pirate hat, and I’m the only one here with a Halloween headband.

 

Justin Daniels  1:51  

I guess you’re a lot cooler than I am.

 

Jodi Daniels  1:52  

Oh, I’m glad you said that. So I think everyone, if you’re listening, you should go check out the YouTube. Though, I guess if you see this on LinkedIn, you’ll find my little witch hat. I started off with Mickey ears, but then I had to give those to the high schooler, and you have nothing. Zippity noodle.

 

Justin Daniels  2:12  

I left my pirate hat downstairs.

 

Jodi Daniels  2:15  

Okay, well, we know where your fun meter is. Alright, let’s get started.

 

Justin Daniels  2:19  

That was actually funny.

 

Jodi Daniels  2:24  

That’s two! I’m impressed. I am cool and I’m funny, and everyone listening heard that.

 

Justin Daniels  2:31  

Well, let’s get focused on our guest. So today we have Khurram Chhipa, the General Counsel of Halborn, which is a leading cybersecurity company in the Web3 space, with expertise spanning blockchain security, compliance, and digital risk management. Oh, he sounds like someone else I know. He brings a unique perspective to the intersection of law and technology. Outside of work, Khurram enjoys spending time with family and friends.

 

Khurram Chhipa  2:58  

Thanks for having me.

 

Jodi Daniels  3:01  

Well, we always like to start with how people got to where they are. So if you can share your career journey to now, that would be great.

 

Khurram Chhipa  3:11  

Absolutely. So I started out at a law firm, worked there for a little bit, and then I navigated to the financial services space. Worked there for a number of years, and then I decided I wanted to do work that was more meaningful. You know, I worked at large firms and enterprise institutions, and I had an opportunity to join a startup, which is my last role, and I thought it would be an opportunity to, you know, just work for a smaller team and provide more meaningful contributions. I did that, decided to jump into the wonderful world of crypto. Did that for a few years, and then I landed my role at Halborn, where, you know, I wanted to stay in crypto, but I wanted to explore a different area, and cybersecurity, as you guys know, is a very rapidly evolving and critical space. So I loved making the move. I had the opportunity to join Halborn, which is a leading cybersecurity company, and here I am today, eight months in. It’s going great, very busy, but busy is good this time of year. So yeah, that’s how I’ve ended up at Halborn.

 

Jodi Daniels  4:20  

Wonderful. Well, thank you for sharing.

 

Justin Daniels  4:24  

So as you alluded to, Halborn operates at the intersection of cybersecurity and blockchain, two of the fastest moving fields in tech. How is AI changing the way blockchain companies think about security risks and threat detection?

 

Khurram Chhipa  4:40  

Yeah, it’s a great question. So I think before this AI boom, threat detection was more static. You know, you would identify an issue and then you would work to resolve it. I think what AI is doing is allowing companies like Halborn to identify the threats that we are facing sooner. A lot of companies are using AI for predictive modeling, and I think that has shifted threat detection from static to adaptive solutions. So I think, you know, AI allows us to do things a lot faster and allows us to predict. It almost reminds me of Minority Report, where you’re able to see things happen before they actually occur, right? We’re not that advanced yet, but, you know, AI gives us the opportunity to predict, just based on the threats today, where the threats may evolve to. So it’s exciting. It’s also changed the game, I would say, considerably. The threats have evolved, but, you know, cybersecurity solutions are evolving with those threats as well.

 

Jodi Daniels  5:55  

Your background reminds me of Minority Report. It just, I don’t know, it kind of feels Minority Report.

 

Justin Daniels  6:02  

Ish. You actually saw that movie?

 

Jodi Daniels  6:06  

It was a good movie. I can’t, I can’t quote anything. I’m not a movie quoter, but I do actually see some movies.

 

Justin Daniels  6:14  

Okay, it’s good to know.

 

Jodi Daniels  6:18  

So speaking about AI, AI models are being used in a lot of different ways. Some of them include audits and generating smart contracts. So how do you see AI changing the risk profile for blockchain applications?

 

Khurram Chhipa  6:37  

Yeah. I mean, as you mentioned, you know, these AI models, they’re very powerful, but from my experience using them, they’re still heavily flawed, and I think there is still human oversight that’s needed to keep those checks and balances in place, to make sure they’re being used correctly. You know, for me, I approach LLMs like I treat a junior associate, right? Like, I have it help me work on matters that I understand, that I can issue-spot and find errors in. And also, like, I use it for subject matter I’m familiar with, because what I’m finding is, when I use LLMs, as powerful as they are, they are still flawed. And for me personally, I’m very hesitant to use them for subject matters that I don’t have a good understanding of. So I think, you know, they’re very powerful tools that can be used, but I still think there’s an element of human oversight that’s needed.

 

Jodi Daniels  7:35  

So can you share a little bit more? Maybe, if you were to think about advice you would offer to other general counsels or security teams, what are those safeguards that you would suggest? How should someone take the output and review it? Or are there other safeguards that you think teams should put in place?

 

Khurram Chhipa  7:57  

Yeah, it’s a great question. So I think it starts with just updating our playbooks and, you know, for finding critical errors, developing a process where there’s some human oversight involved, whether it’s in the beginning of the process or towards the end. I personally think, at least today, given that LLMs are still evolving, there should be some continuous oversight throughout the process of using these LLMs. But I would start with generating playbooks. And, you know, as a general rule of thumb for me, I think if you can’t issue-spot, if you can’t spot errors, then you shouldn’t use it for that subject matter, right? I still think, you know, there’s still a significant need to use outside counsel, like Baker Donelson, for example, for specific subject matter expertise. But I think, you know, it starts with establishing where to use it for specific use cases, making sure that the person using it can issue-spot and can spot errors as needed, and then working from there.

 

Jodi Daniels  9:11  

So Justin, what advice would you offer? I know you talk a lot about this as well.

 

Justin Daniels  9:20  

I have to be honest, my thinking is evolving, because one of the things I struggle with with AI recently is: AI can make you more efficient. You know, get the contract reviewed faster. But is it making you better? Because if it doesn’t do it right, or there isn’t a human in the loop, all you’ve done is make bad decisions faster. So what I’ve tried to come around to, Khurram, and I’d love to hear your view, is my favorite and, to me, most strategic AI use case that I have right now: I actually use it for negotiation preparation, to not only anticipate the other side’s arguments, but to frame it in a way that’s tailored to the negotiation style of your counterpart, which could be, they could have a big ego, they could be very analytical, they could be an empathizer. And the whole point is, how do you then use AI in the context of a negotiation to get a better outcome, which is different than just mere efficiency?

 

Khurram Chhipa  10:18  

I agree. Excuse me, I’m fighting a cold today. I think that’s one very powerful use case, to help streamline negotiations. I’ve done that myself as well. Also, you know, a lot of times, companies like Halborn and other crypto companies, you’re asked to provide very niche output. And I’ll use AI to kind of just think through it. You know, one example: I was asked the other day to create a niche template for a very specific use case. So that’s another instance where I used an LLM to kind of figure out, based on the feedback from the business, how to capture that, and then that just starts the thought process and the flow. So, like I said, it’s almost like a junior associate. I almost feel like I have conversations with my LLM, where it’s like, oh, what about this? No, you forgot about that. So for me, it helps get the juices flowing from a thought process standpoint, just to come up with the ideal end product. But I agree, Justin, I think for negotiations it’s very powerful. Sometimes it’s good to have that feedback to help get the juices flowing.

 

Justin Daniels  11:43  

Because I guess the other thing to think about is, AI, its default coding is it wants to be helpful. So what I feel like sometimes you have to do is be very hard on it: you know, don’t be my cheerleader, I need critical feedback. Because it always seems to want to be your friend and be helpful to you, when sometimes what you need is sober, detached judgment, especially, you know, for what we do as attorneys. So I was curious as to how that may inform how you interact with the LLM, because sometimes you really don’t need a cheerleader. You need someone to really give you that hard-nosed, objective look to see blind spots that maybe you didn’t catch the first time around.

 

Khurram Chhipa  12:26  

Yeah, it’s funny you mentioned that, because, you know, I was at a conference last week for crypto GCs, and I had this conversation with somebody that I just met, a fellow GC, and his advice, based on his experience, was that you should be very curt with the LLM. He even recommended swearing in some instances. I don’t know if I would go that far, but I think, to your point, these LLMs, from my experience, they tend to cater to you. I think they’re coded to, in a sense, please you and give you the answers that you want to hear. But I think to really get the best use out of these LLMs, you have to challenge it, whether it’s, you know, by swearing at them, or just being very curt and challenging. I do find, from my experience, that the more you challenge it, the better it’s able to adapt and provide better responses. I’d like to see you swear.

 

Jodi Daniels  13:32  

I’ll say, that’s not right, or, I don’t want that. I’m just not gonna swear.

 

Justin Daniels  13:37  

Really? I’ve shamed it. I wanted to draft a clause one time, Khurram, about, you know, machine learning, and it wouldn’t do it. And I said, well, if you won’t do it, I’m going to fire you and use another LLM.

 

Khurram Chhipa  13:48  

I find that very helpful, when you’re threatening to go to another LLM. It’s like, you’re using Perplexity: oh, why don’t I go to ChatGPT? And, no, no, wait, wait, I think I got it. You gotta, you gotta play them off of one another. I think that’s the trick.

 

Justin Daniels  14:03  

You’re making a joke, but it makes a really good point as to how people can work around the coding of LLMs and get them to do things they’re programmed not to do, which goes to Khurram’s other point about why it’s so important to have a human in the loop, to really be able to evaluate critically the information that’s being provided, because that’s what people can do.

 

Justin Daniels  14:26  

I’ve done it. So anyway.

 

Jodi Daniels  14:30  

Now that we’ve established, don’t be nice.

 

Justin Daniels  14:34  

I know we’re going against the golden rule, but it’s kind of justified in these circumstances. Anyway, why don’t we talk a little bit about your views on the AI arms race specific to cybersecurity, and what it means for blockchain companies that operate in such a high-value, high-visibility environment. And the other key thing that people don’t appreciate is, when you have a cyber attack on a blockchain, it happens so quickly, and there are very few ways to stop the funds, or whatever it is, going from the smart contract to somewhere else, which makes defense so important.

 

Khurram Chhipa  15:13  

Yeah, I think so. My personal viewpoint is that it’s exciting but also terrifying at the same time. Because these bad actors, they are becoming more sophisticated in their approaches to cyber attacks, and we’ve seen a number happen over this past year alone. I think it’s forcing companies like Halborn and other cybersecurity providers to really level up their game and evolve. And also, with respect to AI, I think over time it’s gonna force us to provide more sophisticated technology. So yes, it’s scary that, you know, we have these threat actors that are becoming more sophisticated, but I also think it’s forcing positive change. Things tend to be reactionary when it comes to this space, so I think it’s a good thing. And I think over time, you’ll see the way we create these protections, the way we advance AI, it’s just gonna become more sophisticated, and it’s gonna prepare companies like Halborn to better address these threats. So it is forcing innovation, and it’s forcing advancements in AI on the positive side. Sometimes you need the bad to advance the good.

 

Jodi Daniels  16:33  

Justin really wants to talk about deep fakes. I wouldn’t want to take away your favorite topic. I don’t know if this is you. Look at me, I’m being, I am not being an LLM, I’m being really kind. You don’t even know what to do with that.

 

Justin Daniels  16:53  

I’m speechless. I don’t know what to say anyway.

 

Khurram Chhipa  16:57  

The conversation is positive, so it’s a happy calm. It’s not a, you know...

 

Justin Daniels  17:03  

Oh, give it five minutes, when I make my next domestic transgression. Anyway, as you and I have talked about, and I’ve written a good bit about, this is the biggest thing right now that scares me from a cyber threat perspective: the deep fake. And I wanted to ask you if you had any thoughts around how companies should be prepared for deep fakes of their CEOs, where the response time will be measured in hours and days, not weeks.

 

Khurram Chhipa  17:32  

Yeah, no, it’s a great question. I think, you know, before deep fakes became a thing, most companies like Halborn, we have disaster recovery and business continuity plans. And I think now, with this threat of deep fakes becoming more common, I personally think it’s going to involve updating those playbooks to allow for more rapid responses. I also think it’s going to require some element of custom training specific to deep fakes: how to spot them, what security measures should be taken, ensuring that employees at these companies aren’t falling for these deep fakes. I also think it will require implementing more advanced security features and security measures across the board, just to make sure that there are checks and balances in place when a deep fake threat presents itself.

 

Jodi Daniels  18:26  

Do you have any recommended check or balance that you might suggest?

 

Khurram Chhipa  18:32  

Yeah, I think so. One thing we do is limit communications to a specific forum. What I’ve seen happen, what I’ve read about happening with deep fakes, is that employees will receive a message through a different forum, a different communication method. I know at my last role, some of the phishing spam messages we would get would be over WhatsApp. But, you know, our corporate policy was to never use WhatsApp for any communication. So right off the bat, we knew that something was off. I would also recommend trying to create some consistent, independent verification method: you know, if you suspect there’s a deep fake, using a specific form of communication to verify. We also have our general security measures, where we flag things if we become suspicious. And also, I think there’s an element of common sense that needs to be applied. You know, if my boss is sending me a message asking me to wire $100,000, that should raise a red flag, right? So it’s just a matter of being vigilant, always staying on guard, and just trying to use common sense. But I think it starts with creating a secure framework, a process that’s easy to follow and repeatable for all employees. I think if you create a robust process, that’ll weed out these deep fakes and the other threats that we face. I mean, it’s so common. Companies like Halborn, especially companies in the crypto space, you see these messages and these attacks come in time and time again. Since I’ve joined the crypto space, I’ve lost count of how many spam messages and spam calls I get on a day-to-day basis. But yeah, I think it starts with just creating a robust process that’s repeatable, you know, that allows all employees to play a part and be vigilant.

 

Justin Daniels  20:39  

That’s very helpful. I guess the thing I would add, or maybe get your thoughts on, is I’ve now been trying, depending on who the client is, particularly if it’s publicly traded, to incorporate some type of deep fake into a tabletop exercise, to give the non-cyber folks a real feel for how this works, how easily it can go viral, and how quickly you have to respond. Because whenever I do incident response plans, I don’t want to draft a 50-page manual on how to respond to each little thing, because no matter what you come up with, whatever actually happens will not be in your manual, and no one’s going to read it when it actually happens. But the thing is, I think you do need some specific little playbook for this, because there’s so much time pressure involved in responding. Like, you need to know what tools you’re going to use to authenticate that it’s a deep fake, to be evidence for your counter-narrative as to why it’s fake. And, you know, who’s going to be involved in creating the crisis comms. It could be at two or four or five in the morning. That’s why I’m thinking it’s the one time I would depart from my rule that I really don’t like to have a specific playbook, because no one’s gonna read it when it happens.

 

Khurram Chhipa  21:49  

Yeah, and I agree. And I think to that point, that’s where tabletop exercises become critical. You can prepare all you want, you can write the best policies and procedures, but until you’ve actually gone through it, you won’t really be able to spot in real time how to respond properly, where the threats are identified, who plays what role. Another thing to add, which I think is important in these scenarios, is having a RACI chart or some sort of document that shows who does what in these situations, because the worst thing that could happen is this occurs and then everyone’s just scrambling, trying to figure things out. So, you know, I think preparation is critical in this case.

 

Jodi Daniels  22:34  

With everything that you know, we always ask: what would be your best personal cyber or privacy tip?

 

Khurram Chhipa  22:43  

I would say, and I think this comes from one of the Bitcoin mantras: don’t trust, verify. And I think that sums up this entire conversation, where you want to have some human element, you want to be able to have some human oversight over things. But, you know, even if you get a message from somebody you know, somebody you trust, I think it’s important to have some sort of process where you can verify the authenticity of the sender and verify that the message itself is authentic as well. So yes: don’t trust, verify.

 

Jodi Daniels  23:24  

And when you are not managing all things legal and cybersecurity and AI, and looking up new words, what do you like to do for fun?

 

Khurram Chhipa  23:38  

Yeah, I love spending time with my family. I have two young boys, nine and four, so they keep me busy. I try to stay in shape, I try to exercise on a regular basis, I play basketball, and I spend time with my friends as well. I think, you know, having kids and being in such a high-octane industry, it’s important to have some downtime and enjoy the little things. So whatever I can do with my kids, whenever I can spend time with them and my wife as well, that’s the time I value the most.

 

Jodi Daniels  24:12  

Amazing. Well, thank you so much for joining us. If people would like to connect and learn more, where should they go?

 

Khurram Chhipa  24:17  

Yeah, thank you for having me. You know, I’m available on LinkedIn; you can reach me there. You can also email me at my Halborn email address, my first name: khurram.chhipa@halborn.com.

 

Jodi Daniels  24:35  

Amazing. Well, thank you again for joining us. We really appreciate it.

 

Khurram Chhipa  24:38  

Yeah, thank you for having me. This was fun. I appreciate it. I love what you guys do, and it was great. And Happy Halloween.

 

Jodi Daniels  24:44  

Happy Halloween, everyone. Of course, it will have happened by the time this airs, but that’s not the point. We hope you had a great Halloween.

 

Outro 24:51  

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.