Intro 0:01  

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

 

Jodi Daniels  0:21  

Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and certified information privacy professional providing practical privacy advice to overwhelmed companies.

 

Justin Daniels  0:36  

Hello, I am Justin Daniels. I am a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.

 

Jodi Daniels  1:00  

And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. Well, today we are recording, and it is the beginning of Cybersecurity Awareness Month. It is. You don’t seem quite as excited about Cybersecurity Awareness Month.

 

Justin Daniels  1:46  

I guess I didn’t have as much coffee as you did today.

 

Jodi Daniels  1:48  

You don’t drink any coffee. Maybe you should start. Well, today we have — what did you call this episode?

 

Justin Daniels  1:57  

The Four Horsemen.

 

Jodi Daniels  2:00  

We have the Four Horsemen. It’s a little bit different. We have two guests. We have Jennifer Miller, who is Grammarly’s General Counsel. She focuses on enabling Grammarly to grow and innovate while carefully managing business risk. Her responsibilities include everything from navigating AI and regulation to scaling the company’s managed business. And we also have Suha Can, who is Grammarly’s CISO and VP of Engineering, leading security, privacy, compliance, and identity for the company globally. He’s dedicated to securing the data of Grammarly’s over 30 million users and 70,000 teams at enterprises and organizations worldwide, and we are so excited that you are both here to join us. And I shared a little pre-show: my daughter is a huge fan and big user, so thank you, because that is probably one of the only ways she is making it through school. Welcome to the show.

 

Jennifer Miller  3:02  

Thank you so much. 

 

Suha Can  3:03  

Oh, thanks a lot.

 

Jodi Daniels  3:06  

You are strange today. What if I tell you now? I don’t know. All right, it’s your turn.

 

Justin Daniels  3:12  

Well, this is the first time in a while we’ve had two guests, so I guess what we’ll do is let each one of you take turns telling us a little bit about the career journey that got you to this point. We’ll start with the General Counsel first.

 

Jennifer Miller  3:28  

Okay, well, first of all, thank you again for having us join you today. I am General Counsel at Grammarly. I’ve been here about 10 months now, and my career journey started, actually, even before law school. I started my career in politics. I worked for the former governor of Maryland for a number of years, and then decided, oh, I think I’ll go to law school; all the interesting policy work is being done by attorneys. And so I went to law school in Washington, DC, and then started my career in internet and privacy and trademark and copyright law, and never looked back at government; now I just volunteer instead of working there. I started my career at a big law firm in Washington, DC, then went to a small media and telecom boutique, also in Washington, DC, and then moved to California about, oh gosh, 18 years ago, and that’s when I took my first in-house role, at Cisco Systems. I’ve been in-house ever since, and I’ve had a number of really interesting roles, including General Counsel at a couple of different really disruptive and innovative technology companies, including one that had a fleet of stratospheric internet balloons, which was called Loon, part of the Alphabet moonshot family. And then I also worked for a company that had autonomous delivery robots on college campuses that navigated through really sophisticated AI and machine learning. And I was just so excited to be introduced to Grammarly, which, Jodi, like you said, my kids are similar; everyone in my family loves Grammarly. So I was thrilled to join a company that was so innovative in the AI space, and that is focused on communications, because I know that communications makes all of us more valuable in terms of getting our jobs done every day.

 

Jodi Daniels  5:13  

Suha, that’s your turn now.

 

Suha Can  5:15  

Thanks. Thanks for having us again. Yeah, so a little bit about myself. I always had an intense interest in security, and I was involved with security at an early age, and at some point I started to make it a career. So I moved to join Microsoft about 17 or 18 years ago. I guess when Jennifer was going to her first in-house role, that’s exactly around the time when I joined Microsoft as a security engineer and started working on zero-day research. That was a team at Microsoft called the Microsoft Security Response Center that really was all about responding to the most major incidents in the company, and I had a lot of fun working with, frankly, a lot of extremely high-caliber security engineers in the wild world of exploits: how can we harden the operating system to withstand exploits? Kind of an assume-breach mindset; how can we build defenses into our products by default? So I spent quite a bit of time there. I also spent several years at Amazon as the Director of Payments Security. Amazon had these really highly scaled data centers where it was still processing all of its payments, and it decided to move to AWS. My team essentially rebuilt the entire payment infrastructure of Amazon in AWS, which is probably one of the most sensitive systems at Amazon, so it was a pretty big deal for us to be able to do that. After about 20 years between these two companies, I joined Grammarly as its first CISO a little over two years ago, and I’m loving it. I love working at a smaller company that is working in a highly innovative space at a high pace, and doing something good that helps people communicate better, which is a very near-and-dear-to-my-heart type of problem. And I love being here.

 

Jodi Daniels  7:26  

I love how the mission is so important in the daily roles that you both have.

 

Jennifer Miller  7:32  

Indeed, yeah, we take that very seriously: our mission of improving lives by improving communications, and also our EAGER values, which we live by in terms of being empathetic and gritty. It makes it a pleasure to work here.

 

Justin Daniels  7:45  

So can you share how you came to create your privacy and security program, values, and philosophy, which center so much around trust?

 

Suha Can  7:56  

Sure, I’ll take a stab. I think this is the very first problem that I had when I first joined the company: here’s a company that truly cares a lot about user privacy, and how do we take that to the next level and scale it? And I think it starts with a philosophy. And you can’t just sit and come up with a philosophy and values all by yourself; you need to really understand the position of the company in the grand scheme of things, both the external landscape and what is going on there, but also internally, how the company thinks through innovation and bringing value to customers. And it starts with who Grammarly is. Grammarly is a communication assistant that’s leveraging AI. Communication is a deeply personal thing, and it obviously is a type of product that really will help the customer in a lot of different ways, being aware of maybe the most personal and most important things to that customer. So when you’re a company like that, you need to have a very high bar on user trust and privacy. You need to put the user in the center of your philosophy. So the philosophy came first from an understanding of what an important, mission-critical thing privacy is to this product and to this company. And with that, we arrived at it: from “we really care about security” to transparency and control. So we came up with this high-level strategy for the company. Our approach to user trust is to provide transparency about how we are using data, and to provide control and user agency. And from there, once you have that strategy, you need alignment. So the second piece of this: okay, now that we dreamed up a vision of where we want to be and how we want our product to be, I need to bring the rest of the company along. So we started having discussions with the senior leadership team and explained to them: here’s what we mean by transparency and control, here’s what it means day to day as we build products, and here are the things that we need to do. So it’s really those two things: understanding what type of product you are and how important privacy and user trust are for you, very, very important, and then coming up with a strategy and really rallying the company leadership around it to get that high-level strategic support. So that was our journey. And I think this really describes the first two or three months on the job, when I was asking myself these questions a little over two years ago.

 

Justin Daniels  10:34  

So I have a follow-up question, because it’s very unusual for us to have a General Counsel and a CISO on a call, so I want to ask you both this one. I’m working on a negotiation, and the issue that’s come up is all around limitation of liability on a deal where we’re talking about, you know, software. And one of the ways that we want to address this is to say, hey, we’ll take on more liability if you have more segregated baskets: each company will have a different basket, a segregated domain for their stuff. And what I want to understand is, in just that context, how has the relationship between the General Counsel and CISO evolved? Because talking about technical security solutions that could impact limitation of liability really requires the ability to translate technical stuff in a way that lawyers who aren’t as technical as the CISO can understand, and then translate that into explaining to the executives, hey, what’s the risk on this deal?

 

Jennifer Miller  11:40  

I guess I can start with that one. Yes, please. This is actually one of the reasons that I really wanted to come to Grammarly. Because even when I started interviewing and was meeting with the executive team, I asked questions about how the legal team is perceived, whether they have a seat at the table, and how they engage across the company, and I have not been disappointed since I got here. And so Suha and I meet; it’s as simple as that, we meet regularly. We actually talk quite regularly, both scheduled and unscheduled. And then Suha and the folks on my team who focus on privacy work together all the time. Now, I also have a commercial legal team, and that gets into your question about how we think about contracts cross-functionally. I’m very fortunate to have an extraordinarily collaborative legal team that works between silos, if you will, between the commercial team and our product and privacy team. And we’re constantly thinking about how we put ourselves in the shoes of our customers so that we can proactively, and in advance, address those things that we know our enterprise customers are going to care about. And so, as I questioned what processes and controls were already in place when I was thinking about joining Grammarly, I felt like this was a place that really understood that cross-collaboration between the trust team and the legal team, and it has been so fruitful in the 10 months I’ve been here, and we just expect it to continue this way, because we’ve had such success to date. And we really do, like I said, put ourselves in the shoes of our customers and think ahead. So, as Suha was talking about before, we’re SOC 2 compliant and ISO certified; these sorts of things matter to our enterprise customers, and because of that, we’ve thought ahead and we’ve already checked those boxes, which makes it a whole lot easier to work with our enterprise customers who are thinking about these things every day.

 

Jodi Daniels  13:41  

I am so excited to hear that you think about your customers first. I emphasize this day in and day out, and in what you just described, especially on the B2B side, where these customers care and they are looking for it, what you and the company are doing is just so important. And one of the ways I think Grammarly does this so well is by being transparent right at the very beginning. Externally, you have created an amazing public-facing page. I’ve used it as an example when trying to tell others you need something like this; it might not have to be quite as fancy as theirs, because they’ve done a really good job. What was the path to getting this type of transparent page up? For those who are not familiar, I encourage you to go find their privacy, security, and AI page. It has a button that can explain, here’s our privacy practices, here’s our security practices, here’s how we believe in AI. You’ve done all of it, and I’d love to be able to help others on this path. So if you can share, how did it come to be? And it’s not just a static page; you actually maintain it, because the product continues to evolve.

 

Jennifer Miller  15:06  

Suha, do you want to take this or would you like me to start? 

 

Suha Can  15:09  

Jennifer, why don’t you start this one? 

 

Jennifer Miller  15:11  

Okay, well, it was really interesting, because when I got here, of course, my whole team, we all looked around: what do we need to do? Suha and I started talking from the earliest days about how we enable our products to continue to evolve. And so, Jodi, exactly to your point, we made the decision to put together these technical specifications in a separate document that’s linked to our privacy policy, so that we could continue to evolve our products and innovate as rapidly as we possibly could without always having to update the privacy policy. We wanted that technical specifications page to be clear. We looked around at best practices, at what other companies were doing, and we said, you know, this is the right time for us to do this. We want to match these best practices, and in some cases, we want to be the leader. We want to show other companies in the AI space that this is doable, and that they should put themselves in the place of the consumers and be really transparent. And of course, we had lots of discussions: is this the right thing to do? Is it the right time to do it? But the entire team here, from top to bottom, came to understand that this was really the best way to be the most transparent and to continue to push ourselves to innovate as quickly as possible and develop new products. Because I can imagine a world, if we did not have that page, where we have to update our privacy policy every few months, and that would just get burdensome. We want the privacy policy to be the overarching umbrella philosophy of our company, and then we want to be able to talk really clearly about the different products that we make. And we have so much ambition for developing all these really interesting features and functions in the future that we felt this was the best way to do it. We were so pleased to get such a positive response to the change.

 

Suha Can  17:01  

And if I can add one thing: as Jennifer is telling this story, one thing really strikes me, and it’s one of the things I like most about working at this company. You don’t often hear the legal team talk like this. You don’t often have the legal team championing transparency. And I think this is one of the many reasons we are able to have the impact and lead the company in the direction that we would like to lead it: it’s because of this mutual understanding and sharing of the values and of the type of, you know, AI provider that we would like to be. So just even here, I’m noticing how unusual it is, just reflecting on some of my past life and what I hear from my peers in the industry.

 

Jennifer Miller  17:55  

That’s true. Often legal teams are bemoaned, if that’s the right word, as the black hole, the Department of No, and we actually strive to be the Department of Business Enablement here. And I’m so proud of the team that we’ve built, because we actually exemplify that, I think, every day.

 

Jodi Daniels  18:13  

Can you imagine if the legal department was renamed the Department of Business Enablement? That would be awesome. That would be so fascinating; it would be a fun trial to change the names a little bit like —

 

Jennifer Miller  18:27  

Well, it’s certainly part of it. It goes to our legal team mission, separate and apart from Grammarly’s mission, but our legal team mission to really be a strategic partner that helps the business solve its problems, instead of just saying no all the time. And I’ve been at so many companies where that Department of No means that people don’t even talk to the legal team. They’re afraid to bring questions and concerns, and we’re not that way.

 

Justin Daniels  18:54  

Interesting. For the last hour, I’ve been the legal prompt engineer. But I want to follow up on something that you talked about, because in the time we’ve been interviewing, the ideas of trust, transparency, and customers have been very much at the forefront. And Jodi and I both do a lot of advising on AI, and we see companies that are all over the place. So I was wondering, and I’d love to hear both of your perspectives: it sounds to me like the customer, trust, and transparency have really infused how you think about AI. Because since we started in January, the EU AI Act has come to the forefront. Now Colorado has an act; Utah and California just passed something. So I guess you have these principles that are infusing what you’re doing, so that you’re proactive: when the regulations do hit, you’re already ahead of the game, because you’re acting in a way that’s really consistent with what these regulators are trying to do. I’d love to get both of your perspectives on that.

 

Suha Can  19:55  

Yeah, sure. Why don’t I start, and maybe Jennifer can add on the regulation side; I think that’s a very interesting element of your question as well. But on the infusing part: how do you infuse the element of trust in your programs? It really is what sets apart a successful and effective trust team from one that merely checks the boxes, as they say. And I think it has multiple elements for me, and I have done this in a number of companies now. It really starts with having a very strong team, and a strong team that is not just technically world class, but also understands and is able to instill a culture of security. This first piece of infusion, creating a strong trust team, is where most companies fail. This is not an easy thing to do. First you have to be able to hire and bring people into your company. And I think here, typically, what I found works best is that there’s a group of people who I’ve been working with over the years, and they follow me, and some of the leaders under me, from company to company. And that’s kind of like compound interest applied to talent, because you are always more experienced, and on day one you know each other and trust is already there. So you don’t have to go through this figuring out who the person is and what they like and what they don’t like, my weaknesses, your weaknesses; we already know this. So you start from where you have already spent a lot of time. Now, the strong team is one thing, but then that team has to set security expectations for the company, and that is also something that can really go wrong. We have policies, processes, and all of that can be viewed as security expectations, but you have to be clear. If, for example, someone has to launch an EC2 instance, they need to know how to launch a secure EC2 instance, and you need to make it really easy for them to do it.

Otherwise, they just won’t do it. They will just work around you. So this is where a really successful security team is one that is not merely an advisor telling people, you know, follow this guidance, but is actually a builder, and it builds the tools and infrastructure for the whole company to be able to easily meet the expectations that it sets. It’s a builder. That’s how I look at the security profession. And similar to the discussion we just had about the Department of No, it is not a no-advisor, a consultant pronouncing from some high castle how you should be building software. It’s a set of individuals who are builders and who are able to make it easy for all teams to meet the security expectations. Now, I think that’s the first element. Then you also have to make sure that, generally, you don’t want security to be distinct to the security people. You want it to be everybody’s job and responsibility. And I think the way you do it is really about integrating these tools into product development right from the start, but also, when it comes to reporting and measuring success, creating that shared ownership with the various departments. An example of this: in engineering there’s a dashboard, and at Grammarly, engineers can go see the product bugs that customers have reported; in the same dashboard, you integrate the security issues that need to be taken care of, so that it’s part of the main highway of development in the company, rather than this thing you have to do to, kind of like, please the security guards. And that’s also very, very important. So this builder riding in the trenches with you, who can dive deep and build solutions to make it easy for people to meet the expectations, is what I think has been working. And the final thing is, security has a never-done aspect to it. So you also always have to look at what’s changing in the external landscape.

What are the types of vulnerabilities and regulations that are just appearing on the horizon, and how do you continuously improve your program to be able to meet that demand? That’s the unknown part of security, which makes it super exciting for me. So it’s not a feature or a capability you build and forget; it’s never done, and you always have to be learning, and that, I think, keeps it very interesting and exciting for us to spend our lives on.
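The “paved road” Suha describes, where launching a secure EC2 instance is also the easy way to launch one, might look something like the sketch below. This is purely illustrative, not Grammarly’s actual tooling; the helper name and chosen defaults are assumptions.

```python
# Hypothetical sketch of a "paved road" helper: instead of documenting
# secure-launch rules, the security team ships a function that bakes
# them in, so developers get secure defaults for free.

def secure_instance_params(ami_id: str, instance_type: str) -> dict:
    """Build EC2 launch parameters with secure defaults pre-applied:
    IMDSv2 enforced, EBS encryption on, and no public IP address."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        # Require session tokens for the instance metadata service
        # (IMDSv2), closing off classic SSRF credential-theft paths.
        "MetadataOptions": {"HttpTokens": "required", "HttpEndpoint": "enabled"},
        # Encrypt the root volume by default.
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda", "Ebs": {"Encrypted": True}}
        ],
        # Keep the instance off the public internet unless explicitly changed.
        "NetworkInterfaces": [
            {"DeviceIndex": 0, "AssociatePublicIpAddress": False}
        ],
    }

# A developer just calls the helper, e.g. with boto3:
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(**secure_instance_params("ami-0abcdef", "t3.micro"))
```

The point of the design is that the secure path costs the developer nothing extra, so there is no incentive to work around the security team.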

 

Jodi Daniels  24:40  

I’d love to talk a little bit about some of the products, and to take that concept of vetting and reviewing: if we were to think about the AI portions of the product, what does that review process look like? It seems like there’s an incredible sense of collaboration. Say I’m a brand-new product engineer and I have a brilliant new plan. What happens? What does that look like?

 

Suha Can  25:05  

Yeah, of course. I think maybe I can continue on this. So firstly, there’s a concept of experimentation that we have. We are living in a highly innovative space, and not everything has to be reviewed from the first day it forms in someone’s brain as an idea. So you have to give agency to your internal customers, which are your developers, to be able to experiment with and understand a particular feature they want to build in a safe way. You need to make it easy for the developers to experiment with things. Once they make a decision to ship it in a product, a number of things happen. First of all, as I mentioned, our builders have deployed a number of responsible AI tools that a developer uses when they write code, which enable them to quickly assess whether their feature is going to lead to bias issues, to human safety issues, or to misalignment-with-human-values types of issues. We have built these into our deployment pipelines, and these tools are available for developers to self-serve and do this testing themselves: models checking on other models. We build models that are able to look at other models and identify whether they will exhibit some of these safety issues that we do not want to go in front of our customers. We also have humans: human analysts on our responsible AI team who are doing red teaming, bending and twisting the product and the new features under development to their will, to see whether they will misbehave. So this red teaming is another tactic that we employ, and typically these are the types of things we do.

We heavily focus on automation and scaling, but we also leverage our expertise in AI to test our AI solutions, and the human creativity and offensive mindset that the red team brings to the table enable us to uncover issues; I think it’s going to take AI a great many years to replicate that level of creativity. So that’s how it looks from a product assurance perspective at Grammarly. And this is coupled with a legal review; once both of these things give us enough confidence, we are comfortable shipping the feature to our customers, of course keeping in mind the agency, transparency, and control principle. Obviously, these processes enable us not just to root out egregious issues, but also to make sure that features that, for example, leverage certain customer data are going to light up all of the transparency and control framework that we have to ship every feature with.
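The “models checking on other models” gate Suha describes might be wired into a deployment pipeline roughly like this minimal sketch. The keyword-based checker below is only a stand-in for a real trained safety model, and every function name and threshold here is hypothetical.

```python
# Hedged sketch of a model-checks-model release gate: a checker scores a
# candidate feature's sampled outputs, and the pipeline refuses to ship
# if any sample looks risky. The checker here is a toy placeholder.

from typing import Callable, List

def checker_score(text: str) -> float:
    """Placeholder safety checker returning a risk score in [0, 1].
    In practice this would be a trained model, not keyword matching."""
    flagged = ["password", "ssn", "hate"]
    hits = sum(1 for word in flagged if word in text.lower())
    return min(1.0, hits / 3)

def release_gate(candidate_outputs: List[str],
                 score: Callable[[str], float] = checker_score,
                 threshold: float = 0.5) -> bool:
    """Return True (ship) only if every sampled output stays under the
    risk threshold; a single risky sample blocks the release."""
    return all(score(out) < threshold for out in candidate_outputs)

# Wired into CI, a failing gate stops the deployment:
assert release_gate(["Here is a clearer way to phrase that sentence."])
```

Because the gate runs automatically in the pipeline, it complements rather than replaces the human red teaming described above.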

 

Jennifer Miller  28:04  

Oh, may I jump in? 

 

Justin Daniels  28:06  

You’re the guest, please. 

 

Jennifer Miller  28:08  

Cool. I was going to double-click on two things that Suha said. Well, first, I wanted to give a shout-out: we have a fabulous responsible AI team who actually think about these matters all the time, and they work on all of our product launches, and they operate with their responsible AI principles. And then I wanted to put in two plugs, first for our responsible AI blog, which we launched, I think, about two or three months ago now, and you can find that on LinkedIn or on the Grammarly homepage. We talk about these things a lot, because we think that we have a useful voice, and we think we can help guide companies by being an industry leader in these spaces. And then I wanted to touch back and make one more plug, if I may, before we move on. Justin, you had asked about regulation and how we stay ahead of that, and I wanted to touch on the fact that on the legal team we are constantly monitoring what’s happening everywhere. I noticed this weekend, of course, that Governor Newsom vetoed the AI bill here in California and sent it back to be rewritten, so that will be very, very interesting. But I also wanted to put in a second plug, which is that we have an open role now for a Senior Policy Counsel to help us really delve into these questions and get more involved on the policy stage, because we think we have a lot to add and that we can really be an industry leader here. So we’re really excited to bring on that role here at Grammarly.

 

Justin Daniels  29:32  

Well, speaking of contracts, I wanted to ask Jennifer this specifically. When I now encounter vendors with AI contracts, we’ll get into issues such as making sure they’re not learning on your data, and issues around what’s an acceptable level of hallucination. I even had one where I said, hey, if they de-aggregate our anonymized data and it becomes PII again, and you have a data breach, we’re going to have to have a really big indemnity for that, outside the normal limitation of liability. Jennifer, I was hoping you’d give us some insight as to how you and your team have evolved with the kinds of contract clauses you’re going to see when it comes to AI, with your customers and then with vendors who may want to integrate with your product.

 

Jennifer Miller  30:21  

Sure. Well, the one thing I want to call out is that we’ve actually made it really, really easy for all of our customers, whether they’re paid or unpaid, enterprise or not, to opt out of having their text used to train our AI. A lot of companies don’t do that, but we took the step of offering it to everybody, because, like we said before, we really feel that all of this control should be in our customers’ hands. But to your point about contractual clauses, any good legal team, not just Grammarly’s, should be talking to others in the industry, should be seeing what’s coming, and should be thinking about how they can facilitate getting contracts closed as quickly as possible, but with the right level of risk. And because of that, we are constantly looking at where the trends are going. What are other companies doing? What are other companies offering? And then we think about, okay, should we be doing the same? Where can we also be the leaders? Where can we make our contracts easier? Where do we think the risk is more tolerable, so that we can help our customers get Grammarly quicker? Because we know that we have 30 million people who use Grammarly every day and over 70,000 teams that trust Grammarly, so it is in our best interest to get contracts done as quickly as possible and as smartly as possible; they always have to have the right level of risk for the company. So any good company should be reviewing their contracts, looking to see where they get stuck over and over again, and then thinking about whether there are ways to mitigate that and make them more tolerable for customers, but with the right risk tolerance.
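An opt-out like the one Jennifer describes only holds if it is enforced wherever training data is assembled. A minimal sketch of such a single choke point follows; the record and field names are hypothetical, not Grammarly’s actual schema.

```python
# Sketch of enforcing a user training opt-out at the data layer: one
# auditable function filters records before anything reaches a training
# pipeline. All names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class UserText:
    user_id: str
    text: str
    training_opt_out: bool  # user-controlled setting

def training_eligible(records: Iterable[UserText]) -> Iterator[UserText]:
    """Yield only records whose owners have NOT opted out of training.
    Routing all pipelines through one choke point keeps the promise
    easy to audit."""
    for record in records:
        if not record.training_opt_out:
            yield record

records: List[UserText] = [
    UserText("u1", "some draft", training_opt_out=True),
    UserText("u2", "another draft", training_opt_out=False),
]
kept = list(training_eligible(records))
# Only u2's text may flow into the training pipeline.
```

Centralizing the check, rather than trusting each pipeline to remember it, is what makes the customer-facing promise verifiable.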

 

Jodi Daniels  32:12  

We’ve talked a lot about trust, transparency, and product review, and I imagine there are some people listening and saying, gosh, I wish we were like this. So for a company that’s not quite there: for the listener, the privacy pro, the security pro, the legal pro, what might be a tip, a piece of advice, that you would offer that they could take back to try and help turn the tide in their organization?

 

Suha Can  32:47  

Yeah, I guess I can probably start. I think, first, as the leader, you have to be very knowledgeable and current on what’s going on out there. There’s definitely a lot of hype out there right now, and there’s a lot of debate and discussion. First, you need to provide clarity to your company, and you need to really think deeply, just like we discussed earlier, about what the right stance for this company is: what type of product are we, and what level of privacy and trust investment is right for that type of product? And if you are a product that is developing AI technology, you’re at the cutting edge, and you need to be prepared to change your stance. And you know the trend: typically, while there’s hype, at some point this becomes an established foundation for software, and when it gets to that state, you have to have security and privacy underlying it, or you simply will not succeed with your business. Just like with the cloud, which was the last big transformation: for adoption of the cloud, security and privacy and trust were paramount, and when I look at all the big providers of cloud technology out there, they clearly have huge investments in security and trust. I fully predict AI becoming that kind of foundation, rising up to that table. So you can get going now or you can get going later, but it is not a choice. And also, you know, it is the right thing to do, and you have to have, I guess, a personal conviction on that. If you don’t, I don’t know how to help you with it, but I think you can probably adopt it. That’s what I believe. It is here to stay, and this trust conversation around AI is only going to intensify.

 

Jennifer Miller  35:06  

And to add to that, I spoke on a panel not that long ago about exactly this question, and we talked about how AI is here to stay. It’s not actually as scary as many people think it is, especially if you find good advisors, on the legal side in particular. Lawyers tend to have reputations for being way too conservative, and we were having this conversation about the fact that outside counsel often tend to be super conservative because that’s just the way they are, and they don’t know enough about AI to feel comfortable giving advice about it. So they just say, don’t do it, don’t use it. We actually think that’s the wrong approach. We think AI is here to stay, and we have found awesome advisors who have the right risk tolerance, but they base that risk tolerance in data, in metrics, and in really understanding the tools. Because, like Suha said, we think AI and the use of LLMs is the next revolution in technology, and it is here to stay. So my two cents: make sure you have the right team in place with the right risk tolerance level, and make sure you have the right advisors who actually understand the technology and who can then guide you in the appropriate way.

 

Jodi Daniels  36:22  

It is good advice. Thank you.

 

Justin Daniels  36:27  

So, what is the best personal privacy and security tip you would share with your friends at a party? We’d like each of you to give us what you think.

 

Suha Can  36:39  

Okay, I must admit, outside the occasional CISO dinner type of setup, I don’t go to parties. I’m not a party person. But if I suddenly found myself at a party and a bunch of people were asking what I do, I would probably try to hide that I work in security. I understand the doctors, I guess, who go to parties and get asked health questions, because you do get a lot of questions that are just not the thing you want to talk about in that setting. But to answer your question, the thing I would really stress, and that’s definitely top of mind, is that everybody should be very conversant with AI. You should follow this technology very closely, maybe similar to how, 30 years ago, we all suddenly had to become computer experts. Being very conversant in AI, leveraging it for work, and understanding what works for you — being a power user of AI — is going to really accelerate people and help them achieve things they were not able to achieve before. So I would say: don’t fear it, adopt it, but understand how it works and be familiar with its concepts. That’s probably what I would say.

 

Jennifer Miller  38:20  

I would add, as much as people don’t want to do this, they should read the terms that these tools operate under. I know people just think it’s boring legalese, and I get that, but I think it’s really important to understand what these companies are doing with your data and the controls that customers have. Those are the tools people should gravitate towards: ones that are clear and transparent and really put the control in the hands of the customer. And you’re only going to know that by actually reading those long scrolls of the terms and the privacy policy that show up on your phone or your desktop. So I tell people, you’ve got to read the terms.

 

Justin Daniels  39:01  

Or maybe what they’ll do is just save it as a PDF, stick it in an AI, and start with the summary.

 

Jennifer Miller  39:05  

Please summarize it for me.

 

Justin Daniels  39:09  

Start there if you want.

 

Jodi Daniels  39:12  

Well, when you’re not advising on privacy and security, what do you like to do for fun?

 

Jennifer Miller  39:20  

Oh, I can go first. I love to travel. My family and I have traveled to, I think, 35 countries in the world, and we love it. But when I’m not traveling, I love to hike. I find that being outside in nature makes me so much more creative, and I actually find myself coming up with solutions to problems I wasn’t actively thinking about when I’m out in nature. I love it.

 

Suha Can  39:46  

We have two young children right now, so fun is obviously spending time with them. But what I find puts me in the zone is going for a run. I live in the Pacific Northwest, near Seattle, and we have really pretty forests and trails to run in, and I do that quite often. That’s how I find my moment of calm, I guess.

 

Jodi Daniels  40:16  

Well, we are so grateful that you shared all of your insights and approach with us today. We’ll be sure to link to the responsible AI blog post and the transparency page as well in the show notes. If people would like to connect with you, where should they go?

 

Jennifer Miller 40:35  

Oh, LinkedIn. LinkedIn would be terrific. Perfect. Likewise.

 

Jodi Daniels  40:41  

Well again. Thank you so very much.

 

Jennifer Miller  40:44  

Thank you for having us. It was great.

 

Suha Can  40:46  

Great. Jodi, thank you.

 

Outro  40:53  

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.

Privacy doesn’t have to be complicated.