Mark Webber is the US Managing Partner of Fieldfisher, a London-based international law firm with offices in Europe, the US, and China. An English lawyer living in Silicon Valley, Mark oversees the firm’s US operations. As a recognized leader in privacy law with extensive experience working with the world’s leading technology companies, Mark is known for finding innovative solutions to complex legal challenges. At Fieldfisher, Mark has been instrumental in establishing, nurturing, and expanding the firm’s presence, operations, and services in the US.
Here’s a glimpse of what you’ll learn:
- Mark Webber reflects on the evolution of his career
- The data privacy framework and its impact on businesses
- Mark expounds on the European Union AI Act
- Mark offers insight on intersecting AI regulation with GDPR
- Should organizations implement AI assessment frameworks?
- Mark offers privacy and security advice
In this episode…
Lawyers endorse the Data Privacy Framework as a valuable tool for mitigating cybersecurity risks. However, many experts argue that it is not enough to protect businesses from other privacy risks, such as those posed by AI.
The draft of the European Union AI Act has sparked debate among privacy professionals, with some advocating for a prohibition on the unrestricted use of AI technologies such as biometrics in real time. Mark Webber, a seasoned lawyer with expertise in technology and privacy, disagrees with this approach. He cautions against AI’s high-risk threats to transport, infrastructure, and decision-making. To mitigate these risks, Mark suggests that companies conduct an AI impact assessment, such as the one developed by the National Institute of Standards and Technology, before implementing generative AI systems. He also warns that, given the ever-evolving nature of AI, any governing policies will only be effective with proper education and training.
In this episode of the She Said Privacy/He Said Security Podcast, Mark Webber, US Managing Partner at Fieldfisher, joins Jodi and Justin Daniels to discuss the US-EU Data Privacy Framework and AI. Mark explains how the framework will impact businesses, the European Union AI Act, the intersection of AI regulation with GDPR, and why organizations should consider implementing AI assessment frameworks.
Resources Mentioned in this episode
- Jodi Daniels on LinkedIn
- Justin Daniels on LinkedIn
- Red Clover Advisors’ website
- Red Clover Advisors on LinkedIn
- Red Clover Advisors on Facebook
- Red Clover Advisors’ email: firstname.lastname@example.org
- Data Reimagined: Building Trust One Byte at a Time by Jodi and Justin Daniels
- Mark Webber on LinkedIn
- Email Mark Webber: email@example.com
Sponsor for this episode…
This episode is brought to you by Red Clover Advisors.
Red Clover Advisors uses data privacy to transform the way that companies do business together and create a future where there is greater trust between companies and consumers.
Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media.
To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte At a Time, visit www.redcloveradvisors.com.
Welcome to the She Said Privacy/He Said Security podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Jodi Daniels 0:22
Hi, Jodi Daniels here. I’m the Founder and CEO of Red Clover Advisors, a certified women-owned privacy consultancy. I’m a privacy consultant and certified information privacy professional, providing practical privacy advice to overwhelmed companies.
Justin Daniels 0:35
Hello. Justin Daniels here. I am an Equity Partner at the law firm Baker Donelson, practicing technology law, and I am passionate about helping companies solve complex cyber and privacy challenges during the lifecycle of their business. I am the cyber quarterback helping clients design and implement cyber plans as well as helping them manage and recover from data breaches.
Jodi Daniels 0:58
And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our new best-selling book, Data Reimagined: Building Trust One Byte At a Time, visit redcloveradvisors.com. The “new” part, I guess, has to go; we’re going to celebrate one year here soon, actually. So, you’re right, I guess we’re going to move into old-ville as opposed to new. Yes,
Justin Daniels 1:47
The one-year anniversary of our book is coming up fast.
Jodi Daniels 1:50
I know, that was really fast, but very, very fun and very exciting. And we are delighted today to have Mark Webber, who is the US Managing Partner responsible for overseeing the operations of Fieldfisher in the United States. He is an English lawyer located full time in Silicon Valley. And when businesses capitalize on technology, intellectual property, and data to drive revenue, they encounter the very issues in which Mark excels. A recognized leading privacy lawyer with a wealth of experience working alongside the world’s leading tech companies, he consistently finds innovative solutions to complex legal challenges. As a leader of Fieldfisher, he has been instrumental in establishing, nurturing, and expanding Fieldfisher’s presence, operations, and services in the United States. Well, Mark, we’re so excited to have you today on the show.
Mark Webber 2:40
Thanks, thanks. Great to be here. Nice to get the invite. So thank you, Jodi.
Jodi Daniels 2:44
We’re totally covering our coasts, you in California and us hanging out in Atlanta today.
Mark Webber 2:51
People don’t always realize I’m in California either. So, good, good to see you.
Justin Daniels 2:57
So, Mark, how did your career evolve to your current role?
Mark Webber 3:05
Well, Justin, I think right time, right place is probably the way I would summarize a huge amount of my career. I started as a lawyer in 1997, the day after Princess Diana died, which was kind of a big, big moment in time, but at a time when businesses were just starting to hurtle towards dot-com. And I ended up in a business in the UK which was aligned with a lot of dot-com businesses, particularly with one partner who’d spent a lot of time in Silicon Valley. So during the first few years of my very junior career, two things happened. There was a lot of technology and a lot of technology investment, and I got involved in a lot of telecoms work and a lot of early technology business. Businesses like Netscape and AOL and Yahoo! were clients that junior me was being thrown things to do with in London. And then something else happened. Somebody threw me a copy of the UK Data Protection Act in 1998 and said, hey, we need a briefing on this. And that was before blogs on webpages. But I got into that. So I careered very fast into tech, and that was a really, really great thing, particularly in dot-com, learning on things that basically disappeared or never happened: bidding for 3G licenses, putting satellites into space to create telephone networks, which, you know, Elon Musk is trying to recreate right now. There was a huge, huge amount of tech. And then in 2001, as the dot-com era was going off the boil, the firm that I was with had opened a Silicon Valley office and basically sent me to Silicon Valley. So as a two-year-qualified lawyer, I arrived in Silicon Valley, and I spent the next three and a half, nearly four years, wandering around doing enterprise software deals and technology licensing.
Jodi Daniels 5:08
Mark, we have come a long way from Netscape, which brings back, oh my gosh, just so many memories; I can still picture the little icon and everything. Yeah. So, a long way from that to the challenges of cross-border data transfer. What are your first impressions of the data privacy framework that we have today, you know, being able to transfer data from the EU to the US, and what do you think its impacts will be going forward?
Mark Webber 5:42
Okay, my first impression: I’ve always been excited about the Data Privacy Framework because it’s a model that we need. Model clauses are ridiculous, and attaching assessments to those clauses is even more so. And, frankly, the only winner in all of that process is lawyers, and the only pain is for those putting those together; I don’t believe that much of that is actually protecting people’s privacy. The framework gives this freedom to receive data and freedom to rely on certifications, and then the assurances of the Biden administration. And it’s a second route of transfer, a plan B. There’s, you know, a lot of cynical press out there. There’s a lot of suspicion about how long it will last. But I think it’s something that we’ve just got to put our faith in, and for now, at least, it gives us a really useful tool. I think the most useful part about the Data Privacy Framework is they haven’t made businesses start again. A surprising number of businesses stayed in Privacy Shield, getting on for over 3,000, the majority of the US tech businesses that we worked with. They stayed in the Privacy Shield partly, to be honest, because they were counseled by Fieldfisher to do so, because it still demonstrated that you were doing something sensible with data, and it still provided recourse mechanisms that weren’t otherwise there for data subjects. And we always thought it would give this sort of boost to businesses if and when a new framework came into place. And that’s exactly what’s happened, because of this grandfathering in: if you were in the Shield and you’d remained compliant, you’re now part of this new framework. So we’ve got businesses running around trying to get ready for the new notices and make some amendments, but it gives a real power and, you know, starting point for others.
My first impressions were always the excitement as a lawyer, and you can’t really get that excited about data transfers, but it does ease some pain for our clients; the majority of the businesses I’m working with are vendors selling their technology to Europe and transferring and receiving a lot of data. But I’ve been really interested in the last month or so about two things. One, how many businesses that haven’t been party to the Shield, or left the Shield, now want to get back in. So there’s a lot of new certifications in play. It’s not too easy to deal with the Department of Commerce, they’re so busy right now, but, you know, there’s a new momentum there. But then most heartening, and I think really relevant to the adoption and success of the framework, is that customers keep asking their vendors: when are you getting into the DPF? Are you going to be using it? When can we start relying on it? Because they like the freedom that it gives them. They like the fact that it’s another mechanism, an alternative mechanism. So we’re beginning to see that, and that’s not just UK customers; that’s central European customers as well. So, all in, it’s a pretty good thing, and something I’m really excited about. It doesn’t solve everything, because it’s only EU-US. There are, of course, the Swiss and the UK versions on their way; some of that just makes it a logistical nightmare, because you’ve got to refer to every different mechanism, and it’s not clean. In that sense, it would be far better if we could have something like this on a more pervasive, global scale. We’re beginning to see some hints at that kind of development. But, um, a good news story really, Jodi. So…
Jodi Daniels 9:30
Indeed. I’m excited to hear the guidance that you gave clients; we did the same, and, you know, they were like, do I really have to keep paying this every year? And I said, yes, you do. And so when it finally released, they were very, very excited, very grateful to see that all of what we talked about actually came to fruition. And I’m also glad that you mentioned that customers are asking vendors and companies, should I move forward? Because some of the, I guess, scuttlebutt that I have heard is that some companies are taking a wait-and-see approach. They’re not sure that they want to do this because they just think it’s going to fall down, and they’ve said, no, we’re not going to move forward. I lean towards, no, I think you should, I think you should continue this. And if you’re a new company and haven’t done it yet, this is probably a course and direction you’d want to go. I don’t know if you have anything in addition to add, maybe for those companies on the fence?
Mark Webber 10:23
All right, well, there are some on the fence, as you say, because it’s an administrative burden to jump through. But, um, the model clauses could fail, you know; Schrems II took the model clauses to court, and it was the Privacy Shield that failed, and that wasn’t even technically the question that was being asked. So the idea of having a plan B has been a big one since Schrems, and this kind of gives that. And removing the TIA, removing some of that assessment, I think is really good. Now, it’s definitely worth being a little cautious. I think, you know, internal privacy counsel should be advising their businesses that this might not last forever, and there will definitely be some customers that don’t like it. I actually have quite a strong view here: you know, some in the market are beginning to present it as a choice, you can use the DPF or you can use SCCs. Strictly, if you follow the EDPB’s guidance, you should go to an adequacy mechanism first. So I don’t think there is a choice: if the DPF is available, you should be relying on that and having other mechanisms and other appropriate safeguards as a fallback. But we can have some open debate there. As I say, all in, it gives us more to work with. And the one thing is, you know, I’ve got a lot of admiration for people who stand up for privacy rights. That’s important, and, you know, Schrems fought some battles that were perhaps necessary, and definitely necessary for him at that time. But if you look back to 2014, 2015, and then 2020, and where we are now, there’s been a lot of additional activity around transfers, which has consumed businesses, definitely consumed some that I work with. If you could take the effort that’s been put into transfers and put it into other parts of privacy programs, I think everyone would be better protected.
So in many ways, I actually think that, you know, while it’s fair to fight and give everyone something to do, and suddenly to have to do a lot of that work again, a lot of it has just been a papering exercise; it hasn’t protected individuals’ privacy. And we’re all worse off because of some of that effort. And you can criticize that, you can ask questions about it, and argue that data transfers are one of the most important things on the planet. And yes, it’s the law, of course. But my word, if we could have spent some money on some other things, like privacy by design, we’d all be better off right now. So that would be my fairly strong view. And I really hope we don’t see a Schrems III moment where we’re all running around one more time refreshing blogs, changing contracts, and calming people down.
Jodi Daniels 13:22
So we would agree.
Justin Daniels 13:26
So, changing the topic just a little bit, we’re now going to move on to our topic of artificial intelligence. It seems that the EU is the number one innovator in regulation. And so, like privacy, the EU is out front with the EU AI Act, and I would love to get your thoughts on what you think the current EU AI Act draft does well, and where you think they missed the mark.
Mark Webber 13:58
I like the fact you said the current draft, because all the reporting out there implies it’s a done deal, and it isn’t; it’s still heading towards trilogue, which is where the Commission, Parliament, and Council all do their thing behind closed doors. So we really don’t know what it’s going to say, but the objectives are clear. As usual, though, Europe’s got some conflicting objectives: it’s desperate to facilitate the free use of AI, but at the same time it’s desperate to protect individuals, and those two things don’t necessarily align. So, you know, my view now is, it’s not as big as you might believe when you read all of the blogs and the commentary out there, because it probably impacts around 10% of AI today. There’s this notion of prohibiting certain AI, you know, banning social profiling and the real-time use of biometrics by law enforcement; that doesn’t impact many, many businesses doing AI or machine learning right now. Then you’ve got high risk, the sort of things that impact transport and infrastructure and key decision-making, and there’ll be a lot of burden on putting that on the market. We’re already seeing some that we’re working with saying, well, if the EU is going to make us do that, and we do have to do those conformity assessments, and we do have to monitor all of that, maybe that’s not a priority market for us. So I think the EU is actually going to starve itself of some AI opportunity, because some businesses won’t head there. And then you’ve got everything else, as they would see it, in the AI Act, where there’s limited impact: those final two categories. And that’s really where the majority of AI and machine learning is happening today. So there are some new transparency obligations.
There are references out to complying with other law, and of course the GDPR, which is already there and overlapping. But, you know, I don’t think it’s going to shift the dial, particularly. And the big problem is it’s too late. They’re agreeing it now; maybe they all say they’re going to agree it by the end of the year. In our internal sweepstake I’m actually the naysayer, saying it’s probably not happening this year; then they’ll go out for parliamentary elections next June, so, you know, there’ll be campaigning, the new regime will have to come in, and we’ll have to see where they head. There’ll be people who will say it’ll be done, and maybe it will be done by the end of the year. But let’s say it is done by the end of the year, and I still don’t quite agree: whether it’s 18 months or 24 months, and there was even one draft which had 36 months, from it coming into force, would it actually be applicable? Just think of how much AI we’ve done since November ’22, with generative AI popping onto the scene. And just think of the fact that generative AI was just, you know, an event where individuals felt they were impacted; we’ve been working on AI and machine learning for six to eight years. There’s so much taking place; imagine what’s going to happen in the next three years before that law even comes onto the books. And yes, it will influence, and some will plan for it. But I think it’s missed the mark, partly because the EU thinks it can become a world leader in the way they felt they were with GDPR, in the way they could steer more of the globe. And no doubt GDPR has an impact on many businesses; I don’t think I’d be sitting in California holding down a job if it wasn’t for GDPR in the last few years. But it’s one of those things.
If I had my way, I’d be looking to get more principles out through regulators, more focus on what good looks like, and trying to steer things, rather than putting my hopes on something which is, at best, two years down the road. And then if you look at GDPR, it’s only really now, maybe five years after it came in, that we’re seeing real enforcement and real guidance. So it’s a seven-year vacuum in the worst case, says the pessimist. And post-Brexit, and I was not a part of that, I was not a fan of Brexit, I did not vote Brexit, the UK is going to go in a different direction. So we’ve also got a patchwork. All three of us are involved with international clients; what’s the worst thing that can happen? You’ve got different laws in different places and different things they’ve got to think about. So all that means is compromise. So it’s a rough road ahead, but there’s a lot of lawyering to do along the way.
Jodi Daniels 18:51
What are your thoughts? I’d love to dive a little bit deeper into that intersection between this regulation, whether that draft makes it through or, you know, AI regulation in general that we might anticipate, and GDPR.
Mark Webber 19:06
Yeah, okay. So, this is the beginning of privacy-adjacent legislation. We’ve seen it with the DSA, the Digital Services Act, and the DMA, which is only very large companies, but there’s an increasing sort of plethora of things which start hitting us between the eyes. And the GDPR has surprisingly few references and surprisingly few considerations in the AI Act, but of course the two are going to have to sit alongside each other. Of course, there’s some AI that takes place without any use of personal data, and the GDPR applies in all circumstances where personal data is processed. So, the GDPR may not apply in some circumstances, but, as we know, with vast troves of data sucked up from around the globe, some of that is going to include personal data. So we’re going to be thinking about our obligations under GDPR as well as our obligations under the AI Act. And there are some things which are really closely aligned. Actually, I think a good news story is that the GDPR is already forcing things like taking a good data inventory, working out what data is going in; you know, Jodi, this is the sort of thing that you get involved in. You can’t do good compliance these days without understanding what inputs are there. And of course, with high-risk data processing, we have to do a DPIA. It’s already good practice to do an AI impact assessment, and clients are working on and doing that. But what does that involve? What data is going in? What am I doing with it? What kind of rights are affected? What’s the impact on individuals? What mitigations can I put in place to mitigate those risks? That kind of assessment is good; there’s an alignment there, which I think is helpful. Then there’s some explainability, which I think probably goes too deep and is going to consume a lot of people, and not necessarily help the average individual, because nobody reads any of this stuff.
Anyway, I think it’s important for a business to be able to explain something, but I’m not as sure how important it is to publish all of that and to explain it to that full audience. So we’ve got some debate there. But then, when we get into that interplay in other ways, you’ve got concepts which are difficult to align. During the use phase of AI, perhaps users under the AI Act are going to be controllers under the GDPR; that sort of makes sense. But developers in the development phase could well be providers under the AI Act, but processors under the GDPR. As processors, we’re really acting on instructions in relation to the use and the processing of data, but the AI Act is suddenly going to put other duties on a provider of AI, other transparency requirements. I think there is going to be conflict there, because suddenly a processor is going to be asked to think about the how and the why and what are you doing, and under GDPR we put our hands up and say, we’re a processor, we’re just being told to do this. There’s a real knock-on there which might force processors towards being controllers. So then we’ve got some adjustments and a lot to think about. I mean, we’re thinking about that, of course, but I have been burned many times looking at draft legislation that never became legislation. So I come back to the point right at the beginning, Justin: you said “the draft,” and it’s not the law until it’s the law. We think we know where it’s going, but we don’t necessarily know it’s going there. And I can’t tell you how many times I’ve downloaded a new ePrivacy reform draft and had a look at what that might be saying; you know, pre-GDPR we were looking at that, and it’s still not agreed. So there’s a level to this which, on one side, is helping with policy, helping clients influence some of that and engaging on what the law might be. We’re doing a lot of that.
I’m not losing lots of sleep on exactly what the law says until we know what the law says. And, you know, if you work with Silicon Valley companies, as I know you do, telling somebody there might be some law, and it could come in, you know, 24 to 36 months from now, doesn’t get a lot of attention. I think you’ve got to bring it back to principles. And there are other things that you can be doing around AI that don’t necessarily require thinking too deeply about everything under the AI Act. That’s not to say there won’t be things to do as and when it comes out.
Justin Daniels 23:55
So speaking of frameworks that maybe are not yet law, are there any AI risk management frameworks that you are advising clients look at to approach AI risk management comprehensively? From my perspective, I’ve been having clients take a look at the NIST AI risk management framework that came out in January. I haven’t found a better approach than that, but we’d love to hear what your thoughts are.
Mark Webber 24:24
I do like NIST. If you’re a mature organization, it takes a while to get your head around some of that, and definitely some are using it. And if you’re a mature organization that’s using things like the NIST privacy and security frameworks, those things start to fit in. But NIST has gaps, particularly when it comes to rights, particularly when it comes to some of the transparency and some of the other European overlays. So I think, generally speaking, it’s quite a shift to get a fast-moving organization to use NIST, or at least use it comprehensively, because they’re not going to put the brakes on for three months while they get their head around it. Preparing for that and bringing that in, in a larger organization, you know, I can see that happening. I have spent quite a lot of time myself going off to Singapore and the Singapore regulator, because they put out a lot of good basic information about AI and a lot about what good looks like. They have a framework, they have a set of tools, a really good set of questions to ask, and I find that really useful, in part because I got familiar with it early. And then there are the suggestions that we have from the ICO, the UK regulator; as an English and Welsh lawyer, I know the ICO quite well, and they’ve got quite a good risk assessment framework themselves. You know, it can be repurposed and built upon. And I think what we’re beginning to find is we’re not working with one framework; we’re using a couple of frameworks to build a framework that works for that organization. The ICO spreadsheet has been downloaded and used and repurposed quite a few times. It’s very privacy-centric, so it doesn’t do some of the other things around confidentiality and IP, and looking a little bit wider afield. And I think this is one of the big challenges for privacy lawyers that we could probably talk about: it’s not only privacy. So I think that’s why having a tech background becomes really helpful for AI.
But yeah, lots of people running very fast means lots of corners being cut right now.
Jodi Daniels 26:41
I was going to emphasize that those growing companies in technology who are moving really quickly do need to make sure that these assessments are included, and I appreciate you sharing some of those suggestions; perhaps those are a little more nimble, so they can take them and at least start somewhere, which is better than nothing and cutting those corners.
Mark Webber 27:04
Yeah, yeah. And, you know, there’s the race to put it on the market, to show you’re a leader in AI, to integrate AI into it all. There’s an awful lot of AI being done, and particularly repurposed and resold, which is relatively easy because it leaves some of the learning and some of the thinking to others; but I’m not sure those others are always doing all of that thinking. And I really do come back to: can you talk about what inputs went in and how they’ve been used? You know, talking to a lot of engineers, I think very rarely will we really understand what the AI is doing; I think you have to accept some black box, but input, output, and then risk is really key. And all of those frameworks and all of those concepts are good. But we’re going to have a bigger problem with AI than we have with privacy. There are a lot of very good privacy teams who understand their privacy, have got good programs and good levels of compliance, but it sits there in this little bubble within an organization, and then everything else goes on. You know, there are some privacy champions and some stellar privacy programs, but there are very few organizations that have pushed compliance out through their business and have people really thinking about privacy in a meaningful way. When we think about AI, you’re actually talking about a whole wider group of interested parties. As privacy lawyers, we have a part in that AI program; I’m not sure that we’re always the leader, and I’m definitely sure that we have to work with a number of other stakeholders. The challenge to get all of those stakeholders together is one thing in governance terms. The bigger challenge with AI is getting your organization to understand the AI and understand what the consequences of use of that AI are, because things like human oversight, things like responsible use, are all key. And I can see that just in my own law firm: when ChatGPT launched, we had various warnings.
They turned it off for a while, of course, while everyone got their head around it. And then we started thinking about policy, but policy is not enough; you need education, because there are so many tools and so many ways of using those tools. You’ve actually got to get through to the individuals who are using them and using those outputs, and that’s a massive challenge. I don’t think anyone’s really on top of that yet. The education, bringing the masses with us on AI, is going to be a huge, huge thing, and those that manage it are going to be the most successful at delivering AI projects, in my view.
Jodi Daniels
It’s a really important point; the education piece is very similar across so many different topics. Policy only works if people understand what it means and how to relate it to their job. With that in mind, we always ask everyone, and you can decide if this is going to be a personal one or a business one: what is the best privacy or security tip that you would offer?
Mark Webber
I would say: see the bigger picture. Privacy individuals and teams can be quite introverted. I know privacy teams who are feuding with their security colleagues, to the point that they lock down things that they’re doing, they don’t share, they fight over budget. But actually, you need to enable privacy across many people and across many stakeholders; this kind of goes back to my point on AI. People can get consumed by one law and one issue, and yes, the law’s the law, you have to comply with it, but you quickly lose sight of the much bigger practical picture. You can be obsessed about a lawful basis for an hour, but if you’ve reached 20 people in your organization in that hour and got them thinking about privacy, or about the data they’re inputting, that’s far more valuable to privacy, and far more valuable to the individuals. So I think being less focused and micro, and thinking in a much more macro way. An awful lot of what we spend our time doing is just sitting back and going, okay, whether we’re resource-constrained, time-constrained, or something else, what are we going to do that’s going to move the dial, as opposed to how can we noodle down in one particular area and solve one problem but create a thousand more? It’s a massive challenge.
Justin Daniels 31:52
Well, we want to end by asking you about when you’re not knee-deep in privacy and AI and Schrems II. Maybe we’ll have a Schrems III one day. Maybe not.
Jodi Daniels 32:05
It’s not to say we did.
Justin Daniels 32:07
What do you like to do for fun out in California?
Mark Webber 32:10
Yeah, well, out in California is the big thing. Although I’m in a techie environment, the best thing in my life is to get into the outdoors, whether that’s mountain biking, running up a hill, or backcountry camping; but most of all, I’m a ski freak. You will find me 25-30 days a year in Californian ski resorts and backcountry skiing, and I’m always planning; I live for the next ski trip, whether it’s skiing Mount Lassen or Manchester. I’m planning and working on a backcountry ski trip in Iceland next March. Maybe Japan, I can’t make my mind up. But that’s the thing that gets me out of bed. I’ll work all day if I know I’ve got a week of skiing ahead of me, or a day of skiing ahead of me. So that’ll be my secret.
Jodi Daniels 33:00
And for those listening, you need to see Justin’s massive smile. Mark, you hit his happy zone of outdoor mountain biking and skiing. That is Justin’s happy place.
Mark Webber 33:12
Hey, Justin, well, you can feel for me, because last Sunday I hit the trail, having first fussed with my tubeless tires and my suspension for an hour; I went 250 meters and snapped my chain. So that was that. I was up in the hills and didn’t have enough links to mend the thing, so my biking ended before it had started. Hopefully, Labor Day weekend, I’ll be back up and out there.
Jodi Daniels 33:38
We wish you much more success. And we’re so grateful that you came to share all of your immense knowledge and thoughts and perspectives. If people would like to keep up with what you are working on and stay connected, where should they go?
Mark Webber 33:54
Well, there is LinkedIn, and I put out quite a lot on LinkedIn; I always, you know, interact and enjoy doing that. So I’d definitely say that, and I’d also say IAPP events; PSR, I’m going to be there. In fact, I’m actually training the AIGP course that they’re just launching, so I’ll be there, and I know a few are in those sessions already; in fact, there are now two of those sessions. So, you know, definitely out and about. But I’d love people to reach out and ask questions, and LinkedIn or my email are always the best ways to do that.
Jodi Daniels 34:29
Wonderful. Well, Mark, thank you again for joining us. We’re really grateful, and we appreciate your time. Perfect.
Mark Webber 34:35
Thanks very much. Appreciate it.
Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.