Ian Riopel is the CEO and Co-founder of Root, applying agentic AI to fix vulnerabilities instantly. A US Army veteran and former Counterintelligence Agent, he’s held roles at Cisco, CloudLock, and Rapid7. Ian brings military-grade security expertise to software supply chains.
John Amaral is the CTO and Co-founder of Root. Previously, he scaled Cisco Cloud Security to $500M in revenue and led CloudLock to a $300M acquisition. With five exits behind him, John specializes in building cybersecurity startups with strong technical vision.
Here’s a glimpse of what you’ll learn:
- Ian Riopel and John Amaral’s career journeys building cybersecurity startups and why they founded Root
- Why automated cybersecurity should focus on eliminating vulnerabilities rather than triaging them
- How Root uses AI agents to patch vulnerabilities
- How Root helps reduce software and software supply chain security risks
- Ways AI-driven patching improves security team efficiency and reduces operational bottlenecks
- John and Ian’s personal privacy and security tips
In this episode…
Patching software vulnerabilities remains one of the biggest security challenges for many organizations. Security teams are often stretched thin as they try to keep up with vulnerabilities that can quickly be exploited. Open-source components and containerized deployments add even more complexity, especially when updates risk breaking production systems. As compliance requirements tighten and the volume of vulnerabilities grows, how can businesses eliminate software security risks without sacrificing productivity?
Companies like Root are transforming how organizations approach software vulnerability remediation by applying agentic AI to streamline their approach. Rather than relying on engineers to triage and prioritize thousands of issues, Root’s AI-driven platform scans container images, applies safe patches where available, and generates custom patches for outdated components that lack official fixes. Root’s AI automation resolves approximately 95% or more of vulnerabilities without breaking production systems, allowing organizations to meet compliance requirements while developers stay focused on building and delivering software.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Ian Riopel and John Amaral, Co-founders of Root, about how AI streamlines software vulnerability detection. Together, they explain how Root’s agentic AI platform uses specialized agents to automate patching while maintaining software stability. John and Ian also discuss how regulations and compliance pressures are driving the need for faster remediation, and how Root differs from threat detection solutions. They also explain how AI can reduce security workloads without replacing human expertise.
Resources Mentioned in this episode
- Jodi Daniels on LinkedIn
- Justin Daniels on LinkedIn
- Red Clover Advisors’ website
- Red Clover Advisors on LinkedIn
- Red Clover Advisors on Facebook
- Red Clover Advisors’ email: info@redcloveradvisors.com
- Data Reimagined: Building Trust One Byte at a Time by Jodi and Justin Daniels
- Ian Riopel on LinkedIn
- John Amaral on LinkedIn
- Root
Sponsor for this episode…
This episode is brought to you by Red Clover Advisors.
Red Clover Advisors uses data privacy to transform the way that companies do business together and create a future where there is greater trust between companies and consumers.
Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media.
To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit www.redcloveradvisors.com.
Intro 0:00
Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Jodi Daniels 0:21
Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women’s privacy consultancy. I’m a privacy consultant and Certified Information Privacy Professional, providing practical privacy advice to overwhelmed companies.
Justin Daniels 0:36
Hi, Justin Daniels here. I’m a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.
Jodi Daniels 0:58
And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we’re creating a future where there’s greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. So, you wanna know what’s interesting? I have the notes and our little Zoom boxes flip-flopped today, and it is throwing me for a massive loop. If anyone wants to challenge themselves, just move the screens from how you normally do things, and you will be challenged. This is very tricky.
Justin Daniels 1:50
You have to learn flexibility. Well, you know what, given our topic today, I think it’s good, because sometimes when we encounter an unfamiliar kind of security risk, we need to be flexible in our approach and not be so wedded to the way we’ve done things.
Jodi Daniels 2:07
There we go. All right, well, so then we should, like, actually talk about what we’re gonna talk about today. So we have two people. We have some co-founders. We have Ian Riopel, who is the CEO and co-founder of Root, applying a genetic, I can’t see, I can’t speak, because they’re just flip-flopped, no, agentic AI to fix vulnerabilities instantly. A US Army veteran and former counterintelligence agent, he’s held roles at Cisco, CloudLock, and Rapid7. And we have John Amaral, who is the CTO and also co-founder of Root. Previously, he scaled Cisco Cloud Security to $500 million in revenue and led CloudLock to a $300 million acquisition. With five exits behind him, John specializes in building cybersecurity startups with strong technical vision. Welcome to the show, John and Ian.
John Amaral 3:02
Thank you, Justin and Jodi. Appreciate being here. Hello. Thank you.
Jodi Daniels 3:05
Stop laughing at me. You can’t laugh at me. I admit it, I made a mistake. You should be so proud.
Justin Daniels 3:11
Yes, I think in the last, what, 15 years I’ve known you, that might be only about the 13th time.
Jodi Daniels 3:18
Well, you know, at home’s a little bit different than at work, fair enough.
Justin Daniels 3:24
All right. Well, could each one of you share with us a little bit about your career journey to where you’re at now?
Ian Riopel 3:35
Sure, yeah. I think you did a fantastic job of teeing us up, so thank you. So John and I, as you probably saw, there’s a decent amount of overlap. We’ve been at a few different companies and startups together, and we created Root to take a very different approach to how the industry, and how everyone, should be thinking about solving vulnerabilities and the vulnerability remediation problem. Leading up throughout our careers, my background working in military intelligence and then a series of cybersecurity startups, John and I kind of mind-melded here, and over the last couple of years we’ve gotten Root off the ground. So I think we’ll dig into that here in a minute.
John Amaral 4:26
Same with my background, I think you had most of it. What propelled us to try to tackle this problem is, I guess, historical discomfort and unhappiness with how vulnerability remediation goes in general. I mean, it’s find a lot of things and fix a few, and I just wanted to invert that problem and make it: fix all the things and deal with a few. And that’s what we’ve tried to do here. Everything in my career has led up to that.
Jodi Daniels 4:56
We talk a lot about automation and security, and there’s kind of the question, I feel, right: what does truly effective automated security actually look like? And are we close to achieving it?
Ian Riopel 5:12
Yeah, so I think it’s interesting, because one of the compelling things that got Root off the ground was John and I walking around at Black Hat a couple of years ago, something like that, looking at all the vendors. Everyone’s talking about software supply chain security, and there were very early mentions of AI and this idea of more effective triage. Certainly we’re hearing more and more about this concept of reachability analysis. But the whole market still starts from this presumptive position where you begin with a massive list of work that needs to be done and a tremendous amount of technical debt. The whole thought process was, well, what filters can I apply to get that down to an output of work that is achievable by my organization, so that I might actually be able to fix and solve those problems? And a lot of the automation was really focused on that filtering and triaging aspect. The way we’ve thought about it is, well, what if we just fixed everything? Then whatever happens to be left, because it’s something net new or novel, that’s where you should really be focusing your time, spending less time on all the triage, because the triage isn’t actually fixing anything. That’s where Root came out of, and those were the first principles we started with. And when we think about automation, we’re not thinking about that aspect of triage. We’re thinking about how we basically completely eliminate the tech debt that has to do with patching, which gets rid of a huge amount of vulnerabilities, certainly in the context of your software supply chain.
Justin Daniels 7:09
So Ian and John, can I maybe frame this up a little differently for our audience, who may or may not be familiar with agentic AI? Is it fair to say that, for example, one of the chief causes of cyber risk is that companies don’t do their patching because they don’t have enough resources, it gets pushed down the list, and threat actors say, hey, they didn’t patch their software, we can just go and exploit an existing vulnerability? And it sounds like, in a real-world example, what Root can do is use artificial intelligence to say, okay, how can we automate some of these functions, like patching, so the team doesn’t have to think about that as much. And then the AI helps figure out, okay, where do we best deploy our human resources? Is that kind of a real-world example of how Root could really help a customer with a cyber problem, about how to really deploy their assets? That’s almost exactly perfect.
John Amaral 7:09
I’ll grab this quickly, just because I want to talk about the AI angle. When you’re using third-party software, like open source, not the code you write but the code you pull into your software, that can be manifested as container images. That’s the common way software gets deployed to production today, through container images. When you scan those images, you get myriad vulnerabilities, and that’s just a list of work to do. We’ve applied AI agents in a way that allows us to patch any of those vulnerabilities away, and it comes in two forms. One: we look at and understand what kinds of patches are available, say, from the ecosystem, right? You’re working on Debian Linux, there are some patches available. Should I use them or not? It’s a big question, because you can break stuff. Our agents understand which of those are best to use, and they try to apply the ones that are most reliable. Second: oftentimes you run into situations where there is no patch for that vulnerability in the version of software you’re using. It might exist in a newer version, but I don’t want to switch to that newer version, because it’s going to break my software. It’s got all new features in it, and I haven’t really tested with it. So our agents can generate net-new patches for versions of software that don’t have any. They do it in a few different ways, but one common way is to look to future versions. Say 2.0, and I’m on 1.3.1. I look in 2.0, I see what they’ve done to fix the vulnerability there, and we do something called backporting it. So it’s not just about upgrades. It’s about finding ways to patch, to solve the problem of the vulnerability, through a combination of either upgrades or these Root-generated patches. And that’s what our fleet of agents does behind the scenes. So you just show us your container image through our platform.
We look at it, we analyze it, we decide a patching strategy, we upgrade to whatever patches are available and we believe will work, and we do a lot of testing and analysis to do that. Then, for all the rest, we make patches for you, and you get all your vulnerabilities gone. That eliminates like 95% of the kinds of issues people care about, maybe even more; we have customers that go way further than that. Engineers are then focused on the remaining bit. There may be two left that we have to think about, because it’s impractical to patch, or there’s some constraint you have to deal with. But their job now is much more focused on what’s real in their world, on what hasn’t been taken away, and that changes the workload and the whole spectrum of how you deal with vulnerabilities.
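To make the two-pronged strategy John describes concrete, here is a minimal, hypothetical sketch (not Root’s actual code; version tuples and function names are invented for illustration) of the per-vulnerability decision: prefer an official patch on the installed release line, otherwise fall back to backporting the fix from the newer version that contains it.

```python
def plan_patch(installed, fixed_versions):
    """Decide a patch strategy for one vulnerability.

    installed: version tuple currently deployed, e.g. (1, 3, 1).
    fixed_versions: version tuples known to contain the official fix.
    """
    # Official fixes on our own major line are the safest to apply.
    same_line = [v for v in fixed_versions
                 if v[0] == installed[0] and v >= installed]
    if same_line:
        return ("upgrade", min(same_line))
    if fixed_versions:
        # The fix only exists in a newer line (e.g. 2.0): backport it
        # rather than force a breaking upgrade.
        return ("backport", min(fixed_versions))
    # No known fix anywhere: leave it for a human to look at.
    return ("manual-review", None)
```

For John’s example, `plan_patch((1, 3, 1), [(2, 0, 0)])` yields a backport decision, while a same-line fix such as `(1, 3, 5)` would be preferred as a plain upgrade.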
Ian Riopel 7:37
I’d just add one thing, to emphasize: it seems very simple to say, just upgrade, right? Just do your patching. The reality is, a lot of organizations don’t do that. It’s becoming much more of a consideration now that there’s been significant maturing from a compliance standpoint. This is something that PCI DSS 4.0 now requires. The CRA in Europe is now requiring compliance around SBOMs and meeting the SLAs you’ve laid out. More and more, it’s becoming a topic of conversation in folks’ SOC 2 audits. And these upgrades that John’s talking about are not trivial for most folks, because as you start to make these changes, it’s not uncommon for them to require what we call a breaking change, right? If I make that upgrade, it breaks my software, and now all of a sudden that’s a significant engineering effort and friction point. That’s the reason a lot of companies get stuck in this legacy state of old software: they don’t want to break anything in production, and unfortunately the vulnerability didn’t necessarily out-prioritize the next feature or stability. What’s changing here is the speed and pace and the capabilities we’re able to bring to market. Whereas the SLAs a good organization would follow, say a 30-60-90, patching a critical vulnerability within 30 days, maybe taking upwards of three months’ time, those timelines can now be compressed into hours, and delivered very easily, even passively, from a developer standpoint, because of some of the big innovation in the automation and the agentic AI that we’ve built.
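As a rough illustration of the 30-60-90 SLA Ian references, here is a small sketch of a deadline check. The severity-to-days mapping (criticals within 30 days, highs within 60, mediums within 90) is an assumption for the example; real SLAs vary by framework and contract.

```python
from datetime import date, timedelta

# Assumed 30-60-90 remediation windows, for illustration only.
SLA_DAYS = {"critical": 30, "high": 60, "medium": 90}

def sla_deadline(disclosed: date, severity: str) -> date:
    """Date by which a vulnerability of this severity must be patched."""
    return disclosed + timedelta(days=SLA_DAYS[severity])

def is_overdue(disclosed: date, severity: str, today: date) -> bool:
    """True once the remediation window has elapsed."""
    return today > sla_deadline(disclosed, severity)
```

The point of the automation Ian describes is to make this check moot: if patches land within hours of disclosure, the 30-day window is never even approached.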
Justin Daniels 13:11
So, asking another question, because I was literally talking to one of my MSSP clients about using AI in software tools, for example with intrusion detection. Think about taking a vast volume of logs and running it through a process to identify, hey, there’s some unusual traffic; when you connect these disparate dots, it suggests there could be a threat actor in this network doing its recon before it engages in its mayhem. Can you talk a little bit about whether this is another use case your tool addresses? Because one of the challenges I’m seeing, as someone who vets tools for clients as well as uses them, is that many times the tool is great, but you kind of have to come up with the use cases.
John Amaral 13:55
Our tool is not built for that use case, I can tell you that. In general, my experience building with these kinds of AI tools for a long time now suggests it could be good at that, but it’s not what we directed our application towards. The key use cases around our solution are really about software security and the software supply chain security lifecycle. You build your software, it’s vulnerable because you incorporate code that’s vulnerable. Ours acts on that code, and on patching that code, in a way that creates a better posture. For instance, if you’re trying to sell into the federal government, you’re subject to FedRAMP compliance, and FedRAMP compliance requires that you continuously monitor and patch all of the software for the kinds of risks I just mentioned, which are: I’ve baked security vulnerabilities into my code, and they’ve got to go away. So, as Ian mentioned, there are strict timelines on how fast you have to remove those vulnerabilities, or else you can’t operate a federal service; they just won’t let you sell into the government. And there are other similarly strict compliance regulations around fintech, healthcare, and retail that require that you have no critical or high vulnerabilities in your code. This is really setting, I’d say, the risk surface of your software differently. The use case you mentioned is: from a vast sea of log data, can I detect, perhaps, lateral-movement potential for an attacker in my realm? The good thing about agentic AI especially, with larger and larger context windows (I think Google has upwards of a 1 million or 4 million token context window now), is that you can put all that context in, and it can find needles in haystacks very, very well. So I think there are companies actually applying it to that sort of use case, but ours is more on the software supply chain security side.
Ian Riopel 16:03
One thing I would mention, right, is that, as John said, AI is a very broad term, and honestly that’s one of the things we run into. We were just at a Gartner conference last week, and everyone says they do AI, right? It’s the new zero trust in the security world, everyone does it. When you dig a little bit deeper, it can be very wide. It’s like saying you’re a doctor, right? You can be a generalist, or you can be a hyper-specialist in one specific area. The way we leverage AI tends to be more the specialist area, because of how critical this is; it’s the foundation of what everyone’s building on. Literally, it’s the foundation of the house, of your business, of the service you’re delivering. So the better way to think of what we’ve built internally is less as LLM-style generative AI and more as an agentic fleet. We actually internally call it a platoon, we use a military kind of structure, and we have a number of different agents that work together and coordinate to come up with strategies and approaches to find and build and create and deliver sophisticated patches that are trustworthy, with full provenance and full transparency in how that’s being delivered. Behind the scenes, it’s extraordinarily complex, but from a user perspective it’s not unusual for us to hear, it’s like magic, it just works. That’s not to say we don’t use LLMs internally, we use all the LLMs, right? But the product itself is not just a reinterpretation of an LLM. These agents are quite sophisticated; we’ve even built tool belts, sophisticated tools the agents can use to do a whole number of different things.
Justin Daniels 18:01
So you’ve mentioned your tool’s real-world use case, which is really related to software vulnerability, specifically patching, and we talked a little bit about agentic AI. A lot of people out there are worried that, hey, AI is gonna replace my job. Talk to us a little bit about the limitations, where you really still need that skilled cyber professional to use it as a tool to enhance his or her abilities, not simply, as you pointed out, as a substitute for their judgment.
Ian Riopel 18:36
Yeah, so I’ll let John double-click here in a second. But what I would just say is, we’re not replacing people’s jobs, per se, because a lot of this isn’t happening already, right? People have jobs, and companies are still struggling to hit these SLAs. We’re not seeing the number of supply chain and patching vulnerabilities that get owned or pwned, that had a patch available, go down, right? It’s still going up exponentially. So this is more about enabling teams and companies to focus their time where they already are, which is trying to be as effective as possible around feature delivery, stability, going to market with a better product, while trying to stay compliant. Rather than being in a very reactive state all the time, when there’s a new zero-day KEV exploit that CISA puts out and says everyone must drop everything and fix this, it’s about staying ahead of that, and letting a vendor and a solution like ours actually solve those problems for you proactively. Very rarely do we come across an organization, as a matter of fact, we’ve never come across an organization, that says, what we want to do really well is patching. It’s never a thing.
John Amaral 20:11
So, I like nautical analogies. Sailing a boat is really hard, right? You’ve got a crew, and they need to do a lot of work, especially on a big sailboat, just to get it to go where they want it to go. That’s hard, right? You’ve got all kinds of specialists on there: navigators, folks who know how to sail, folks who know how to run all the mechanical systems, et cetera. These security issues are effectively like running that sailboat with a hole in the hull, and what that does is redirect everybody’s attention to the hole, because that becomes the biggest problem, when sailing the boat is still hard and you still need to get where you’re going. I would say that, in general, people have been ignoring the leak in the hull and assuming they can just get away with that. We’re giving them tools that don’t let them have leaks in the hull, effectively. They’ve got vulnerabilities, they’ve got issues; we can take those away, along with all the awful work that goes with dealing with that kind of problem, so they can just run the boat. And tying that back to software: you’ve heard of the term shift left, right? Shift left is a term in engineering now, which in this world means the security people find priorities and push them over to the left, to the engineers, and say, please fix those for us. That’s effectively the hole in the hull. The engineers don’t like to fix it. What they want to do is sail, and sail fast. These are performance sailors. They want to build the real solution. They want to build top-line value. They want to build software. When you pull time away from them to fix these vulnerabilities, it’s hard, they don’t like doing it, and it takes security specialty. So we’re basically unburdening them of that.
And our mantra is: don’t shift anything left that you can automate away, because that, especially in this context, means taking the engineers off of building the software that powers the business, that gets us where we want to go with the sailboat. So we’re a welcome tool in organizations that use us, because they get to focus on the stuff they really believe is worthy of their time. And as Ian mentioned, a lot of these companies have stacks of vulnerabilities sitting in a kind of “I wish I could fix them” pile, thousands of these, and we make that list go down to, like, three, and then they’re happy about it, so they can keep on sailing fast.
Justin Daniels 22:44
You have a follow-on question? You look like you did. And I’ve done most of the talking, so I know I should defer, but it’s an AI topic, and it’s always interesting. I’m deferring to my co-host.
Jodi Daniels 23:01
Well, I was going to ask John and Ian the question that we always ask, which is: what is your best personal privacy or security tip? So not one that you would offer to a company; you’re hanging out with non-privacy and non-security people, and they know what you do. What might you tell them they should do?
John Amaral 23:21
I can go first. Yeah, go for it, John. My personal privacy and security tip is: password manager, password manager, password manager. Be diligent. Get that thing, put all of your stuff in there, make a really long, hard, you know, one password, and diligently apply it everywhere, so that you never have simple passwords, passwords written down, anything like that. Make it all digital. Get a company you trust, make sure that’s backed up. Take those little keys they give you, those special recovery keys for when you forget what you’re doing, and store those off somewhere really private and safe. Really manage passwords well, because we know that’s a big problem, forever, and always will be. It isn’t easy, but I’m diligent with my passwords.
Ian Riopel 24:06
I was going to say the same exact thing, with a plus of 2FA on everything. There are a lot of great 2FA tools out there now. I don’t want to push any specific one in particular, but there are some that are a lot easier to keep using if you need to change phones, for instance. So there are a lot of different capabilities. It’s worth taking an afternoon, or an evening or whatever, for your family, coming up with a solution and strategy, and getting it rolled out for everyone. It’s inexpensive to free, and it’s definitely a lot cheaper than spending days trying to yell at people and get money returned.
John Amaral 24:51
Oh, and Ian and I have not loaded TikTok on our phones.
Ian Riopel 24:55
Yeah, we’re only where you’d like to.
Jodi Daniels 24:58
But I want a sentence more on the why.
Justin Daniels 25:06
No TikTok, spying by China. Need I say more?
John Amaral 25:08
Yeah, well, that’s the answer. That’s the answer.
Ian Riopel 25:13
There’s a reason why TikTok is not allowed on most executive and/or government devices, right? And when you think about the type of capabilities that exist, you know the amount of data that our phones can collect at any given time. It doesn’t even have to be your engagement with the app directly, the types of information that can be collected about you. So I occasionally get made fun of by some of my friends; I’m a little paranoid in this realm. I literally have it blocked on my firewall. But what I would say is, every time someone walks into my house on my guest Wi-Fi, I can see an alarming amount of outbound activity from their device reaching back to those servers, even when they’re not on TikTok. I’ll just put it that way.
Justin Daniels 26:04
Well, you know what, Ian, you bring up a really interesting point, which is social uneasiness around saying, Hey, I don’t want those apps and how people react to it, because that creates a barrier to people saying, I don’t want to do that because they want to conform. They want to be liked by other people. So you may not care, but I know from talking to other people, they felt bad by telling people, well, I don’t use it for these reasons, because people think they’re being overly paranoid, and it really creates this interesting social dynamic that impacts your security.
John Amaral 26:39
Great observation.
Ian Riopel 26:40
For sure. I would say I’m not one of those people that’s anti-social-media. It’s just that I know there’s a ton of information out there on all of us. We all know that, right? It’s a question of the extent to which you can control some of that data and where it goes. Different people have different policies, and you know, TikTok’s is one I’m not particularly comfortable with, having all that information about my family or myself.
John Amaral 27:10
Use wisely what you use.
Jodi Daniels 27:13
Now, when you are not building Root and talking all things security, what do you like to do for fun?
Ian Riopel 27:22
Uh, obviously I’d love to spend time with the family whenever we can. Startup life is particularly hard, but we try to make the best of all of that. John and I are definitely workaholics. We both enjoy, well, I don’t want to speak on what he enjoys doing, but any spare time we have, we tend to be out on our boats. As John mentioned earlier, he loves the nautical theme. We may or may not end up still messaging each other security-related or work-related stuff while we’re out on the boat, but the change of scene is nice. Starlink.
John Amaral 28:00
Starlink is a great enhancement to your boating and work-life balance, which come together all the time. I have family, kids, two puppies. I like to boat, and I’ve been a guitar player since I was about seven years old. I played professionally, mostly recreational now, playing bars and bands kind of thing. But you know, it’s not for money, it’s for fun.
Jodi Daniels 28:23
Very exciting. Now, if people would like to learn more and connect, where should they go?
Ian Riopel 28:30
Yeah, so Root.io is our website, and if anyone wants to try out the application or the SaaS service, it’s un-gated today; you can just go to app.Root.io. Or, obviously, feel free to reach out to us on LinkedIn. John and I are both super available there. We’d love to engage.
Jodi Daniels 28:52
Awesome. Well, thank you so much for joining. We really appreciate it. Thank you so much.
Outro 29:01
Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.
Privacy doesn’t have to be complicated.
As privacy experts passionate about trust, we help you define your goals and achieve them. We consider every factor of privacy that impacts your business so you can focus on what you do best.