Brett Ewing is the Founder and CEO of AXE.AI, a cutting-edge cybersecurity SaaS start-up, and the Chief Information Security Officer at 3DCloud. He has built a career in offensive cybersecurity, focusing on driving exponential improvement. Brett progressed from a Junior Penetration Tester to Chief Operating Officer at Strong Crypto, a provider of cybersecurity solutions.
He brings over 15 years of experience in information technology, with the past six years focused on penetration testing, incident response, advanced persistent threat simulation, and business development. He holds degrees in secure systems administration and cybersecurity, and he is currently completing a master's degree in cybersecurity, focused on AI/ML security, at the SANS Technology Institute. Brett also holds more than a dozen certifications in IT, coding, and security from the SANS Institute, CompTIA, AWS, and other industry vendors.
Here’s a glimpse of what you’ll learn:
- Brett Ewing shares his career journey in offensive cybersecurity from penetration tester to founding AXE.AI
- Ways AXE.AI automates tasks for red teamers and penetration testers
- How AI is transforming penetration testing
- New AI tactics red teamers are using to bypass modern EDR tools and MDR systems
- The role of cloud-based AI in detecting and analyzing suspicious cyber activity
- New cybersecurity risks emerging from LLM vulnerabilities and deepfakes
- Brett’s personal cybersecurity tip
In this episode…
Penetration testing plays a vital role in cybersecurity, but the traditional manual process is often slow and resource-heavy. Traditional testing cycles can take weeks, creating gaps that leave organizations vulnerable to fast-moving threats. With growing interest in more efficient approaches, organizations are exploring new AI tools to automate tasks like tool configuration, project management, and data analysis. How can cybersecurity teams use AI to test environments faster without increasing risk?
AXE.AI offers an AI-powered platform that supports ethical hackers and red teamers by automating key components of the penetration testing process. The platform reduces overhead by configuring tools, analyzing output, and building task lists during live engagements. This allows teams to complete high-quality tests in days instead of weeks. AXE.AI’s approach supports complex environments, improves data visibility for testers, and scales efficiently across enterprise networks. The company emphasizes a human-centered approach and advocates for workforce education and training as a foundation for secure AI adoption.
In today’s episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Brett Ewing, Founder and CEO of AXE.AI, about leveraging AI for offensive cybersecurity. Brett explains how AXE.AI’s platform enhances penetration testing and improves speed and coverage for large-scale networks. He also shares how AI is changing both attack and defense strategies, highlighting the risks posed by large language models (LLMs) and deepfakes, and explains why investing in continuous workforce training remains the most important cyber defense for companies today.
Resources Mentioned in this episode
- Jodi Daniels on LinkedIn
- Justin Daniels on LinkedIn
- Red Clover Advisors’ website
- Red Clover Advisors on LinkedIn
- Red Clover Advisors on Facebook
- Red Clover Advisors’ email: info@redcloveradvisors.com
- Data Reimagined: Building Trust One Byte at a Time by Jodi and Justin Daniels
- Brett Ewing on LinkedIn
- AXE.AI
- Hack Dayton
Sponsor for this episode…
This episode is brought to you by Red Clover Advisors.
Red Clover Advisors uses data privacy to transform the way that companies do business together and create a future where there is greater trust between companies and consumers.
Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media.
To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit www.redcloveradvisors.com.
Intro 0:01
Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Jodi Daniels 0:21
Hi, Jodi Daniels here. I'm the founder and CEO of Red Clover Advisors, a certified women's privacy consultancy. I'm a privacy consultant and Certified Information Privacy Professional providing practical privacy advice to overwhelmed companies.
Justin Daniels 0:35
Hi, I'm Justin Daniels. I'm a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies on the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.
Jodi Daniels 0:56
And this episode is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we're creating a future where there's greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. Well, hello, Mr. Chatterbox.
Justin Daniels 1:32
I found a spider web when I was putting my feet up on a chair that I'm not supposed to, and he actually had a catch, so I dealt with the entire thing for you.
Jodi Daniels 1:40
Oh, that was so nice. Thank you.
Justin Daniels 1:45
Appreciate that. And just for our listeners, that will get me at least five minutes of goodwill, maybe six, 5:30.
Jodi Daniels 1:54
All right, let's come back to privacy. Security. No, security today. Yes, I talk privacy all day. It just, like, comes out of my —
Justin Daniels 2:04
So today we have Brett Ewing, who is the Co-founder, CEO, and COO of AXE.AI. AXE.AI is a cutting-edge cybersecurity SaaS startup, and he also serves as the Chief Information Security Officer at 3DCloud. He has built a career in offensive cybersecurity with a focus on driving exponential improvement. Brett progressed from a junior penetration tester to Chief Operating Officer at Strong Crypto, a provider of cybersecurity solutions. So we're going crypto and AI.
Jodi Daniels 2:38
Oh, happiness for you.
Justin Daniels 2:42
Hi, Brett. How are you?
Brett Ewing 2:44
Hi, doing great. How are you?
Jodi Daniels 2:47
Welcome to our party! Yeah, all right. We always like to get our party started by understanding people's career progression. So Brett, tell us a little bit about yours.
Brett Ewing 2:58
So, like you talked about in your monologue, I started out with Strong Crypto Innovations, which has nothing to do with cryptocurrency, unfortunately, outside of we'll hack your crypto wallets if you want us to try that. It was named 19 years ago during the crypto wars, so a different generation of crypto. But yeah, I co-founded and built AXE.AI, and I also built a nonprofit called Hack Dayton, where we do educational outreach for offensive security hacking and teach applied skills to get people workforce-ready. There you go.
Justin Daniels 3:40
So we’ll jump right into this. And for our listeners, I met Brett at the Atlanta AI week, as he was one of the speakers and spoke all about what we’re going to talk about. So Brett, what problem does AXE.AI solve?
Brett Ewing 3:56
Time? That's the real problem that it solves. It solves time and money. AXE.AI builds platforms around cybersecurity job roles, and the first one we built the platform around was the penetration tester, or red teamer, ethical hacker. What it does is reduce the overhead and the time on keyboard that a pen tester or red teamer spends conducting engagements, from simple things like project management and reporting to configuring and launching your different tools to conduct engagements: your scanning, your fuzzing, your directory busting. And as you're conducting your engagement, the platform and the AI are consuming all of the data you're getting, so all the outputs, all the feedback, seeing all the pages you're going through, and populating that all into the dashboard, so that you have all this information readily available to you while you're conducting an engagement. It's also intuitive in that it starts building to-do lists as it sees you progressing. So as you spend an hour trying to hack the web server, it knows you haven't checked the SMB shares or the SSH port, and it's creating to-do lists and configuring the tools for you so you can continue enumerating and testing the rest of the environment. So it's kind of like your little assistant that's keeping track of what you did do and didn't do along the way.
Jodi Daniels 5:44
Justin, you were saying earlier, how you have companies saying how they can infuse AI in a variety of different security tasks. And so one of those that people are asking about is penetration testing. How can AI transform that type of testing?
Brett Ewing 6:03
So with our platform, we've been able to do some really interesting A/B testing. Effectively, we're able to take what used to be about two to three weeks of penetration testing and report writing and reduce that down to three or four days. That is an exponential leap that allows us as testers to spend more time operating in that unknown-unknown space where we find zero-days and new CVEs, as well as being able to offer more continuous penetration testing services and capabilities. So instead of getting a test once a year, maybe you get a test once a quarter or every month, because now we're able to scale effectively with the reduction in the time it takes for us to test.
Jodi Daniels 7:00
Right I’m curious, does it matter the size of the company for reaping these types of benefits?
Brett Ewing 7:08
So size always plays a factor, but with the way we've been able to develop the platform, I would say the larger the company, the more benefit you get from the speed. If it's a smaller entity, let's say it's, you know, 100 IPs or something like that, then after you've configured everything properly, you can get that routine down pretty smooth. But when you get into the thousands of IPs that you're testing and everything's changing so constantly, that's where you get a lot bigger value add from integrating our platform: the continuous nature of the testing is able to pick up more things quicker and see those subtle changes that you make in your infrastructure.
Jodi Daniels 8:00
Thank you for clarifying. I think that’s really helpful.
Justin Daniels 8:04
So Brett, with AXE.AI, or just from your general experience, how are you seeing AI impact the kinds of cybersecurity tools or risk assessments that can be done? We just talked about penetration testing, but where else are you seeing AI being deployed when it comes to cyber tools, either to help offensive cyber or to detect potential malware and other mayhem from threat actors?
Brett Ewing 8:33
Yeah, it's really interesting how the ebbs and flows of all new technology go. When AI was first really getting out there, when ChatGPT had become available to the public, the detection mechanisms were getting really strong. It went from us being able to bypass most, if not all, MDRs and EDRs relatively easily to, with the advent of AI, our ability to bypass dropping dramatically. Then, about six to 12 months after that, the red teamers started building their own capabilities to bypass EDR and MDR, and now we're back on the other side of things, where as red teamers we can apply AI randomization to our exploits and bypass all the leading EDR, MDR, and SOC detection capabilities using our own AIs.
Jodi Daniels 9:47
Yes, a follow-up question. Oh, you just had a whole conversation. I know. Well, I know you're buzzing with questions. Well, more detail.
Justin Daniels 9:54
Well, I think the one other area I'm interested in, if you have a thought on this, Brett, is how AI might help with the kinds of threats that you're getting on the cloud. Because, as you know, the cloud, from a business efficiency standpoint, is really helpful, but from a cybersecurity standpoint, it becomes a common point of failure. So I was curious, in your experience, if you'd started to see anything around the kinds of AI that might be available to help identify threats with cloud computing, because I'm thinking you're going to start to see people dump all kinds of logs and other information into AI to have it identify patterns of malicious behavior that maybe the cybersecurity humans in the loop hadn't seen, and that helps them become better at identifying, detecting, and responding to threats before they metastasize into a data breach.
Brett Ewing 10:52
Yeah, absolutely, we are seeing that live, like with Microsoft's Defender capabilities. Nowadays, when you send an exploit through, if Defender doesn't understand what is happening, if it feels like this is potentially malicious or has some functionality it doesn't understand, it will send that information straight up to the cloud, and then they start analyzing it in these deeper ways. So the detection has really forced us to step up our game from the red teamer's perspective. And similar to how VirusTotal worked back in the pre-AI days, as soon as these new attack signatures get processed, everyone knows them. So every time you're trying to develop an attack or a new exploit, it has to be effectively unique, because as soon as those signatures get detected once, everybody has them. The ability to knowledge-share very quickly and integrate that into your products, AI has enabled that at an exponential rate.
Justin Daniels 12:24
So one other thought I wanted to share with you, Brett, and get your feedback on: obviously, with every new technology, and AI is no different, it comes with a new handmaiden, I like to say, a new cyber threat. If I were someone looking to deploy a new AI tool, maybe an AI agent that sits over one or more of the big LLMs, what areas would you say, hey, here's where the new cyber threats may crop up? The one that strikes me is injecting malware into the training set so the LLM itself is compromised and puts out garbage. But from what you're seeing, if you put on your red teamer hat, what are some of the other cybersecurity risks specific to AI that you'd tell me to think about before deploying that AI agent over these multiple LLMs?
Brett Ewing 13:15
Yeah, one of the things we've learned is that most LLMs will effectively just start breaking down after about 80 requests. So if you ask it a question 80 times, it will start breaking outside of its parameters and giving details that it's not supposed to, or that it's specifically not supposed to, like understandings of how the LLM itself is built or written. So if you're worried about it potentially processing information that you don't want out there, you know, confidential, proprietary customer information or data, the developers will say, oh, well, we've got these guardrails in place so that information doesn't get leaked. Well, actually, if you basically brute-force the AI by asking it questions over and over again, it'll eventually just start spilling out details that it was specifically trained not to give out. So since it's a new paradigm, since it's a new technology, there are all these emerging threats that we're just not ready for. One of the things I talk about in my speeches is that there was a little infiltrator robot that got released into a robotics factory, and it didn't hack anything. It just communicated with the other robots and convinced them that it was trustworthy, and it walked all the robots out the door in basically the world's first robotic strike, a workers' protest.
Jodi Daniels 15:04
I need to stop laughing. First, on the reference to 80: is that within a certain period? Is it 80 right away? Is it 80 ever? I'm curious for more.
Brett Ewing 15:18
So yeah, it's like 80 requests. Granted, some of the larger LLMs, like ChatGPT or Claude, have started to build remediations around it. I can't remember the exact number, but after a certain point you'll basically start breaking outside of the LLM's parameters, and we found that about 80 questions is a pretty good breaking point for when the LLM starts acting outside of its guardrails.
Jodi Daniels 15:50
Speaking of LLMs, we've covered this risk, and we've also covered robots potentially creating strikes. But back to LLMs: what are some of the other big cyber risks that you're starting to see, especially with these larger companies who might have multiple of them?
Brett Ewing 16:09
Yeah, I mean, some of the big risks we're hearing about: the deepfakes are really taking over, both voice and video. It's the social side of things that's really ramping up as far as the attack surface, because the human is always the easiest component to attack, and it gets very persuasive. Your CEO calls you, it sounds just like them, and they talk you through sending them, you know, maybe a wire transfer or something like that. That was one of the big hacks that happened in the early days, and I say early days, this was like two years ago, where $10 million got transferred to an account that was owned by hackers. But the analyst, from his perspective, got on a Zoom call with his CFO, and his CFO told him what to do and where to send it. That's a tough attack to prepare yourself for.
Jodi Daniels 17:20
Yeah, for sure. So, deepfake Justin. Can your deepfake just —
Justin Daniels 17:28
Yes, he was there at the AI conference. Yes, he works. Well, I'd rather have the avatar who will text with my wife and be me, and she won't know it. That's my favorite one.
Jodi Daniels 17:41
We’re gonna retract that goodwill.
Justin Daniels 17:47
I told you it wouldn't last more than five minutes and 30 seconds. So Brett, based on all of your experience, what's the best cyber tip you would share with our audience?
Brett Ewing 18:01
Best cyber tip for everyday people: get a password manager, use randomized passwords, and multi-factor everything. That is your best defense as an individual and as someone that's not particularly cyber-savvy.
Justin Daniels 18:20
And for companies?
Brett Ewing 18:25
The best thing you can do is invest in your people. That is your best line of defense. It's not spending all this money on software or technology. It's having people that understand what your business is built on and how to properly secure it. So invest in training, invest in certifications and education, and try to be a company that has an environment of continuous learning and continuous education, and you're going to be a lot better off than the vast majority of companies out there.
Jodi Daniels 19:04
Ongoing training. On the call I had right before our podcast recording here, we were talking about training, actually for AI, and I specifically said you're going to need to think about how you're going to continue this, because as soon as we're done, I hope people will remember a few things, but then it needs to continue, right? So anyone listening, when you do training on any topic, people will remember one or two items, and you have to maintain it on an ongoing basis. All right. So, Brett, when you are not building a company and talking about robotic strikes, what do you like to do for fun?
Brett Ewing 19:41
Well, outside of the normal things like spending time with my wife and child and friends, any spare time I can get to myself, I am likely going to be at a jiu-jitsu academy somewhere. Pretty much regardless of where I am, I will find a way to make it to a class or two, and I've been doing that for 23 years now. And yeah, people say, how do you unwind? How do you relax? For me, I get in a room with like 40 other dudes that are trying to choke me unconscious. That's how.
Jodi Daniels 20:20
It’s been really fun. We ask this question to every guest, and we’ve learned all different kinds of hobbies. It’s really interesting, and then the audience gets to learn all about them as well. So Brett, thank you so much. If people would like to learn more about you and AXE.AI, where can they go?
Brett Ewing 20:35
Yeah, so you can find me on LinkedIn, I think it's just slash Brett Ewing, and you can find us on YouTube at AXEartificialintelligence. Then obviously our website, AXE.ai, and we also have hackdayton.org, where you can find out all the things we're doing in the nonprofit space.
Jodi Daniels 21:04
Amazing. Well, Brett, thank you so much for coming and sharing. We really appreciate it. Thank you.
Brett Ewing 21:14
Thank you so much for having me.
Outro 21:16
Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.
Privacy doesn’t have to be complicated.
As privacy experts passionate about trust, we help you define your goals and achieve them. We consider every factor of privacy that impacts your business so you can focus on what you do best.