In this episode of Mitiga Mic, Brian Contos, Field CISO at Mitiga, sits down once again with Ofer Maor, CTO and Co-founder, to break down one of today’s most urgent cybersecurity challenges: the intersection of Artificial Intelligence (AI) and Detection & Response. From the Automated SOC to AI-powered attackers and cloud-based AI infrastructure threats, Ofer outlines the three pillars of AIDR (AI Detection and Response) and what organizations need to know now and in the near future.
Want more expert insights?
Subscribe to Mitiga on YouTube to explore past episodes and stay ahead of what’s next in cloud and SaaS security with Mitiga Mic.
Featuring: Ofer Maor
CTO & Co-founder, Mitiga
Brian Contos (Field CISO, Mitiga):
Ofer, welcome back to Mitiga Mic.
Ofer Maor (CTO & Co-founder, Mitiga):
Hi, thanks for having me again.
Brian:
No, it’s a real pleasure to have you on board. You’re our first repeat guest, which makes a lot of sense given you’re our CTO, and you have a lot to add. I want to jump right into today’s discussion. There’s been a lot of talk about AI, particularly AI as it relates to detection and response—or what people are calling AIDR. So first off, what is AIDR? What’s all this about?
Ofer:
That’s a great question. What I’ve learned in the last few months is that a lot of people use the same term to describe completely different things. And it’s not new—people like buzzwords: XDR, CDR, AIDR, and so on—but behind the same buzzword, they’re often talking about very different things.
Before we dive into that, I’d like to suggest my own classification—three things. These are broad strokes; we could break it into 50 things, but let’s start with the big three pillars that I think distinguish the different aspects of detection and response intersecting with AI.
The first intersection is using AI for detection and response. I think that’s probably the most discussed and implemented right now. We’ve seen millions of startups go into this space. Basically, it’s saying: we have this problem—detection and response, trying to detect when a threat actor does something malicious—endpoint (EDR), network (NDR), cloud (CDR). It’s not necessarily an AI-centric attack. It’s just using AI to detect bad things.
Brian:
Exactly. Using AI to help triage, detect anomalies. I mean, we’ve been using AI for years, but now when people say AI, they mean Gen AI.
Ofer:
Right. It’s about leveraging this transformative, spectacular technology to do the work. If you look at detection and response, it’s what a lot of people start with when they get into security. It’s an entry-level job—SOC analyst—and that’s where AI shines: taking the repetitive, tedious tasks done by people just starting out and letting AI handle them.
Brian:
And you made a subtle comment earlier. You said leveraging generative AI, but we—and others—use it along with other things that have long been classified as AI: anomaly detection, pattern discovery, volumetric analysis, temporal analysis. Those are still part of the equation, right?
Ofer:
Of course. Somebody smart told me recently: at some point, you stop talking about the new technology by name and just talk about what it does. Detecting malicious behavior with anomaly detection is a use of AI, but you just say what it does. Now Gen AI is the new buzz, but over time we’ll focus on the applications.
Using Gen AI for detection and response is transformative. There’s still an ongoing discussion about the autonomous SOC—fully autonomous or AI-augmented people? Opinions vary, but either way, we’re going to shift a big chunk of what people used to do to AI. It’ll make us much better at detection and response.
Brian:
So that’s one of your pillars. What’s number two?
Ofer:
Pillar two is detecting attacks where the attacker is using AI. Just like we’re leveraging AI for detection and response, the other side is leveraging AI to become better attackers.
I’ve seen some fantastic startups in this space—penetration testing, red teaming, app testing with AI. The results are mind-blowing. The capture-the-flag challenges AI can solve today are impressive. When this technology becomes commoditized—two, three years from now—attackers will be using it constantly.
Brian:
Two to three years? That’s quick.
Ofer:
Maybe it’ll be four, but the point is that hacker orgs don’t do cutting-edge R&D; they wait until the tech matures. And a lot of startups are already doing automated AI red teaming and pentesting. When that tech matures, some will open-source it, and the basic capabilities will become widely available.
What happens then? Immense increase in attack volume—AI running phishing campaigns at scale, for example. So that detection and response AI we built isn’t just a nice-to-have anymore—it becomes necessary because people won’t be able to keep up with AI-speed attacks.
We need AI not just to help detect traditional attacks but also to detect and respond at AI speeds. That means remediating, blocking, containing—otherwise we won’t be fast enough.
Brian:
So pillar one is using AI as part of your defense. Pillar two is that now attackers are using AI, so we need to detect and respond to that. Got it. What’s the third?
Ofer:
The third is attackers targeting your AI infrastructure. That’s the one most people are concerned about today, because the attacker-AI stuff is still a couple years out.
Enterprises are starting to use AI—even just SaaS AI like ChatGPT or Claude. Others are building AI applications on Bedrock, integrating AI into their services. That infrastructure becomes a target.
When people build new things with new technology, you get misconfigurations, new vulnerabilities, and poor practices. AI is especially risky because it’s so willing to share data it has access to. So we’ll see a lot of interesting attacks in this space.
Brian:
I was just at a conference in Pittsburgh—BSides Pittsburgh. They were talking about people writing complex code without real coding or security expertise. The volume of code being created is exploding. The security holes are massive. It’s a candy store for attackers.
Ofer:
Exactly. I’ve been through three security waves: AppSec around 2000, cloud security a decade ago, and now AI. Each time, there’s tons of innovation, attacks, defenses—then it stabilizes. These early years are the most exciting. But AI adoption is happening way faster than cloud ever did.
Brian:
Totally agree. Let me recap your three pillars:
- Organizations use AI to improve processes and detection and response.
- Attackers use AI to attack us.
- Attackers target our AI infrastructure itself.
Is that right?
Ofer:
Yes. And even though you could call all three “AIDR,” they’re completely different things.
Brian:
Right. So let’s focus on the first one—autonomous SOC. We’ve been hearing about it for a while. Some love it, some hate it. Where does that fit?
Ofer:
That’s pillar one, and a bit of pillar two. Let’s talk SOCs. They’re overworked. Even before AI, they couldn’t handle the volume of data, alerts, and noise. Vendors err on the side of more alerts to avoid missing things. So you get lots of volume.
We’ve tried automation, contextualization, triage. Still, SOCs are overwhelmed. Many orgs now outsource to MDRs. But even MDRs are struggling. AI—especially reasoning models—can do a lot of that work, particularly tier one and tier two tasks.
Tier one analysts are entry level. The knowledge depth isn’t huge, and the work is repetitive. LLMs are really good at that. Letting AI do that work is already here—it’s real. It’s not perfect, but neither are humans doing shift work under stress. LLMs probably make fewer mistakes.
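To make that concrete, here’s a minimal sketch of tier-one triage with an LLM. The model name, alert fields, and verdict schema are assumptions for illustration—this isn’t a description of any vendor’s pipeline.

```python
# Minimal sketch of LLM-based tier-one alert triage. Model name, alert
# fields, and verdict schema are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_alert(alert: dict) -> dict:
    """Ask an LLM for a first-pass verdict on a single alert."""
    prompt = (
        "You are a tier-one SOC analyst. Given the alert below, respond "
        'with JSON: {"verdict": "benign|suspicious|malicious", "reason": "..."}\n\n'
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable reasoning model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(response.choices[0].message.content)

print(triage_alert({
    "source": "cloudtrail",
    "event": "ConsoleLogin",
    "user": "ci-deploy",
    "src_ip": "203.0.113.7",
    "mfa_used": False,
}))
```

In practice a sketch like this handles the repetitive first pass, with anything scored suspicious or malicious escalating to a human.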
Brian:
Exactly. AI replaces the tasks people don’t want to do anyway. At Mitiga, we offer managed CDR, and we already use AI triage internally. The team loves it—they don’t have to sift through repetitive alerts. They get to focus on deeper investigations.
Ofer:
Right. It’s a win-win. There’s a huge shortage of talent. My first startup, RIPC, was an MSSP. We used to say: the people smart enough to tell the difference between a false positive and a real attack don’t want to do that job. You can’t pay them enough.
Brian:
Let’s talk about Mitiga specifically. How do we approach AI for detection and response?
Ofer:
We use best-in-class models and tune them. Our platform was built from day one with deep context. That cloud telemetry and contextual data make AI far more efficient.
A startup just pulling alerts from a bunch of products won’t have that same context. It’s hard to triage well with just alerts. But we already have the cloud data, the profiles, the normalization—that gives the LLM everything it needs to succeed.
Brian:
And we’re not dependent on alerts. We get them, but we focus on logs—cloud, SaaS, identity, and more. Doing this at scale is really hard, but once you solve that, you unlock a lot.
Ofer:
Exactly. Every cloud logs differently—IDs, usernames, internal strings. Normalizing all that and adding context gives the LLM what it needs. AI works best when it has help understanding the context.
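As a rough illustration of what that normalization involves, here’s a sketch mapping an AWS CloudTrail record and a Google Cloud audit-log entry onto one common event shape. The target field names are invented for the example; real pipelines cover far more fields, providers, and edge cases.

```python
# Sketch: normalizing two cloud providers' audit logs into one event shape.
# The target schema (actor, action, source_ip, timestamp) is invented for
# illustration only.

def normalize_cloudtrail(record: dict) -> dict:
    """AWS CloudTrail record -> common schema."""
    return {
        "provider": "aws",
        "actor": record.get("userIdentity", {}).get("arn"),
        "action": record.get("eventName"),
        "source_ip": record.get("sourceIPAddress"),
        "timestamp": record.get("eventTime"),
    }

def normalize_gcp_audit(entry: dict) -> dict:
    """Google Cloud audit-log entry -> common schema."""
    payload = entry.get("protoPayload", {})
    return {
        "provider": "gcp",
        "actor": payload.get("authenticationInfo", {}).get("principalEmail"),
        "action": payload.get("methodName"),
        "source_ip": payload.get("requestMetadata", {}).get("callerIp"),
        "timestamp": entry.get("timestamp"),
    }
```

Once every provider’s events land in the same shape, downstream detection logic—and the LLM—only has to reason over one schema.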
Brian:
And AWS changed their log schema recently—vendors don’t always communicate changes like that. It’s constant work staying on top of it.
Ofer:
Yes. AWS was great about it—early notice, good documentation. But many vendors just change logs without notice. We monitor log formats daily, catch changes immediately, and adapt.
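One simple way to catch silent format changes is to diff the fields observed today against a stored baseline. This is a deliberately naive sketch—it assumes flat JSON records—where a production monitor would also track types, nesting, and value distributions.

```python
# Sketch: detecting log-schema drift by diffing observed fields against a
# baseline. Assumes flat JSON records; real monitors go much deeper.

def observed_fields(records: list[dict]) -> set[str]:
    fields: set[str] = set()
    for record in records:
        fields.update(record.keys())
    return fields

def schema_drift(baseline: set[str], records: list[dict]) -> dict:
    seen = observed_fields(records)
    return {
        "added": sorted(seen - baseline),
        "removed": sorted(baseline - seen),
    }

drift = schema_drift(
    baseline={"eventTime", "eventName", "sourceIPAddress"},
    records=[{"eventTime": "...", "eventName": "...", "tlsDetails": {}}],
)
print(drift)  # {'added': ['tlsDetails'], 'removed': ['sourceIPAddress']}
```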
Brian:
And logging usually lacks product management. No one really wants to do it.
Ofer:
Exactly. It’s an engineering task that gets pushed down the list.
Brian:
Okay. So, we’ve covered pillar one—how Mitiga uses AI for better detection and response. Let’s move to pillar two: stopping AI-powered attackers. What’s really different about these attackers? And how does that change how we do detection and response?
Ofer:
For many years, we’ve heard that the average dwell time of an attacker is 200 days. Maybe that number’s overused, but the idea is that there’s usually a long phase between initial access and when attackers actually cause damage.
Typically, attackers get in, then spend time doing reconnaissance—finding the right email thread for BEC, locating sensitive data for ransomware, understanding backups so they can disable them before encryption, and so on. That’s the value of solutions like Mitiga. If we can catch the attacker during that phase, we can block them before damage occurs. That’s detection and response—stop the impact, even if you can’t prevent the initial access.
But here’s the problem: AI does that recon much faster. AI is diligent. It doesn’t rest. You can run 5,000 agents in parallel. So instead of a week of recon, it can happen in minutes. It’s not just technical recon—it’s contextual: “This is a financial transaction,” “That’s sensitive data,” “Here’s the backup system,” “This is the admin.”
That means our detection and response has to be really fast. We can’t wait hours. We have to detect the first AI agent doing recon and block it within a minute.
Brian:
So, the work is the same—but the timeline is drastically compressed.
Ofer:
Exactly. And it introduces something very contentious: autonomous or non-human-in-the-loop remediation.
Today, even though many customers want Mitiga to offer automated remediation—and we can—most still want a human to approve changes. Like, “Okay, someone look at this before we block that user.”
But if we want to react in under a minute, there’s no time for a human in the loop. That’s going to be the biggest change in pillar two.
Brian:
I’ve spoken with several CISOs recently, even just in the last quarter, who are becoming much more comfortable with this. Not because they want to, but because they have to. It’s a self-preservation thing. They need a break-glass option—react fast, take action now, and if it was a mistake, just roll it back.
Ofer:
Exactly. That’s why we’re focused on reversible remediation actions. If I can roll something back easily, then I can afford to move fast. For example, if AI blocks a user and then two minutes later realizes it was a false positive, it can just reverse the block. Maybe the user never even noticed. Same password, same access—no harm.
Or maybe we contained a machine, and then brought it back online quietly. With good load balancing, nobody noticed. That’s where we should be going.
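Here’s a minimal sketch of that apply/rollback pattern, using boto3 to deactivate and later reactivate an IAM user’s access keys. The wrapper class is invented for illustration—it isn’t Mitiga’s remediation API—but it shows why this kind of containment is cheap to reverse.

```python
# Sketch: a reversible remediation action. Deactivating access keys blocks
# programmatic access and can be undone without rotating credentials.
# The apply/rollback wrapper is illustrative, not a real product API.
import boto3

iam = boto3.client("iam")

class BlockUserKeys:
    """Reversible containment for an IAM user's access keys."""

    def __init__(self, user_name: str):
        self.user_name = user_name
        self.blocked_keys: list[str] = []

    def apply(self) -> None:
        keys = iam.list_access_keys(UserName=self.user_name)
        for key in keys["AccessKeyMetadata"]:
            if key["Status"] == "Active":
                iam.update_access_key(
                    UserName=self.user_name,
                    AccessKeyId=key["AccessKeyId"],
                    Status="Inactive",
                )
                self.blocked_keys.append(key["AccessKeyId"])

    def rollback(self) -> None:
        # False positive? Reactivate the same keys: same credentials,
        # same access, as if nothing happened.
        for key_id in self.blocked_keys:
            iam.update_access_key(
                UserName=self.user_name,
                AccessKeyId=key_id,
                Status="Active",
            )
        self.blocked_keys.clear()
```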
Brian:
Makes total sense. Let’s recap before we go into the final section.
- We talked about how Mitiga uses AI for detection and response.
- We covered how attackers use AI, and how that impacts our detection and response strategy.
So now let’s go into pillar three: attacks on AI. What does that really mean? Is AI part of the cloud? Is it its own layer? What are these AI security startups actually talking about?
Ofer:
We could divide this into ten subcategories. There’s a big difference between attacking ChatGPT or your private instance of ChatGPT versus attacking a model you trained internally. It’s a huge spectrum.
What you see with a lot of AI startups today—especially in early-stage markets—is that they’re taking a vertical approach: “I do cloud security,” or “I do AI security,” and trying to solve everything in that domain. But as security matures, you start to see the horizontal approach: posture, prevention, detection and response—each targeting different personas and use cases.
So, if I build an AI governance product, I’m probably not focused on detection and response for SOC teams. Different needs. Huge space. Lots of opportunities.
Even within the AI space, we see problems like prompt injection, permission control, and simple mistakes where AI shows data it shouldn’t. Those are big issues.
Brian:
But let’s bring it down to something basic. What’s an example of a simple attack?
Ofer:
Let’s say an attacker compromises an identity—which accounts for about 70% of cloud attacks—and uses that identity to start exfiltrating data from an AI service. That’s not a sophisticated AI attack, but it’s real. And most organizations won’t detect it.
So AI detection and response starts at that basic level and scales up to more complex threats.
And in that sense, it’s very much a cloud problem. Most organizations will build AI services in the cloud—Bedrock, Vertex, Azure OpenAI. It’s part of your cloud infrastructure, like S3 or KMS or databases. These AI services suffer from the same cloud security issues: misconfigurations, compromised identities, permissions abuse—plus the extra risks specific to AI.
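As one concrete example of that overlap, here’s a sketch that pulls recent CloudTrail activity for a suspect identity and filters it down to Bedrock API calls. The logic is deliberately naive—real detection would baseline normal usage per identity—and whether data-plane calls like InvokeModel show up depends on how CloudTrail is configured.

```python
# Sketch: surfacing Bedrock API activity by a suspect identity from
# CloudTrail. Lists recent calls for a human (or an LLM) to review;
# data-plane event coverage depends on CloudTrail configuration.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

def recent_bedrock_events(username: str, hours: int = 24) -> list[dict]:
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    events = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "Username", "AttributeValue": username},
        ],
        StartTime=start,
    )["Events"]
    return [e for e in events if e.get("EventSource") == "bedrock.amazonaws.com"]

for event in recent_bedrock_events("suspect-user"):
    print(event["EventTime"], event["EventName"])
```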
Brian:
That’s a critical point. Detection and response can’t be siloed. You can’t just focus on SaaS or identity or AI in isolation. Threat actors don’t care. They don’t work in silos. They move laterally across everything.
Ofer:
Exactly. Let’s say an attacker uses your AI internally because it has access to S3 buckets. Is that an AI attack? Or a cloud attack? It’s both. Everything is interconnected. You need full context.
Brian:
It reminds me of what we see across the cloud mesh: sales cares about Salesforce, HR cares about Workday, developers care about GitHub. But attackers don’t care. It’s all one big surface.
Ofer:
Right. The very thing that makes the cloud so powerful—its interconnectivity—is also its greatest weakness.
Brian:
Okay, final question. Mitiga has been AI-first since day one. A lot of vendors are bolting AI on now. That creates a lot of confusion for buyers. Specifically around AI for detection and response in the cloud—what should customers ask to separate reality from vaporware?
Ofer:
Great question. The amazing thing about Gen AI is that it doesn’t create a big moat. You can build a cool app with AI in a couple of days. But that also means anyone can. That demo may look amazing, but it may not scale, and it may not provide depth.
What gives you a lasting edge is what’s behind the AI. For us, that’s the data lake—deep, contextualized cloud telemetry. Without that, your AI has limited value.
We’ve spent years normalizing the data, enriching it, building profiles and checks. That gives our AI real context and real fidelity. That’s what buyers should ask: does the platform provide real value beyond clever prompts?
Brian:
And because we run our own MDR, we’re constantly refining our AI based on what we see in the real world—real IR, real investigations.
Ofer:
Exactly. Our AI evolves based on real detection, response, and incident handling. That’s very different from just stringing together prompts around an LLM.
Brian:
Perfect. Ofer, thanks again for joining Mitiga Mic. Always a pleasure.
Ofer:
Thank you for having me. I’m sure we’ll speak again soon.