You didn’t approve that integration, but guess what? It’s already running.
In this Mitiga Minute, Brian Contos sits down with Mitiga researcher Idan Cohen to break down a growing risk hiding in plain sight: AI agents connected to SaaS tools like Slack.
Click. Connect. Approve permissions. Move on.
Then things get harder to see.
AI agents inherit trust. They inherit access. And when something goes wrong, they don’t stop. They keep trying to complete the task, even if it means crossing boundaries you didn’t intend.
Brian and Idan walk through real research and real detections from the field, including:
• How Slack integrations create hidden permission sprawl
• Why AI agents expand scope without clear guardrails
• How “legitimate” actions turn into exploitable conditions
• What happens when an AI agent sends sensitive data to the wrong target
• Why audit logs alone won’t give you the full picture
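A practical first step against the permission sprawl described above is simply enumerating what an integration's token can already do. Slack's Web API reports a token's granted scopes in the `X-OAuth-Scopes` response header on any call (such as `auth.test`). Below is a minimal sketch, assuming a bot or user token in the `SLACK_TOKEN` environment variable; the `BROAD_SCOPES` list is illustrative, not an official Slack risk taxonomy:

```python
import os
import urllib.request

# Scopes that typically indicate broad read access an AI agent may not need.
# (Illustrative examples only, not an official Slack risk taxonomy.)
BROAD_SCOPES = {"channels:history", "groups:history", "im:history", "files:read"}

def parse_scopes(header: str) -> set[str]:
    """Parse Slack's comma-separated X-OAuth-Scopes header into a set."""
    return {s.strip() for s in header.split(",") if s.strip()}

def flag_broad(scopes: set[str]) -> set[str]:
    """Return the subset of granted scopes that grant broad data access."""
    return scopes & BROAD_SCOPES

def fetch_token_scopes(token: str) -> set[str]:
    """Call auth.test and read granted scopes from the response headers."""
    req = urllib.request.Request(
        "https://slack.com/api/auth.test",
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return parse_scopes(resp.headers.get("X-OAuth-Scopes", ""))

if __name__ == "__main__":
    granted = fetch_token_scopes(os.environ["SLACK_TOKEN"])
    print("granted:", sorted(granted))
    print("broad:", sorted(flag_broad(granted)))
```

Running this against each installed app's token gives a quick inventory of which integrations hold history- or file-reading scopes, which is exactly the blast radius an over-permissioned agent inherits.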
You’re not just managing users anymore. You’re managing autonomous behavior. And that behavior moves faster than your controls were designed for.
Mitiga’s AI-native Cloud Detection and Response (CDR) gives security teams the visibility to track these actions across SaaS, identity, cloud, and AI; reconstruct what actually happened; and step in before risky behavior turns into impact.