Logs are everywhere—the digital records of events and actions that have taken place in every hardware system, application and network. All of your digital environments generate a log of some form.

When cyberattacks occur those logs become vital sources of information and pathways for investigation. Having access to forensic data and relevant logs is crucial for organizations to quickly identify, contain, and mitigate security threats.

On-premises, this is a straightforward task: physical systems and their associated log data are under the organization's management and control. In the cloud, that's not the case.

Typically, organizations rely on cloud providers to host their applications and data. This creates management and visibility challenges because while cloud providers offer various logging capabilities, they often fall short of providing all the logs needed for effective incident response.

Speed is always of the essence when a cyber breach or security incident occurs as teams race to get the necessary information to mitigate breach impact. Time is—quite literally—money when it comes to a data breach incident. So having the right types of forensic data, including the right logs, is critical to investigations and cannot be an afterthought.

The IR challenges cloud logs present

You might be wondering—doesn't my cloud or SaaS provider already have all the log and forensic data that I’ll need when a breach occurs?

The real answer is: only sometimes. Cloud providers and SaaS vendors have lots of logs, and some of those logs can be made available to an enterprise when a breach happens. The challenges relate to how fast that data can be shared and how much data the providers are able or willing to provide.

When investigating a data breach incident, there is no clear-cut rule for how much log and forensic data you need or how far back you need to go. According to IBM's Cost of a Data Breach Report 2022, the average time to discover a breach is 277 days (about 9 months), which means you need logs that reach back at least that far to safely cover the gap. What's more, attackers don't tend to exploit your data immediately. In fact, they're often present in systems for months before doing anything malicious. So the initial compromise could have occurred long before the data was exploited and the incident became a breach.
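To make the retention gap concrete, here is a minimal sketch (the helper name and retention figures are illustrative assumptions, not from any specific provider) that checks whether a log source's retention window spans the average breach-discovery time cited above:

```python
# Illustrative sketch: does a log source's retention window cover the
# average time to discover a breach (277 days, per IBM's Cost of a
# Data Breach Report 2022)?
AVG_DISCOVERY_DAYS = 277

def covers_dwell_time(retention_days: int, dwell_days: int = AVG_DISCOVERY_DAYS) -> bool:
    """Return True if retained logs reach back far enough to span the
    typical gap between initial compromise and breach discovery."""
    return retention_days >= dwell_days

# Many SaaS retention windows fall between 7 and 90 days --
# well short of the 277-day average.
print(covers_dwell_time(90))   # False
print(covers_dwell_time(365))  # True
```

Even a generous 90-day window leaves most of the typical dwell time uncovered, which is why forward-looking log preservation matters.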

Cloud and SaaS service providers typically limit the scope of their logs, both in terms of what data they collect and how long they retain it. In many cases, providers keep logs for only a week or less, which is usually insufficient for effective incident response. Obtaining logs from providers can also be slow: some SaaS platforms throttle download rates to save cost, so acquiring just 24 hours' worth of log data can take more than 24 hours. These delays can be crippling when time is critical in addressing an attack.
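The throttling math is simple but sobering. A back-of-the-envelope sketch (the volumes and rates below are hypothetical, not tied to any real provider's limits):

```python
# Illustrative sketch: how long does it take to export one day of logs
# from a throttled SaaS API?

def acquisition_hours(log_gb_per_day: float, throttled_gb_per_hour: float) -> float:
    """Hours needed to export one day's worth of logs at a
    throttled download rate."""
    return log_gb_per_day / throttled_gb_per_hour

# Example: 50 GB of daily logs at a throttled 2 GB/hour export rate.
# Pulling 24 hours of data takes 25 hours -- the investigation
# falls further behind the attacker in real time.
print(acquisition_hours(50, 2))  # 25.0
```

Whenever the export rate is lower than the generation rate, acquisition can never catch up to the present, which is the scenario described above.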

Now that we understand some of the challenges with logs in our SaaS and Cloud environments, are logs alone enough? Not typically.

One of the many benefits the cloud offers enterprises is its dynamic infrastructure, allowing teams to build and dispense with parts of the cloud environment as needed. For investigators, this creates a challenge: even if you have the logs for a given environment, the node that generated them may no longer exist. So besides keeping the right logs, it's also necessary to capture contextual data, such as the cloud configuration and security settings at the time of the attack.
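One way to preserve that context is to capture a timestamped configuration snapshot alongside the logs at collection time. The sketch below is a hypothetical illustration (the field names and helper are assumptions, not a real collector's API):

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: pair collected logs with a point-in-time snapshot
# of a resource's configuration and security settings, so investigators
# retain the context even after the ephemeral node is torn down.

def snapshot_context(resource_id: str, config: dict, security_settings: dict) -> str:
    """Serialize a resource's configuration and security settings with a
    capture timestamp, for storage alongside its forensic logs."""
    record = {
        "resource_id": resource_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "config": config,
        "security_settings": security_settings,
    }
    return json.dumps(record)

# Example: snapshot a short-lived VM before it is decommissioned.
snap = snapshot_context(
    "vm-1234",
    {"instance_type": "m5.large", "region": "us-east-1"},
    {"public_ip": False, "mfa_required": True},
)
```

With snapshots like this stored next to the log archive, an investigator can reconstruct how the environment was configured at the time of the attack, even when the environment itself is long gone.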

Facilitating investigation across your cloud environments

In the modern, complex world of IT, most organizations use multiple clouds and dozens (if not hundreds) of IaaS, PaaS, and SaaS instances. So incident response is rarely, if ever, contained to any one individual system or application.

If a business process was hacked in one cloud provider and you have the same business process in another, you need to be able to query both environments. That's not something cloud or SaaS providers enable easily, if at all.

Adding further complexity is the fact that different providers and applications can have different log data formats, so even if you could get access to all the required data it’s still a challenge to query all of it using a unified approach. That's where a centralized forensic investigation solution comes in.
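The normalization step such a solution performs can be sketched simply: map each provider's log shape onto one common schema so a single query spans all of them. The two "provider" formats below are invented for illustration, not real provider schemas:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of log normalization: two providers emit events
# in different shapes; mapping both onto one schema enables a single
# query across environments.

@dataclass
class UnifiedEvent:
    timestamp: datetime
    actor: str
    action: str
    source: str

def from_provider_a(raw: dict) -> UnifiedEvent:
    # Assumed format A: epoch seconds, flat "user"/"event" keys.
    return UnifiedEvent(
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        actor=raw["user"],
        action=raw["event"],
        source="provider_a",
    )

def from_provider_b(raw: dict) -> UnifiedEvent:
    # Assumed format B: ISO-8601 timestamp, nested identity object.
    return UnifiedEvent(
        timestamp=datetime.fromisoformat(raw["eventTime"]),
        actor=raw["identity"]["principal"],
        action=raw["operation"],
        source="provider_b",
    )

events = [
    from_provider_a({"ts": 1713830400, "user": "alice", "event": "login"}),
    from_provider_b({"eventTime": "2024-04-23T00:00:00+00:00",
                     "identity": {"principal": "bob"}, "operation": "delete"}),
]
# One query now spans both providers' data:
logins = [e for e in events if e.action == "login"]
```

Centralized investigation platforms do this at scale, but the principle is the same: translate once, query everywhere.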

The importance of being able to search across all forensic data in one place cannot be overstated. Traditional methods of searching through individual logs and data streams are no longer sufficient. Security teams need a more efficient and effective way to investigate breaches and find threats.

The right cloud investigation and response automation (CIRA) solution provides exactly that. By enabling investigation across all available data in one place, at the same time, this approach allows for faster identification of anomalies and quicker response times. It also offers many added benefits, including improved compliance, reduced costs associated with manual analysis, and increased efficiency in investigations.

Given the increasing complexity of cyberattacks and the growing volume of data generated by cloud and SaaS environments, it's clear that traditional methods of searching through individual logs and data streams are no longer up to the task.

LAST UPDATED:

April 23, 2024
