From Pull Request to Platform Compromise in Days
What if a single pull request could compromise five software ecosystems in less than a week?
Not a zero-day. Not a brute-force attack. A pull request — submitted to a public repository, processed by an automated workflow, and rewarded with the keys to the kingdom.
In March 2026, a threat actor known as TeamPCP turned that scenario into reality. Starting with a misconfigured GitHub Actions workflow in Aqua Security’s Trivy vulnerability scanner, they harvested credentials that cascaded through GitHub Actions, Docker Hub, npm, the Open VSX Registry, and PyPI. According to Mandiant CTO Charles Carmakal, more than 1,000 SaaS environments were actively impacted — a number he projected could grow to 10,000. Tools trusted by millions of developers became delivery mechanisms for credential theft, lateral movement, and persistent backdoors.
The attack itself has been well documented by CrowdStrike, Wiz, and others. What hasn’t been explored is how to actually detect and respond to each phase in real time. That’s what this post focuses on: concrete detection logic for every stage of the kill chain, the cross-platform correlation that turns isolated alerts into an investigation, and what this campaign reveals about the speed gap between supply chain attacks and the organizations trying to catch them.
If you’re already familiar with the TeamPCP campaign, skip to the Detection section.
The Core Problem
Modern software gets built by pipelines — automated workflows that pull code, run tests, sign artifacts, and push releases. These pipelines hold secrets: API tokens, service account credentials, signing keys, publishing tokens. And they trust the code they execute.
GitHub Actions workflows triggered by pull_request_target are a well-known example. Unlike standard pull_request triggers, pull_request_target runs in the context of the base repository — with full access to secrets and write permissions — regardless of where the pull request came from. The workflow definition itself comes from the base branch, but if that workflow checks out and runs the PR's code, an external contributor's changes execute with elevated privileges, and any secrets passed to the job are exposed.
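As an illustrative sketch of the risky pattern — this is a hypothetical workflow, not Trivy's actual configuration — a vulnerable setup might look like:

```yaml
# Hypothetical example of the risky pattern, not the actual Trivy workflow.
name: pr-check
on: pull_request_target   # runs with base-repo secrets, even for fork PRs

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # DANGER: checks out the untrusted PR code...
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./scripts/build.sh   # ...then executes it with secrets in scope
        env:
          PUBLISH_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
```

The safe alternatives are the plain pull_request trigger (which withholds secrets from fork PRs) or a privileged job that never checks out and executes untrusted code.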
This is by design. And it’s exactly what TeamPCP exploited.
The Attack Flow
1. Steal the Token (Late February)
TeamPCP submitted a pull request to Aqua Security’s Trivy repository. A pull_request_target workflow executed with secrets access, exposing the aqua-bot service account’s Personal Access Token (PAT). Aqua attempted credential rotation, but residual access remained.
Weeks later, TeamPCP came back.
2. Hijack the Scanner (March 19)
Using the stolen PAT, TeamPCP force-pushed nearly all release tags in aquasecurity/trivy-action to point at malicious commits. No new releases. No branch pushes. Just silent tag rewrites — every CI/CD pipeline referencing these tags by version number now pulled attacker-controlled code.
The payload scanned process memory and filesystems for secrets, encrypted the stolen data, and exfiltrated it to a typosquatted domain. Then it executed the legitimate Trivy scanner, so workflows appeared to complete normally.
Malicious Docker Hub images (Trivy v0.69.4–0.69.6) established persistence via a systemd service polling an Internet Computer Protocol (ICP) canister — blockchain-hosted infrastructure resistant to traditional domain takedowns.
3. Spread Through npm and Beyond (March 20–23)
Credentials harvested in Phase 2 included npm authentication tokens cached on developer machines. TeamPCP deployed CanisterWorm — what researchers have described as the first publicly documented malware to use ICP canisters for command and control. The worm scanned infected machines for npm tokens and automatically published malicious versions of packages those tokens could access, without human intervention. Dozens of npm packages across multiple scopes were compromised.
Three days later, TeamPCP used a compromised service account to rewrite all 35 tags in the Checkmarx KICS GitHub Action and publish malicious VS Code extensions to the Open VSX Registry. Security researchers attributed this activity to the same actor with high confidence, based on identical encryption keys and shared exfiltration methodology.
4. Backdoor the AI Gateway (March 24)
The final known phase targeted LiteLLM, a widely adopted Python AI gateway proxy present in many cloud environments. TeamPCP bypassed LiteLLM’s official CI/CD workflows and published malicious versions directly to PyPI. The attack vector: a .pth file (a Python path configuration file that the interpreter loads automatically at startup) — no explicit import required.
The payload harvested credentials, attempted Kubernetes lateral movement via privileged pods, and installed a persistent backdoor. According to LiteLLM’s security disclosure, the malicious packages were available for several hours before being quarantined.
Why This Worked
Multiple factors aligned to make this campaign possible:
- Mutable tags — Git tags can be silently rewritten. Pipelines that reference v1.2.3 instead of a commit SHA trust that the tag hasn’t moved.
- Secrets in workflows — pull_request_target workflows with secrets access expose credentials to any external contributor who opens a PR.
- Over-permissioned service accounts — The aqua-bot PAT had write access to tags across multiple repositories. A token scoped to CI builds should never be able to rewrite release tags. This is a distinct problem from secrets exposure — even if the token had been stolen through a different vector, the blast radius was determined by its permissions.
- Cached tokens — npm and PyPI tokens stored on developer machines become lateral movement vectors when those machines are compromised.
- Normal-looking behavior — The malicious code executed the legitimate tool afterward, so pipelines appeared to succeed. No failed builds. No red flags in CI logs.
- No cross-platform correlation — Each phase of the attack touched a different platform: GitHub, Docker Hub, npm, PyPI, Kubernetes. Organizations monitoring these systems in isolation saw individual events that looked plausible. The pattern only becomes visible when you correlate identity behavior across platforms and time windows.
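The mutable-tag problem in particular has a concrete mitigation: pin actions to a full commit SHA rather than a tag. A before/after sketch — the action name and SHA below are placeholders, not real releases:

```yaml
# Before: trusts a mutable tag that an attacker can silently move
- uses: some-org/some-action@v1.2.3

# After: pins an immutable commit SHA (placeholder shown)
- uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567  # v1.2.3
```

A moved tag then has no effect, because the pipeline resolves the exact commit regardless of where the tag points.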
The compromised tools — Trivy, KICS, LiteLLM — are security and AI infrastructure used across enterprises. This was a credential harvesting operation targeting the highest-value secrets in an organization’s CI/CD estate. Many affected organizations may not yet know they were impacted.
Detecting Each Phase
Detecting this class of attack requires monitoring CI/CD activity with the same rigor applied to cloud infrastructure. Each phase — on its own — can look like normal developer behavior. It’s only when you correlate events across identities, repositories, and time windows that the pattern becomes visible.
Below is concrete detection logic for every stage of the TeamPCP kill chain.
Phase 1: Workflow Abuse and Initial Credential Theft
What happened: An external PR triggered a pull_request_target workflow that passed secrets to the job, exposing the aqua-bot PAT.
What to detect:
- pull_request_target workflows executing with secrets access. Query your GitHub audit logs for workflow runs where event = pull_request_target and secrets were passed to the job context. Flag any run triggered by a contributor outside your organization. This is the single highest-signal detection for this class of attack — if an external PR triggers a workflow with secrets, you need to know immediately.
- First-time contributors triggering workflow runs. A PR from an account with no prior activity in the repository that triggers a workflow run is a leading indicator. The contributor may be legitimate, but the combination of “new actor + workflow execution” warrants review.
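Both checks above can be expressed as a single pass over exported audit-log records. A minimal sketch, assuming workflow-run events normalized into dicts — the field names are illustrative, not GitHub's exact audit-log schema:

```python
def flag_risky_runs(events, org_members):
    """Flag workflow runs where an external PR hit a privileged trigger.

    `events` are simplified audit-log records; field names here are
    illustrative, not GitHub's exact audit-log schema.
    """
    alerts = []
    for e in events:
        external = e["actor"] not in org_members
        privileged = e["trigger"] == "pull_request_target" and e.get("secrets_passed")
        if external and privileged:
            alerts.append({
                "severity": "high",
                "repo": e["repo"],
                "actor": e["actor"],
                "reason": "external PR ran pull_request_target with secrets",
            })
        elif external and e.get("first_contribution"):
            alerts.append({
                "severity": "review",
                "repo": e["repo"],
                "actor": e["actor"],
                "reason": "first-time contributor triggered a workflow run",
            })
    return alerts
```

The high-severity branch maps to the "know immediately" case; the review branch captures the leading indicator without paging anyone for every legitimate first-time contributor.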
Phase 2: Dormant Identity Revival and Tag Manipulation
What happened: Weeks after the initial theft, the stolen aqua-bot PAT was used to force-push release tags across trivy-action. A service account that normally runs CI builds was suddenly rewriting release infrastructure.
What to detect:
- Dormant identities performing privileged operations. Baseline each identity’s activity over a 30-day window. When an identity with no recent workflow activity suddenly executes workflows — especially workflows that access secrets — treat it as a high-priority alert. For example: a service account that normally only triggers CI builds suddenly pushing new release tags across multiple repositories.
- Anomalous scope of repository access. When a CI identity or bot account accesses significantly more repositories than its historical baseline in a short time window, that’s a credential abuse signal. Track the distinct repository count per identity per day and alert on spikes.
- Tag rewrites on release repositories. Git tags are mutable, but rewriting them is rare in normal operations. Monitor git.push events where ref_type = tag and the tag already existed — especially across multiple tags in a short window.
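The dormant-identity baseline can be sketched as a single streaming pass over time-sorted audit events. The action names and 30-day window here are assumptions for illustration:

```python
from datetime import datetime, timedelta

def dormant_identity_alerts(events, baseline_days=30):
    """Flag privileged actions by identities idle for the baseline window.

    `events` must be sorted by timestamp; the record shape and action
    names are illustrative, not any platform's exact audit schema.
    """
    last_seen = {}
    alerts = []
    window = timedelta(days=baseline_days)
    privileged = {"tag.force_push", "secrets.update", "workflow.run_with_secrets"}
    for e in events:
        actor, ts = e["actor"], e["ts"]
        prev = last_seen.get(actor)
        dormant = prev is None or ts - prev >= window
        if dormant and e["action"] in privileged:
            alerts.append((actor, e["action"], ts))
        last_seen[actor] = ts
    return alerts
```

The tag-rewrite detection pairs naturally with this: the same event stream can be filtered for pushes where the tag ref already existed, with a count-per-window threshold to catch mass rewrites.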
Phase 3: Credential Staging
What happened: TeamPCP harvested tokens from compromised CI environments and developer machines, then used them to publish malicious packages and rewrite additional GitHub Actions.
What to detect:
- Bulk token creation. Creation of three or more access tokens by a single identity within one hour is a strong indicator of credential staging for lateral movement. This applies to PATs, deploy keys, and OAuth app authorizations.
- Actions secrets modified from unfamiliar infrastructure. When secrets are created or updated in a repository, check the source IP against known organizational infrastructure. Modifications from unfamiliar IPs, VPNs, or flagged ranges indicate an attacker injecting credentials into the pipeline.
- OAuth or GitHub App authorizations with critical scopes. Tokens authorized with repo, admin:org, or write:packages scopes from unexpected contexts are a credential escalation signal.
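The bulk-token-creation rule is a sliding-window count per identity. A minimal sketch, with illustrative event fields and the threshold and window from the bullet above as defaults:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

def bulk_token_alerts(events, threshold=3, window=timedelta(hours=1)):
    """Flag identities creating `threshold`+ credentials inside `window`.

    Covers PATs, deploy keys, and OAuth grants alike if they are all
    normalized to a 'token.create' action; field names are illustrative.
    """
    recent = defaultdict(deque)
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["action"] != "token.create":
            continue
        q = recent[e["actor"]]
        q.append(e["ts"])
        while q and e["ts"] - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            alerts.append((e["actor"], e["ts"], len(q)))
    return alerts
```

A deque per identity keeps the window check O(1) amortized, which matters when replaying weeks of audit history during an investigation.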
Phase 4: Downstream Exploitation (Kubernetes Lateral Movement)
What happened: The LiteLLM backdoor attempted Kubernetes lateral movement via privileged pods and ServiceAccount token creation.
What to detect:
- ServiceAccount token requests from unexpected workloads. In EKS/GKE/AKS, monitor the TokenRequest API for service account tokens created by pods or identities that don’t normally request them. A compromised application container requesting a new ServiceAccount token is a pivot indicator.
- Privileged pod creation or host path mounts. Pods created with privileged: true, hostPID: true, or volume mounts to the host filesystem are potential container escape vectors. These configurations are rare in normal operations and should generate immediate alerts.
- New DaemonSets or CronJobs. DaemonSets deploy to every node in a cluster — an attacker creating one achieves cluster-wide persistence in a single operation. CronJobs provide scheduled persistence. Both are high-signal when created outside normal deployment pipelines.
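The pod-level checks above reduce to inspecting a few fields of the pod spec. A minimal sketch operating on a manifest-shaped dict — in practice this logic would live in an admission webhook or an audit-log pipeline, which is assumed here:

```python
def pod_escape_signals(pod):
    """Return container-escape signals found in a pod spec dict.

    `pod` is a plain dict in Kubernetes manifest shape; this is a
    triage sketch, not a replacement for an admission controller.
    """
    signals = []
    spec = pod.get("spec", {})
    if spec.get("hostPID"):
        signals.append("hostPID")
    if spec.get("hostNetwork"):
        signals.append("hostNetwork")
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            signals.append(f"privileged container: {c.get('name')}")
    for v in spec.get("volumes", []):
        if "hostPath" in v:
            signals.append(f"hostPath mount: {v['hostPath'].get('path')}")
    return signals
```

The same shape extends to the DaemonSet/CronJob check: watch create events for those kinds and compare the creating identity against your deployment pipeline's service accounts.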
Phase 5: Cross-Platform Correlation
The phases above detect individual events. The real power is connecting them.
The TeamPCP campaign created a specific temporal pattern: workflow abuse (Day 0) → credential theft (Day 0) → dormant account revival (Day 21) → tag rewrites across repos (Day 21) → npm token abuse (Day 22) → additional GitHub Action compromise (Day 24) → PyPI package compromise (Day 25) → Kubernetes lateral movement (Day 25).
An organization monitoring only GitHub would see a suspicious workflow run and some tag changes. An organization monitoring only Kubernetes would see an anomalous pod. Neither would see the campaign.
Detection that correlates identity behavior across GitHub, container registries, package managers, and cloud infrastructure — tracking how a single compromised credential cascades through the estate — is what turns five isolated alerts into one investigation.
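A minimal correlation sketch: group normalized events by identity and flag identities whose activity spans several platforms within a rolling window. The hard part in reality is identity resolution across GitHub usernames, npm accounts, and cloud principals, which is assumed to have happened upstream here:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_campaigns(events, min_platforms=3, window=timedelta(days=7)):
    """Flag identities active on `min_platforms`+ platforms within `window`.

    Event fields are illustrative; cross-platform identity mapping is
    assumed to have been done upstream.
    """
    by_identity = defaultdict(list)
    for e in events:
        by_identity[e["identity"]].append(e)
    campaigns = []
    for identity, evs in by_identity.items():
        evs.sort(key=lambda e: e["ts"])
        for i, start in enumerate(evs):
            platforms = {e["platform"] for e in evs[i:]
                         if e["ts"] - start["ts"] <= window}
            if len(platforms) >= min_platforms:
                campaigns.append((identity, sorted(platforms)))
                break
    return campaigns
```

Run against the TeamPCP timeline, a single identity touching GitHub, npm, and PyPI inside a week would surface as one campaign rather than three unrelated alerts.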
The AI Accelerant
There's a reason the final link in this chain was an AI gateway.
The compromise was discovered when a researcher at FutureSearch was testing a Cursor MCP plugin that pulled LiteLLM as a transitive dependency. A bug in the malware crashed the machine, an accidental side effect that made the backdoor visible. No human had reviewed the package. No developer made a conscious decision to install that version. An AI tool resolved a dependency, and the backdoor was in.
AI-assisted development is accelerating every phase of the software lifecycle: writing code, resolving dependencies, configuring pipelines, publishing packages. That speed is transformative. But it also means the trust chain that TeamPCP exploited is getting longer, faster, and less visible to human eyes. AI coding assistants pull packages without developer review of each transitive inclusion. AI development tools themselves become high-value targets. Compromising LiteLLM means compromising the AI layer itself.
TeamPCP's campaign took less than a week to cascade through five ecosystems with human-speed tooling. In a world where AI agents handle dependency resolution and deployment autonomously, the next campaign could move faster. The organizations best prepared aren't the ones slowing down AI adoption. They're the ones building detection that can match the speed.
What other pipelines in your stack have the same profile?
If you want help building this kind of cross-platform detection for your environment, reach out to us.
LAST UPDATED: March 25, 2026