TL;DR
- Court Ruling: A federal judge ordered the Trump administration to rescind Anthropic’s supply chain risk label and lift its directive for agencies to cut ties.
- Key Finding: Judge Rita F. Lin ruled the government’s campaign violated Anthropic’s constitutional rights and was designed to punish, not protect national security.
- Origin of Dispute: The conflict began after Anthropic refused to remove ethical guardrails on military AI use, including bans on autonomous weapons and mass surveillance.
- Industry Impact: The case exposed a structural tension between AI safety principles and dependence on defense funding, drawing broad industry support for Anthropic’s stance.
Anthropic secured a federal court injunction on March 26 ordering the Trump administration to rescind its designation of the AI company as a supply chain risk and to lift its parallel directive instructing federal agencies to cut ties with the company. Judge Rita F. Lin ruled that the government’s campaign violated Anthropic’s constitutional rights.
Judge Lin found the government’s actions flouted free speech protections and called them “an attempt to cripple Anthropic.” Her ruling marks the first time a federal court has blocked the government from using a supply chain risk designation normally reserved for foreign actors against a domestic company. Anthropic warned it stood to lose billions in business without the injunction.
How the Dispute Escalated
Anthropic’s conflict with the administration traces back to its insistence on setting ethical guardrails for government use of its AI models. In November 2024, Anthropic partnered with Palantir and AWS for defense AI but drew firm lines against autonomous weapons systems and mass surveillance.
Its Claude Gov models were nevertheless integrated into Palantir’s Project Maven for data analysis, though the company maintained restrictions on how far military applications could go. By early 2026, Anthropic was the only AI firm with models deployed in classified military systems, giving it a unique position in the national security sector.
Meanwhile, Pentagon leadership had been pushing AI companies to provide unrestricted AI access to classified networks since early February 2026. Defense Secretary Pete Hegseth gave Anthropic an ultimatum to comply with Pentagon requirements in late February, demanding unfettered military access to its Claude AI models.
Hegseth had previously declared that the Pentagon would not employ AI models restricting its ability to fight wars, and warned against what he described as ideological interference from Silicon Valley compromising national survival.
When Anthropic refused, events escalated rapidly. On March 3, the military canceled Anthropic’s contract. OpenAI signed its own Pentagon deal just one day later, filling the gap. OpenAI had dropped its own “no military use” policy in 2024, positioning itself as a willing alternative at the precise moment Anthropic was pushed out.
As the standoff intensified, on March 5 the Pentagon formally labeled Anthropic a supply chain risk. CEO Dario Amodei called the government’s actions “retaliatory and punitive.” President Trump then ordered federal agencies to cut ties with Anthropic entirely.
In response, Hegseth went further on social media, writing that no contractor or supplier doing business with the military could conduct any commercial activity with Anthropic. The White House branded Anthropic “a radical-left, woke company” jeopardizing national security.
Consequently, Anthropic sued the Defense Department and Secretary Hegseth on March 9, arguing the government had violated its constitutional rights. Amodei wrote at the time that the designation properly applied only to customers working on direct military contracts, not to broader commercial activity.
Despite the blacklisting, Claude topped App Store charts after the contract cancellation, suggesting public sympathy for Anthropic’s stance.
What the Judge Found
During the March 24 hearing, Judge Lin delivered a pointed rebuke of the government’s rationale. She questioned whether the DOD had violated the law and observed that, under the government’s reasoning, a company could be designated a supply chain risk merely for being “stubborn.”
In its defense, the government argued Anthropic posed a risk of future sabotage, citing concern that a “kill switch” could let the company disable software already running on military systems. Judge Lin found that argument unconvincing: the designation was not tailored to any legitimate national security concern.
Moreover, during the hearing the government itself conceded that the label did not prevent defense contractors from using Anthropic for non-military work. That admission further undercut the stated security rationale.
She described the government’s actions as “troubling” and not proportionate to the threat it claimed to address. Anthropic’s attorney countered that the company has no ability to shut off or surveil software once deployed to military systems.
Pentagon CTO Emil Michael had derided Amodei’s stance as a “God-complex,” claiming the CEO wanted to personally control the military. Judge Lin, however, was unmoved. Her findings made clear that punishment, not protection, drove the government’s campaign against a company that had simply insisted on its own usage policies.
“If the worry is about the integrity of the operational chain of command, DOD could just stop using Claude. It looks like defendants went further than that because they were trying to punish Anthropic. One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic.”
Judge Rita F. Lin, Federal Judge, Northern District of California (via CBS News)
Wider Stakes for AI Governance
The case drew immediate reaction from observers across the political spectrum. Senator Adam Schiff said he was alarmed to see the Pentagon take aim at a company that was simply insisting on policies the vast majority of Americans agree with, and called the attempt to shut it down a hostile and dictatorial act.
His reaction underscored how the case had moved well beyond a contractual dispute into questions about government power over private enterprise. Beyond Anthropic itself, the ruling exposes a structural tension in AI development.
Specifically, the bulk of public AI funding comes from defense, leaving AI companies with an uncomfortable choice between ethical principles and financial survival. Hamid Ekbia, founding director of Syracuse University’s Academic Alliance for AI Policy, put it plainly:
“With the bulk of public AI funding in the U.S. still coming from defense, companies either have to budge or shut themselves out from this unique source of money.”
Hamid Ekbia, Founding Director, Academic Alliance for AI Policy, Syracuse University (via Syracuse University Today)
Industry support for Anthropic proved broad. Thirty-seven employees from OpenAI and Google DeepMind signed an amicus brief backing the company, alongside nearly 1,000 others who signed a public letter.
Furthermore, over 100 Google staff members separately penned a letter to management calling for red lines in government contracts similar to those Anthropic had pursued. AI governance researcher Robert Trager noted the case invites reflection on what kind of relations citizens want between the government and companies.
Building on this momentum, Senator Schiff is also working on legislation to codify restrictions on autonomous weapons and mass surveillance directly into the terms of government AI contracts. Separately, Anthropic pledged $20 million to Public First Action, a political action committee supporting candidates in favor of AI regulation.
At stake is more than principle. Enterprise customers represent the bulk of Anthropic’s business, and the competition for federal AI adoption meant the supply chain risk label threatened not just Pentagon revenue but also the confidence of commercial customers who might fear association with a blacklisted firm.
Looking ahead, Anthropic welcomed the ruling, stating the court agrees it is likely to succeed on the merits of its case. Still, the injunction is preliminary, meaning the full trial lies ahead and the administration could appeal.
Amodei has vowed to maintain Anthropic’s two red lines regardless of the trial outcome: no mass surveillance of Americans and no fully autonomous weapons without human oversight. Whether the ruling holds will determine not just Anthropic’s future but how much power the government can exert over AI companies that refuse to abandon their safety commitments.