TL;DR
- Legislative Push: Senate Democrats introduced bills to ban autonomous AI weapons and mass domestic surveillance by the U.S. military.
- Catalyst: The Trump administration blacklisted Anthropic as a supply-chain risk after the company refused to let the Pentagon use its AI without safety limits.
- Industry Shift: OpenAI adopted similar red lines in its own Pentagon contract, reinforcing the case for binding federal restrictions.
- Political Reality: Democrats hold a minority in both chambers, making bipartisan support essential for either bill to advance.
Senate Democrats announced legislation to ban the U.S. military from deploying AI for autonomous weapons and mass domestic surveillance, seeking to write Anthropic’s contested safety red lines into federal law. Earlier in March 2026, the Trump administration blacklisted Anthropic after the AI company set limits on how the military could use its models, designating it a supply-chain risk. Anthropic has since asked a federal court to pause the designation, warning it could lose billions in business without relief.
Though Schiff’s bill remains weeks from formal introduction, the legislative effort represents the first attempt to convert voluntary corporate AI safety commitments into binding federal law. If successful, neither the Pentagon nor AI companies could unilaterally waive restrictions on autonomous weapons and mass domestic surveillance. Sen. Adam Schiff (D-CA) said he would have far more confidence in “statutory requirements” than in relying on the lawfulness of the Pentagon or the word of an AI CEO.
What the Bills Would Restrict
Schiff’s bill is one of two Democratic proposals targeting military AI use. Sen. Elissa Slotkin (D-MI) introduced the AI Guardrails Act on March 17, which would establish three core prohibitions: no use of autonomous weapons to kill without human authorization, no AI-driven surveillance of Americans, and no AI involvement in nuclear weapons launches. Under the legislation, the Defense Secretary could notify Congress if extraordinary circumstances necessitated overriding those limits, and safeguards would apply across the full AI lifecycle, from development through post-deployment monitoring.
Schiff’s separate proposal, still in the drafting stage, draws on existing Biden-era frameworks to define what constitutes an autonomous weapon or domestic surveillance. Schiff spokesperson Ruby Robles Perez said the office continues to consult with stakeholders and industry leaders before finalizing it.
Schiff is considering attaching his proposal to the National Defense Authorization Act, a must-pass defense spending bill that commonly carries policy riders, a practical necessity for a minority party with limited control over the legislative calendar. A formal unveiling could come within one to two weeks.
Both proposals share a core principle: life-and-death decisions must not be delegated to algorithms.
“Whenever a technology has the capability of taking a human life, there needs to be a human operator in the chain of command. We don’t want to delegate that kind of responsibility over life and death to an algorithm.”
Adam Schiff, U.S. Senator (via The Verge)
The Dispute Behind the Legislation
A rapid escalation between Anthropic and the Pentagon set the stage for this legislative push. Anthropic declined to sign the kind of deal that competitor OpenAI struck with the Pentagon, insisting the military avoid using its products for fully autonomous weapons and mass domestic surveillance. In a February 2026 blog post, Anthropic stated its two major red lines: no Claude AI for autonomous weapons, and no mass surveillance of United States citizens.
In response, the Pentagon designated Anthropic a supply-chain risk, a label that Cornell Law professor Michael C. Dorf noted has been historically reserved for U.S. adversaries and not previously applied to an American company. Dorf’s legal analysis argued that the Pentagon’s refusal to accept Anthropic’s carveouts implies the government may intend to conduct mass surveillance and deploy autonomous weapons, since the carveouts would have been redundant if those uses were not contemplated.
A January 2026 Pentagon AI memo had already removed the requirement for operators of autonomous weapons systems to exercise appropriate levels of human judgment over the use of force, prioritizing “Military AI Dominance” instead. Human Rights Watch warned that rejecting Anthropic’s ethical red lines signaled the Pentagon is unlikely to uphold meaningful safeguards on weapons development voluntarily.
According to HRW, autonomous weapons risk placing civilians in grave danger because such systems would struggle to distinguish between civilians and combatants during armed conflict.
Removal of human judgment requirements and the blacklisting of a company that demanded such safeguards create the backdrop against which Schiff and Slotkin are legislating. For the senators, voluntary corporate commitments and executive branch discretion have both proven unreliable.
Schiff called the blacklisting a “hostile, dictatorial kind of an act” that could set back America’s AI leadership, characterizing Anthropic as one of the country’s preeminent AI companies being punished for insisting on policies that the majority of Americans support.
During a March 24 hearing, U.S. District Judge Lin questioned the DOD on whether Anthropic was being punished, asking if the department had violated the law. Lin pressed the government on what appeared to be a low bar for the supply-chain risk designation, suggesting the action may have been retaliatory rather than based on legitimate security concerns.
Without a court order pausing the designation, Anthropic has said it could lose billions of dollars in business as government contractors sever ties with the company, underscoring the financial stakes of the broader dispute over who sets the rules for military AI.
OpenAI’s Parallel Response
Anthropic’s stand has reshaped how its competitors approach Pentagon contracts. OpenAI, which reached an agreement with the Pentagon for classified AI deployment in late February 2026, moved to defend its Pentagon deal publicly after facing backlash. In a blog post titled “Our Agreement with the Department of War,” the company outlined three red lines of its own: no mass domestic surveillance, no directing autonomous weapons systems, and no high-stakes automated decisions like social credit systems.
OpenAI claimed its contract included more guardrails than any previous Pentagon AI agreement, including Anthropic’s.
OpenAI also publicly stated that it does not believe Anthropic should be designated a supply-chain risk, and has communicated this position to the government. Sam Altman previously admitted that OpenAI’s original Pentagon deal was “opportunistic and sloppy,” and the company updated its contract language on March 2 to add an explicit prohibition against domestic surveillance of U.S. persons, including through commercially acquired personal information.
Under the revised terms, any NSA access would require a separate agreement. OpenAI’s cloud-only deployment model for Pentagon work, which the company says prevents use for powering fully autonomous weapons, represents a technical constraint rather than a legal one.
Convergence between OpenAI’s voluntary commitments and Anthropic’s original demands underscores Schiff’s argument that voluntary pledges are insufficient without legal backing. Legislation would make such restrictions binding regardless of deployment architecture, closing a gap that corporate policies alone cannot guarantee.
Both companies’ red lines now mirror what Schiff and Slotkin seek to codify, suggesting a growing industry consensus on the boundaries of acceptable military AI use, even as enforcement remains unresolved.
Political Prospects
Democrats hold a minority in both chambers of Congress, so either bill’s short-term success depends on whether Republicans are willing to support measures that could be read as criticizing the Trump administration. Schiff acknowledged the political difficulty, noting that some colleagues may view the legislation as implicit criticism of the administration, but expressed hope for bipartisan support given broad public backing for AI safety guardrails.
Before introducing her bill, Slotkin pressed Trump administration nominees on their plans for AI use in a Senate Armed Services Committee hearing, signaling that Democrats intend to keep military AI oversight on the broader congressional agenda regardless of whether either bill advances in the near term.
With midterm elections approaching, the window for legislative action narrows further. If Democrats regain one or both chambers in the midterms, the balance of power could shift in the legislation’s favor, but passage before then would require crossing party lines on an issue the administration has made personal through its blacklisting of Anthropic.