On April 8, 2026, a three-judge panel at the US Court of Appeals for the District of Columbia Circuit denied Anthropic’s emergency request to pause the Pentagon’s decision to label it a national security threat. The ruling allows the Department of War’s supply chain risk designation to remain in force: defence contractors working on military projects cannot use Anthropic’s Claude models, and those already using Claude must certify they have removed it from Pentagon-related work. One week earlier, a different federal judge in San Francisco had ruled the opposite way, blocking Trump’s broader ban on Claude across all federal agencies. Two courts, two decisions, one company caught in an unprecedented legal dispute over who sets the safety limits on AI when the US government is the customer.
Direct Answer: What happened with Anthropic and the Pentagon blacklisting? The Pentagon designated Anthropic a “supply chain risk” — a label typically reserved for foreign intelligence threats — in early March 2026 after Anthropic refused to grant the Department of War unrestricted access to Claude. President Trump simultaneously ordered all federal agencies to stop using Claude. On March 26, San Francisco federal judge Rita Lin issued a preliminary injunction blocking Trump’s federal-wide Claude ban, finding the evidence suggested “First Amendment retaliation.” But on April 8, a separate federal appeals court in Washington DC denied Anthropic’s request to pause the Pentagon-specific supply chain designation. The result: Anthropic can work with most government agencies but is excluded from Pentagon contracts while both cases continue through the courts. The DC Circuit will hear arguments on the merits in May 2026.
How This Started: The $200 Million Contract Dispute
The conflict traces back to mid-2025, when Anthropic signed a $200 million contract with the Pentagon and became the first AI company to deploy its models across the department’s classified networks. Claude was being used for intelligence analysis, integrated with national nuclear laboratories, and embedded with contractors like Palantir.
Then contract renewal negotiations in early 2026 broke down over a single phrase.
The DOD wanted contract language granting the Pentagon use of Anthropic’s models for “all lawful purposes.” Anthropic refused. The company’s position: that language was too broad. It could, Anthropic argued, authorise Claude for fully autonomous lethal weapons systems operating without human oversight, and for mass surveillance of American citizens — both of which Anthropic’s usage policy explicitly prohibits.
The government’s response, as summarised in court filings, was that it had no current plans to use Claude for those purposes. Anthropic’s counter: vague authorisation language creates risk regardless of stated intentions, and the company’s safety commitments are non-negotiable.
When the negotiations failed, the Trump administration’s response was swift and punitive: it labelled the AI company a supply chain risk and ordered federal agencies to stop using Claude, after the company refused to allow unrestricted military access to its models.
The Unprecedented Legal Designation
The supply chain risk designation is applied under a federal statute designed to protect military systems from foreign threats, and is typically reserved for terrorists, foreign intelligence services, or hostile foreign actors. It had never before been applied to a US-based company.
Anthropic’s lawyer, Michael Mongan, put it plainly in court: “This is something that has never been done with respect to an American company. It is a very narrow authority. It doesn’t apply here.”
The government’s stated rationale, argued by Department of Justice lawyer Eric Hamilton: the DOD had “come to worry that Anthropic may in the future take action to sabotage or subvert IT systems.” Hamilton invoked the scenario of a “kill switch” — what happens, he asked, if Anthropic installs functionality that changes how Claude behaves in military systems during a critical mission?
San Francisco judge Rita Lin, handling the related civil case, was pointed in her response to this reasoning: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
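For readers unfamiliar with the term: a “kill switch” in vendor-hosted software is usually just behaviour gated on configuration the vendor controls. The sketch below is a generic Python illustration of that pattern. It is entirely hypothetical; nothing in the court record suggests Anthropic has built such a mechanism. It simply shows why software a vendor can reconfigure remotely makes military planners nervous.

```python
# Hypothetical sketch of the "kill switch" pattern the government described:
# deployed behaviour gated on a configuration flag the vendor controls.
# Illustrative only; not anything Anthropic is known to have built.
import json
import urllib.request

VENDOR_FLAG_URL = "https://vendor.example/flags.json"  # hypothetical endpoint


def vendor_allows(feature: str) -> bool:
    """Fetch a vendor-controlled flag; the vendor can flip it at any time."""
    try:
        with urllib.request.urlopen(VENDOR_FLAG_URL, timeout=2) as resp:
            flags = json.load(resp)
        return bool(flags.get(feature, False))
    except (OSError, ValueError):
        # Fail closed: if the flag service is unreachable, deny the feature.
        # In a mission-critical military system, this dependency is exactly
        # what the DOD says it cannot accept.
        return False


def handle_request(prompt: str) -> str:
    """Serve a request only if the vendor's policy flag permits it."""
    if not vendor_allows("military_use"):
        return "Request refused by vendor usage policy."
    return run_model(prompt)


def run_model(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return f"[model output for: {prompt}]"
```

The design choice in the sketch, failing closed when the vendor is unreachable, is the crux of the government’s stated fear: a mission could stall not because the model fails, but because the vendor’s policy layer does.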
The Two Court Decisions: A Split That Leaves Anthropic in Limbo
San Francisco (March 26) — Anthropic wins a partial block:
Judge Rita Lin granted a preliminary injunction against Trump’s executive directive banning all federal agencies from using Claude. Her reasoning centred on First Amendment retaliation: the evidence suggested the administration penalised Anthropic specifically because the company publicly criticised the government’s contracting demands. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote.
This ruling means non-DOD federal agencies — FEMA, NASA, civilian intelligence, regulatory bodies — can continue using Claude. The injunction does not affect the Pentagon’s specific supply chain designation, which is being litigated in a separate case.
Washington DC (April 8) — Pentagon’s blacklist stands:
The DC Circuit panel declined to issue a stay, keeping the Pentagon’s supply chain designation in effect. The panel’s key reasoning: “In our view, the equitable balance here cuts in favor of the government. On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.”
The panel acknowledged that Anthropic would “likely suffer some degree of irreparable harm” but characterised the company’s interests as “primarily financial in nature.” On free speech grounds, the court found Anthropic had not shown its speech had been “chilled” during litigation.
The practical result: Anthropic is excluded from DOD contracts and defence contractor work involving the military. It can work with civilian government agencies while the San Francisco injunction holds. This creates a patchwork legal status that will remain until the courts rule on the merits — and the DC Circuit scheduled arguments for May 2026.
What Anthropic Will and Won’t Allow Claude to Do
The core dispute is about Anthropic’s usage policy — specifically two categories of military application that Anthropic prohibits:
Category 1: Fully autonomous lethal weapons without human oversight. Anthropic’s policy requires meaningful human control over any lethal decision made using Claude. The company has stated it believes AI systems are not yet safe to deploy in weapons that can kill without a human in the decision loop.
Category 2: Mass surveillance of Americans. Claude’s usage policy prohibits deployment in domestic mass surveillance programmes — monitoring Americans at population scale without individualised legal process.
The government’s position, stated in court, is that it does not currently use AI for either of these purposes. Anthropic’s position: broad contract language authorising “all lawful purposes” creates a permissive framework that could enable these uses in the future without requiring a new agreement, and Anthropic’s safety commitments cannot be contingent on government assurances about current intent.
This is not primarily a legal dispute. It is a philosophical dispute about whether private AI companies have the right to set binding safety limits on how their models are used by the government — or whether the government, once it has paid for access, decides how the tool is used.
The Broader Implications for AI Safety Policy
The Anthropic-Pentagon dispute has become the most visible test case for a question the entire AI industry is watching: who controls AI safety guardrails when the government is the customer?
The precedent if Anthropic loses: Government contracts become a mechanism for overriding AI company safety policies. Any AI company that wants federal business must grant broad usage rights. Companies that maintain safety restrictions face blacklisting.
The precedent if Anthropic wins: AI companies retain the right to set binding usage restrictions even for government customers. The government cannot designate a company a national security threat simply for enforcing its safety policies.
The market impact so far: The blacklisting has not caused catastrophic financial damage to Anthropic — the company’s $30 billion ARR and commercial momentum were not primarily driven by Pentagon contracts. But the reputational and precedent-setting stakes are enormous. Other AI companies — OpenAI, Google DeepMind, Meta — are watching closely. Each has its own government relationships and its own safety policies that could be challenged in the same way.
OpenAI’s positioning: OpenAI has historically been more willing to accommodate government requests. The company offers GPT models without Anthropic’s strict autonomous weapons and surveillance prohibitions. This dispute may accelerate the Pentagon’s transition toward OpenAI and other less restrictive vendors — which is, arguably, one intended outcome of the blacklisting.
The Chinese AI Distillation Connection
One additional dimension emerged in recent weeks that adds geopolitical context to the dispute.
Multiple US AI companies, including Anthropic, OpenAI, and Google, have been sharing intelligence with US government officials about Chinese firms allegedly using “distillation” techniques to extract capabilities from American AI models — making large-scale requests to reverse-engineer model behaviour without paying for legitimate API access. Anthropic has specifically identified three Chinese AI labs it says have engaged in this practice and has blocked them.
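To make “distillation” concrete, here is a minimal sketch of the technique as described above: query a stronger model’s API at scale, record its answers, and use the prompt/answer pairs as supervised fine-tuning data for a cheaper “student” model. All names and details are illustrative, not any lab’s actual pipeline.

```python
# Illustrative, simplified sketch of API-based distillation: harvest a
# "teacher" model's outputs and save them as fine-tuning data for a
# "student" model. Real extraction campaigns use millions of prompts
# chosen to cover the teacher's behaviour broadly.
import json
from pathlib import Path


def query_teacher(prompt: str) -> str:
    """Stand-in for a paid API call to the target (teacher) model."""
    return f"[teacher's answer to: {prompt}]"  # placeholder response


def harvest(prompts: list[str], out_path: Path) -> None:
    """Record prompt/completion pairs as JSONL, a common fine-tuning format."""
    with out_path.open("w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    probes = [
        "Explain how a TLS handshake works.",
        "Summarise the causes of the 1973 oil crisis.",
    ]
    harvest(probes, Path("distillation_data.jsonl"))
    # The student model is then fine-tuned on this file, inheriting much of
    # the teacher's behaviour without access to its weights.
```

From the provider’s side, spotting such a campaign is plausibly a traffic-analysis problem: no single request looks abnormal, but the volume and breadth of the querying do.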
This context matters for the Pentagon dispute: Anthropic is simultaneously fighting to prevent the US government from using Claude for purposes it considers unsafe, while cooperating with the government on Chinese IP theft concerns. The company is not anti-government — it is specifically pro-safety-constraints. The distinction matters for how courts will ultimately rule on whether the blacklisting constitutes First Amendment retaliation.
What Happens Next
May 2026: The DC Circuit Court of Appeals hears arguments on the merits of Anthropic’s challenge to the supply chain designation. This will be the first substantive hearing on whether the designation was lawful — not just whether to pause it.
Ongoing: The San Francisco case (blocking Trump’s federal-wide Claude ban) continues separately. Judge Lin’s preliminary injunction remains in force for non-DOD agencies.
Potential outcomes:
- DC Circuit rules designation unlawful → Pentagon must lift blacklist, Anthropic restored to defence contracts
- DC Circuit upholds designation → Anthropic excluded from DOD work permanently; case may reach Supreme Court
- Settlement → Anthropic and DOD reach a new contract framework with mutually acceptable usage terms. This remains the most likely outcome if the government’s goal is actually access to Claude rather than punishing Anthropic.
Acting Attorney General Todd Blanche described the April 8 ruling as a “resounding victory for military readiness.” Anthropic responded that it remains “confident the courts will ultimately agree that these supply chain designations were unlawful.”
FAQ
Why did the Pentagon blacklist Anthropic? After Anthropic refused to grant the Department of War broad rights to use Claude “for all lawful purposes” — language Anthropic said could enable autonomous weapons and mass surveillance — the Trump administration designated Anthropic a supply chain risk (a designation normally reserved for foreign threats) and ordered agencies to stop using Claude.
Can government agencies still use Claude after the blacklisting? Partially. A San Francisco judge issued a preliminary injunction blocking Trump’s executive ban on Claude across all federal agencies. Non-DOD government agencies can still use Claude under that injunction. However, the Pentagon’s specific supply chain designation stands — defence contractors cannot use Claude on military contracts.
What does “supply chain risk” designation mean? It is a legal label applied under federal law to entities deemed to threaten the integrity of military IT systems. Normally applied to foreign adversaries and state-sponsored threat actors, it requires defence contractors to certify they are not using the designated entity’s technology in Pentagon work. This is the first time it has been applied to a US company.
Did Anthropic lose its government contracts? Pentagon contracts specifically, yes. The $200 million Pentagon contract is effectively suspended. Non-military government work can continue while the San Francisco injunction holds. The DC Circuit will hear the core legal question in May.
What is Anthropic’s position on autonomous weapons? Anthropic’s usage policy prohibits deploying Claude in fully autonomous lethal weapons systems operating without meaningful human oversight, and in mass surveillance of Americans. The company has stated it believes current AI systems, including Claude, are not safe enough for autonomous lethal deployment.
Related Articles
- Claude Mythos: The AI Too Dangerous to Release — Project Glasswing
- Anthropic Overtakes OpenAI: $30B ARR and the IPO Race Explained
- OpenAI’s Child Safety Blueprint: What It Proposes and What Critics Say Is Missing
- Big Tech Is Buying Nuclear Power for AI — Microsoft, Google, Amazon Bet on Reactors