KIneAngst
🔴 Serious concern

DC Circuit lets Pentagon blacklist of Anthropic stand - AI firm refused to drop guardrails on surveillance and autonomous weapons

Source: CNBC / Reuters / Axios · April 8, 2026

What it really says

A three-judge panel of the U.S. Court of Appeals for the D.C. Circuit (Judges Karen LeCraft Henderson, Gregory G. Katsas and Neomi Rao) on 8 April 2026 denied Anthropic's emergency motion to stay the Pentagon's blacklisting while litigation continues.

Background: Defense Secretary Pete Hegseth designated Anthropic a 'supply-chain risk' in February 2026 - a label previously reserved for foreign adversaries - after the company refused to remove two usage restrictions on Claude: first, a ban on using Claude for mass domestic surveillance of US citizens; second, a ban on integrating Claude into fully autonomous weapons systems without meaningful human control. The Pentagon had demanded an 'any lawful use' clause instead.

The court wrote: 'The equitable balance here cuts in favor of the government. On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.' At the same time, the court acknowledged that Anthropic 'will likely suffer some degree of irreparable harm'. Oral arguments were fast-tracked to 19 May 2026.

Crucially, a California federal judge reached the opposite conclusion on 26 March, ruling the blacklist was likely First Amendment retaliation for Anthropic's public advocacy on AI safety. The conflicting rulings mean Anthropic is currently barred from Pentagon contracts but may continue working with other federal agencies.

Our assessment

This case sets a precedent for whether governments can compel AI companies to remove safety guardrails - or whether such guardrails constitute protected speech. Both sides have legitimate arguments: the Pentagon needs capable AI for national security and does not want vendors dictating what the government can do with its technology, while Anthropic argues that current AI models are not reliable enough for autonomous weapons and that mass domestic surveillance violates fundamental rights.

The 'supply-chain risk' designation - a category previously used only against China and Russia - has never before been applied to a US company, and the ACLU and CDT characterise the move as government retaliation for disfavoured speech. The conflicting rulings (D.C. Circuit vs. California) make Supreme Court review likely. The case shows that the question of who defines the limits of AI deployment - companies, courts or governments - is not academic but is being litigated right now.

Relevance for Germany

For Germany and the EU, the case matters in three ways.

First: the EU has just enacted the AI Act, which bans certain uses, including social scoring and real-time biometric identification in public spaces. If the US government simultaneously pressures AI companies to drop safety guardrails, a transatlantic contradiction emerges: European companies using Claude must comply with the AI Act, while the same product may be cleared for surveillance in the US.

Second: the case indirectly validates the AI Act's logic - legislative guardrails are necessary because voluntary corporate policies can buckle under political pressure.

Third: German enterprises using Claude, whether via AWS or directly from Anthropic, should watch closely: an extension of the blacklist to all US government contracts could affect Anthropic's business model and therefore its reliability as a provider for European customers.

Fact check

The core facts - denial of the stay motion by a three-judge DC Circuit panel on 8 April 2026, the named judges, the verbatim 'equitable balance' quote, the fast-tracked oral arguments on 19 May 2026 and the conflicting California ruling of 26 March - are consistently reported by CNBC, Axios, Reuters, Bloomberg and the legal trade press (JDSupra, Law.com). Anthropic's 'red lines' - bans on mass surveillance and fully autonomous weapons - are documented in Anthropic's own complaint and published usage policies. The FASCSA supply-chain risk designation is publicly traceable in the Federal Register. The Pentagon's 'any lawful use' formulation is cited by CNBC and NPR. Anthropic is the first US company ever designated as a supply-chain risk; this is confirmed by CNBC and Law.com.

Sources

  • CNBC 08.04.2026 (initial ruling report)
  • Axios 08.04.2026
  • Reuters 09.04.2026
  • Bloomberg 08.04.2026
  • CCIA statement 09.04.2026
  • JDSupra / Kilpatrick 09.04.2026 (legal analysis of FASCSA designation)
  • CNBC 26.03.2026 (California First Amendment ruling)
  • ACLU / CDT amicus brief (aclu.org)
Tags: Regulation · Fundamental rights · Surveillance · Autonomy · USA · Governance · Security