KIneAngst
🔴 Serious concern

Pentagon signs AI deals with eight tech firms for classified military networks - Anthropic excluded over safety concerns

What it really says

The US Department of Defense announced agreements with eight technology companies on May 1, 2026, to deploy their AI systems on the military's most classified networks. The companies - Amazon Web Services, Google, Microsoft, OpenAI, SpaceX, NVIDIA, Reflection, and Oracle - may now operate their AI models on Impact Level 6 (secret) and Impact Level 7 (top secret) networks. The Pentagon described the agreements as accelerating the transformation of the US military into an 'AI-first fighting force'. The tools will be available via GenAI.mil, which 1.3 million Defense Department personnel have used in its first five months of operation.

Notably absent: Anthropic. Defense Secretary Pete Hegseth designated the company a 'supply-chain risk to national security' in February 2026 - the first such designation ever applied to an American company. The reason: Anthropic's refusal to accept a clause permitting the Pentagon to use its Claude models for 'any lawful purpose', including fully autonomous weapon systems and mass domestic surveillance. Anthropic drew two red lines: no mass surveillance of American citizens and no fully autonomous weapons without human involvement. CEO Dario Amodei stated that frontier AI systems are 'simply not reliable enough to power fully autonomous weapons'.

A federal court issued a preliminary injunction in Anthropic's favor on March 26. On April 17, Trump's chief of staff Susie Wiles and other officials met with Amodei at the White House. Trump subsequently told CNBC that a deal was 'possible'.

Our assessment

This development warrants serious concern. For the first time, the US military is integrating eight commercial AI systems simultaneously into its most classified networks - where mission planning, intelligence analysis, and weapons targeting take place. The 'any lawful purpose' clause sounds innocuous but is the core of the conflict: the Pentagon wants maximum flexibility in deploying AI without committing to restrictions in advance.

That seven companies accepted these terms while Anthropic alone refused - and was punished for it - sends a troubling signal to the entire AI industry: it creates an incentive to set aside safety concerns in favor of government contracts.

At the same time, Anthropic's stance deserves nuanced consideration. The company did not reject all military use: it held a $200 million Pentagon contract and worked closely with the military through Palantir's Maven toolkit. It drew only two specific red lines - and the fact that precisely these lines proved to be deal-breakers raises questions about the actual intended use cases. OpenAI, for its part, demonstratively renamed its military partnership a 'Department of War' agreement - a historical throwback that underscores the growing normalization of military AI deployment.

Relevance for Germany

Highly relevant for Germany and Europe, for several reasons.

First, as a NATO partner, Germany is directly affected by US military strategy. If the US transforms its armed forces into an 'AI-first fighting force', pressure mounts on allies to take similar steps - with all the ethical questions that entails.

Second, Anthropic's designation as a security risk demonstrates how quickly a company that insists on safety standards can be politically isolated. A digital policy expert from Germany's SPD has already called for bringing Anthropic to Europe, framing it as an opportunity for greater digital sovereignty.

Third, the EU AI Act prohibits certain AI applications, including biometric mass surveillance. Whether European AI companies could face similar political pressure to weaken safety standards is no longer a purely theoretical question.

Fourth, the Bundeswehr is increasingly deploying AI systems itself. The question of which guardrails should apply to military AI is becoming more pressing in Germany - especially amid the current rearmament debate.

Fact check

The core facts - eight companies, Impact Levels 6 and 7, GenAI.mil with 1.3 million users - are consistently reported by the Washington Post, TechCrunch, CNN, Breaking Defense, and Nextgov. Anthropic's designation as a 'supply-chain risk' is documented in Defense Secretary Hegseth's official directive and by the Congressional Research Service (CRS Report IN12669). The preliminary injunction in Anthropic's favor, issued March 26, is part of the court record. The Amodei-Wiles meeting on April 17 and Trump's CNBC statement are confirmed by multiple independent sources. Initially seven companies were listed; Oracle was added hours later, bringing the total to eight.

Caveat: the exact contract terms - and whether the other companies also signed an 'any lawful purpose' clause - are not publicly available.

Sources

  • Washington Post 01.05.2026 (washingtonpost.com/technology/2026/05/01/pentagon-ai-deals-microsoft-amazon-google-classified-military/)
  • TechCrunch 01.05.2026 (techcrunch.com/2026/05/01/pentagon-inks-deals-with-nvidia-microsoft-and-aws-to-deploy-ai-on-classified-networks/)
  • CNN Business 01.05.2026 (cnn.com/2026/05/01/tech/pentagon-ai-anthropic)
  • Breaking Defense 01.05.2026 (breakingdefense.com/2026/05/pentagon-clears-7-tech-firms-to-deploy-their-ai-on-its-classified-networks/)
  • Nextgov/FCW 01.05.2026 (nextgov.com/artificial-intelligence/2026/05/pentagon-makes-agreements-7-companies-add-ai-classified-networks/413264/)
  • Congressional Research Service IN12669 (congress.gov/crs-product/IN12669)
Tags: Security, AI Models, Autonomy, Fundamental Rights, Surveillance, USA, Governance, Concentration of Power