Google grants Pentagon access to Gemini AI for 'any lawful purpose' - over 580 employees protest
What it really says
In late April 2026, Google signed a classified AI contract with the U.S. Department of Defense. The agreement allows the Pentagon to deploy Google's Gemini AI models on classified networks for 'any lawful governmental purpose.' The contract is structured as an amendment to an existing agreement and mirrors classified AI deals the Pentagon has previously signed with OpenAI and Elon Musk's xAI.

The internal reaction at Google was swift and significant: more than 580 employees, including DeepMind researchers and over 20 directors, senior directors, and vice presidents, signed an open letter to CEO Sundar Pichai urging the company to reject classified military AI work. The letter warns of risks from lethal autonomous weapons systems and mass surveillance, and points out that the classified nature of the work precludes any independent oversight.

The contrast with 2018 is telling. Back then, roughly 4,000 Google employees protested Project Maven, a Pentagon program using AI to analyze drone footage, at least 12 resigned, and the protests were enough to make Google back down. Today's protesters face a multi-billion-dollar classified AI industry, a Pentagon that has shown it will retaliate against companies that refuse its terms, and a Google that has already moved its own red lines.
Our assessment
The concern is partially justified but requires differentiation. First, the legitimate worries: in a classified context, the phrase 'any lawful purpose' amounts to a blank check, because no one outside the military can verify what the AI is actually used for. That DeepMind researchers are among the protesters shows these concerns come from the people who understand the technology best, not from outside critics. And the shift since 2018 is real and concerning: market pressure has eroded tech companies' ethical commitments.

At the same time, not every military AI application is automatically unethical. AI in logistics, cyber defense, or disaster relief can save lives. The core problem is the lack of transparency: without independent oversight, it is impossible to know whether the same models also end up in targeting systems or surveillance programs. The incident also reveals a systemic problem: in a market where OpenAI, Google, Microsoft, and xAI all compete for Pentagon contracts, saying 'no' becomes a competitive disadvantage.
Relevance for Germany
Highly relevant for Germany and Europe. The EU has clearly positioned itself against lethal autonomous weapons systems and calls for international regulation; Germany has advocated for binding rules on autonomous weapons within the framework of the CCW (Convention on Certain Conventional Weapons). When U.S. tech companies whose AI models are widely used in Europe simultaneously provide those models for classified military purposes, a fundamental question arises: can European businesses and government agencies confidently use AI models that may at the same time be deployed in weapons systems? The EU AI Act prohibits certain AI practices such as social scoring and mass facial recognition, but it only applies to the European market; what happens with the same models outside the EU is beyond European control. The case strengthens the argument for European AI sovereignty and for open-source alternatives.
Fact check
The incident is well-documented through multiple independent sources. Fortune reported on May 4 on the broader context and the comparison to Project Maven. The Next Web confirms the count of 580+ signatories, including DeepMind researchers and 20+ directors and VPs. The contract terms ('any lawful governmental purpose') are consistently reported by Fortune and SiliconANGLE. The Project Maven comparison (4,000 signatures, 12 resignations in 2018) comes from Fortune's analysis, which also mentions that similar contracts exist with OpenAI and xAI. The Washington Post reported on the open letter on April 27.
Source
- Fortune, 04.05.2026 (fortune.com/2026/05/04/google-employee-backlash-pentagon-ai-contract-power-waned-since-project-maven/)
- The Next Web, 28.04.2026 (thenextweb.com/news/google-employees-classified-military-ai-pentagon)
- SiliconANGLE, 27.04.2026 (siliconangle.com/2026/04/27/hundreds-google-employees-sign-letter-urging-ceo-reject-us-military-ai-use/)
- Euronews, 28.04.2026 (euronews.com/next/2026/04/28/google-employees-urge-ceo-to-reject-inhumane-classified-military-ai-use)
- Android Headlines, 28.04.2026 (androidheadlines.com/2026/04/google-pentagon-classified-ai-military-deal-protests.html)