KIneAngst
🟡 Partially justified

OpenAI, Anthropic and Google share attack data for the first time to counter Chinese 'distillation attacks'

Source: Bloomberg · April 6, 2026

What it really says

Bloomberg reported on 6 April 2026 that OpenAI, Anthropic and Google have, for the first time, begun systematically exchanging data on so-called adversarial distillation through their joint industry body, the Frontier Model Forum. In adversarial distillation, third parties send mass API queries to top models such as GPT, Claude or Gemini and use the responses to train smaller, cheaper models. The trigger was an investigation published by Anthropic in February 2026, according to which the Chinese providers DeepSeek, Moonshot AI and MiniMax sent more than 16 million queries to Claude through roughly 24,000 fraudulent accounts. The three US labs now plan to share detection patterns, suspicious account profiles and API traffic signatures via the Frontier Model Forum.
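Mechanically, output-based distillation is simple: query the large model at scale, record its answers, and train a smaller model to reproduce them. A minimal toy sketch in pure Python; `teacher` and `StudentModel` are hypothetical stand-ins for illustration, not any lab's actual API or architecture:

```python
# Toy sketch of output-based distillation: a "student" learns to
# imitate a "teacher" purely from harvested query/answer pairs.
# Both models are hypothetical stand-ins, not real systems.

def teacher(prompt: str) -> str:
    """Stand-in for an expensive frontier model behind an API."""
    return prompt.upper()  # pretend this is a costly, high-quality answer

class StudentModel:
    """Trivially simple 'model' that memorizes teacher behavior."""
    def __init__(self) -> None:
        self.memory: dict[str, str] = {}

    def train(self, pairs: list[tuple[str, str]]) -> None:
        for prompt, answer in pairs:
            self.memory[prompt] = answer

    def generate(self, prompt: str) -> str:
        return self.memory.get(prompt, "")

# 1. Mass-query the teacher (the step that terms of service forbid at scale).
prompts = ["hello", "world", "distillation"]
dataset = [(p, teacher(p)) for p in prompts]

# 2. Train the cheap student on the harvested outputs.
student = StudentModel()
student.train(dataset)

# 3. The student now reproduces teacher behavior without further API calls.
print(student.generate("hello"))  # HELLO
```

Real distillation fits a neural network to the teacher's answers rather than memorizing them, but the economics are the same: the cost of producing the training data is shifted onto the teacher's operator.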

Our assessment

The story has two layers that should be kept separate. Technically, distillation - training smaller models on the outputs of a larger one - is an established and not inherently illegal technique. What the US providers object to is the alleged systematic violation of terms of service, the fake accounts and the sheer scale of the extraction. Geopolitically, the alliance is another piece in the tech decoupling between the US and China: three dominant providers are coordinating to shut out an identifiable competitor bloc. That is understandable and poses no short-term risk for Western customers, but it could shrink the foundation-model market to even fewer players. The US labs have not yet published full evidence on individual violations; Chinese voices such as Global Times read the move as a reaction to China's catch-up in open source. Both views contain a kernel of truth.

Relevance for Germany

For German companies that increasingly use both OpenAI/Anthropic/Google and DeepSeek- or Qwen-based models, things get more complicated. Anyone training their own pipelines on outputs of Claude or GPT should re-read the terms of service - the new attention from US providers makes stronger contractual enforcement more likely. Politically, the Frontier Model Forum is not a neutral body but a private club of the three biggest US providers; trust in AI safety should not flow exclusively from that source. The EU should use this as a reason to equip its own AI safety structures - the AI Office and the high-risk conformity system - with sufficient technical expertise, rather than relying on US industry bodies.

Fact check

The existence of the cooperation and the role of the Frontier Model Forum are reported consistently by Bloomberg, The Decoder and The Japan Times. The numbers - 24,000 accounts and 16 million queries - come from Anthropic's own February 2026 statement and have not been confirmed or convincingly denied by DeepSeek, Moonshot AI or MiniMax. The exact technical detection signals exchanged between the US providers are not publicly documented in detail; the 'billions in losses per year' estimate comes from US government sources and cannot be independently verified.
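The exchanged detection signals are not public, but the pattern Anthropic describes (thousands of coordinated accounts, millions of queries) lends itself to simple volume-based anomaly detection. A generic, hypothetical sketch that flags accounts whose query volume is an extreme outlier versus the population median; the threshold and data are illustrative, not any provider's real rules:

```python
# Hypothetical sketch of volume-based abuse detection: flag accounts
# whose API query counts dwarf the population median. The factor and
# the sample data are illustrative, not any provider's real signals.
from statistics import median

def flag_suspicious(query_counts: dict[str, int], factor: float = 50.0) -> set[str]:
    """Return account IDs whose volume exceeds `factor` x the median."""
    m = median(query_counts.values())
    return {acct for acct, n in query_counts.items() if n > factor * m}

accounts = {
    "acct-001": 120,          # ordinary usage
    "acct-002": 95,
    "acct-003": 110,
    "acct-farm-17": 650_000,  # extraction-scale traffic
}
print(sorted(flag_suspicious(accounts)))  # ['acct-farm-17']
```

Production systems would combine many such signals (prompt diversity, timing patterns, payment fingerprints), which is presumably why sharing them across labs is valuable: a farm of 24,000 accounts can stay under any single provider's per-account thresholds.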

Sources

  • Bloomberg, 6 April 2026 (initial reporting)
  • The Decoder, 7 April 2026
  • The Japan Times, 7 April 2026
  • Anthropic official statement, 23 February 2026 (distillation investigation)
  • CNBC, 24 February 2026 (background on the distillation accusation)
Tags: AI Models · Security · USA · Competition · Governance