🟡 Partially justified

AI Safety Report 2026: existing safety practices are insufficient

Source: heise online · April 4, 2026

What it really says

The international AI Safety Report 2026, authored by researchers from multiple countries, concludes that the safety practices currently used by AI labs are not sufficient to manage the risks of highly capable models. The report criticises a lack of external auditing, unclear responsibilities and insufficient transparency in training and deployment. It calls for binding standards instead of voluntary commitments.

Our assessment

The report confirms a trend visible since the Bletchley and Seoul summits: voluntary commitments by large providers are a start, but no substitute for independent assessment. Anyone worried about uncontrolled AI development will find no all-clear here, but no cause for panic either. The message is sober: tools for risk management exist in part, but they are not applied consistently. This is exactly the gap the EU AI Act addresses with its GPAI obligations, which are already in force.

Relevance for Germany

For Germany, where trust in AI remains low according to multiple Bitkom surveys, the report is an important argument for more external audits, a well-resourced BSI, and a capable national AI supervisory authority. Without independent auditors, every risk assessment remains marketing.

Fact check

The report is the successor to the International Scientific Report on the Safety of Advanced AI, first published in 2024 following the Bletchley summit. The statements about the inadequacy of current practices come from the report itself, as summarised by heise.

Source

  • heise online, April 4, 2026 (reporting)
  • AI Safety Report 2026 (primary source, international scientific panel)
Tags: Safety · Study · Governance · AI models