ISACA study: Companies don't know how fast they could shut down AI after security incident
What it really says
According to ISACA's '2026 AI Pulse Poll', there's limited human oversight of AI decisions, little transparency about AI use in organizations, and significant uncertainty about how fast AI systems can be shut down in emergencies.
Our assessment
This is a serious but solvable problem. Most companies introduced AI tools ad hoc — one employee subscribed to ChatGPT Plus, another uses Copilot. Nobody has an overview. The solution isn't rocket science: create an AI inventory, define responsibilities, establish emergency procedures. This is classic IT management applied to a new tool.
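The steps above — inventory, ownership, emergency procedure — can be sketched as a minimal audit script. This is an illustrative assumption, not part of the ISACA study: the `AISystem` record and the example entries are hypothetical, and a real inventory would live in an asset management system rather than in code.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class AISystem:
    name: str                          # tool name, e.g. a chat assistant
    owner: str                         # responsible person or team
    access_group: str                  # who is allowed to use it
    shutdown_procedure: Optional[str]  # documented kill path; None = gap

def audit(inventory: List[AISystem]) -> List[str]:
    """Return names of systems lacking a documented emergency shutdown."""
    return [s.name for s in inventory if s.shutdown_procedure is None]

# Hypothetical example entries for illustration only
inventory = [
    AISystem("ChatGPT Plus", "Marketing", "marketing-team", None),
    AISystem("GitHub Copilot", "Engineering", "dev-team",
             "Revoke seats via the org admin console"),
]

print(audit(inventory))  # systems with no emergency shutdown path
```

Even a table this simple makes the gap the study describes visible: any system where the shutdown column is empty is exactly the "how fast could we turn it off?" uncertainty the poll found.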
Relevance for Germany
German IT security officers should check: What AI systems run in our company? Who has access? How do we shut them down in an emergency? The BSI baseline protection catalog already contains AI-specific recommendations.
Fact check
The study was presented at RSA Conference 2026 and is based on a survey of over 6,000 IT professionals worldwide.
Source
- ISACA 2026 AI Pulse Poll
- RSA Conference 2026 Proceedings