Imagine a self-driving car company rolling out a new fleet of autonomous vehicles. The enterprise risk team applies a traditional risk framework to assess financial, operational, and regulatory risks. At the same time, AI engineers must evaluate algorithmic bias, data drift, model robustness, and cybersecurity threats.
This dual challenge highlights why ISO 31000:2018 (Risk management — Guidelines) and ISO/IEC 23894:2023 (Information technology — Artificial intelligence — Guidance on risk management) are both needed.
Consider a real-world scenario: an autonomous vehicle misinterprets a stop sign partially covered in graffiti as a speed-limit sign, resulting in unsafe behaviour. From an enterprise perspective, the focus is on liability, regulatory compliance, and reputational damage. From an AI risk perspective, the concern shifts to root causes—biased training data, inadequate validation, model drift, or adversarial manipulation.
This example illustrates a critical gap: traditional risk frameworks were not designed to manage AI-specific risks.
Why ISO 23894 Evolved
ISO 23894 emerged because AI systems behave fundamentally differently from traditional IT systems. While ISO 31000 provides an excellent enterprise-wide risk foundation, it does not fully address the unique characteristics of AI.
Key drivers behind ISO 23894 include:
The Black Box Problem – Limited explainability of AI decisions
Bias and Ethical Risks – Discrimination in hiring, lending, healthcare, and justice systems
Adversarial Threats – Inputs intentionally crafted to mislead AI models
Continuous Learning – AI systems evolve post-deployment, changing their risk profile
Human–AI Accountability – Blurred responsibility between human decisions and AI outputs
Regulatory Pressure – Alignment with frameworks such as the EU AI Act and NIST AI RMF
These risks required a dedicated, AI-centric risk standard, leading to ISO 23894.
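Of the drivers above, continuous learning and data drift are the most amenable to automated checks: a deployed model's live input or prediction distribution can be compared against a training-time baseline. A minimal sketch using the Population Stability Index (PSI) — the bucket edges and the 0.2 alert threshold are common rules of thumb, not requirements of either standard:

```python
from collections import Counter
import math

def psi(reference, live, edges):
    """Population Stability Index between two samples, bucketed by
    the given edges. PSI > 0.2 is a common rule-of-thumb trigger
    for a drift investigation."""
    def bucket_shares(values):
        counts = Counter()
        for v in values:
            # bucket index = number of edges the value meets or exceeds
            counts[sum(v >= e for e in edges)] += 1
        total = len(values)
        # small floor avoids log(0) for empty buckets
        return [max(counts[i], 1e-6) / total for i in range(len(edges) + 1)]

    ref, cur = bucket_shares(reference), bucket_shares(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

# Identical distributions give PSI ~ 0; a shifted one gives a large PSI.
baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
shifted  = [0.6, 0.7, 0.7, 0.8, 0.9, 0.9, 1.0, 1.0]
edges = [0.25, 0.5, 0.75]
assert psi(baseline, baseline, edges) < 1e-9
assert psi(baseline, shifted, edges) > 0.2
```

In practice the "reference" sample would be frozen at validation time and the "live" sample collected in a rolling production window, with breaches routed into the enterprise risk register rather than handled ad hoc.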
ISO 23894 vs ISO 31000 – Comparative Overview
ISO 31000
Enterprise-wide, principle-based risk framework
Covers strategic, operational, financial, and compliance risks
Technology-agnostic and industry-neutral
Focuses on governance, leadership, and integration into business processes
ISO 23894
Purpose-built for AI systems and AI-enabled decisions
Addresses model lifecycle risks (design, training, deployment, monitoring)
Explicit focus on bias, transparency, robustness, and ethical outcomes
Supports AI governance, assurance, and regulatory alignment
In short: ISO 31000 governs the organisation; ISO 23894 governs the AI.
Further reading: https://www.lakera.ai/blog/ai-risk-management
How Organisations Should Use Both
This is not an “either/or” decision.
Use ISO 31000 as the foundation for enterprise risk governance
Apply ISO 23894 for AI-specific risk identification, assessment, and treatment
Embed AI risks into enterprise risk registers and reporting
Adopt a lifecycle approach with continuous monitoring post-deployment
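One way to make "embed AI risks into enterprise risk registers" concrete is to extend a standard register entry with AI-specific fields. A hypothetical sketch — the field names and scoring scale are illustrative, not mandated by either standard:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """Enterprise register fields in the spirit of ISO 31000."""
    risk_id: str
    description: str
    owner: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class AIRiskEntry(RiskEntry):
    """AI-specific extensions in the spirit of ISO/IEC 23894."""
    lifecycle_stage: str = "deployment"  # design / training / deployment / monitoring
    ai_concerns: list = field(default_factory=list)  # e.g. bias, drift, robustness
    monitoring_metric: str = ""  # how the risk is tracked post-deployment

# The stop-sign scenario from the opening, expressed as a register entry.
stop_sign_risk = AIRiskEntry(
    risk_id="AI-001",
    description="Vision model misclassifies defaced stop signs",
    owner="AI Safety Lead",
    likelihood=3,
    impact=5,
    lifecycle_stage="monitoring",
    ai_concerns=["robustness", "adversarial input"],
    monitoring_metric="distribution shift in sign-classifier confidence",
)
assert stop_sign_risk.score == 15
```

Because `AIRiskEntry` subclasses the enterprise entry, the AI risk rolls up into the same register and reporting pipeline while carrying the lifecycle and monitoring detail that ISO 23894 emphasises.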
Together, they create a resilient, future-ready risk management ecosystem.
Final Thoughts
AI offers transformative benefits—but it also introduces risks that traditional frameworks were never designed to handle. If your organisation relies on AI, enterprise risk management alone is not enough.
By combining ISO 31000’s governance discipline with ISO 23894’s AI-specific risk controls, organisations can move beyond compliance toward responsible, trustworthy, and resilient AI adoption.
Because when an AI system misreads a stop sign, the real failure isn’t technical—it’s governance.
Read the linked article: https://www.secsolutionshub.com/ai-and-the-future-of-cybersecurity-opportunities-risks-and-the-way-forward/
