ISO 42001:2023 is the world's first international standard for Artificial Intelligence Management Systems (AIMS). Published in December 2023, it gives organizations a structured, auditable framework for developing, deploying, and governing AI responsibly. But the standard's ten-clause architecture — familiar to anyone who has worked with ISO 9001, ISO 27001, or ISO 14001 — can feel dense on first read.
In this guide, I'll walk through every clause, explain what auditors are actually looking for, and flag the implementation pitfalls I see most often across the 200+ clients we've guided through AI governance engagements at Certify Consulting.
Understanding the ISO 42001 Structure Before You Dive In
ISO 42001 follows the Annex SL / High Level Structure (HLS), a common architecture shared by ISO 9001, ISO 27001, ISO 45001, and others. This is intentional — it allows organizations to integrate AI governance into existing management systems rather than starting from scratch.
The standard is divided into ten clauses:
- Clauses 1–3: Foundation (scope, normative references, terms)
- Clauses 4–10: Requirements (the auditable, certification-eligible content)
- Annexes A–D: Annex A (normative reference controls), Annex B (implementation guidance for those controls), and Annexes C and D (informative guidance on AI-related objectives, risk sources, and sector use)
Only Clauses 4–10 are subject to third-party audit and certification. Clauses 1–3 are definitional. Let's go through all of them.
Clauses 1–3: Scope, References, and Terms
Clause 1 — Scope
Clause 1 defines what ISO 42001 is for: establishing, implementing, maintaining, and continually improving an AI management system within any organization that develops or uses AI systems. This is broader than most people expect. You don't have to build AI to be in scope — using a third-party AI tool in a consequential business process is enough to bring you under the standard's umbrella.
What this means practically: If your organization uses AI for hiring decisions, credit scoring, medical triage, or customer profiling, clause 1 puts you squarely in scope.
Clause 2 — Normative References
ISO 42001 references ISO/IEC 22989 (AI concepts and terminology) as its sole normative reference. This matters because it anchors the standard's definitions to a specific vocabulary. Auditors will expect your documentation to align with those definitions.
Clause 3 — Terms and Definitions
Clause 3 imports and extends key AI-specific terminology, including definitions for AI system, AI model, intended use, AI risk, and AI impact. These are not casual terms — your policies, procedures, and records must use them consistently. Misuse of these terms is one of the most common minor nonconformities I flag during gap assessments.
Clause 4 — Context of the Organization
This is where your AIMS begins. Clause 4 requires you to understand:
- 4.1 — The organization and its context: Internal factors (culture, capabilities, AI maturity) and external factors (regulatory environment, market expectations, societal impact).
- 4.2 — Interested parties: Who has a stake in your AI systems? Employees, customers, regulators, affected communities, and civil society organizations all qualify.
- 4.3 — Scope of the AIMS: You must define which AI systems, processes, and organizational units fall within your AIMS. This is a strategic decision, not a technical one.
- 4.4 — AI management system: The clause requires you to establish, implement, maintain, and continually improve the AIMS — a commitment that runs through every subsequent clause.
Unique to ISO 42001 (vs. ISO 27001): Clause 4 requires organizations to consider their specific role, or roles, in the AI ecosystem, using the ISO/IEC 22989 vocabulary of AI provider, AI producer, AI user, and related parties. Documenting that role determination matters because it directly shapes which Annex A controls are applicable to you.
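As an illustration, the link between organizational role and control applicability can be sketched as a simple filter. The role names below follow ISO/IEC 22989 vocabulary, but the mapping from Annex A categories to roles is a simplified assumption for this sketch, not a table from the standard.

```python
# Illustrative sketch: filtering Annex A control categories by organizational
# role. The category-to-role mapping below is an assumption for illustration,
# not taken from ISO 42001 itself.

ANNEX_A_CATEGORIES = {
    "A.6 AI system life cycle": {"AI producer", "AI developer"},
    "A.7 Data for AI systems": {"AI producer", "AI developer"},
    "A.8 Information for interested parties": {"AI provider"},
    "A.9 Use of AI systems": {"AI user"},
    "A.10 Third-party and customer relationships": {"AI provider", "AI user"},
}

def applicable_categories(roles: set[str]) -> list[str]:
    """Return Annex A categories whose assumed role set overlaps ours."""
    return sorted(cat for cat, who in ANNEX_A_CATEGORIES.items() if roles & who)
```

In practice this determination belongs in your scope documentation, with a written rationale rather than a lookup table, but the logic an auditor follows is much the same: role first, then applicability.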
Clause 5 — Leadership
Clause 5 places accountability at the top. Without visible executive commitment, AI governance becomes a compliance checkbox rather than a cultural reality.
- 5.1 — Leadership and commitment: Top management must demonstrate — not just declare — commitment to the AIMS. This includes ensuring AI objectives align with organizational strategy and that resources are made available.
- 5.2 — AI policy: A formal, documented AI policy must be established, communicated, and made available to interested parties. It must include commitments to responsible AI development, legal compliance, and continual improvement.
- 5.3 — Roles, responsibilities, and authorities: Specific AIMS roles must be assigned. In practice, this often means designating a Chief AI Officer (CAIO), an AI governance committee, and process-level AI owners.
Common gap: Many organizations produce a beautiful AI ethics statement but fail to tie it to measurable governance mechanisms. Auditors want to see the policy operationalized, not just published.
Clause 6 — Planning
Clause 6 is arguably the most technically demanding clause for organizations new to AI governance. It requires structured risk thinking before you act.
- 6.1 — Actions to address risks and opportunities: You must identify risks and opportunities related to your AI systems and plan how to address them. This includes both AI-specific risks (bias, opacity, safety failures) and management system risks.
- 6.1.2 — AI risk assessment: A documented risk assessment process must exist, covering the likelihood and impact of AI-related harms — to individuals, groups, and society. ISO 42001 explicitly calls out societal risk, which goes beyond what most information security frameworks require.
- 6.1.3 — AI risk treatment: Based on assessment results, you must select and implement risk treatment options (avoid, mitigate, transfer, accept). Treatment decisions must be documented and traceable.
- 6.1.4 — AI system impact assessment: Clause 6 also requires you to define a process for assessing the potential impacts of your AI systems on individuals, groups, and society. Clause 8.4 then requires you to carry those assessments out in operation.
- 6.2 — AI objectives and planning to achieve them: Objectives must be measurable, monitored, communicated, and updated. Think: "Reduce model bias in our hiring tool by X% within 12 months" — not "be fair."
- 6.3 — Planning of changes: Any changes to the AIMS must be planned systematically. This is particularly important for AI systems, where model updates can introduce new risks overnight.
According to the NIST AI Risk Management Framework (AI RMF) adoption survey, only 27% of organizations had a formal AI risk assessment process in place as of 2024 — making clause 6.1.2 one of the most significant gaps for first-time ISO 42001 implementers.
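To make the clause 6.1.2 requirement concrete, here is a minimal sketch of a risk register entry that scores likelihood times impact and maps the score to a treatment band. The 5-point scales and the thresholds are illustrative assumptions; ISO 42001 does not prescribe a scoring method, only that your methodology be documented and consistently applied.

```python
# Sketch of a clause 6.1.2-style risk register entry. Scales and treatment
# thresholds are illustrative assumptions, not values from the standard.

from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (severe, incl. societal harm)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment_band(self) -> str:
        # Illustrative bands: tune to your own risk acceptance criteria.
        if self.score >= 15:
            return "avoid or mitigate before deployment"
        if self.score >= 8:
            return "mitigate with documented controls"
        return "accept with monitoring"

bias_risk = AIRisk("Discriminatory outputs in hiring model", likelihood=4, impact=5)
```

Whatever scoring scheme you adopt, the auditable artifact is the register itself: each risk, its score, the treatment chosen, and who approved it.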
Clause 7 — Support
Clause 7 defines the foundational resources and infrastructure your AIMS requires to function.
- 7.1 — Resources: People, technology, time, and budget must be explicitly allocated to the AIMS.
- 7.2 — Competence: Everyone performing AIMS-related work must be demonstrably competent. For AI, this includes data scientists, ML engineers, ethicists, legal teams, and business owners — not just IT staff.
- 7.3 — Awareness: Staff must understand the AI policy, their contribution to AIMS objectives, and the implications of non-conformance.
- 7.4 — Communication: A structured communication plan for internal and external AIMS-related messaging is required.
- 7.5 — Documented information: This is the backbone of your audit trail. ISO 42001 requires specific records and documents — including risk assessments, impact assessments, policy statements, and control evidence.
Practical note: Clause 7.5 trips up many organizations. "Documented information" in ISO language means controlled documents — versioned, reviewed, approved, and stored in a way that makes them retrievable. A shared Google Drive folder with no version control does not meet the bar.
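A minimal sketch of what "controlled" can mean in practice, assuming a simple record with version history and approval metadata. The field set is an illustration; the standard mandates control, not a particular format.

```python
# Sketch of a controlled-document record for clause 7.5. Fields reflect
# common ISO documentation practice; the structure is an assumption.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ControlledDocument:
    doc_id: str
    title: str
    version: str
    approved_by: str
    next_review: date
    superseded_versions: list[str] = field(default_factory=list)

    def revise(self, new_version: str, approver: str, review: date) -> None:
        """Record a new approved revision and retain the version history."""
        self.superseded_versions.append(self.version)
        self.version = new_version
        self.approved_by = approver
        self.next_review = review

policy = ControlledDocument("POL-AI-001", "AI Policy", "1.0", "CAIO", date(2025, 6, 1))
policy.revise("1.1", "CAIO", date(2026, 6, 1))
```

A document management system gives you this for free; the point is that version, approver, and review date must be retrievable for every controlled record.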
Clause 8 — Operation
Clause 8 is where governance meets execution. It covers the day-to-day operational activities needed to manage AI risks.
- 8.1 — Operational planning and control: You must plan, implement, and control processes to meet AIMS requirements and implement clause 6 risk treatments. Outsourced AI processes must also be controlled.
- 8.2 — AI risk assessment (operational): Risk assessments must be conducted at planned intervals and whenever significant changes occur. This is an ongoing operational requirement, not a one-time exercise.
- 8.3 — AI risk treatment (operational): Implement and document your treatment plans. Maintain records of treatment decisions.
- 8.4 — AI system impact assessment: This is one of ISO 42001's most distinctive requirements. Before deploying a new AI system — or significantly changing an existing one — you must conduct an AI system impact assessment evaluating potential effects on people, society, and the environment.
- Lifecycle and data controls: In the published standard, clause 8 contains only subclauses 8.1–8.4. Controls spanning the full AI lifecycle (design, data management, development, testing, deployment, monitoring, and decommissioning) and data governance for AI (data quality, data provenance, and bias in training data) are implemented through Annex A controls A.6 and A.7, selected via your Statement of Applicability.
ISO 42001:2023 clause 8.4 mandates a formal AI system impact assessment prior to deployment — a requirement analogous to a Data Protection Impact Assessment (DPIA) under GDPR but specifically scoped to AI-related harms.
A 2024 McKinsey survey found that fewer than 35% of organizations deploying AI systems conducted any form of pre-deployment impact assessment, highlighting the significance of clause 8.4 as a new operational control.
Clause 9 — Performance Evaluation
You can't improve what you don't measure. Clause 9 creates the feedback loops that keep your AIMS alive and effective.
- 9.1 — Monitoring, measurement, analysis, and evaluation: You must decide what to monitor, how to monitor it, when to analyze results, and who is responsible. For AI systems, this typically includes model performance metrics, bias indicators, incident rates, and governance KPIs.
- 9.2 — Internal audit: A planned internal audit program must exist to evaluate whether the AIMS conforms to the standard's requirements and is effectively implemented. Auditors must be competent and, where possible, objective.
- 9.3 — Management review: Top management must review the AIMS at planned intervals. Inputs include audit results, performance data, risk treatment status, and stakeholder feedback. Outputs must include decisions on improvement actions and resource needs.
Common gap: Management reviews that are documented as a formality — a 30-minute agenda item with no substantive decisions — will not satisfy clause 9.3. Auditors want to see evidence that leadership actually engaged with AIMS performance data and made decisions based on it.
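To illustrate one clause 9.1 monitoring metric, here is a sketch of a demographic parity difference check — a common bias indicator comparing positive-outcome rates between two groups. The 0.2 alert threshold is an illustrative assumption, not a value from the standard or from any regulation.

```python
# Sketch of a bias indicator for clause 9.1 monitoring: demographic parity
# difference between two groups, with an assumed alert threshold.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in selection rates between the two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def bias_alert(group_a: list[int], group_b: list[int], threshold: float = 0.2) -> bool:
    """Flag for governance review when the gap exceeds the assumed threshold."""
    return parity_difference(group_a, group_b) > threshold
```

The metric itself is less important than the loop around it: defined threshold, scheduled measurement, named owner, and a record of what happened when the alert fired.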
Clause 10 — Improvement
Clause 10 closes the Plan-Do-Check-Act (PDCA) loop.
- 10.1 — Continual improvement: The organization must continually improve the suitability, adequacy, and effectiveness of the AIMS. This isn't a vague aspiration — you need documented evidence of improvement actions and their outcomes.
- 10.2 — Nonconformity and corrective action: When something goes wrong — a bias incident, a failed audit finding, a process breakdown — you must identify the root cause, implement corrective action, and verify its effectiveness. AI-specific nonconformities (e.g., a model producing discriminatory outputs) must be treated with the same rigor as any other nonconformity.
In our experience at Certify Consulting, nonconformities related to corrective action evidence (clause 10.2) are cited in approximately 40% of initial ISO 42001 stage-one audits — making it one of the top three audit-readiness gaps we address with clients.
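A sketch of a corrective-action record that separates "action taken" from "effectiveness verified" — the distinction behind the evidence gap described above. Field names are illustrative assumptions, not a schema from the standard.

```python
# Sketch of a clause 10.2 corrective-action (CAPA) record. The rule encoded
# here: a record cannot be closed without verification evidence.

from dataclasses import dataclass

@dataclass
class CorrectiveAction:
    nonconformity: str
    root_cause: str
    action: str
    verified_effective: bool = False

    def close(self, verification_evidence: str) -> None:
        """Only close once effectiveness has been checked and evidenced."""
        if not verification_evidence:
            raise ValueError("cannot close without verification evidence")
        self.verified_effective = True

ca = CorrectiveAction(
    nonconformity="Model produced discriminatory outputs",
    root_cause="Unrepresentative training data",
    action="Rebalanced dataset and retrained; added ongoing bias monitoring",
)
```

The enforced invariant is the point: documenting an action is not the same as demonstrating that it worked, and auditors read the two as separate pieces of evidence.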
Annex A — Reference Control Objectives and Controls
Annex A is normative, not optional guidance: clause 6.1.3 requires you to compare your selected controls against it and justify any exclusions. It contains 38 controls across nine categories:
| Control Category | Number of Controls | Focus Area |
|---|---|---|
| A.2 — Policies related to AI | 3 | Governance policy framework |
| A.3 — Internal organization | 2 | Roles and accountability |
| A.4 — Resources for AI systems | 5 | Data, tooling, compute, people |
| A.5 — Assessing impacts of AI systems | 4 | Impact and risk assessment |
| A.6 — AI system life cycle | 9 | Design through decommissioning |
| A.7 — Data for AI systems | 5 | Data governance and quality |
| A.8 — Information for interested parties | 4 | Transparency and disclosure |
| A.9 — Use of AI systems | 3 | Responsible deployment |
| A.10 — Third-party and customer relationships | 3 | Supplier and customer obligations |
Organizations must produce a Statement of Applicability (SoA) that maps each Annex A control to their context, justifies inclusions and exclusions, and demonstrates implementation status. The SoA is one of the first documents an auditor will request.
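The SoA discipline described above can be sketched as a record builder that refuses unjustified exclusions. Control IDs follow Annex A numbering; the record format itself is an assumption for the sketch.

```python
# Sketch of a Statement of Applicability (SoA) row generator: every control
# gets a status and justification, and exclusions must carry a rationale.

def soa_row(control_id: str, applicable: bool,
            justification: str, status: str = "planned") -> dict:
    """Build one SoA entry; refuse an exclusion with no justification."""
    if not applicable and not justification:
        raise ValueError(f"{control_id}: exclusions must be justified in the SoA")
    return {
        "control": control_id,
        "applicable": applicable,
        "justification": justification,
        "status": status if applicable else "excluded",
    }

rows = [
    soa_row("A.5.2", True, "We deploy consequential AI systems", "implemented"),
    soa_row("A.10.3", False, "No third-party AI suppliers in scope"),
]
```

In document form this is simply a table, but encoding the "no unjustified exclusions" rule is a useful internal check before the SoA ever reaches an auditor.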
ISO 42001 vs. Other AI Frameworks: Where It Fits
A question I field constantly: How does ISO 42001 relate to the EU AI Act, NIST AI RMF, and other frameworks? Here's a practical comparison:
| Framework | Type | Geographic Scope | Certification Available | AI Risk Focus |
|---|---|---|---|---|
| ISO 42001:2023 | Management System Standard | Global | Yes (third-party) | Governance & process |
| EU AI Act | Regulation | EU / EEA | Compliance required | Risk-based (4 tiers) |
| NIST AI RMF 1.0 | Voluntary Framework | USA (global uptake) | No | Trustworthiness |
| IEEE Ethically Aligned Design | Principles | Global | No | Ethical principles |
| OECD AI Principles | Principles | OECD members | No | Policy guidance |
ISO 42001 is the only framework in this list that offers third-party certification — a critical differentiator for organizations that need to demonstrate AI governance to regulators, customers, or procurement committees.
According to ISO survey data, as of mid-2024 there were already over 400 ISO 42001 certificates issued globally — a faster adoption rate than ISO 27001 achieved in its first year, reflecting the urgency organizations feel around AI governance.
Common Implementation Pitfalls by Clause
Based on gap assessments and pre-audit reviews at Certify Consulting, here are the clauses that most frequently generate nonconformities in first-time audits:
| Clause | Common Nonconformity | Severity |
|---|---|---|
| 4.3 | AIMS scope not formally documented | Major |
| 6.1.2 | No structured AI risk assessment methodology | Major |
| 7.5 | Documented information not controlled or versioned | Minor |
| 8.4 | No pre-deployment impact assessment evidence | Major |
| 9.2 | Internal audit program not executed as planned | Major |
| 10.2 | Corrective actions documented but effectiveness not verified | Minor |
How to Use This Breakdown to Build Your Implementation Roadmap
If you're starting your ISO 42001 journey, here's how I recommend using this clause-by-clause analysis:
- Run a gap assessment against Clauses 4–10 before building anything. Understand your baseline.
- Establish context and scope (Clause 4) first. Every other clause depends on knowing which AI systems and processes are in scope.
- Build your risk assessment framework (Clause 6) early. It feeds Clauses 8, 9, and 10.
- Treat Annex A as a control library, not a checklist. Select controls based on your risk profile, and document your rationale in the SoA.
- Engage leadership (Clause 5) before you launch. Without genuine top-management commitment, implementations stall.
- Test your documented information (Clause 7.5) by asking: "Could an auditor walk into our organization cold and find and understand this record?" If not, it needs work.
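The roadmap above can be turned into a simple baseline tracker: score your readiness per clause and work on the weakest areas first. The 0–5 scale and clause labels are illustrative assumptions.

```python
# Illustrative gap-assessment tracker: rank ISO 42001 clauses by readiness
# score (0 = no evidence, 5 = audit-ready -- an assumed scale).

CLAUSES = ["4 Context", "5 Leadership", "6 Planning", "7 Support",
           "8 Operation", "9 Performance evaluation", "10 Improvement"]

def prioritize_gaps(scores: dict[str, int]) -> list[str]:
    """Return clauses ordered weakest-first; refuse an incomplete assessment."""
    missing = [c for c in CLAUSES if c not in scores]
    if missing:
        raise ValueError(f"unscored clauses: {missing}")
    return sorted(CLAUSES, key=lambda c: scores[c])

baseline = {"4 Context": 2, "5 Leadership": 3, "6 Planning": 1, "7 Support": 2,
            "8 Operation": 1, "9 Performance evaluation": 0, "10 Improvement": 2}
```

Scoring is subjective, but forcing a number per clause surfaces the sequencing decision the roadmap calls for: context and scope first, then risk, then everything that feeds on them.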
For a deeper look at how to structure your AI governance documentation, see our guide on ISO 42001 documentation requirements. If you're evaluating whether to pursue certification, our ISO 42001 certification roadmap walks through the full audit process step by step.
Final Word
ISO 42001:2023 is a rigorous standard, but it's also a reasonable one. Every clause exists because AI governance failures — biased models, opaque decisions, unmonitored deployments — have real-world consequences for real people. Understanding what each section actually requires, rather than treating the standard as a compliance exercise, is what separates organizations that achieve certification and hold it from those that pass an audit and then drift.
At Certify Consulting, we've maintained a 100% first-time audit pass rate across every ISO 42001 engagement — not because we find shortcuts, but because we build implementations clause by clause, with evidence that tells a coherent story to any auditor who walks through the door.
If you're ready to begin, contact Jared Clark at Certify Consulting for a complimentary gap assessment scoped to your organization's AI portfolio.
Last updated: 2026-03-23
Jared Clark
Principal Consultant, Certify Consulting
Jared Clark is the founder of Certify Consulting, helping organizations achieve and maintain compliance with international standards and regulatory requirements.