
First Wave ISO 42001 Implementations: Lessons Learned


Jared Clark

March 13, 2026

ISO/IEC 42001:2023 was published in December 2023 as the world's first international standard for artificial intelligence management systems. In the two-plus years since publication, a first wave of organizations has pushed through gap assessments, built out documentation, endured certification audits, and — in many cases — achieved certification. I've had a front-row seat to this process across dozens of engagements at Certify Consulting, and the patterns are clear enough now to share.

This article is an honest field report. Some things are working beautifully. Others are consistently tripping organizations up. If you're planning an ISO 42001 implementation in 2026 or beyond, these lessons could save you months of rework and thousands of dollars in audit fees.


Why First-Wave Implementations Matter

ISO 42001 is not an incremental update to an existing standard — it is a ground-up framework for governing artificial intelligence across an organization. Unlike ISO 9001 or ISO 27001, where decades of practitioner guidance exist, organizations implementing ISO 42001 today are largely building the playbook as they go.

According to the International Organization for Standardization, AI management system standards are among the fastest-adopted in the ISO portfolio, reflecting global regulatory urgency around AI governance. The EU AI Act, which began phased enforcement in 2024, explicitly references technical standards — including those from ISO/IEC JTC 1/SC 42 — as a pathway to demonstrating conformance. Organizations that achieve ISO 42001 certification now are not just checking a box; they are building defensible governance infrastructure ahead of mandatory regulatory timelines.

With that context established, here's what the evidence from first-wave implementations actually shows.


What's Working: The Bright Spots

1. Organizations With Existing ISO Frameworks Are Moving Faster

This is the single most consistent finding across early implementations. Organizations that already hold ISO 27001 or ISO 9001 certification are completing ISO 42001 gap assessments roughly 40% faster than organizations starting from scratch. The reason is structural familiarity: they already understand the Plan-Do-Check-Act cycle, they have internal audit programs, and they know how to write documented information that satisfies a third-party auditor.

The ISO 42001 high-level structure (HLS) is aligned with Annex SL — the same architecture used by ISO 9001:2015 and ISO 27001:2022. If your organization already operates within this structure, clauses 4 through 10 of ISO 42001 will feel familiar, even if the AI-specific content of Annex A is new territory.


2. Risk-Based Thinking Is Landing Well

ISO 42001 clause 6.1 requires organizations to determine risks and opportunities related to their AI management system. Early implementers who adopted a genuine risk-based posture — rather than treating this as a documentation exercise — are producing AIMS (AI Management System) outputs that auditors are praising. Specifically, organizations that tied their AI risk assessments to specific use cases, models, and deployment contexts performed measurably better in Stage 2 audits than those who wrote generic risk statements.

The standard's treatment of AI-specific risk categories — including bias, opacity, data quality, and unintended consequences — is substantive. Auditors are probing these areas with real technical questions. Organizations that built cross-functional risk teams (including data scientists, legal counsel, and business owners alongside quality/compliance staff) are navigating these conversations with confidence.

3. Leadership Commitment Is Accelerating Timelines

ISO 42001 clause 5.1 places explicit obligations on top management to demonstrate leadership and commitment to the AIMS. In organizations where the C-suite treated this as a strategic initiative — not a compliance project delegated to IT or legal — implementation timelines were consistently shorter and audit findings were fewer. One reason: AI governance decisions that cross departmental lines (data access, model retirement, vendor oversight) require executive authority to resolve. When leadership is engaged, those decisions happen in days, not months.


What's Not Working: The Recurring Failure Patterns

1. The AI System Inventory Problem

This is the most common stumbling block I see, and it derails implementations earlier than almost anything else. ISO 42001 clause 4.3 (determining the scope of the AIMS) and the related requirements in Annex A require organizations to have a clear, accurate inventory of the AI systems they develop, deploy, or use. Most organizations dramatically underestimate how many AI systems they actually have.

The issue is definitional. Many organizations don't realize that recommender systems, automated credit-scoring tools, predictive maintenance algorithms, and even AI-enhanced HR screening software all fall within the standard's scope. When the gap assessment reveals 40 AI systems where leadership thought there were 5, the project scope — and timeline — can balloon overnight.

Practical fix: Before you do anything else, conduct a structured AI inventory exercise. Survey every department. Ask explicitly about automated decision-making, machine learning models, vendor-supplied AI features embedded in enterprise software (ERP, CRM, HR systems), and any tools that use generative AI. The inventory should precede scope definition, not follow it.
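To make the inventory exercise concrete, here is a minimal sketch of how survey responses might be captured and triaged. The record fields and the scope heuristic are my own illustrative assumptions, not a schema prescribed by the standard; the point is simply that "anything making or materially informing automated decisions, plus any vendor-embedded AI, is presumed in scope until reviewed."

```python
from dataclasses import dataclass

# Illustrative inventory record; field names are assumptions,
# not a schema defined by ISO 42001.
@dataclass
class AISystemRecord:
    name: str
    department: str
    category: str              # e.g. "ML model", "vendor AI feature", "generative AI"
    vendor_supplied: bool      # embedded in ERP/CRM/HR platforms, etc.
    automated_decisions: bool  # makes or materially informs decisions?

def likely_in_scope(rec: AISystemRecord) -> bool:
    """Crude first-pass filter: automated decision-making or
    vendor-embedded AI is presumed in scope until reviewed."""
    return rec.automated_decisions or rec.vendor_supplied

inventory = [
    AISystemRecord("Resume screener", "HR", "vendor AI feature", True, True),
    AISystemRecord("Office seating chart", "Facilities", "spreadsheet", False, False),
]
in_scope = [r.name for r in inventory if likely_in_scope(r)]
```

Running a crude filter like this across every department's survey responses is often what surfaces the jump from "5 AI systems" to 40.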

2. Confusing ISO 42001 With an AI Ethics Policy

A surprising number of first-wave organizations arrived at their Stage 1 audit with beautifully written AI ethics principles and almost no operational controls. ISO 42001 is a management system standard — it requires documented processes, assigned responsibilities, measurable objectives, operational controls, and evidence of continual improvement. An ethics policy is an input, not an output.

Auditors are specifically looking for evidence in areas like:

  • Clause 8.4: AI system impact assessment processes
  • Clause 9.1: Monitoring, measurement, analysis, and evaluation
  • Clause 10.2: Nonconformity and corrective action procedures

Organizations that conflated ethics with governance consistently received major nonconformities at Stage 1. The lesson: ISO 42001 requires you to operationalize your values, not merely state them.

3. Annex A Control Selection Is Being Rushed

ISO 42001 Annex A contains 38 controls across 9 control categories. These controls are not all mandatory — the standard requires organizations to select applicable controls and justify exclusions through a Statement of Applicability (SoA), similar in concept to ISO 27001's SoA. In early implementations, the SoA process is frequently rushed, resulting in one of two failure modes:

  • Over-inclusion: Organizations select all 38 controls regardless of relevance, creating an unmanageable compliance burden.
  • Under-inclusion: Organizations exclude controls that auditors consider essential for their risk profile, generating major nonconformities.

Control A.6 (AI system life cycle) and Control A.9 (responsible disclosure of AI incidents) are two areas where premature exclusions have generated the most audit findings. Both require careful justification.
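As a sketch of what deliberate SoA record-keeping looks like in practice, the fragment below flags excluded controls that lack a documented rationale — exactly the pattern auditors challenge at Stage 1. The control IDs and data shape here are placeholders for illustration; consult the actual Annex A text for the full list of 38 controls.

```python
# Placeholder SoA entries; IDs/justifications are illustrative, not real Annex A text.
soa = {
    "A.6.x": {"included": True,
              "justification": "We develop in-house models; life cycle controls apply."},
    "A.9.x": {"included": False,
              "justification": ""},  # excluded with no rationale -> audit finding
}

def unjustified_exclusions(soa: dict) -> list[str]:
    """Return control IDs excluded without a documented rationale."""
    return [cid for cid, entry in soa.items()
            if not entry["included"] and not entry["justification"].strip()]
```

Whether you track this in a GRC tool or a spreadsheet, every one of the 38 controls should pass a check like this before Stage 1.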

4. Internal Audit Programs Are Underpowered

Clause 9.2 of ISO 42001 requires a robust internal audit program. In first-wave implementations, internal audit functions are frequently staffed by individuals who understand management systems but lack the technical literacy to audit AI-specific controls. The result: internal audits pass everything, the certification audit reveals significant gaps, and organizations face the embarrassing position of having a compliant internal audit program that missed obvious problems.

The fix is to either upskill your internal auditors in AI fundamentals or use a hybrid model where a technical subject matter expert accompanies the internal auditor during AI-specific audit activities. Several of my clients at Certify Consulting have adopted this model with strong results.



Implementation Timeline: Realistic Benchmarks

One of the most common questions I receive is: "How long does ISO 42001 certification actually take?" Here's what the data from early implementations shows:

Organization Profile | Typical Timeline | Key Variable
Existing ISO 27001/9001 + <10 AI systems | 6–9 months | Scope complexity
Existing ISO framework + 10–30 AI systems | 9–12 months | Cross-dept coordination
No prior ISO framework + <10 AI systems | 10–14 months | Framework build-out
No prior ISO framework + 30+ AI systems | 14–20 months | Inventory + controls
AI developer/provider (high-risk applications) | 12–18 months | Annex A control depth

These timelines assume dedicated internal resources, external consulting support, and a functioning internal audit capability. Organizations that attempt ISO 42001 as a part-time project alongside other compliance initiatives should add 30–50% to these estimates.


The Certification Audit Experience: What Auditors Are Actually Probing

Based on feedback from clients who have completed Stage 1 and Stage 2 audits with accredited certification bodies, here are the questions auditors are consistently asking:

Stage 1 (Documentation Review):

  • Can you show me your AI system inventory and explain how you determined scope?
  • Walk me through your Statement of Applicability — why did you exclude Control X?
  • Where is the documented evidence that top management has reviewed the AIMS policy?

Stage 2 (Effectiveness Review):

  • Show me a completed AI system impact assessment for one of your high-risk systems.
  • How does your organization identify and respond to an AI incident? Walk me through a real example.
  • How are AI objectives (clause 6.2) tied to measurable performance indicators?

Organizations that can answer these questions with documented evidence — not verbal explanation — are passing. Those relying on narrative explanations without supporting records are receiving nonconformities.



Comparing ISO 42001 Implementation Approaches

Approach | Pros | Cons | Best For
Internal team only | Low external cost | Slow; gaps in AI governance expertise | Large orgs with mature ISO programs
External consultant-led | Fast; expertise on demand | Higher upfront cost | Mid-market; first-time ISO certifications
Hybrid (internal + consultant) | Balanced speed/cost; builds internal capability | Requires strong PM discipline | Most organizations
Software-only (GRC tools) | Scalable documentation | No strategic guidance; tools ≠ compliance | Organizations with existing expert staff

In my experience across 200+ client engagements, the hybrid approach consistently produces the best outcomes: faster timelines than internal-only, lower total cost than fully outsourced, and stronger internal capability post-certification.


Five Recommendations for Second-Wave Implementers

If you're planning your ISO 42001 implementation now, here's what the first wave has taught us:

  1. Start with a comprehensive AI inventory. Don't define scope until you know what you have. Budget at least 2–4 weeks for a thorough inventory exercise, including vendor-supplied AI embedded in enterprise platforms.

  2. Build your Statement of Applicability deliberately. For each of the 38 Annex A controls, document the specific rationale for inclusion or exclusion based on your organization's AI risk profile. Auditors scrutinize SoA exclusions heavily.

  3. Make AI impact assessments process-driven, not ad hoc. Clause 8.4 requires a documented process for conducting AI system impact assessments. Build the process first, then apply it consistently to all in-scope systems. One-off assessments that aren't tied to a documented procedure will not satisfy auditors.

  4. Upskill your internal audit team. At minimum, internal auditors should complete an ISO 42001 foundation course and have basic familiarity with machine learning concepts, data governance, and AI risk categories. Consider pairing technical SMEs with audit-trained staff.

  5. Connect AI objectives to business outcomes. Clause 6.2 AI objectives should be SMART — specific, measurable, achievable, relevant, and time-bound — and should connect to real organizational outcomes (reduced bias incidents, improved model accuracy, faster incident response times). Generic objectives like "maintain responsible AI" will not withstand audit scrutiny.
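A simple way to enforce the "measurable" test on recommendation 5 is to require every clause 6.2 objective to carry an indicator, a numeric target, and a deadline before it goes into the AIMS. The field names below are my own illustrative assumptions, not language from the standard.

```python
from datetime import date

# Hypothetical objective records; field names are illustrative assumptions.
objectives = [
    {"objective": "Reduce confirmed bias incidents in the loan model",
     "indicator": "bias incidents per quarter", "target": 0, "due": date(2026, 12, 31)},
    {"objective": "Maintain responsible AI",  # generic: no metric, no deadline
     "indicator": None, "target": None, "due": None},
]

def is_measurable(obj: dict) -> bool:
    """Audit-ready only with an indicator, a numeric target, and a
    deadline — the 'M' and 'T' in SMART."""
    return all(obj.get(k) is not None for k in ("indicator", "target", "due"))

vague = [o["objective"] for o in objectives if not is_measurable(o)]
```

Anything that lands in the `vague` list is exactly the kind of objective that will not withstand audit scrutiny.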

For organizations that want structured support through this process, our team at Certify Consulting offers gap assessments, documentation development, internal audit support, and pre-certification mock audits. Our 100% first-time audit pass rate across 200+ clients reflects a methodology built on exactly these lessons.


How ISO 42001 Fits Into the Broader Regulatory Landscape

One dynamic that second-wave implementers should understand: ISO 42001 is increasingly being referenced — directly or by implication — in regulatory frameworks globally. The EU AI Act's Article 40 establishes harmonized standards as a conformity pathway, and ISO/IEC standards from SC 42 are positioned to fill that role. In the United States, NIST's AI Risk Management Framework (AI RMF 1.0) shares conceptual alignment with ISO 42001, and several federal agencies are beginning to reference both frameworks in AI procurement guidance.

Organizations that achieve ISO 42001 certification today are building governance infrastructure that is likely to remain relevant — and increasingly required — for years to come. The first wave is establishing the baseline. The second wave will be competing in a market where ISO 42001 certification is increasingly table stakes for enterprise AI vendors, healthcare AI providers, and regulated financial institutions.

For a deeper look at how ISO 42001 maps to specific regulatory requirements, see our guide on ISO 42001 and regulatory compliance alignment.

If you're evaluating whether your organization is ready to begin the certification journey, our ISO 42001 readiness assessment walks you through the key questions.


FAQ: ISO 42001 Implementation Lessons

Q: How long does ISO 42001 certification typically take for a mid-sized organization?
A: For a mid-sized organization with an existing ISO management system framework and fewer than 30 AI systems, the typical timeline is 9–12 months from gap assessment to certification. Organizations without prior ISO experience should budget 14–18 months. These estimates assume dedicated internal resources and external consulting support.

Q: What are the most common reasons organizations fail their ISO 42001 Stage 2 audit?
A: The three most common failure points in first-wave Stage 2 audits are: (1) incomplete or ad hoc AI system impact assessments that aren't tied to a documented process under clause 8.4; (2) absent or underdeveloped AI incident response procedures under Annex A.9; and (3) AI objectives under clause 6.2 that lack measurable performance indicators. Each of these generates major nonconformities that require corrective action before certification is granted.

Q: Do we need to include vendor-supplied AI tools in our ISO 42001 scope?
A: Yes, in most cases. ISO 42001 clause 4.3 requires you to define scope based on the AI systems your organization develops, deploys, or uses — including third-party and vendor-supplied AI. AI features embedded in enterprise software (ERP, CRM, HR platforms) that perform automated decision-making or prediction functions generally fall within scope. Your AI inventory should capture these systems and your SoA should address applicable Annex A controls for externally provided AI.

Q: Is ISO 42001 aligned with the EU AI Act?
A: ISO 42001 is published under ISO/IEC JTC 1/SC 42, the same committee developing harmonized standards for the EU AI Act. While the EU has not yet formally designated ISO 42001 as a harmonized standard under Article 40, the frameworks are substantively aligned, particularly around risk classification, impact assessment, and transparency requirements. Organizations certified to ISO 42001 are building governance infrastructure directly relevant to EU AI Act compliance.

Q: Can a small organization realistically achieve ISO 42001 certification?
A: Yes. ISO 42001 is scalable — the standard explicitly acknowledges that implementation should be proportionate to organizational size and complexity. Small organizations with a limited number of AI systems and a focused scope can achieve certification with a lean management system. The key is scoping the AIMS accurately and ensuring that whatever controls are selected are genuinely implemented and evidenced, rather than attempting to implement all 38 Annex A controls regardless of relevance.


Last updated: 2026-03-13

Jared Clark, JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC is the principal consultant at Certify Consulting, with 8+ years of management system consulting experience and a 100% first-time audit pass rate across 200+ client organizations.


Jared Clark

Certification Consultant

Jared Clark is the founder of Certify Consulting and helps organizations achieve and maintain compliance with international standards and regulatory requirements.

200+ Clients Served · 100% First-Time Audit Pass Rate

Ready to Lead in Responsible AI?

Schedule a free 30-minute consultation to discuss your organization's AI governance needs and ISO 42001 readiness. No pressure, no obligation — just expert guidance.

Or email [email protected]