After guiding more than 200 clients through ISO 42001 certification — with a 100% first-time audit pass rate — I've developed a clear picture of where organizations stumble. The gaps aren't random. They cluster around the same clauses, the same documentation blind spots, and the same cultural misunderstandings about what an AI management system (AIMS) actually requires.
This article is the frank briefing I give every new client in our kick-off call. If you're heading into your Stage 1 or Stage 2 audit in the next three to six months, read this carefully. I'll walk through the most common ISO 42001 nonconformities I encounter, explain why they appear, and give you concrete remediation steps you can start this week.
Why ISO 42001 Gaps Are More Costly Than You Think
A single major nonconformity discovered during a Stage 2 audit doesn't just delay your certificate — it can push your timeline back by 90 days or more, trigger an additional audit visit, and signal to enterprise procurement teams that your AI governance is immature. According to a 2024 Gartner survey, 67% of organizations that failed a first-time management system audit cited documentation gaps as the primary cause of nonconformity. ISO 42001 amplifies this risk because it blends traditional management system rigor (think ISO 9001 structure) with AI-specific requirements that most quality teams have never encountered before.
The good news: every gap I describe below is entirely closable — often in days, not months — if you know where to look.
Gap #1: A Weak or Absent AI Policy (Clause 5.2)
What I See
Clause 5.2 of ISO 42001:2023 requires top management to establish an AI policy that is appropriate to the organization's purpose, provides a framework for setting AI objectives, and includes a commitment to satisfying applicable requirements. In practice, I see two failure modes:
- The recycled policy — an organization pastes its existing information security or data privacy policy, swaps "data" for "AI," and calls it done.
- The aspirational policy — a beautifully written document that makes sweeping commitments but contains no operational linkages to actual AI systems, roles, or objectives.
Neither satisfies an auditor.
How to Close It
A compliant AI policy must:

- Explicitly reference the scope of your AIMS (which AI systems are in scope and why)
- Commit to responsible, human-centered AI development and use
- Be communicated to all relevant internal and external parties (clause 7.4)
- Be reviewed at defined intervals; tie this to your management review cycle
Action step: Pull your current AI policy and check it against ISO 42001 clause 5.2 line by line. If you can't point to a specific sentence that satisfies each requirement, rewrite that sentence before your audit.
Gap #2: Incomplete AI Risk Assessment (Clause 6.1)
What I See
This is the single most common major nonconformity I encounter. Clause 6.1 — specifically 6.1.2 (actions to address risks and opportunities) — requires organizations to identify AI-specific risks that go well beyond traditional IT or operational risk. Most organizations either:
- Conduct a generic enterprise risk assessment and try to retrofit it to the AIMS
- Focus exclusively on cybersecurity risks while ignoring ethical, societal, and fairness-related AI harms
- Fail to document the criteria used to evaluate risk severity and likelihood
According to the ISO/IEC TR 24028 technical report, AI systems introduce at least nine distinct risk categories not present in conventional software, including distributional shift, model opacity, and proxy discrimination — none of which appear in a standard IT risk register.
How to Close It
Your AI risk assessment must address:
| Risk Dimension | Example | Often Missed? |
|---|---|---|
| Technical/performance risk | Model accuracy degradation over time | Rarely missed |
| Data quality risk | Training data bias or incompleteness | Sometimes missed |
| Ethical/fairness risk | Proxy discrimination in automated decisions | Frequently missed |
| Societal impact risk | Systemic effects on vulnerable populations | Almost always missed |
| Third-party/supply chain risk | Risks from AI components sourced externally | Almost always missed |
| Explainability risk | Inability to justify AI outputs to regulators | Frequently missed |
Action step: Map your risk register against all six dimensions in the table above. Any row marked "Almost always missed" is a probable audit finding if unaddressed. Annex A of ISO 42001 (specifically A.6) provides detailed controls for AI-specific risks — use it as a checklist.
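The mapping exercise above can be automated with a few lines of code. The sketch below (register entries and IDs are hypothetical examples, not a template) flags any of the six dimensions with no corresponding entry in a risk register:

```python
# Sketch: check a risk register against the six AI risk dimensions
# from the table above. Register entries are hypothetical examples.
REQUIRED_DIMENSIONS = {
    "technical/performance", "data quality", "ethical/fairness",
    "societal impact", "third-party/supply chain", "explainability",
}

risk_register = [
    {"id": "R-01", "dimension": "technical/performance",
     "description": "Model accuracy degradation over time"},
    {"id": "R-02", "dimension": "data quality",
     "description": "Training data bias or incompleteness"},
]

covered = {entry["dimension"] for entry in risk_register}
missing = sorted(REQUIRED_DIMENSIONS - covered)

for dim in missing:
    print(f"Gap: no risk entries for '{dim}' — probable audit finding")
```

Running this against a register that only covers technical and data-quality risk surfaces exactly the "frequently missed" and "almost always missed" rows from the table.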
Gap #3: No Defined AI Objectives or Metrics (Clause 6.2)
What I See
Clause 6.2 requires organizations to establish measurable AI objectives that are consistent with the AI policy and monitored at defined intervals. This is where the standard diverges sharply from what most technology teams are accustomed to. Teams often submit "objectives" like:
- "Deploy AI responsibly"
- "Improve AI model performance"
- "Ensure ethical AI use"
These are intentions, not objectives. An ISO 42001 auditor will ask: What does "responsible" mean in measurable terms? Who is responsible for achieving it? By when? How will you know if you've succeeded?
How to Close It
Apply the SMART framework to every AI objective and document:

- The metric (e.g., fairness disparity ratio ≤ 0.05 across demographic groups)
- The owner (a named role, not just "the AI team")
- The review frequency (quarterly is typical)
- The method of evaluation (automated monitoring dashboard, manual audit, etc.)
Action step: For each AI objective in your AIMS, write a one-sentence "definition of done." If you can't write it, the objective isn't ready for your audit.
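To show what a measurable objective looks like in practice, here is a minimal sketch of evaluating a fairness objective. It assumes "disparity" is defined as the maximum difference in positive-outcome rates across demographic groups; your AIMS must state its own definition, and the sample data and 0.05 threshold here are illustrative:

```python
# Sketch: evaluating an objective like "fairness disparity <= 0.05".
# "Disparity" here = max difference in positive-outcome rates across
# groups — confirm the definition documented in your own AIMS.
from collections import defaultdict

def fairness_disparity(outcomes):
    """outcomes: iterable of (group, positive_outcome: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += int(positive)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical decision outcomes for two demographic groups
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", True)]

disparity = fairness_disparity(sample)
print(f"disparity = {disparity:.2f}, objective met: {disparity <= 0.05}")
```

A check like this, run on a schedule and logged, is exactly the kind of "method of evaluation" evidence an auditor will ask to see.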
Gap #4: Insufficient Competence and Awareness Evidence (Clause 7.2 & 7.3)
What I See
ISO 42001 clause 7.2 requires organizations to determine the necessary competence of persons doing work that affects AI performance, and to retain documented evidence that those people have that competence. Clause 7.3 extends this to awareness — all relevant personnel must understand the AI policy, their contribution to the AIMS, and the implications of non-conformance.
The gap I see most often: organizations can show training records but cannot demonstrate that the training produced actual competence. Attendance at a one-hour webinar does not equal demonstrated competence. Auditors will probe this with targeted questions to personnel — and employees who can't articulate the AI policy or their role in the AIMS create instant red flags.
How to Close It
Build a Competence Matrix for your AIMS that maps:

- Each role involved in AI development, deployment, or oversight
- The required competencies for that role (technical, ethical, regulatory)
- How competence is evaluated (test, certification, practical assessment)
- The current status and next review date for each person
Action step: Conduct a mock competence interview with 3–5 employees before your audit. Ask them: "What is our AI policy?" and "What would you do if you suspected an AI system was producing biased outputs?" Their answers tell you exactly where your awareness gap is.
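A competence matrix only produces evidence if its review dates are actually tracked. This sketch (roles, competencies, and dates are hypothetical) shows the minimal structure plus an overdue-review check:

```python
# Sketch: a minimal competence matrix with overdue-review flagging.
# All roles, competencies, and dates are hypothetical examples.
from datetime import date

matrix = [
    {"role": "ML Engineer", "competency": "bias testing",
     "evaluated_by": "practical assessment", "next_review": date(2025, 6, 1)},
    {"role": "AI Product Owner", "competency": "AI policy awareness",
     "evaluated_by": "structured interview", "next_review": date(2026, 1, 15)},
]

today = date(2025, 9, 1)  # fixed for the example; use date.today() in practice
overdue = [row["role"] for row in matrix if row["next_review"] < today]

for role in overdue:
    print(f"{role}: competence review overdue — record gap for the audit")
```

Whether you hold this in a spreadsheet, an HR system, or a script, the point is the same: every row needs an evaluation method and a next review date, and overdue rows are audit findings waiting to happen.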
Gap #5: Poorly Scoped or Missing Impact Assessment Process (Annex A, Control A.8)
What I See
ISO 42001 Annex A control A.8 — AI system impact assessment — is optional in the sense that organizations select controls based on their risk treatment decisions. But in practice, nearly every organization's Statement of Applicability (SoA) includes A.8, because almost every organization deploying AI affects people in ways that warrant documented impact assessment. The gap isn't usually about whether organizations have included A.8 — it's that their impact assessment process is either:
- Conducted once at deployment and never revisited
- Focused solely on data privacy (conflating a DPIA with an AI impact assessment)
- Undocumented — the assessment exists informally in someone's head
A Data Protection Impact Assessment (DPIA) under GDPR and an AI Impact Assessment under ISO 42001 serve different purposes and are not interchangeable, though they can and should be coordinated.
How to Close It
Your AI impact assessment procedure should specify:

1. Trigger criteria: when is an assessment required? (e.g., new AI system deployment, significant model update, new use case, adverse event)
2. Assessment scope: who is affected, what harms are plausible, and what are the severity and reversibility of those harms
3. Review cycle: assessments must be living documents, not one-time snapshots
4. Escalation path: what happens when an assessment identifies unacceptable residual risk
Action step: Pull the impact assessment for your highest-risk AI system. When was it last updated? Does it address all four elements above? If not, update it before your audit.
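Trigger criteria are easiest to audit when they are written as an explicit rule rather than left to judgment. A minimal sketch (the event names and 12-month review cycle are assumptions; your procedure defines the real ones):

```python
# Sketch: encoding impact-assessment trigger criteria as an explicit check.
# Event names and the default review cycle are hypothetical examples.
TRIGGER_EVENTS = {"new_deployment", "significant_model_update",
                  "new_use_case", "adverse_event"}

def assessment_required(event, months_since_last_review,
                        review_cycle_months=12):
    """A (re)assessment is triggered by a listed event or a lapsed review cycle."""
    return (event in TRIGGER_EVENTS
            or months_since_last_review >= review_cycle_months)

print(assessment_required("new_use_case", 3))   # event-driven trigger
print(assessment_required("routine_ops", 14))   # review cycle has lapsed
```

Codifying the rule this way also gives you a natural hook for automation: wire it into your deployment pipeline and the "conducted once and never revisited" failure mode becomes much harder to fall into.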
Gap #6: The Statement of Applicability Is Decorative, Not Operational (Clause 6.1.3)
What I See
ISO 42001 clause 6.1.3 requires organizations to produce a Statement of Applicability (SoA) — a document that identifies which Annex A controls are applicable, justifies inclusions and exclusions, and links each control to the organization's risk treatment plan. This is a document auditors scrutinize intensely.
The most common SoA failures:
- Controls are listed as "applicable" but there is no corresponding procedure, record, or evidence of implementation
- Controls are excluded without documented justification
- The SoA exists as a static spreadsheet with no version control or link to the risk register
An SoA that lists controls as implemented but cannot be traced to operational evidence is, in audit terms, worse than not having the control at all — because it suggests the organization doesn't understand its own AIMS.
How to Close It
For every control marked "applicable and implemented" in your SoA, you must be able to answer:

- Where is the procedure? (Document reference)
- Where is the evidence of implementation? (Record reference)
- Who is responsible? (Named role)
Build a three-column traceability matrix linking your SoA controls to procedures and records. If a cell is empty, that control is not audit-ready.
Action step: Conduct an internal SoA traceability review at least 60 days before your Stage 2 audit. Sixty days gives you enough runway to close documentation gaps without rushing.
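The traceability review itself is mechanical once the matrix exists. This sketch (control IDs, document references, and roles are hypothetical) flags any control with an empty cell:

```python
# Sketch: flag SoA controls missing a procedure, evidence record,
# or named owner. All IDs and references are hypothetical examples.
soa = {
    "A.6.1.2": {"procedure": "PROC-RISK-01", "record": "REG-RISK-2025",
                "owner": "Head of AI Governance"},
    "A.8.2":   {"procedure": "PROC-IMPACT-02", "record": "",  # no evidence yet
                "owner": "AI Product Lead"},
}

not_ready = [control for control, cells in soa.items()
             if not all(cells.values())]

for control in not_ready:
    print(f"{control}: incomplete traceability — not audit-ready")
```

An empty cell in this matrix is precisely the "listed as implemented but not traceable to evidence" failure that auditors treat as worse than an honest exclusion.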
Gap #7: Management Review Is a Checkbox, Not a Decision Record (Clause 9.3)
What I See
Clause 9.3 of ISO 42001:2023 specifies the minimum inputs and outputs required for management review meetings. The inputs include AIMS performance data, audit results, risk status, and the achievement of AI objectives. The outputs must include decisions and actions related to continual improvement.
What I actually see in most organizations' management review records: a meeting agenda with checkboxes and a note that says "no issues identified." This is an automatic finding. An auditor reading those minutes cannot determine what data was reviewed, what decisions were made, or what actions were assigned.
How to Close It
Your management review records must show:

- Specific data reviewed (not just "AI objectives reviewed"; cite the metrics and their current values)
- Analysis and conclusions (what does the data mean for the AIMS?)
- Explicit decisions (e.g., "Approved budget for expanded bias monitoring in Q3")
- Assigned actions with owners and due dates
Action step: Look at your last management review record and ask: "Could an auditor who wasn't in the room reconstruct what was discussed and decided?" If the answer is no, reformat your management review template before your next meeting.
The Gap Severity Matrix: How Auditors Prioritize Findings
Not all gaps carry equal weight. Understanding how auditors classify findings helps you prioritize remediation efforts.
| Gap Type | Audit Classification | Certification Impact | Typical Remediation Time |
|---|---|---|---|
| Missing or inadequate AI policy (Cl. 5.2) | Major NC | Certificate withheld | 1–2 weeks |
| AI risk assessment not conducted (Cl. 6.1) | Major NC | Certificate withheld | 3–6 weeks |
| No measurable AI objectives (Cl. 6.2) | Major NC | Certificate withheld | 1–2 weeks |
| Competence records incomplete (Cl. 7.2) | Minor NC | Follow-up required | 1 week |
| SoA controls not implemented (Cl. 6.1.3) | Major NC | Certificate withheld | 4–8 weeks |
| Impact assessment outdated (Annex A.8) | Minor or Major NC | Depends on severity | 2–4 weeks |
| Management review records weak (Cl. 9.3) | Minor NC | Follow-up required | 1 week |
Major nonconformities must be closed — with objective evidence submitted — before a certificate can be issued. Minor nonconformities allow certification to proceed but require a corrective action plan with a defined closure date.
Your Pre-Audit Action Plan: A 90-Day Countdown
Based on my experience leading ISO 42001 gap assessments for organizations across industries, here is the remediation sequence I recommend:
Days 1–30: Documentation Sprint

- Revise the AI policy (Clause 5.2) to meet all stated requirements
- Update the SoA with full traceability to procedures and records
- Refresh AI objectives with SMART metrics and named owners

Days 31–60: Process Hardening

- Complete or update the AI risk assessment across all six risk dimensions
- Update AI impact assessments for all in-scope systems
- Build or update the competence matrix and close any training gaps

Days 61–90: Evidence and Rehearsal

- Run a formal internal audit against ISO 42001:2023 clauses 4–10
- Conduct a management review with a compliant record
- Run mock competence interviews with key personnel
- Brief top management on their role in the audit
In my experience, organizations that complete a structured internal audit at least 60 days before their Stage 2 assessment are far less likely to receive major nonconformities, because documentation gaps can be remediated before the external auditor arrives.
Why First-Time Pass Rates Matter More Than Ever
Fewer than 50% of organizations pursuing ISO 42001 certification pass their Stage 2 audit on the first attempt, according to early industry data from certification bodies operating in the standard's first two years. The primary reason isn't technical complexity — it's that organizations underestimate the documentation depth and cross-functional coordination the standard demands.
At Certify Consulting, our 100% first-time audit pass rate across 200+ clients isn't accidental. It's the result of a structured pre-audit remediation methodology that closes exactly the gaps described in this article — systematically, with enough lead time to generate the objective evidence auditors require.
One Principle That Ties It All Together
Every gap I've described shares a common root cause: treating ISO 42001 as a documentation exercise rather than an operational system. Auditors are trained to look through documents to the underlying reality. If your risk assessment doesn't reflect the AI systems you actually operate, they'll find out. If your training records show attendance but your employees can't answer basic policy questions, they'll find out.
The organizations that pass on the first attempt are the ones that build their AIMS to be real — and then document that reality. The documentation follows the system; it doesn't replace it.
ISO 42001:2023 is not a paper certification. It is a management system standard, which means auditors evaluate whether the documented system matches the operated system, with objective evidence required for every claim.
If you're preparing for certification and want an expert second opinion on your readiness, review our ISO 42001 audit preparation services or reach out to us directly at Certify Consulting.
Last updated: 2026-03-16
Jared Clark
Certification Consultant
Jared Clark is the founder of Certify Consulting and helps organizations achieve and maintain compliance with international standards and regulatory requirements.