
ISO 42001 Statement of Applicability: Which Controls Apply


Jared Clark

March 30, 2026

If there is one document that separates organizations that truly understand ISO 42001 from those who are just going through the motions, it is the Statement of Applicability (SoA). In every implementation I lead at Certify Consulting, the SoA is where the real strategic thinking happens — and where most first-time implementers either get stuck or get it wrong.

This guide will walk you through exactly what the Statement of Applicability is, why it matters, how to build one that will satisfy an auditor, and — most importantly — how to make objective, defensible decisions about which ISO 42001:2023 controls actually apply to your organization's AI systems.


What Is the ISO 42001 Statement of Applicability?

The Statement of Applicability is a mandatory documented output required by ISO 42001:2023 clause 6.1.3. It is a formal register that:

  1. Lists all controls from Annex A of ISO 42001:2023 (controls A.2 through A.10)
  2. Declares whether each control is applicable or not applicable to your AI management system (AIMS)
  3. Provides justification for each inclusion or exclusion decision
  4. Notes the implementation status of every applicable control

Think of it as the master blueprint of your AIMS. The SoA is cross-referenced throughout your audit, tying your risk assessment (clause 6.1.2), your AI policy, and your operational controls into a single coherent picture.

The ISO 42001 Statement of Applicability is a mandatory documented output under clause 6.1.3, requiring organizations to justify the inclusion or exclusion of every Annex A control based on risk assessment results, legal obligations, and contractual requirements.


Why the SoA Is the Most Audited Document in ISO 42001

Auditors love the SoA for one simple reason: it exposes gaps instantly. If your SoA says a control is not applicable but your risk register identifies a related risk, you have an immediate nonconformity. If your SoA says a control is applicable but you have no evidence of implementation, that is another finding.

According to the International Accreditation Forum (IAF), documentation nonconformities — including incomplete or unjustified SoAs — account for a significant portion of Stage 1 audit failures across management system standards. In my own practice across 200+ client engagements, I have never seen an organization pass a Stage 1 audit with an SoA that was copied from a template without customization. Every SoA must reflect the specific context, risks, and AI use cases of your organization.

The SoA also directly informs your AI risk treatment plan (clause 6.1.3), your operational controls (clause 8), and your performance evaluation (clause 9). It is not a one-time exercise — it must be reviewed whenever your AI systems, risk landscape, or regulatory obligations change.


The ISO 42001 Annex A Control Structure: What You Are Working With

Before you can complete your SoA, you need to understand the architecture of ISO 42001:2023 Annex A. The standard organizes its controls into nine thematic domains (A.2 through A.10), spanning 38 individual controls:

| Domain | Annex Ref. | Focus Area | No. of Controls |
|---|---|---|---|
| Policies related to AI | A.2 | AI governance policy framework | 3 |
| Internal organization | A.3 | Roles, responsibilities, accountability | 2 |
| Resources for AI systems | A.4 | Data, tooling, compute, human resources | 5 |
| Assessing impacts of AI systems | A.5 | Impact assessments, harm identification | 4 |
| AI system life cycle | A.6 | Design, development, deployment, retirement | 9 |
| Data for AI systems | A.7 | Data acquisition, quality, provenance | 5 |
| Information for interested parties | A.8 | Transparency, reporting, incident communication | 4 |
| Use of AI systems | A.9 | Responsible and intended use | 3 |
| Third-party and customer relationships | A.10 | Supplier AI governance, customer obligations | 3 |

Understanding this structure is critical because your justification for excluding a control must be coherent within its domain. Excluding A.6 (AI system life cycle) controls, for example, requires an extremely compelling justification — if your organization develops or deploys AI, most of those controls will be applicable by default.


Step-by-Step: How to Build Your ISO 42001 Statement of Applicability

Step 1: Complete Your Organizational Context and Risk Assessment First

This is non-negotiable. The SoA is an output of your risk assessment process (clause 6.1.2), not a standalone exercise. Before you open a spreadsheet, you need:

  • A completed context analysis (clause 4.1) identifying internal and external issues
  • A defined scope (clause 4.3) specifying which AI systems are included in your AIMS
  • An AI risk assessment documenting identified risks, their likelihood, and potential impacts
  • A list of applicable legal, regulatory, and contractual obligations (including the EU AI Act, sector-specific regulations, and any customer requirements)

Without these inputs, your SoA decisions will be arbitrary — and auditors will see through it immediately.

Step 2: Create Your SoA Document Structure

Your SoA should be a structured document (most commonly a spreadsheet or controlled document) with at minimum these columns for each Annex A control:

| Column | Purpose |
|---|---|
| Control Reference | e.g., A.6.1.2 |
| Control Name | e.g., "AI system design" |
| Applicable? (Yes/No) | Your determination |
| Justification for Decision | Narrative explanation |
| Implementation Status | Not started / In progress / Implemented |
| Evidence Reference | Pointer to supporting documentation |
| Responsible Owner | Named individual or role |

Adding "Implementation Status" and "Evidence Reference" columns transforms your SoA from a compliance checkbox into a living governance tool.
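As a rough illustration (not a prescribed format), the columns above can be captured as structured records and exported to CSV so the register can live under version control. The field names, file name, and sample row here are assumptions, not part of the standard:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class SoAEntry:
    """One row of the SoA register (illustrative field names)."""
    control_ref: str    # e.g. "A.6.1.2"
    control_name: str
    applicable: bool    # your determination
    justification: str  # narrative explanation
    status: str         # "Not started" / "In progress" / "Implemented"
    evidence_ref: str   # pointer to supporting documentation
    owner: str          # named individual or role

entries = [
    SoAEntry("A.2.2", "AI policy", True,
             "An AI policy is required for any in-scope AIMS; see AIMS-POL-001.",
             "Implemented", "AIMS-POL-001", "Head of AI Governance"),
]

# Export so the register can be diffed and version-controlled between reviews.
with open("soa_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(SoAEntry)])
    writer.writeheader()
    writer.writerows(asdict(e) for e in entries)
```

A spreadsheet works just as well; the point is that every column is a mandatory field, not an optional extra.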

Step 3: Apply a Structured Decision Framework for Each Control

For each of the 38 Annex A controls, work through the following four-question decision framework:

Question 1: Does a relevant risk exist in your risk register that this control addresses? If yes, the control is almost certainly applicable. ISO 42001 is explicit: controls selected must align with identified risks.

Question 2: Is there a legal, regulatory, or contractual obligation that this control supports? For example, if you process personal data through an AI system, controls related to data governance (A.4) and impact assessment (A.5) are almost certainly mandated by GDPR, the EU AI Act, or sector-specific rules.

Question 3: Does your AIMS scope include the activities this control governs? If your scope excludes AI system development (you are a pure operator/deployer), certain A.6 development controls may legitimately be excluded — but you must document this scope rationale carefully.

Question 4: Would excluding this control create a material gap in your AI governance posture? This is the "reasonableness test." Even if a specific risk is not yet documented, if excluding a control would leave an obvious governance hole, include it with a forward-looking justification.
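The four questions can be approximated as a small decision helper. This is a sketch under assumed data structures (a risk register mapping risk IDs to control references, an obligations map, and a scope activity set), not anything prescribed by the standard:

```python
def control_applicability(control_ref, governed_activity, risk_register,
                          obligations, scope_activities, material_gap=False):
    """Apply the four-question framework to a single Annex A control.

    risk_register: dict mapping risk IDs to the control refs they implicate.
    obligations:   dict mapping control refs to their legal/contractual driver.
    scope_activities: set of activities inside the AIMS scope (clause 4.3).
    Returns (applicable, reasons) — the reasons become a justification draft.
    """
    reasons = []
    # Q1: does a documented risk map to this control?
    risks = [rid for rid, ctrls in risk_register.items() if control_ref in ctrls]
    if risks:
        reasons.append("addresses risk(s) " + ", ".join(risks))
    # Q2: is there a legal, regulatory, or contractual driver?
    if control_ref in obligations:
        reasons.append("required by " + obligations[control_ref])
    # Q3: outside the AIMS scope, with nothing else pulling it in -> excludable
    if governed_activity not in scope_activities and not reasons:
        return False, ["governed activity is outside the defined AIMS scope"]
    # Q4: the reasonableness test — would exclusion leave an obvious hole?
    if material_gap:
        reasons.append("exclusion would leave a material governance gap")
    if not reasons:
        return False, ["no mapped risk, obligation, or material gap"]
    return True, reasons

# Hypothetical example: a credit-scoring deployment with one mapped risk.
applicable, reasons = control_applicability(
    "A.5.2", "credit scoring deployment",
    risk_register={"R-014": ["A.5.2"]},
    obligations={"A.5.2": "EU AI Act impact-assessment duties"},
    scope_activities={"credit scoring deployment"},
)
print(applicable, reasons)
```

The helper is deliberately conservative: a control is only excludable when no risk, no obligation, no scope coverage, and no material gap argue for it — which mirrors how an auditor reads your justifications.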

Organizations that scope out ISO 42001 controls solely to reduce workload — without documented risk justification — expose themselves to major nonconformities at Stage 2 audit, since every exclusion must be traceable to a specific scope boundary, absence of risk, or regulatory exemption.

Step 4: Write Meaningful Justifications

This is where most organizations fall short. Justifications like "not applicable to our business" or "low risk" will not survive auditor scrutiny. Strong justifications follow this formula:

[Control name] is [applicable/not applicable] because [specific organizational context, risk, or obligation] as documented in [referenced document, risk ID, or regulatory citation].

Example of a weak justification:

"A.5.2 – AI system impact assessment. Not applicable. We are a small company."

Example of a strong justification:

"A.5.2 – AI system impact assessment. Applicable. Our AI-powered credit scoring system (Scope Item AI-003) poses potential adverse impacts on financial inclusion for protected classes, as identified in Risk Register item R-014. DORA Article 28 and our contractual obligations with [Client X] require documented impact assessments prior to deployment. This control is implemented through our AIMS-IA-001 Impact Assessment Procedure."

The difference is night and day — and directly reflects the quality of your overall AIMS.

Step 5: Link Each Applicable Control to Evidence

For each control marked "Applicable," your SoA should cross-reference a specific document, procedure, record, or system. This linkage is what makes your AIMS auditable. Controls without evidence pointers are controls waiting to become findings.
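A quick way to catch "controls waiting to become findings" is to scan the register for applicable controls with no evidence pointer. A minimal sketch, assuming rows keyed by the illustrative column names from Step 2:

```python
def evidence_gaps(soa_rows):
    """Return refs of controls marked applicable but lacking an evidence pointer."""
    return [r["control_ref"] for r in soa_rows
            if r["applicable"] and not r.get("evidence_ref", "").strip()]

rows = [
    {"control_ref": "A.2.2",   "applicable": True,  "evidence_ref": "AIMS-POL-001"},
    {"control_ref": "A.5.2",   "applicable": True,  "evidence_ref": ""},
    {"control_ref": "A.6.2.3", "applicable": False, "evidence_ref": ""},
]
print(evidence_gaps(rows))  # → ['A.5.2']
```

Running a check like this before every surveillance audit turns evidence gaps into internal action items rather than external findings.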

At Certify Consulting, we build what I call an AIMS Control Matrix alongside the SoA — a companion document that maps each applicable control to its procedure, owner, review cycle, and audit evidence type. This dramatically reduces the time spent preparing for surveillance audits.

Step 6: Obtain Senior Leadership Approval

Under ISO 42001 clause 5.1, top management is responsible for the AIMS. Your SoA must be formally approved by senior leadership — not just signed off by the IT team or a compliance analyst. This approval signals that the organization's risk decisions have been reviewed at the appropriate authority level.


Common Mistakes Organizations Make With the ISO 42001 SoA

Mistake 1: Copying an SoA Template Without Customization

Every AI system and every organization has a unique risk profile. A template is a starting point, not a deliverable. I have reviewed dozens of SoAs that were clearly lifted from generic templates — they contain justifications referencing activities the organization does not perform and exclude controls the organization clearly needs.

Mistake 2: Treating the SoA as a One-Time Document

ISO 42001:2023 clause 10.1 requires continual improvement, and clause 9.3 (management review) explicitly requires reviewing the AIMS for changes in context and risk. Your SoA must be version-controlled and reviewed at least annually, and whenever a material change occurs — new AI system deployment, regulatory update, significant incident.

Mistake 3: Excluding Controls Due to Scope Without Documenting the Scope Boundary

Some organizations legitimately exclude AI development controls because they only deploy third-party AI systems. That is acceptable — if the scope document clearly defines this boundary and the SoA references that scope definition. The exclusion itself is not the problem; the lack of an auditable paper trail is.

Mistake 4: Conflating "Not Currently Implemented" With "Not Applicable"

These are entirely different determinations. "Not applicable" means the control does not apply to your organization's AI context. "Not currently implemented" means it applies but you have not yet put the control in place. Marking unimplemented controls as "not applicable" is a critical audit finding and, in some cases, a potential ethics concern.

Mistake 5: Ignoring the EU AI Act and Other Regulatory Drivers

As of 2025, the EU AI Act imposes mandatory requirements on high-risk AI systems that map directly to several ISO 42001 Annex A controls — particularly in domains A.5 (impact assessment), A.6 (life cycle controls), and A.8 (stakeholder transparency). Organizations operating in the EU that exclude these controls without robust justification are not just risking an audit finding — they may be non-compliant with binding regulation.

The EU AI Act's requirements for high-risk AI systems — including conformity assessments, transparency obligations, and human oversight — directly correspond to ISO 42001:2023 Annex A domains A.5, A.6, and A.8, making these controls effectively mandatory for in-scope EU organizations regardless of internal risk appetite.


How Operator vs. Developer Status Affects Your SoA

One of the most nuanced aspects of ISO 42001 applicability decisions is the distinction between AI system developers (organizations that build AI models or systems) and AI system operators (organizations that deploy or use AI systems built by others).

ISO 42001 Annex B provides guidance on this, and it has significant implications for your SoA:

| Control Domain | AI Developer | AI Operator | Pure AI User |
|---|---|---|---|
| A.2 – AI Policies | Applicable | Applicable | Applicable |
| A.3 – Internal Organization | Applicable | Applicable | Applicable |
| A.4 – Resources | Fully applicable | Partially applicable | Limited applicability |
| A.5 – Impact Assessment | Fully applicable | Applicable | Applicable (scaled) |
| A.6 – AI Life Cycle | Fully applicable | Partially applicable | Generally not applicable |
| A.7 – Data for AI Systems | Applicable | Applicable | Applicable |
| A.8 – Information for Interested Parties | Applicable | Applicable | Limited applicability |
| A.9 – Use of AI Systems | Applicable | Fully applicable | Fully applicable |
| A.10 – Third-Party/Customer Relationships | Applicable | Applicable | Applicable |

This table is a general guide, not a substitute for your own risk-based analysis. A large operator deploying high-risk AI at scale may need to apply more A.6 controls than a small developer building a low-risk internal tool.


Maintaining Your SoA Through the AIMS Lifecycle

The SoA is not a certification artifact — it is a living governance document. Here is how to keep it current:

  • Annual review: Conduct a full SoA review as part of your management review (clause 9.3)
  • Triggered reviews: Any new AI system deployment, significant change to an existing system, regulatory update, or material incident should trigger an SoA review
  • Version control: Maintain a version history with dates of change, nature of change, and approver
  • Audit preparation: Before each surveillance audit, verify that every applicable control still has current, accessible evidence

At Certify Consulting, we recommend building SoA review into your AI change management procedure so that no new AI system goes live without a documented SoA impact assessment.


What Auditors Look for in an ISO 42001 SoA

Having guided clients through more than 200 certification engagements, I can tell you that auditors focus on three things when reviewing your SoA:

  1. Completeness: Are all 38 Annex A controls addressed? Missing controls are automatic findings.
  2. Traceability: Can every applicability decision be traced to your risk register, scope document, or regulatory obligations? Unsupported decisions are findings.
  3. Consistency: Do your applicable controls align with what you actually do? If your SoA says A.6.2.5 (AI system deployment) is not applicable but you clearly deploy AI systems, something is wrong.
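All three checks can be run mechanically before the auditor does them. A minimal sketch, assuming rows with illustrative 'control_ref', 'applicable', and 'justification' keys:

```python
def audit_precheck(soa_rows, annex_a_refs, risk_mapped_refs):
    """Flag likely findings: completeness, traceability, consistency.

    soa_rows:         list of dicts describing each SoA row.
    annex_a_refs:     set of all Annex A control references (38 in total).
    risk_mapped_refs: control refs that the risk register maps a risk to.
    """
    findings = []
    covered = {r["control_ref"] for r in soa_rows}
    # 1. Completeness: every Annex A control must appear in the SoA.
    for ref in sorted(annex_a_refs - covered):
        findings.append(f"{ref}: missing from SoA")
    for r in soa_rows:
        # 2. Traceability: every decision needs a recorded justification.
        if not r.get("justification", "").strip():
            findings.append(f"{r['control_ref']}: no justification recorded")
        # 3. Consistency: 'not applicable' despite a mapped risk is a contradiction.
        if not r["applicable"] and r["control_ref"] in risk_mapped_refs:
            findings.append(f"{r['control_ref']}: excluded, but the risk register maps a risk to it")
    return findings
```

An empty result is not a guarantee of passing, but a non-empty one is a reliable preview of your nonconformity list.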

A well-constructed SoA does not just help you pass an audit — it communicates to the auditor that your organization has genuinely internalized the standard and governs its AI systems with rigor and intentionality.


Getting Expert Help With Your ISO 42001 SoA

Building a defensible, customized Statement of Applicability is one of the highest-leverage activities in your ISO 42001 implementation. Done well, it sets the foundation for every other element of your AIMS. Done poorly, it creates a domino effect of nonconformities that can derail your certification timeline.

If your organization is beginning its ISO 42001 journey or preparing for a Stage 1 audit, explore our ISO 42001 implementation services to see how Certify Consulting can help you build an SoA that is tailored to your AI systems, defensible under scrutiny, and aligned with applicable regulations including the EU AI Act.


Frequently Asked Questions: ISO 42001 Statement of Applicability

Is the Statement of Applicability mandatory for ISO 42001 certification?

Yes. The SoA is an explicit mandatory documented output under ISO 42001:2023 clause 6.1.3. Certification bodies will request it during Stage 1 audit review, and its absence or inadequacy is a direct basis for audit failure.

Can I exclude controls from the ISO 42001 SoA?

Yes, but every exclusion must be justified. Acceptable justifications include: the control relates to activities outside your defined AIMS scope, no risk in your risk register maps to the control, or there is no legal or contractual obligation requiring the control. You cannot exclude a control simply because it is difficult or costly to implement.

How often should the ISO 42001 SoA be updated?

At minimum, the SoA should be reviewed annually as part of your management review. It should also be updated whenever a material change occurs — such as deploying a new AI system, a significant change to an existing system, a new regulatory requirement, or a material AI-related incident.

How does the EU AI Act affect ISO 42001 SoA decisions?

The EU AI Act imposes binding obligations on providers and deployers of high-risk AI systems that map directly to ISO 42001 Annex A controls, particularly in domains A.5, A.6, and A.8. For EU-regulated organizations, many of these controls are effectively mandatory, making it very difficult to justify their exclusion from the SoA.

What is the difference between "not applicable" and "not yet implemented" in an ISO 42001 SoA?

"Not applicable" means the control does not apply to your organization's AI context, scope, or risk profile. "Not yet implemented" means the control applies but has not yet been put in place. These must never be conflated — marking unimplemented controls as not applicable is a critical nonconformity and undermines the integrity of your AIMS.


Last updated: 2026-03-30


Jared Clark

Principal Consultant, Certify Consulting

Jared Clark is the founder of Certify Consulting, helping organizations achieve and maintain compliance with international standards and regulatory requirements.

200+ Clients Served · 100% First-Time Audit Pass Rate

Ready to Lead in Responsible AI?

Schedule a free 30-minute consultation to discuss your organization's AI governance needs and ISO 42001 readiness. No pressure, no obligation — just expert guidance.

Or email [email protected]