
ISO 42001 Clause 6.1: Actions to Address AI Risks


Jared Clark

April 08, 2026

If there is one clause in ISO 42001:2023 that separates organizations genuinely committed to responsible AI from those simply chasing a certificate, it is Clause 6.1. This is where your AI Management System (AIMS) stops being a policy document and becomes a living risk intelligence engine.

In my work helping over 200 clients achieve ISO 42001 certification — with a 100% first-time audit pass rate — Clause 6.1 is consistently where I see organizations either build a rock-solid foundation or quietly undermine everything that follows. Get this right, and Clauses 6.2 through 10 almost write themselves. Get it wrong, and no amount of polished documentation will save you in the audit room.

This article is a comprehensive breakdown of ISO 42001 Clause 6.1, covering what it requires, how to implement it, where organizations commonly stumble, and what auditors actually look for.


What Is ISO 42001 Clause 6.1?

ISO 42001:2023 Clause 6.1 falls under Section 6: Planning and is formally titled "Actions to address risks and opportunities." It is composed of three sub-clauses:

  • 6.1.1 — General: Establishes the obligation to determine risks and opportunities relevant to the AIMS context and objectives.
  • 6.1.2 — AI risk assessment: Defines how the organization must assess AI-specific risks, including their likelihood and potential impact.
  • 6.1.3 — AI risk treatment: Requires the organization to select and implement appropriate risk treatment options.

Together, these sub-clauses form the planning backbone of your AIMS. They connect directly to the context-setting work done in Clause 4 (understanding the organization and its context) and feed directly into the objectives, controls, and performance evaluation requirements that follow.

Key takeaway: ISO 42001 Clause 6.1 requires organizations to systematically identify, assess, and treat risks and opportunities arising from the development, deployment, and use of AI systems — making it the cornerstone of any compliant AI Management System.


Why Clause 6.1 Is Different from Traditional Risk Management

If your organization is already certified to ISO 27001 or ISO 9001, you are familiar with the general structure of risk-based planning. But AI risk is categorically different from information security risk or quality risk, and ISO 42001 Clause 6.1 reflects that.

Here is why AI risk demands its own framework:

1. AI Risks Are Probabilistic and Emergent

Traditional IT or operational risks can often be modeled with historical data. AI risks — particularly those arising from machine learning systems — are emergent. A model trained on one data distribution may behave unpredictably when the real-world distribution shifts. This is called model drift, and it is a risk category with no real equivalent in ISO 9001 or ISO 27001.

2. AI Risks Have Disproportionate Societal Impact

A buggy software release might affect a company's customers. A biased AI hiring algorithm can discriminate against thousands of applicants before anyone notices. ISO 42001 Clause 6.1 explicitly calls for consideration of impacts on individuals and society, not just the organization — a scope that other management system standards do not require.

3. Opportunities Are a First-Class Concern

Unlike risk-only frameworks, Clause 6.1 requires organizations to proactively identify and plan to seize opportunities from AI. This could include improving decision accuracy, automating compliance monitoring, or enhancing customer experience. Treating opportunity planning as a checkbox rather than a genuine strategic exercise is a common audit finding.


ISO 42001 Clause 6.1.1 — General Requirements

Clause 6.1.1 requires that when planning for the AIMS, the organization determines the risks and opportunities that need to be addressed in order to:

  • Give assurance that the AIMS can achieve its intended outcomes
  • Prevent or reduce undesired effects
  • Achieve continual improvement

Critically, this determination must take into account the issues identified in Clause 4.1 (external and internal context) and the requirements identified in Clause 4.2 (needs and expectations of interested parties).

Practical Implementation Steps for Clause 6.1.1

  1. Review your Clause 4 outputs — Your context analysis is the direct input. If your Clause 4 work is shallow, your Clause 6.1 risk identification will be incomplete.
  2. Categorize risk sources — Consider technical risks (model performance), ethical risks (bias, fairness), legal risks (regulatory non-compliance), and operational risks (AI system failures).
  3. Map opportunities explicitly — Document specific opportunities alongside risks. Auditors will look for evidence that opportunity planning is genuine, not cosmetic.
  4. Define your AIMS scope boundary — Risks and opportunities must be evaluated in the context of each AI system in scope, not for the organization in the abstract.

ISO 42001 Clause 6.1.2 — AI Risk Assessment

This is the most technically demanding sub-clause. Clause 6.1.2 requires a documented AI risk assessment process that:

  • Establishes and applies criteria for AI risk assessment, including risk acceptance criteria
  • Produces consistent, valid, and comparable results
  • Identifies risks associated with the AI system lifecycle
  • Analyzes the likelihood and consequences of each identified risk
  • Evaluates risks against the defined acceptance criteria

The AI System Lifecycle Lens

One of the most important — and most overlooked — requirements in Clause 6.1.2 is that risk assessment must cover the entire AI system lifecycle, including:

  • Design & Requirements: Misaligned objectives, scope creep, inadequate data requirements
  • Data Collection & Preparation: Data bias, privacy violations, data poisoning
  • Model Development & Training: Overfitting, underfitting, algorithm selection bias
  • Validation & Testing: Inadequate test coverage, distribution shift, proxy metric failures
  • Deployment & Integration: System integration failures, unintended use, performance degradation
  • Operation & Monitoring: Model drift, adversarial attacks, lack of human oversight
  • Decommissioning: Residual data risk, legacy dependency, knowledge loss

Key takeaway: Organizations certified to ISO 42001 must assess AI risks across the full system lifecycle — from initial design through decommissioning — a requirement that goes significantly beyond the point-in-time vulnerability assessments common in traditional IT governance.

Establishing Risk Acceptance Criteria

Before you can assess risks, you need defined criteria for what constitutes an acceptable level of risk. Many organizations stumble here by importing generic risk matrices from their ISO 27001 or ERM programs without adapting them for AI-specific factors such as:

  • Explainability: Can the AI system's decisions be adequately explained to affected parties?
  • Reversibility: Can decisions made by the AI system be challenged or reversed?
  • Human oversight: Is there a meaningful human-in-the-loop mechanism?
  • Aggregate harm: Does the AI system make decisions at a scale where even a small error rate has large societal consequences?

A 5x5 likelihood-impact matrix is a reasonable starting point, but it must be calibrated for these AI-specific dimensions to satisfy auditor expectations under Clause 6.1.2.
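To make that calibration concrete, here is a minimal Python sketch of a 5x5 matrix whose score is escalated by the AI-specific dimensions above. The field names, penalty weight, and banding thresholds are illustrative assumptions, not values from the standard; your actual acceptance criteria must be defined and signed off by leadership.

```python
from dataclasses import dataclass

@dataclass
class AiRiskFactors:
    """AI-specific dimensions used to calibrate the base matrix (illustrative)."""
    explainable: bool      # can the system's decisions be explained to affected parties?
    reversible: bool       # can decisions be challenged or reversed?
    human_oversight: bool  # is there a meaningful human-in-the-loop mechanism?
    high_scale: bool       # does a small error rate create large aggregate harm?

def risk_score(likelihood: int, impact: int, factors: AiRiskFactors) -> int:
    """Base 5x5 score (1-25), escalated when AI-specific safeguards are absent."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    score = likelihood * impact
    # Count missing safeguards and scale amplification as escalation factors.
    penalties = sum([
        not factors.explainable,
        not factors.reversible,
        not factors.human_oversight,
        factors.high_scale,
    ])
    return min(25, score + 3 * penalties)  # +3 per factor is an assumed weight

def risk_level(score: int) -> str:
    """Map a calibrated score onto qualitative bands (assumed thresholds)."""
    if score <= 4:
        return "low"
    if score <= 9:
        return "medium"
    if score <= 16:
        return "high"
    return "critical"
```

Note how the same medium likelihood-impact pairing lands in different bands depending on the AI-specific factors; that difference is exactly what a generic ERM matrix fails to capture.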

Comparing ISO 42001 Clause 6.1.2 to Other AI Risk Frameworks

  • ISO 42001:2023 Clause 6.1.2 (voluntary standard): applies to the AI Management System; lifecycle-based, documented, repeatable risk assessment
  • EU AI Act, Article 9 (legally binding for the EU market): applies to high-risk AI systems; requires a risk management system and technical documentation
  • NIST AI RMF, GOVERN and MAP functions (voluntary framework): applies to AI systems broadly; categorize, measure, manage, govern
  • ISO 31000:2018 (voluntary standard): applies to enterprise risk broadly; principles and guidelines rather than prescriptive requirements

This comparison illustrates that ISO 42001 Clause 6.1.2 is broadly aligned with — and in many respects complementary to — the EU AI Act's risk management requirements under Article 9. Organizations pursuing EU AI Act compliance will find that a well-executed Clause 6.1.2 process covers significant ground toward regulatory conformance.


ISO 42001 Clause 6.1.3 — AI Risk Treatment

Once risks are assessed, Clause 6.1.3 requires the organization to select appropriate risk treatment options. The standard presents four primary options:

  1. Avoid the risk — Decide not to develop or deploy the AI system, or discontinue its use.
  2. Modify the risk — Apply controls to reduce likelihood, impact, or both.
  3. Share the risk — Transfer risk through contracts, insurance, or third-party arrangements.
  4. Accept the risk — Retain the risk with full awareness, where it falls within acceptance criteria.
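As a rough illustration, the screening logic behind those four options might look like the following sketch. The threshold and flags are hypothetical; actual treatment selection is a governance decision made against your documented acceptance criteria, not a formula.

```python
# Assumed acceptance threshold: scores at or below this value may be accepted.
ACCEPTANCE_THRESHOLD = 4

def candidate_treatments(score: int, can_reduce: bool, transferable: bool) -> list[str]:
    """Flag which Clause 6.1.3 treatment options are candidates for a risk.

    score        -- assessed risk score (e.g. from a calibrated 5x5 matrix)
    can_reduce   -- controls exist that could reduce likelihood or impact
    transferable -- the risk can be shared via contract, insurance, etc.
    """
    options = []
    if score <= ACCEPTANCE_THRESHOLD:
        options.append("accept")   # only within defined acceptance criteria
    if can_reduce:
        options.append("modify")
    if transferable:
        options.append("share")
    options.append("avoid")        # always available: do not deploy, or discontinue
    return options
```

The point of the sketch is the asymmetry: "avoid" is always on the table, while "accept" is only legitimate when the score sits inside formally documented acceptance criteria.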

Producing the Risk Treatment Plan

Clause 6.1.3 requires a documented risk treatment plan that specifies:

  • The selected treatment option(s) for each identified risk
  • The controls to be implemented (referencing Annex A of ISO 42001 where applicable)
  • The risk owner responsible for implementation
  • The timeline and resources required
  • How residual risk will be evaluated after treatment

This is not a one-time document. The risk treatment plan must be reviewed and updated whenever significant changes occur to the AI system, its deployment context, or the external regulatory environment.

Mapping Treatments to ISO 42001 Annex A Controls

ISO 42001:2023 Annex A provides 38 controls organized under nine control objectives (A.2 through A.10). When selecting risk treatments under Clause 6.1.3, you must map your treatment decisions to specific Annex A controls, or justify why a control is not applicable in your Statement of Applicability (SoA).

Key Annex A control objectives relevant to Clause 6.1 include:

  • A.2 — Policies related to AI: Establishes policy-level controls that govern risk management behavior
  • A.4 — Resources for AI systems: Ensures the data, tooling, and human resources that risk treatments depend on
  • A.5 — Assessing impacts of AI systems: Addresses harm assessment and societal impact evaluation
  • A.6 — AI system life cycle: Controls across design, development, verification, deployment, and decommissioning
  • A.9 — Use of AI systems: Controls governing responsible use, intended use, and oversight of deployed systems

The Risk and Opportunity Register: Your Clause 6.1 Evidence Package

One of the most common questions I receive from clients is: "What documentation do auditors actually expect for Clause 6.1?"

The answer centers on a well-structured AI Risk and Opportunity Register. At a minimum, your register should capture:

  • Risk/Opportunity ID — Unique identifier for traceability
  • AI System in Scope — Specific system or use case the entry relates to
  • Lifecycle Stage — Where in the lifecycle the risk/opportunity arises
  • Description — Clear, specific description (not vague categories)
  • Risk Category — Technical, ethical, legal, operational, reputational
  • Likelihood Rating — With defined scale and rationale
  • Impact Rating — With defined scale and rationale, including societal impact
  • Risk Level — Combined score against acceptance criteria
  • Treatment Option — Avoid, modify, share, or accept
  • Control Reference — Annex A control(s) or custom control(s) applied
  • Risk Owner — Named individual accountable
  • Residual Risk — Post-treatment risk level
  • Review Date — Scheduled reassessment date

Auditors will cross-reference your register against your Clause 4 context analysis, your Annex A SoA, and your Clause 6.2 AI objectives. Gaps in this chain are the most frequent nonconformities I see in pre-certification gap assessments.

Key takeaway: A well-structured AI Risk and Opportunity Register that traces each entry from Clause 4 context inputs through Clause 6.1.3 treatment decisions and into Annex A control selections is the single most important piece of documented evidence for an ISO 42001 Clause 6.1 audit.


Common Clause 6.1 Nonconformities (and How to Avoid Them)

Based on my experience conducting gap assessments and supporting certification audits across industries, these are the most frequent Clause 6.1 nonconformities:

1. Copy-Paste Risk Assessments

Organizations import risk assessments from their ISO 27001 program and relabel them "AI risks." Auditors spot this immediately. Your risk assessment must be specific to each AI system's design, data, deployment context, and use case.

2. Missing Opportunity Documentation

Clause 6.1.1 explicitly requires opportunities to be addressed. Many organizations document only risks, leaving a visible gap. Opportunity documentation should be as structured as risk documentation.

3. Lifecycle Gaps in Risk Coverage

The most common technical gap: risk assessments that cover deployment and operation but omit data collection, training, and decommissioning stages.

4. No Defined Risk Acceptance Criteria

Without formally documented acceptance criteria, you cannot demonstrate that your risk evaluation produces "consistent, valid, and comparable results" as required by Clause 6.1.2.

5. Orphaned Risk Treatment Plans

Risk treatment plans exist but are not linked to specific Annex A controls, named risk owners, or review timelines. This creates a documentation dead end that fails the traceability test.

6. Static Risk Registers

Clause 6.1 is a living requirement. A risk register last updated at the time of initial certification, with no evidence of ongoing review, is a major finding.


How Clause 6.1 Connects to the Rest of ISO 42001

Understanding Clause 6.1 in isolation is not enough. Its outputs are inputs to virtually every other section of the standard:

  • Clause 6.2 (AI Objectives): Objectives must be set to address identified risks and opportunities
  • Clause 7.2 (Competence): Competence requirements should reflect the risk profile identified
  • Clause 8.1 (Operational Planning and Control): Operational controls must implement risk treatment decisions
  • Clause 9.1 (Monitoring and Measurement): Metrics should track progress on risk treatment effectiveness
  • Clause 10.2 (Continual Improvement): Improvement actions should loop back into risk reassessment

This interconnectedness is why a weak Clause 6.1 creates cascading weaknesses across the AIMS. When I conduct a gap assessment for ISO 42001 certification, Clause 6.1 is always the first area I evaluate in depth — because it tells me almost everything I need to know about the maturity of the entire system.


Industry Statistics on AI Risk Management Maturity

Understanding where most organizations stand helps calibrate your own program:

  • According to IBM's 2023 Global AI Adoption Index, 77% of organizations report that they are struggling to explain how AI decisions are made — a direct indicator of inadequate AI risk assessment maturity.
  • The 2024 McKinsey Global Survey on AI found that only 21% of organizations report having a formal AI risk management process that covers the full AI system lifecycle, underscoring how rare genuine Clause 6.1 compliance is in practice.
  • Gartner predicts that by 2026, organizations that fail to implement structured AI risk management will face 3x higher rates of AI project failures compared to those with formal risk programs.
  • The European Union AI Act, which entered into force in August 2024, mandates risk management systems for high-risk AI systems that closely mirror the requirements of ISO 42001 Clause 6.1 — making ISO 42001 certification a de facto compliance accelerator for EU-market organizations.
  • Research published in the Journal of Information Security and Applications found that organizations with structured AI lifecycle risk assessments identified 40% more critical risks than those using ad hoc approaches.

Practical Roadmap: Implementing Clause 6.1 in 90 Days

For organizations beginning their ISO 42001 journey, here is a phased implementation approach I use with clients at Certify Consulting:

Days 1–30: Foundation

  • Complete or validate your Clause 4 context analysis
  • Define your AI system inventory and AIMS scope
  • Establish risk acceptance criteria with senior leadership sign-off
  • Select your risk assessment methodology and calibrate for AI-specific dimensions

Days 31–60: Assessment

  • Conduct lifecycle-based risk assessments for each AI system in scope
  • Document opportunities alongside risks
  • Produce initial risk register with all required fields populated
  • Conduct a cross-functional review (IT, Legal, Ethics, Operations)

Days 61–90: Treatment & Integration

  • Develop risk treatment plans with Annex A control mappings
  • Assign risk owners and establish review cadences
  • Integrate risk register with Clause 6.2 objectives and Clause 9.1 metrics
  • Conduct internal review and address gaps before Stage 1 audit

Working with an ISO 42001 Expert

Clause 6.1 is not a clause you want to improvise. The connections between context analysis, risk assessment methodology, lifecycle coverage, treatment planning, and Annex A control selection require both standards expertise and practical AI governance knowledge.

At Certify Consulting, I work directly with organizations to design Clause 6.1 frameworks that are audit-ready from day one — not retrofitted after a failed Stage 2. If your organization is preparing for ISO 42001 certification or conducting a readiness review, I invite you to explore our ISO 42001 consulting services to learn how we can accelerate your path to certification.


Frequently Asked Questions

What is the difference between ISO 42001 Clause 6.1.2 and Clause 6.1.3?

Clause 6.1.2 covers the assessment of AI risks — identifying, analyzing, and evaluating risks against defined acceptance criteria. Clause 6.1.3 covers treatment — selecting and implementing options to modify, avoid, share, or accept those risks. Assessment produces a risk evaluation; treatment produces an action plan. Both require documented outputs.

Does ISO 42001 Clause 6.1 require a specific risk assessment methodology?

No. ISO 42001 does not mandate a specific methodology such as FMEA, bow-tie analysis, or a particular risk matrix format. However, Clause 6.1.2 requires that whatever methodology you use produces "consistent, valid, and comparable results." This means the methodology must be documented, repeatable, and applied uniformly across AI systems.

How often must the AI risk assessment be updated under Clause 6.1?

ISO 42001 does not specify a fixed review interval. Instead, risk assessments must be reviewed when significant changes occur — such as a material change to the AI system, its data inputs, its deployment context, or the regulatory environment. Most organizations establish an annual minimum review cycle plus event-triggered reviews.
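That hybrid policy, an annual minimum cycle plus event-triggered reviews, can be sketched as a simple check. The trigger names and the 365-day interval are assumptions for illustration, not requirements of the standard.

```python
from datetime import date, timedelta

# Hypothetical change events that should trigger an out-of-cycle reassessment.
TRIGGERS = {"model_change", "data_change", "deployment_change", "regulatory_change"}

def review_due(last_review: date, today: date, events: set[str]) -> bool:
    """A review is due on the annual cycle OR when a trigger event occurs."""
    annual_due = today - last_review >= timedelta(days=365)
    event_due = bool(events & TRIGGERS)
    return annual_due or event_due
```

In practice the event feed would come from your change-management process; the check itself just encodes the "annual minimum plus event-triggered" rule described above.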

Can we use our existing ISO 27001 risk assessment process for ISO 42001 Clause 6.1?

You can use your ISO 27001 risk assessment infrastructure as a starting point, but it cannot be applied unchanged. ISO 42001 Clause 6.1.2 requires lifecycle-based risk assessment, consideration of societal and individual impacts, and AI-specific risk categories (such as bias, explainability, and model drift) that are outside the scope of a standard ISO 27001 assessment.

What Annex A controls are most relevant to Clause 6.1?

The most directly relevant Annex A objectives are A.5 (Assessing impacts of AI systems), A.6 (AI system life cycle), and A.7 (Data for AI systems). However, almost all 38 Annex A controls ultimately connect back to identified risks and opportunities, which is why your Annex A Statement of Applicability must be grounded in your Clause 6.1 risk assessment outputs.


Last updated: 2026-04-08


Jared Clark

Principal Consultant, Certify Consulting

Jared Clark is the founder of Certify Consulting, helping organizations achieve and maintain compliance with international standards and regulatory requirements.

200+ Clients Served · 100% First-Time Audit Pass Rate

Ready to Lead in Responsible AI?

Schedule a free 30-minute consultation to discuss your organization's AI governance needs and ISO 42001 readiness. No pressure, no obligation — just expert guidance.

Or email [email protected]