
ISO 42001 for AI Users vs. AI Builders: Complete Guide


Jared Clark

March 07, 2026

One of the most common questions I hear from prospective clients is some variation of: "We don't build AI — we just use it. Does ISO 42001 still apply to us?"

The short answer is yes. The more useful answer is: it applies differently depending on your role, and understanding that distinction is the difference between a well-scoped implementation and a bloated, misdirected one.

ISO 42001:2023 — the international standard for AI management systems — was deliberately designed to accommodate the entire AI value chain, from foundational model developers to end-user organizations that deploy off-the-shelf tools. This guide breaks down exactly what that means for your organization, regardless of where you sit in that chain.


The AI Value Chain: Where Does Your Organization Fit?

Before diving into requirements, it helps to map the landscape. ISO 42001:2023 recognizes three primary roles that organizations can play in relation to AI systems:

Role | Definition | Common Examples
AI Developer | Creates, trains, or fine-tunes AI models | OpenAI, Google DeepMind, in-house ML teams
AI Provider | Packages and deploys AI systems for others to use | SaaS vendors with AI features, API providers
AI User/Operator | Deploys AI systems developed by others in their own processes | HR departments using AI screening tools, hospitals using diagnostic AI, law firms using contract review AI

Many organizations occupy more than one role simultaneously. A healthcare company that uses a third-party AI diagnostic tool and builds its own patient risk-scoring model is both a user and a developer. Your scope declaration under ISO 42001 clause 4.3 must reflect this reality.

Key point: ISO 42001:2023 is the first international management system standard written for the full AI supply chain; it expects AI developers, AI providers, and AI users/operators each to govern the responsibilities attached to the role(s) they actually play.


What ISO 42001 Requires of AI Builders (Developers)

If your organization trains models, fine-tunes foundation models, develops proprietary algorithms, or creates AI systems intended for internal or external deployment, you are functioning as an AI developer under ISO 42001.

Core Obligations for AI Developers

AI developers carry the heaviest technical documentation burden under the standard. Key requirements include:

Data Governance (Annex A, Control A.7)

Developers must establish processes for data acquisition, data quality assurance, and data provenance documentation. If you're training a model on customer data, proprietary datasets, or licensed third-party data, you need documented controls demonstrating that the data was obtained lawfully, is fit for purpose, and has been assessed for potential bias.
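To make this concrete, here is a minimal sketch (in Python, purely illustrative) of the kind of per-dataset provenance record these controls imply. The field names and values are my own shorthand, not terminology from the standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Illustrative provenance record for one training dataset."""
    name: str
    source: str                  # e.g. "internal platform", "licensed third party"
    lawful_basis: str            # how the data was lawfully obtained
    fitness_for_purpose: str     # why it suits the intended model
    bias_assessment_done: bool
    bias_findings: list[str] = field(default_factory=list)

# Invented example entry for a hypothetical claims-scoring model
record = DatasetRecord(
    name="claims_history_2020_2024",
    source="internal claims platform",
    lawful_basis="contractual necessity; retention policy documented",
    fitness_for_purpose="covers the product lines the model will score",
    bias_assessment_done=True,
    bias_findings=["under-representation of applicants aged 65+"],
)
```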

AI System Impact Assessment (Clause 6.1.4 and Annex A, Control A.5)

Developers are required to conduct formal impact assessments before deploying AI systems. This parallels a Data Protection Impact Assessment (DPIA) under GDPR but is broader in scope: it covers not just privacy but fairness, safety, transparency, and societal impact. According to a 2024 NIST survey, fewer than 30% of organizations developing AI systems had formal pre-deployment risk assessment processes in place before the standard's publication.
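A lightweight way to keep those assessment dimensions honest is to record them as structured data and refuse to ship while any dimension is unresolved. The dimensions below come from the paragraph above; the structure, findings, and mitigations are invented for illustration.

```python
# Illustrative pre-deployment impact assessment, keyed by the dimensions
# named above; not a template from ISO 42001 itself.
impact_assessment = {
    "fairness":        {"finding": "score gap across age bands",
                        "mitigation": "reweighting plus quarterly bias audit"},
    "safety":          {"finding": "low risk; no physical actuation",
                        "mitigation": "none required"},
    "transparency":    {"finding": "opaque ensemble model",
                        "mitigation": "publish reason codes with each decision"},
    "societal_impact": {"finding": "affects access to credit",
                        "mitigation": "human review of all declines"},
}

# Block deployment while any mitigation is still marked "tbd"
unresolved = [dim for dim, entry in impact_assessment.items()
              if entry["mitigation"] == "tbd"]
assert not unresolved, f"Impact assessment incomplete: {unresolved}"
```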

Model Documentation and Explainability Controls (Annex A, Controls A.6.2 and A.8)

ISO 42001 requires developers to maintain documentation about model architecture, training methodologies, known limitations, and performance benchmarks. This documentation serves dual purposes: internal governance and downstream transparency for users of your AI system.
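In practice, I encourage developer clients to treat that documentation as a checklist they can verify programmatically. The required fields below are a plausible minimum for illustration, not a list taken from the standard.

```python
# Hypothetical minimum set of documentation fields for a deployed model
REQUIRED_MODEL_DOC_FIELDS = {
    "architecture", "training_methodology", "known_limitations",
    "performance_benchmarks", "intended_use", "version",
}

def missing_documentation(model_doc: dict) -> set[str]:
    """Return which required documentation fields are absent or empty."""
    return {f for f in REQUIRED_MODEL_DOC_FIELDS if not model_doc.get(f)}

model_doc = {
    "architecture": "gradient-boosted trees, 400 estimators",
    "training_methodology": "5-fold cross-validation on 2020-2024 claims data",
    "known_limitations": "not validated for commercial policies",
    "performance_benchmarks": {"auc": 0.87, "recall_at_5pct_fpr": 0.61},
    "intended_use": "triage support only; not an automated denial",
    "version": "1.3.0",
}
print(missing_documentation(model_doc))   # -> set() when complete
```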

Ongoing Monitoring and Model Lifecycle Management

Developers must establish processes for monitoring AI system performance post-deployment, including drift detection, retraining triggers, and version control. The standard treats AI systems as living assets, not static deployments.
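Drift detection is one place where the requirement translates directly into code. One common approach (an option among many; the 0.2 threshold is a rule of thumb, not an ISO 42001 value) is to compare the production score distribution against the validation-time distribution with a population stability index:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (training-time) score distribution and a
    recent production distribution; a common drift signal."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    o_counts, _ = np.histogram(observed, bins=edges)
    e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    o_frac = np.clip(o_counts / o_counts.sum(), 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)    # scores at validation time
production = rng.normal(0.3, 1.1, 10_000)   # scores observed this month
psi = population_stability_index(reference, production)
if psi > 0.2:                                # illustrative retraining trigger
    print(f"PSI {psi:.2f} exceeds threshold - open a model review ticket")
```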

Third-Party and Supply Chain Controls (Annex A, Control A.10)

If your AI development relies on open-source models, third-party datasets, or cloud-based training infrastructure, you're responsible for evaluating and documenting those dependencies. This is analogous to supplier management in ISO 9001, but applied to the AI development stack.
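A simple dependency manifest, kept current, goes a long way here. The entries below are invented examples covering the three dependency types mentioned above:

```python
# Illustrative dependency manifest for an AI development stack; entries and
# fields are examples, not a format prescribed by the standard.
ai_supply_chain = [
    {"dependency": "open-source foundation model", "type": "model",
     "license": "Apache-2.0",
     "evaluation": "reviewed model card; noted English-only training data"},
    {"dependency": "licensed labeling vendor", "type": "dataset",
     "license": "commercial",
     "evaluation": "annotation quality audit completed"},
    {"dependency": "cloud GPU training cluster", "type": "infrastructure",
     "license": "service agreement",
     "evaluation": "region and data-residency terms documented"},
]

# Flag any dependency that has not yet been evaluated and documented
unevaluated = [d["dependency"] for d in ai_supply_chain if not d["evaluation"]]
assert not unevaluated, f"Undocumented dependencies: {unevaluated}"
```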


What ISO 42001 Requires of AI Users (Operators)

If your organization purchases, licenses, or accesses AI systems built by someone else — a CRM with embedded AI, a hiring platform using algorithmic screening, an AI-powered document review tool — you are functioning as an AI user or operator.

And yes, ISO 42001 absolutely applies to you.

Key point: Organizations that use AI systems developed by third parties remain accountable under ISO 42001:2023 for the governance, oversight, and impact of those systems within their own operational context. Vendor origin does not transfer responsibility.

Core Obligations for AI Users/Operators

The good news for AI users is that the technical development controls (training data governance, model architecture documentation, etc.) largely do not apply. The governance and oversight controls, however, apply in full.

AI Policy and Governance Structure (Clauses 5.2 and 5.3)

Every organization, whether it builds or uses AI, must establish an AI policy, assign AI governance roles, and document top management commitment. This is non-negotiable and auditor-visible from day one.

Use-Case Risk Assessment (Clause 6.1.2)

AI users must assess the risk of how they use an AI system in their specific context, even if the vendor has already certified the system. A fraud detection AI deployed in a low-stakes context carries different risks than the same system deployed to deny insurance claims. The standard requires you to assess your use case, not just trust the vendor's documentation.

Vendor Due Diligence and Contractual Controls (Annex A, Control A.10)

ISO 42001 requires AI users to evaluate their AI vendors and establish contractual clarity around transparency, performance, incident reporting, and data handling. If your vendor can't or won't provide basic model documentation, that gap is a conformance issue for your management system, not just their problem.
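For clients who want something more operational than a contract clause, I suggest tracking vendor due diligence as an explicit checklist. The questions below paraphrase the themes above; they are illustrative, not an official ISO 42001 questionnaire.

```python
# Illustrative vendor due-diligence checklist
VENDOR_CHECKLIST = [
    "Model documentation or system card provided",
    "Performance metrics relevant to our use case shared",
    "Contractual incident-notification clause in place",
    "Data handling and retention terms reviewed",
    "Sub-processor / foundation-model dependencies disclosed",
]

def due_diligence_gaps(answers: dict[str, bool]) -> list[str]:
    """Items the vendor has not satisfied; each one is a gap to track in
    *your* management system, not just the vendor's problem."""
    return [item for item in VENDOR_CHECKLIST if not answers.get(item, False)]

answers = {item: True for item in VENDOR_CHECKLIST}
answers["Contractual incident-notification clause in place"] = False
print(due_diligence_gaps(answers))   # -> the one unresolved item
```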

Human Oversight Mechanisms (Annex A, Control A.9)

AI users must establish documented processes for human review of AI-assisted decisions, particularly in high-stakes domains. The standard does not mandate that humans override AI outputs, but it does require that you've thought through when and how human oversight is applied, and that this thinking is documented.
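The documented "when and how" can be as simple as a routing rule. This sketch is an assumption about what such a rule might look like; the thresholds and criteria are yours to define and justify.

```python
def requires_human_review(use_case_risk: str, model_confidence: float,
                          decision_is_adverse: bool) -> bool:
    """Illustrative routing rule deciding when an AI-assisted decision goes
    to a human reviewer; values are assumptions to document, not figures
    prescribed by ISO 42001."""
    if use_case_risk == "high":
        return True
    if decision_is_adverse:
        return True
    return model_confidence < 0.80

# Under this rule set, an adverse screening outcome is always reviewed.
print(requires_human_review("medium", 0.93, decision_is_adverse=True))  # True
```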

Incident Management (Clause 10.2)

When AI systems produce unexpected, harmful, or biased outputs in your operational context, you need a documented incident response process. This includes logging, investigation, corrective action, and, where applicable, escalation to your AI vendor.
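Even a minimal incident log, structured along the lines of that process, serves the requirement far better than an email thread. The fields below mirror the steps just listed and are illustrative only:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """Minimal incident log entry: logging, investigation, corrective
    action, and vendor escalation."""
    system: str
    description: str
    detected_at: datetime
    investigation_notes: str = ""
    corrective_action: str = ""
    escalated_to_vendor: bool = False

# Invented example for a third-party screening tool
incident = AIIncident(
    system="resume-screening SaaS",
    description="systematically low scores for candidates with employment gaps",
    detected_at=datetime.now(timezone.utc),
    escalated_to_vendor=True,
)
```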


Side-by-Side Comparison: AI Builders vs. AI Users Under ISO 42001

Requirement Area | AI Developer | AI User/Operator
AI Policy & Governance | ✅ Required | ✅ Required
Risk Assessment (Use Context) | ✅ Required | ✅ Required
Risk Assessment (Model Development) | ✅ Required | ❌ Not Applicable
Data Acquisition & Quality Controls | ✅ Required | ⚠️ Limited (input data governance only)
Model Documentation | ✅ Required | ❌ Not Applicable
Vendor/Supply Chain Due Diligence | ✅ Required | ✅ Required
Human Oversight Mechanisms | ✅ Required | ✅ Required
Explainability Controls | ✅ Required | ⚠️ Required to request from vendors
Incident Management | ✅ Required | ✅ Required
Monitoring & Model Lifecycle Mgmt | ✅ Required | ⚠️ Required for performance oversight
Training & Competence (Clause 7.2) | ✅ Required | ✅ Required
Internal Audit (Clause 9.2) | ✅ Required | ✅ Required

Legend: ✅ = Full requirement applies | ⚠️ = Partial/adapted requirement | ❌ = Generally not applicable


The Hybrid Organization: When You're Both

This is more common than most organizations realize. Consider these scenarios:

  • A financial services firm that uses a third-party credit scoring AI and has an internal data science team building a customer churn prediction model
  • A healthcare system that deploys vendor AI for imaging analysis and fine-tunes a language model on its own clinical notes
  • A technology company that uses AI-powered project management tools and sells an AI feature within its own product

In each case, the organization must scope its ISO 42001 management system to address both sets of obligations. ISO 42001 clause 4.3 gives you flexibility in how you define your scope, but that scope must be honest and defensible. Auditors will probe whether your scope declaration accurately reflects your AI footprint.

In my experience working with over 200 clients across industries, the hybrid scenario is where organizations most frequently underestimate implementation complexity. The developer obligations are more resource-intensive, but they're also more familiar to technical teams. The user obligations are often overlooked precisely because they feel less technical — yet they're frequently where audit nonconformances are found.


Scoping Your ISO 42001 Implementation: Practical Guidance

Step 1: Complete an AI Inventory

Before you can scope your management system, you need a complete inventory of every AI system your organization develops, deploys, or uses. This includes obvious tools (ChatGPT Enterprise, Salesforce Einstein) and less obvious ones (spam filters, predictive scheduling in HR software, algorithmic ad bidding).

A 2023 Gartner survey found that 58% of organizations underestimated the number of AI tools in active use within their enterprise — a phenomenon sometimes called "AI shadow IT." Your inventory needs to surface these.
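The inventory itself doesn't need to be sophisticated; a spreadsheet or a few structured records will do. Here is an illustrative sketch (systems, owners, and fields invented) that also gives a quick count of where you sit in the value chain:

```python
from collections import Counter

# Illustrative inventory rows; not a schema mandated by ISO 42001.
ai_inventory = [
    {"system": "Salesforce Einstein lead scoring", "owner": "Sales Ops",
     "vendor": "Salesforce", "our_role": "user"},
    {"system": "Internal churn-prediction model", "owner": "Data Science",
     "vendor": None, "our_role": "developer"},
    {"system": "HR platform resume screening", "owner": "People Team",
     "vendor": "third-party SaaS", "our_role": "user"},
]

# How many systems fall under each role your organization plays
print(Counter(row["our_role"] for row in ai_inventory))
# Counter({'user': 2, 'developer': 1})
```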

Step 2: Classify Each System by Role

For each AI system in your inventory, determine whether your organization is acting as developer, provider, user, or some combination. Document this classification — it directly informs which Annex A controls apply.
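If it helps, the classification logic boils down to three questions. This is a rough sketch of that logic, not a decision rule taken from the standard:

```python
def classify_role(trains_or_finetunes: bool, offers_to_others: bool,
                  uses_in_own_processes: bool) -> set[str]:
    """Rough classification into the three roles discussed in this guide;
    an organization can hold several at once."""
    roles = set()
    if trains_or_finetunes:
        roles.add("developer")
    if offers_to_others:
        roles.add("provider")
    if uses_in_own_processes:
        roles.add("user/operator")
    return roles

# A fine-tuned model used internally makes you both developer and user.
print(classify_role(True, False, True))   # -> {'developer', 'user/operator'}
```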

Step 3: Assess Risk by Use Context

ISO 42001 is a risk-based standard. High-risk AI use cases (those affecting employment decisions, access to financial services, medical outcomes, or legal rights) will attract more intensive control requirements and closer auditor scrutiny. The EU AI Act's risk tiering framework, which entered into force in 2024 and applies in phases, is a useful cross-reference here — though ISO 42001 compliance does not equal EU AI Act compliance.
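A first-pass tiering rule can be almost embarrassingly simple; the point is to make the criteria explicit and documented. The domains below echo the examples in the paragraph above, and a real assessment weighs far more factors:

```python
# Illustrative tiering sketch; domain names echo this guide's examples.
HIGH_RISK_DOMAINS = {"employment decisions", "access to financial services",
                     "medical outcomes", "legal rights"}

def use_context_risk(domain: str, affects_individuals: bool) -> str:
    """Very rough first-pass tier for one AI use case."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    return "medium" if affects_individuals else "low"

print(use_context_risk("employment decisions", affects_individuals=True))      # high
print(use_context_risk("internal document search", affects_individuals=False)) # low
```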

Step 4: Define Your Scope Statement

Your scope statement under clause 4.3 should specify:

  • Which AI systems are included
  • Which organizational units are covered
  • Your role(s) relative to each covered system
  • Any explicit exclusions and the justification for them
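Captured as data, those four elements might look like this for a hybrid healthcare organization (all names invented for illustration):

```python
# Illustrative scope statement as structured data; the four keys follow the
# bullets above, the values are hypothetical.
scope_statement = {
    "ai_systems_included": ["vendor imaging-analysis AI",
                            "in-house clinical-notes LLM"],
    "organizational_units": ["Radiology", "Clinical Informatics"],
    "roles_per_system": {
        "vendor imaging-analysis AI": "user/operator",
        "in-house clinical-notes LLM": "developer + user/operator",
    },
    "exclusions": [{"system": "marketing ad-bidding tool",
                    "justification": "managed under the parent company's AIMS"}],
}
```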

Key point: A well-defined scope statement under ISO 42001:2023 clause 4.3 is the single most consequential document in the management system: it determines which controls apply, which business units are accountable, and how auditors will assess conformance.

Step 5: Map Controls to Your Role

Using the comparison table above as a starting point, map the applicable Annex A controls to your specific AI systems and organizational roles. Avoid the common mistake of implementing all controls regardless of applicability — this creates unnecessary burden and can actually obscure genuine risk by diluting focus.
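One way to keep that mapping visible and auditable is to encode the comparison table as data and derive each role's control set from it. This sketch simply restates the table above in code:

```python
# Applicability sketch derived from the comparison table earlier in this
# guide; "partial" mirrors the table's adapted requirements.
APPLICABILITY = {
    "AI policy & governance":          {"developer": "full", "user": "full"},
    "Risk assessment (use context)":   {"developer": "full", "user": "full"},
    "Risk assessment (model dev)":     {"developer": "full", "user": "n/a"},
    "Data acquisition & quality":      {"developer": "full", "user": "partial"},
    "Model documentation":             {"developer": "full", "user": "n/a"},
    "Vendor / supply-chain diligence": {"developer": "full", "user": "full"},
    "Human oversight":                 {"developer": "full", "user": "full"},
    "Explainability":                  {"developer": "full", "user": "partial"},
    "Incident management":             {"developer": "full", "user": "full"},
    "Monitoring & lifecycle":          {"developer": "full", "user": "partial"},
}

def controls_for(role: str) -> list[str]:
    """Control areas that apply, fully or partially, to the given role."""
    return [area for area, who in APPLICABILITY.items() if who[role] != "n/a"]

print(len(controls_for("user")))   # 8 of these 10 areas apply in some form
```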


Common Misconceptions I Encounter in Practice

"We don't build AI, so this standard doesn't really apply to us." This is the most pervasive misconception. ISO 42001 was explicitly designed to govern AI use at least as much as AI development. The standard's creators recognized that the greatest near-term societal risks come from how AI is deployed, not just how it's built.

"Our vendor is ISO 42001 certified, so we're covered." Vendor certification addresses the vendor's management system — not yours. Your obligations as an AI user exist independently of your vendor's compliance posture. This is a critical point that gets missed in vendor contract negotiations.

"We only use AI for internal productivity tools, so the risk is low." Employee-facing AI tools that influence performance evaluations, workload assignment, or career development decisions carry meaningful governance obligations. "Internal only" does not equal "low stakes."

"ISO 42001 is only for large enterprises." The standard is scalable. A 50-person company using AI in its hiring process has a more manageable implementation footprint than a 10,000-person company with dozens of AI systems — but it still has obligations. The complexity scales with your AI footprint, not your headcount.


Regulatory Alignment: ISO 42001 as a Compliance Bridge

For organizations navigating multiple AI regulatory frameworks simultaneously, ISO 42001 offers a meaningful consolidation advantage. The standard's structure maps reasonably well to:

  • EU AI Act: ISO 42001's risk assessment and documentation requirements align with the Act's conformity assessment obligations for high-risk AI systems
  • NIST AI RMF: ISO 42001's PDCA structure and Annex A controls parallel the Govern, Map, Measure, and Manage functions of the NIST framework
  • GDPR/CCPA: ISO 42001's data governance controls (Annex A.7) complement existing privacy management systems

According to the International Accreditation Forum, ISO 42001 certifications surpassed 500 globally within 18 months of the standard's publication in December 2023 — a faster adoption rate than ISO 27001 achieved in its first two years, reflecting the urgency organizations feel around AI governance.

For organizations that have already invested in ISO 27001 or other management system certifications, the integrated management system approach can significantly reduce the implementation burden for ISO 42001.


How Certify Consulting Approaches This Distinction

At Certify Consulting, our implementation methodology begins with a role classification exercise before any gap assessment. In our experience with 200+ clients, premature gap assessments that don't first clarify the developer/user distinction consistently produce misdirected remediation plans — either over-engineering user obligations or under-appreciating developer requirements.

Our 100% first-time audit pass rate reflects a scoping-first philosophy: we help clients build management systems that match their actual AI footprint, not a theoretical ideal. If you're an AI user, you don't need model training documentation — but you do need robust vendor oversight and human review protocols. If you're an AI developer, your documentation obligations are substantially deeper, but they're also the foundation for market differentiation and customer trust.

If you're unsure which category your organization falls into — or how to scope a hybrid implementation — a preliminary ISO 42001 readiness assessment is the most efficient starting point.


FAQ: ISO 42001 for AI Users vs. AI Builders

Q: We only use AI through SaaS tools — do we really need ISO 42001 certification? A: Certification is voluntary, but the governance obligations under ISO 42001 apply whether or not you pursue formal certification. If you operate in regulated industries, serve enterprise clients with AI governance requirements in their vendor contracts, or are subject to the EU AI Act, certification provides a defensible, audited record of compliance. Even without certification, implementing the standard's framework reduces regulatory and reputational risk.

Q: Does our AI vendor's ISO 42001 certification cover us as their customer? A: No. A vendor's certification covers their own management system — the processes by which they develop and deliver AI systems. Your obligations as an AI user/operator — including use-case risk assessment, human oversight, and incident management — exist independently and are not satisfied by vendor certification.

Q: How long does ISO 42001 implementation typically take for an AI user vs. an AI developer? A: For organizations primarily using third-party AI, implementation typically ranges from 3–6 months depending on the number and risk level of AI systems in scope. AI developers typically require 6–12 months due to the additional technical documentation requirements around data governance, model lifecycle management, and explainability controls. Hybrid organizations should plan for the longer timeline.

Q: Is ISO 42001 required for EU AI Act compliance? A: ISO 42001 is not mandated by the EU AI Act, but it is referenced as a recognized framework for demonstrating governance maturity. For high-risk AI systems under the Act, ISO 42001 certification can support — though not replace — the required conformity assessment process.

Q: What's the most common audit finding for AI user organizations? A: In my experience, the most frequent nonconformances for AI users are: (1) insufficient vendor due diligence documentation, (2) absence of a formal human oversight policy, and (3) failure to conduct use-context risk assessments distinct from vendor-provided documentation. These are all addressable with proper preparation.


Last updated: 2026-03-05

Jared Clark is Principal Consultant at Certify Consulting and holds credentials including JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, and RAC. He has led ISO 42001 implementations for organizations across healthcare, financial services, technology, and manufacturing sectors.


Jared Clark

Certification Consultant

Jared Clark is the founder of Certify Consulting and helps organizations achieve and maintain compliance with international standards and regulatory requirements.

200+ Clients Served · 100% First-Time Audit Pass Rate

Ready to Lead in Responsible AI?

Schedule a free 30-minute consultation to discuss your organization's AI governance needs and ISO 42001 readiness. No pressure, no obligation — just expert guidance.

Or email [email protected]