If there is one thing I've learned across 200+ ISO 42001 engagements, it's this: organizations that struggle in audits almost never fail because of a bad policy or a missing record. They fail because someone on the team didn't truly understand what a term meant — and built an entire process around the wrong definition.
ISO 42001:2023 is built on a precise, interlocking vocabulary. Terms like "AI system," "AI risk," and "intended use" aren't interchangeable with how those words appear in everyday conversation. They carry specific, standard-defined meanings that directly shape what you're required to document, govern, and control.
This glossary is the reference I wish every client had before their first gap assessment. I've organized it into thematic clusters — not alphabetically — because terms make far more sense when you understand how they relate to each other.
ISO/IEC 42001:2023, published by the International Organization for Standardization in December 2023, is the first internationally recognized standard for AI management systems. Its terminology draws from both ISO/IEC 22989 (AI concepts and terminology) and the broader ISO management system family.
Why ISO 42001 Terminology Matters for Certification
ISO 42001 uses defined terms precisely because AI is a field where language is still being standardized across industries and regulators. According to the ISO/IEC JTC 1/SC 42 committee — the body responsible for AI standards — consistent terminology is foundational to interoperability between organizations, regulators, and auditors.
Organizations that misuse key ISO 42001 terms risk:
- Writing nonconforming policies that satisfy no auditor
- Scoping their AIMS too narrowly (or too broadly) and triggering major nonconformities
- Misclassifying AI risk levels, leading to under-controlled high-impact systems
- Failing to satisfy clause 4.3 (scope determination) or clause 6.1.2 (AI risk assessment)
A 2023 survey by the AI governance firm BABL AI found that over 60% of organizations preparing for AI audits reported uncertainty about the definition of "AI system" as it applies to their technology stack — a single definitional gap with cascading compliance consequences.
Section 1: Foundational AI Concepts
These terms define what is being governed under ISO 42001.
AI System
Standard definition (ISO/IEC 22989): An engineered system that generates outputs such as predictions, recommendations, decisions, or content that influences real or virtual environments.
An AI system is the core unit of governance under ISO 42001. Not every software tool qualifies. A rule-based automation script with no learning capability is generally not an AI system under the standard. A machine learning model that generates predictions from training data is.
Practical tip: Map every tool in your technology stack against this definition before drafting your AIMS scope. I've seen organizations inadvertently exclude large language models from their scope because they called them "search tools."
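That mapping exercise can be made systematic. Here is a minimal, purely illustrative Python sketch of screening a tool inventory against the "engineered system that generates inferred outputs" test; the tool names and the boolean fields are hypothetical examples, not terms from the standard:

```python
# Illustrative sketch: screen a hypothetical tool inventory against the
# ISO/IEC 22989 "AI system" idea. The field names below are invented for
# this example, not taken from the standard.

def is_ai_system(tool: dict) -> bool:
    """Rough screen: does the tool infer its outputs (predictions,
    recommendations, decisions, content) rather than follow fixed rules?"""
    return tool["generates_inferred_outputs"] and not tool["fixed_rules_only"]

inventory = [
    {"name": "churn-prediction-model", "generates_inferred_outputs": True,  "fixed_rules_only": False},
    {"name": "invoice-routing-script", "generates_inferred_outputs": False, "fixed_rules_only": True},
    {"name": "internal-llm-search",    "generates_inferred_outputs": True,  "fixed_rules_only": False},
]

# The rule-based routing script drops out; the LLM that the team calls a
# "search tool" stays in scope.
in_scope = [t["name"] for t in inventory if is_ai_system(t)]
print(in_scope)
```

A screen like this is a starting point for scoping conversations, not a substitute for case-by-case judgment on borderline tools.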
AI Model
Definition: A mathematical construct trained on data to produce outputs. The AI model is the component within an AI system that performs the learning or inference function.
The distinction between an AI model and an AI system matters enormously. Your AIMS governs the system (including its deployment context, interfaces, and human oversight mechanisms), not just the underlying model.
Machine Learning (ML)
Definition: A subset of AI in which a system improves its performance on tasks through experience (training on data) without being explicitly programmed for each scenario.
ISO 42001 doesn't restrict itself to ML-based systems, but most AI systems organizations govern under it are ML-based. Understanding whether your AI system uses supervised, unsupervised, or reinforcement learning informs your bias and accuracy risk controls.
Generative AI
Definition: A class of AI systems capable of generating new content — text, images, audio, code, or other outputs — typically based on large foundation models trained on broad datasets.
Generative AI is not explicitly defined in ISO 42001:2023 (the standard predates the mainstream explosion of LLMs), but it falls squarely within the "AI system" definition: generated content is one of the output types the ISO/IEC 22989 definition explicitly names.
Section 2: Management System Terms
These terms come from the ISO High Level Structure (HLS) shared across ISO 9001, ISO 27001, and ISO 42001. If you're already certified to another ISO standard, many of these will be familiar — but their AI-specific application has nuances.
AI Management System (AIMS)
Definition: The management system an organization uses to establish policies, objectives, processes, and controls for the responsible development, deployment, and use of AI.
The AIMS is the totality of what ISO 42001 requires you to build and maintain. It is not a software platform. It is a documented, governed framework — think policies, risk registers, roles, training records, audit programs, and continual improvement mechanisms.
Interested Party (Stakeholder)
Definition (ISO 42001 clause 4.2): A person or organization that can affect, be affected by, or perceive itself to be affected by a decision or activity related to the AIMS.
For AI systems, this definition is expansive. It includes:
- Employees who use or are affected by AI-generated decisions
- Customers whose data trains the model
- Regulators (e.g., the EU AI Act supervisory authorities)
- Third-party AI vendors and supply chain partners
Clause 4.2 requires you to identify interested parties and understand their needs and expectations. This is the foundation of your stakeholder analysis.
Scope (of the AIMS)
Definition (clause 4.3): The boundaries and applicability of the AI management system, including which AI systems, organizational units, and geographies are covered.
Scope is one of the most consequential decisions in your certification journey. Too narrow, and you exclude material AI risks. Too broad, and you create an unmanageable compliance burden. I always advise clients to tie scope to their AI system inventory — not their org chart.
Policy
Definition (clause 5.2): A statement of intent and direction related to AI management, formally authorized by top management.
Your AI policy must address the organization's commitment to responsible AI, ethical principles, and compliance with applicable legal requirements. It is not a technical specification — it is a governance commitment document.
Objective
Definition (clause 6.2): Measurable results the organization intends to achieve through its AIMS.
AI objectives must be measurable, monitored, communicated, and updated. Examples include: reducing AI-related bias incidents by X%, achieving 95% model accuracy thresholds, or completing AI impact assessments for 100% of new deployments.
Section 3: AI Risk and Impact Terms
This cluster covers the language of risk management under ISO 42001 — the area where most organizations need the most definitional precision.
AI Risk
Definition: The combination of the probability of an AI-related harm occurring and the severity of that harm.
ISO 42001 clause 6.1.2 requires a formal AI risk assessment process. "Risk" here is not purely technical (model failure) — it encompasses societal, ethical, legal, and reputational dimensions. A model that works accurately but produces biased outcomes still represents an AI risk.
ISO 42001:2023 clause 6.1.2 requires organizations to identify AI risks by considering the characteristics of the AI system, its intended use, potential for misuse, and the severity and reversibility of potential harms — making it one of the most multidimensional risk assessment requirements in any ISO management system standard.
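The probability × severity framing lends itself to a simple scoring helper. A minimal sketch in Python follows; the 1–5 scales and the level thresholds are illustrative policy choices an organization would set for itself, not values prescribed by ISO 42001:

```python
# Illustrative risk-scoring sketch. ISO 42001 does not prescribe scales or
# thresholds; the 1-5 ranges and the high/medium/low cutoffs below are
# hypothetical examples of what an organization might adopt.

def risk_score(probability: int, severity: int) -> int:
    """Combine probability and severity of an AI-related harm (1-5 each)."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("probability and severity must be on a 1-5 scale")
    return probability * severity

def risk_level(score: int) -> str:
    """Map a raw score to a treatment band (example thresholds only)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# A likely, severe harm lands in the "high" band and demands treatment,
# regardless of whether the model itself performs accurately.
print(risk_level(risk_score(4, 5)))
```

Whatever scales you choose, the key audit expectation is that the same scoring logic is applied consistently across the AI system inventory and that the reasoning behind each score is documented.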
AI Impact Assessment
Definition: A structured evaluation of the potential effects — positive and negative — that an AI system may have on individuals, groups, organizations, or society.
ISO 42001 treats impact assessment as both a core requirement (clauses 6.1.4 and 8.4) and a control area (Annex A, control A.5). It is distinct from a risk assessment: a risk assessment identifies what could go wrong; an impact assessment evaluates what the system will do to the world around it, including under normal operation.
Harm
Definition: Adverse impact on people, organizations, or society resulting from the operation or outputs of an AI system.
Harms under ISO 42001 include physical harm, psychological harm, financial harm, discrimination, privacy violations, and societal harm. The standard requires you to assess harms across affected groups — not just the direct user.
Intended Use
Definition: The use of an AI system as specified by the developer or deployer, including the intended operating conditions, target users, and purposes.
"Intended use" is borrowed from the medical device and product safety world (it appears in ISO 14971). In ISO 42001, it anchors your risk assessment: risks are evaluated relative to intended use. A facial recognition system intended for law enforcement carries a different risk profile than one intended for consumer photo organization.
Reasonably Foreseeable Misuse
Definition: A use of the AI system that is not intended by the developer but is predictable based on human behavior or system design characteristics.
This is a critical concept. Your risk controls must account not just for intended use but for how the system could be misused — even if you've explicitly prohibited that use. ISO 42001 clause 6.1.2 and Annex A's controls on the use of AI systems (A.9) implicitly require this analysis.
Bias (AI Bias)
Definition: Systematic and unfair discrimination in AI outputs, typically resulting from biased training data, flawed model design, or inappropriate use of the system.
ISO 42001 requires controls to identify and mitigate bias. Fairness appears among the AI-related objectives in Annex C, and the data quality control (Annex A, A.7.4) underpins bias mitigation in practice. Note: not all statistical bias is ethically problematic — the standard is focused on unfair discrimination that causes harm.
Section 4: Organizational and Accountability Terms
Top Management
Definition (clause 5.1): The person or group of people who directs and controls the organization at the highest level.
ISO 42001 places significant obligations on top management — including demonstrating leadership, ensuring resources, and establishing AI policy. In my experience, the most common gap at the management layer is delegation without accountability: executives sign off on AI policy but have no mechanism to verify its implementation.
Roles, Responsibilities, and Authorities
Definition (clause 5.3): The documented assignment of who is responsible for what within the AIMS, including AI-specific roles.
ISO 42001 doesn't mandate specific job titles, but Annex A control A.3.2 (AI roles and responsibilities) requires AI-specific roles to be defined and allocated. Many organizations map this to a Chief AI Officer (CAIO), a Data & AI Ethics board, or an AI Risk Committee.
Competence
Definition (clause 7.2): The ability to apply knowledge and skills to achieve intended results within the AIMS.
Competence under ISO 42001 extends to technical AI literacy (understanding model behavior), ethical reasoning (applying the organization's AI principles), and regulatory awareness (knowing applicable law). You must demonstrate competence through records — training certificates, credentials, or demonstrated experience.
Section 5: Lifecycle and Supply Chain Terms
AI System Lifecycle
Definition: The full sequence of stages an AI system passes through, from initial concept and design through deployment, operation, monitoring, and decommissioning.
ISO 42001 Annex A organizes many of its controls around lifecycle stages. This is deliberate: risks and responsibilities shift as a system moves from development to production to retirement. A model that was acceptable at deployment may require re-evaluation after data drift is detected.
AI Supply Chain
Definition: The network of organizations, processes, and resources involved in developing, providing, and maintaining AI systems and their components — including data providers, model developers, and infrastructure vendors.
Most organizations today are deployers, not developers, of AI. They acquire AI capabilities from vendors (OpenAI, AWS, Google, etc.). ISO 42001 addresses this through Annex A control A.10 (third-party and customer relationships), which requires you to govern your AI supply chain — including due diligence on vendor AI practices, contractual controls, and ongoing monitoring.
Data Governance
Definition: The policies, processes, and controls that manage data quality, integrity, privacy, and appropriate use — particularly as it relates to training, validating, and operating AI systems.
Data governance is a foundational control area under ISO 42001 Annex A (controls A.7.2–A.7.6, covering data for AI systems from acquisition through preparation). Poor data governance is among the most commonly cited root causes of AI project failure in industry analyses, including widely circulated Gartner research on AI project outcomes.
ISO 42001 Terms at a Glance: Quick Reference Table
| Term | Standard Reference | Plain-Language Meaning |
|---|---|---|
| AI System | ISO/IEC 22989 | Any engineered system that generates outputs influencing the real world via AI techniques |
| AIMS | ISO 42001:2023 | The full management framework for governing AI across your organization |
| Intended Use | Clause 6.1.2 / Annex A | The authorized purpose and context for operating a specific AI system |
| AI Risk | Clause 6.1.2 | Probability × severity of harm arising from AI system operation |
| Reasonably Foreseeable Misuse | Clause 6.1.2 | Predictable off-label or adversarial uses that must be risk-assessed |
| Interested Party | Clause 4.2 | Any stakeholder who affects or is affected by your AIMS |
| AI Impact Assessment | Clause 6.1.4 / Annex A, A.5 | Structured evaluation of an AI system's effects on people and society |
| Harm | Clause 6.1.2 | Adverse outcome — physical, psychological, financial, discriminatory — from AI |
| AI Bias | Annex A, A.7.4 / Annex C | Systematic unfair discrimination in AI outputs |
| Competence | Clause 7.2 | Documented ability to perform AIMS responsibilities effectively |
| AI Supply Chain | Annex A, A.10 | Vendor and partner network involved in AI development or delivery |
| AI System Lifecycle | Annex A, A.6 | Full cradle-to-grave stages: design → deployment → monitoring → decommissioning |
| Continual Improvement | Clause 10.1 | Ongoing cycle of identifying and acting on opportunities to enhance the AIMS |
| Top Management | Clause 5.1 | Highest-level leadership accountable for AIMS governance |
| Scope | Clause 4.3 | Defined boundaries of what your AIMS covers |
How These Terms Connect: The ISO 42001 Governance Chain
Understanding individual terms is the first step. Understanding how they interconnect is what separates organizations that sail through certification from those that receive major nonconformities.
Here's the governance chain I walk every client through:
- Top management sets the AI policy and authorizes the scope of the AIMS
- Interested parties (stakeholders) inform the context and requirements the AIMS must address (clause 4.2)
- AI system inventory within the scope triggers AI risk assessments (clause 6.1.2) and AI impact assessments (Annex A, A.6.1)
- Risk and impact assessments consider intended use, reasonably foreseeable misuse, harm, and bias
- Controls from Annex A are selected and implemented to treat identified risks
- Competence requirements ensure people with AIMS responsibilities can fulfill them
- AI supply chain governance extends controls to external vendors
- The full AI system lifecycle is governed — not just the deployment moment
- Continual improvement (clause 10.1) closes the loop, using audits and performance data to refine the system
The ISO 42001:2023 governance chain — from top management policy through AI risk assessment, Annex A controls, and continual improvement — mirrors the Plan-Do-Check-Act (PDCA) cycle used across all ISO High Level Structure management systems, ensuring that AI governance is not a one-time exercise but an ongoing organizational discipline.
Common Terminology Mistakes I See in Audits
After 8+ years of AI governance work and a 100% first-time audit pass rate across my client portfolio at Certify Consulting, the definitional mistakes I see most often include:
- Conflating "AI system" with "AI model" — leading to scope gaps where the deployment infrastructure and human oversight mechanisms go unaddressed
- Defining "intended use" too narrowly — excluding use cases that are clearly foreseeable based on actual user behavior
- Treating "bias" as purely a data science problem — rather than a governance and ethics problem requiring documented controls and accountability
- Ignoring "reasonably foreseeable misuse" — organizations document only authorized uses and are blindsided when auditors probe adversarial scenarios
- Misapplying "interested party" — limiting the definition to direct users and forgetting affected communities, regulators, and supply chain partners
If your team is unsure about any of these distinctions before your certification audit, it's worth getting an expert review. Our ISO 42001 gap assessment service is specifically designed to catch these definitional gaps before an auditor does.
Building Your Internal ISO 42001 Glossary
One of the first deliverables I produce for clients is an organization-specific AI glossary — a document that maps ISO 42001 standard terms to the specific tools, systems, and processes used inside their organization.
For example: "AI system" in the standard maps to "the customer churn prediction model deployed in Salesforce" and "the document review tool used by legal." Making those mappings explicit prevents the ambiguity that causes audit failures.
Your internal glossary should:
- Reference the ISO 42001:2023 clause or Annex A control where each term appears
- Provide a plain-language definition alongside the standard definition
- Include examples from your actual AI system inventory
- Be reviewed at least annually and updated when new AI systems are onboarded
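If your team maintains compliance artifacts as code, the glossary structure above can be captured as structured data so it stays reviewable and diffable. A minimal Python sketch follows; the field names, example term, and mapped systems are illustrative, not prescribed by the standard:

```python
# Illustrative sketch of an organization-specific glossary entry. The field
# names and the example systems ("customer churn prediction model", etc.)
# are hypothetical; adapt them to your own AI system inventory.
from dataclasses import dataclass, field

@dataclass
class GlossaryEntry:
    term: str                 # the ISO 42001 / ISO 22989 term
    standard_ref: str         # clause or Annex A control where it appears
    standard_definition: str  # wording aligned with the standard
    plain_language: str       # your organization's plain-English gloss
    local_examples: list[str] = field(default_factory=list)  # real systems

glossary = [
    GlossaryEntry(
        term="AI system",
        standard_ref="ISO/IEC 22989",
        standard_definition=("Engineered system that generates outputs such as "
                             "predictions, recommendations, decisions, or content."),
        plain_language="Any tool that infers its outputs rather than following fixed rules.",
        local_examples=["customer churn prediction model", "legal document review tool"],
    ),
]

# Ahead of the annual review, flag terms not yet mapped to a real system.
unmapped = [e.term for e in glossary if not e.local_examples]
print(unmapped)
```

Keeping the entries in version control gives you a review history to show auditors and makes it obvious when a newly onboarded AI system has not yet been mapped to the standard's vocabulary.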
For further reading on building your AIMS from the ground up, see our ISO 42001 implementation guide.
Conclusion
ISO 42001:2023 is a standard built on precision. Every term in its vocabulary — from "AI system" to "reasonably foreseeable misuse" to "interested party" — carries a specific meaning that shapes your compliance obligations. Organizations that invest time in mastering this terminology before building their AIMS build stronger, more auditable systems.
As a consultant, I'll tell you plainly: the gap between a first-time audit pass and a string of nonconformities often comes down to whether the team used these terms correctly when designing their policies, risk processes, and controls.
If you'd like help ensuring your organization's AI governance framework uses the right language and meets the right requirements, Certify Consulting offers AIMS gap assessments, implementation support, and pre-audit readiness reviews.
Last updated: 2026-03-22
Jared Clark, JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC is the Principal Consultant at Certify Consulting and has guided 200+ organizations through ISO compliance programs with a 100% first-time audit pass rate.