Every ISO 42001 implementation I've led across 200+ client engagements has started with the same uncomfortable question: "Do you actually know what AI systems your organization is running right now?"
The answer is almost always some variation of "mostly." A data science team has a few models in production. Marketing deployed a generative AI tool last quarter. IT is piloting an AI-driven ticketing system. Legal is quietly using an AI contract reviewer. And somewhere in Finance, a spreadsheet model with an embedded ML component has been running for two years without anyone formally acknowledging it as AI.
This isn't negligence — it's the natural consequence of how rapidly AI has been adopted across every business function. But when it comes to ISO 42001:2023 compliance, "mostly" isn't good enough. An AI inventory — a structured, living catalog of every AI system your organization develops, deploys, or uses — is the single most foundational artifact in any ISO 42001 management system. Without it, you cannot conduct meaningful risk assessments, assign accountability, demonstrate governance, or satisfy an auditor.
This article provides a complete, practical framework for building your AI inventory from the ground up.
Why the AI Inventory Is Non-Negotiable Under ISO 42001
ISO 42001:2023 doesn't use the term "AI inventory" in isolation, but its requirements make one structurally inevitable. Clause 6.1.2 requires organizations to identify risks associated with AI systems. Clause 5.3 requires roles and responsibilities to be assigned. Clause 8.1 requires operational planning and control of AI-related processes. Clause 9.1 requires performance evaluation. You cannot satisfy any of these clauses without first knowing what AI systems exist.
Think of the AI inventory the way a traditional IT department thinks of a Configuration Management Database (CMDB). It's not the end goal — it's the enabling infrastructure for everything else. In fact, according to a 2024 McKinsey survey, 74% of organizations that have deployed AI at scale report having experienced at least one AI-related incident caused by a system that wasn't formally tracked or governed. That statistic should set off alarm bells for any compliance leader.
The regulatory environment reinforces this urgency. The EU AI Act, which entered into force in 2024 and becomes enforceable in phases over the following years, requires organizations to maintain technical documentation for each high-risk AI system placed on the market or put into service. Organizations that align their AI inventory structure with both ISO 42001 and EU AI Act requirements reduce documentation duplication by an estimated 40–60%, according to industry analysis. Building your inventory once — and building it right — pays dividends across multiple compliance frameworks.
What Qualifies as an "AI System" for Inventory Purposes?
Before you can build a complete inventory, your team needs a shared, unambiguous definition of what counts as an AI system. This is where many organizations stumble. They either cast the net too wide (inventorying every formula in Excel) or too narrow (only listing the three models the data science team formally deployed).
ISO 42001:2023 adopts the definition from ISO/IEC 22989, which defines an AI system as an "engineered system that generates outputs such as content, forecasts, recommendations, or decisions for a given set of human-defined objectives."
For practical inventory purposes, I recommend applying a three-part test:
- Does the system use a trained model, algorithm, or rule set derived from data? (Rather than purely deterministic, hard-coded logic)
- Does its output influence a decision, action, or communication? (Even indirectly, e.g., a recommendation a human reviews before acting)
- Is it used repeatedly as part of a business process? (Not a one-off analysis)
If the answer to all three is yes, it belongs in your inventory. Common categories that organizations undercount include:
- Third-party SaaS tools with embedded AI (e.g., Salesforce Einstein, Workday's AI features, HubSpot's predictive lead scoring)
- AI APIs consumed by internal applications (e.g., OpenAI, Google Vertex AI, AWS Bedrock integrations)
- Legacy statistical models still in production (logistic regression models built in 2017 that still approve or deny customer requests)
- Robotic Process Automation (RPA) with AI components (document classification, intelligent data extraction)
- AI features within productivity tools (Microsoft Copilot, Google Workspace AI features used for business processes)
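To keep the three-part test consistent across reviewers, some teams encode it as a simple screening checklist. The sketch below is illustrative only; the class and field names are my own assumptions, not terms from ISO 42001 or any particular tool.

```python
from dataclasses import dataclass

@dataclass
class CandidateSystem:
    """A system being screened for possible inclusion in the AI inventory."""
    name: str
    uses_data_derived_model: bool      # trained model, algorithm, or rule set derived from data
    output_influences_decisions: bool  # output feeds a decision, action, or communication
    used_in_recurring_process: bool    # part of a repeated business process, not a one-off

def qualifies_for_inventory(system: CandidateSystem) -> bool:
    """Apply the three-part test: all three answers must be yes."""
    return (
        system.uses_data_derived_model
        and system.output_influences_decisions
        and system.used_in_recurring_process
    )

# Example: an embedded lead-scoring feature in a licensed SaaS platform
lead_scoring = CandidateSystem("Predictive lead scoring", True, True, True)
print(qualifies_for_inventory(lead_scoring))  # True, so it belongs in the inventory
```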
The 8 Core Data Fields Every AI Inventory Must Capture
The depth of your inventory directly determines the quality of your downstream risk management. Based on my work across regulated industries — including financial services, healthcare, and manufacturing — I've identified eight fields that every AI inventory record must contain to support ISO 42001 compliance.
| Field | Description | Why It Matters for ISO 42001 |
|---|---|---|
| System ID | Unique identifier (e.g., AI-2024-001) | Traceability across all documents and assessments |
| System Name & Description | Plain-language name and what the system does | Scope clarity; auditor communication |
| Business Owner | Named individual accountable for the system | Clause 5.3 (roles and responsibilities) |
| Technical Owner | Named individual responsible for maintenance | Clause 8.1 (operational control) |
| AI Risk Classification | Minimal / Limited / High / Unacceptable per EU AI Act tiers | Clause 6.1.2 (risk assessment input) |
| Data Inputs & Sources | What data the system consumes, including PII flags | Privacy and bias risk assessment |
| Decision/Output Type | What the system produces and how it's used | Impact assessment; human oversight level |
| Third-Party or In-House | Whether the system is vendor-supplied or internally built | Annex A control A.10 (third-party and supplier relationships) |
These eight fields form the minimum viable inventory record. As your AIMS (AI Management System) matures, you'll extend the inventory to include fields for model version history, retraining schedules, bias testing results, incident history, and audit findings linked to specific systems.
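As a rough illustration of how the eight core fields translate into a record structure, here is a minimal sketch in Python. The field names and RiskTier values are assumptions for illustration; adapt them to whatever GRC platform or spreadsheet actually holds your inventory.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class RiskTier(Enum):
    MINIMAL = "Minimal"
    LIMITED = "Limited"
    HIGH = "High"
    UNACCEPTABLE = "Unacceptable"

@dataclass
class AIInventoryRecord:
    """Minimum viable inventory record covering the eight core fields."""
    system_id: str                 # unique identifier, e.g. "AI-2024-001"
    name: str                      # plain-language system name
    description: str               # what the system does
    business_owner: str            # named individual accountable (Clause 5.3)
    technical_owner: str           # named individual maintaining it (Clause 8.1)
    risk_classification: RiskTier  # EU AI Act tier (risk assessment input)
    data_inputs: List[str]         # data the system consumes
    contains_pii: bool             # PII flag for privacy and bias assessment
    output_type: str               # what it produces and how the output is used
    third_party: bool              # vendor-supplied (True) or built in-house (False)
```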
Step-by-Step: How to Build Your AI Inventory
Step 1: Establish Scope and Ownership (Week 1–2)
Begin by defining the organizational scope of your AIMS — which business units, geographies, and operations are included. This mirrors the scope-setting exercise required under ISO 42001 clause 4.3. Once scope is set, appoint an AI Inventory Owner — typically someone in IT Governance, Risk, or a dedicated AI Ethics function — who is accountable for maintaining the registry's accuracy and completeness.
Critically, this person does not need to be the technical expert on every AI system. Their job is governance and coordination, not model development.
Step 2: Issue a Discovery Survey Across Business Units (Week 2–4)
The most effective discovery mechanism I've used is a structured self-declaration survey issued to every business unit leader and IT function head. The survey asks three things:
- What AI systems or tools does your team currently use?
- What AI features are enabled in software your team licenses?
- Are there any models, algorithms, or AI-driven automations your team has built or commissioned in the last three years?
Pair the survey with a 30-minute briefing explaining what qualifies as AI for inventory purposes — using the three-part test above. Without this briefing, survey response quality degrades significantly. In my experience, organizations that skip the briefing miss an average of 35–40% of their actual AI footprint in the initial discovery pass.
Step 3: Supplement with Technical Discovery (Week 3–5)
Survey-based discovery alone will always have gaps. Complement it with a technical discovery pass:
- Cloud infrastructure audit: Review your AWS, Azure, or GCP environments for active AI/ML service invocations, SageMaker endpoints, Azure AI Services calls, or Vertex AI deployments.
- Software licensing review: Cross-reference your software asset management tool against known AI-enabled SaaS platforms.
- API traffic analysis: Review API gateway logs for calls to known AI providers (OpenAI, Anthropic, Cohere, Google AI, etc.).
- Data pipeline review: Audit ETL and data pipeline outputs to identify feeds into model-serving endpoints.
This technical pass is particularly important for organizations with decentralized IT environments where shadow AI adoption is common. A 2023 Gartner report found that 41% of employees have used AI tools at work without explicit approval from IT or management, a figure that underscores the real risk of incomplete inventories.
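A simple way to start the API traffic analysis is to scan an exported gateway log for calls to well-known AI provider hostnames. The sketch below assumes a CSV export with a destination_host column; both the column name and the provider list are assumptions you will need to adapt to your own gateway.

```python
import csv
from collections import Counter

# Hostnames of well-known AI API providers; extend this to match your environment.
KNOWN_AI_PROVIDERS = {
    "api.openai.com",
    "api.anthropic.com",
    "api.cohere.com",
    "generativelanguage.googleapis.com",
}

def scan_gateway_log(path: str) -> Counter:
    """Count outbound calls to known AI providers in an exported gateway log (CSV)."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # 'destination_host' is a placeholder column name; adjust it to your export format.
            host = row.get("destination_host", "")
            if host in KNOWN_AI_PROVIDERS:
                hits[host] += 1
    return hits

# Any non-zero count is a lead for the inventory team to trace back to a business unit.
# print(scan_gateway_log("gateway_export.csv"))
```

Every hit still needs human follow-up: the goal of the scan is to generate leads for the discovery survey, not to populate the inventory automatically.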
Step 4: Classify Each System by Risk Tier (Week 5–6)
Once the initial inventory is populated, each system must be classified by risk. ISO 42001 clause 6.1.2 requires a risk assessment process, and the EU AI Act provides a globally recognized tiering framework that integrates cleanly:
- Unacceptable Risk: AI systems that pose a clear threat to safety, livelihoods, or rights (prohibited under EU AI Act Article 5)
- High Risk: AI used in critical infrastructure, employment decisions, credit scoring, law enforcement, education, and similar domains (subject to strict conformity assessment)
- Limited Risk: AI with specific transparency obligations (chatbots, deepfakes)
- Minimal Risk: Low-risk applications (spam filters, AI-enabled video games, basic recommendation engines)
Your inventory should capture this classification for every system. For in-house and custom-built AI, classification requires a structured impact assessment. For third-party tools, your vendor management process (ISO 42001 Annex A, control A.10 on third-party and supplier relationships) should require vendors to provide their own EU AI Act classification as part of procurement due diligence.
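If it helps to seed the inventory before the formal impact assessments are complete, a provisional tier can be recorded from the use-case domain alone. The keyword mapping below is purely illustrative and is not a substitute for a structured impact assessment or legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Unacceptable"
    HIGH = "High"
    LIMITED = "Limited"
    MINIMAL = "Minimal"

# Illustrative mappings only; the real classification comes from an impact assessment.
HIGH_RISK_DOMAINS = {"employment", "credit scoring", "critical infrastructure",
                     "education", "law enforcement"}
TRANSPARENCY_DOMAINS = {"chatbot", "deepfake", "content generation"}

def provisional_tier(use_case_domain: str) -> RiskTier:
    """Assign a provisional EU AI Act tier to seed the inventory record."""
    domain = use_case_domain.strip().lower()
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if domain in TRANSPARENCY_DOMAINS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(provisional_tier("Credit scoring"))  # RiskTier.HIGH, pending formal assessment
```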
Step 5: Assign Accountability and Human Oversight Level (Week 6–7)
For each inventoried system, document two accountability dimensions:
- Organizational accountability: The named business owner responsible for the system's outputs and their business consequences.
- Human oversight level: One of three levels — Full Automation (system acts without human review), Human-in-the-Loop (human reviews before action), or Human-on-the-Loop (human can intervene but doesn't review every output).
Human oversight level is a direct input into your risk treatment decisions under ISO 42001. High-risk AI systems operating at Full Automation will require risk treatment — either increasing the oversight level, implementing additional controls, or both.
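The oversight level and risk tier together drive that treatment flag. A minimal sketch, assuming the same illustrative tier labels used earlier:

```python
from enum import Enum

class OversightLevel(Enum):
    FULL_AUTOMATION = "Full Automation"      # system acts without human review
    HUMAN_IN_THE_LOOP = "Human-in-the-Loop"  # human reviews before action
    HUMAN_ON_THE_LOOP = "Human-on-the-Loop"  # human can intervene, no per-output review

def needs_risk_treatment(risk_tier: str, oversight: OversightLevel) -> bool:
    """Flag the combination called out above: a high-risk system running fully automated."""
    return risk_tier == "High" and oversight is OversightLevel.FULL_AUTOMATION

print(needs_risk_treatment("High", OversightLevel.FULL_AUTOMATION))  # True
```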
Step 6: Integrate the Inventory into Your Document Control System (Week 7–8)
Your AI inventory is a controlled document under ISO 42001 clause 7.5. It must be:
- Version-controlled with a clear change history
- Reviewed and approved by a designated authority (e.g., AI Governance Committee or CISO)
- Accessible to those who need it (auditors, risk owners, compliance team)
- Protected from unauthorized modification
Most organizations house the inventory in a GRC platform, a SharePoint-based document management system, or a dedicated AI governance tool. The format matters less than the governance around it. A well-governed spreadsheet outperforms a poorly governed SaaS tool every time.
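One way to make the change history concrete, whatever tool holds the inventory, is to record each approved revision as a structured entry. The structure and example values below are hypothetical and only sketch what "version-controlled with a clear change history" can look like in practice.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class InventoryChange:
    """One row in the inventory's change history (Clause 7.5 documented information)."""
    version: str       # e.g. "1.4"
    changed_on: date
    changed_by: str
    approved_by: str   # designated authority, e.g. the AI Governance Committee chair
    summary: str       # what changed and why

# Hypothetical entries for illustration only.
change_log = [
    InventoryChange("1.3", date(2025, 11, 2), "Inventory Owner", "AI Governance Committee",
                    "Added AI-2025-014 (vendor contract-review tool)"),
    InventoryChange("1.4", date(2026, 1, 15), "Inventory Owner", "AI Governance Committee",
                    "Decommissioned AI-2023-006 after annual review"),
]
```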
Step 7: Establish a Maintenance Cadence (Ongoing)
An AI inventory that isn't maintained is worse than useless — it creates a false sense of completeness. Establish three maintenance mechanisms:
- Event-triggered updates: Any time a new AI system is procured, deployed, or decommissioned, the inventory must be updated within a defined SLA (I recommend 10 business days).
- Quarterly self-declaration refresh: Re-issue a lightweight version of the discovery survey to business unit leaders to catch anything new.
- Annual full review: A comprehensive audit of the entire inventory, including validation that all entries are still accurate and active.
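To support the annual full review, it also helps to flag entries that have drifted past their review date. A minimal sketch, assuming each inventory record stores a last_reviewed date:

```python
from datetime import date, timedelta
from typing import Optional

ANNUAL_REVIEW_WINDOW = timedelta(days=365)

def overdue_for_review(last_reviewed: date, today: Optional[date] = None) -> bool:
    """True if an inventory entry has missed its annual full review."""
    today = today or date.today()
    return today - last_reviewed > ANNUAL_REVIEW_WINDOW

# Example: an entry last reviewed in early 2025 is flagged during an April 2026 review cycle.
print(overdue_for_review(date(2025, 1, 10), today=date(2026, 4, 1)))  # True
```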
Common Pitfalls to Avoid
Pitfall 1: Treating the inventory as a one-time project. AI proliferates continuously. Organizations that build a great inventory in Year 1 and neglect it in Year 2 face non-conformances in their surveillance audits. Build the maintenance cadence before you finish the initial build.
Pitfall 2: Confusing the inventory with a risk register. The inventory is a catalog of what exists. The risk register documents the risks associated with what exists. They are linked but distinct. Conflating them creates a document that does neither job well.
Pitfall 3: Excluding vendor AI. ISO 42001 Annex A (control A.10) explicitly addresses relationships with third parties, including suppliers of AI systems. If you use a third-party AI tool, it belongs in your inventory. You may not be responsible for the model itself, but you are responsible for how it's used in your context.
Pitfall 4: Over-engineering the initial build. The goal of your first inventory pass is completeness, not perfection. An inventory with 40 systems at five fields each is far more valuable than an inventory with 10 systems at 50 fields each. Start lean, expand over time.
How the AI Inventory Connects to the Broader AIMS
The AI inventory doesn't stand alone — it serves as the spine of your entire AI Management System. Here's how it connects to other key AIMS components:
| AIMS Component | Dependency on AI Inventory |
|---|---|
| Risk Assessment (Clause 6.1.2) | Inventory defines the scope and subjects of every risk assessment |
| Roles & Responsibilities (Clause 5.3) | Inventory identifies which systems need assigned owners |
| Supplier Management (Annex A, A.10) | Inventory flags third-party AI systems requiring vendor oversight |
| Incident Management (Clause 10.2) | Incidents must be linked to a specific inventoried system |
| Performance Monitoring (Clause 9.1) | KPIs and monitoring schedules are set at the system level |
| Internal Audit (Clause 9.2) | Audit scope is drawn from the inventory |
| Management Review (Clause 9.3) | Summary statistics from the inventory inform leadership decisions |
Every major process in ISO 42001 traces back to a specific AI system — and that AI system must be in your inventory. This is why I tell every client: build the inventory first, build everything else second.
Getting External Help: When to Bring in a Consultant
Building a first-generation AI inventory is straightforward in concept but operationally demanding. It requires cross-functional coordination, technical discovery skills, regulatory knowledge, and document governance — all at the same time. For organizations without dedicated AI governance staff, this is often the point where implementation stalls.
At Certify Consulting, we typically complete a full AI inventory build — survey design, facilitated discovery workshops, technical review, risk classification, and document control setup — in four to six weeks for mid-sized organizations. Our 100% first-time audit pass rate across more than 200 client engagements is built on the foundation of getting this step right before moving to anything else.
If you're building your AIMS from the ground up, our ISO 42001 implementation services include a structured AI inventory methodology that has been validated across regulated industries including financial services, healthcare, and technology.
Summary: Your AI Inventory Action Plan
- Define what counts as AI using the three-part test
- Appoint an AI Inventory Owner before you begin discovery
- Issue a structured discovery survey to all business unit leaders
- Supplement with technical discovery via cloud, licensing, and API audits
- Classify each system by risk tier using the EU AI Act framework
- Assign named owners and human oversight levels to every system
- Place the inventory under document control per ISO 42001 clause 7.5
- Establish event-triggered, quarterly, and annual maintenance cycles
The AI inventory is not the most glamorous artifact in an AIMS — but it is unquestionably the most important. Organizations that invest the time to build it thoroughly, govern it rigorously, and maintain it continuously create a compliance infrastructure that scales with their AI ambitions rather than chasing after them.
Last updated: 2026-04-05
Jared Clark
Principal Consultant, Certify Consulting
Jared Clark is the founder of Certify Consulting, helping organizations achieve and maintain compliance with international standards and regulatory requirements.