Most organizations treat AI governance as a deployment problem. Get the model tested, get legal sign-off, push it live, and move on. The governance team declares victory. The risk register gets filed. And then nobody looks at the system again until something goes wrong.
That framing is wrong — and ISO 42001 is built around correcting it. The standard does not treat AI governance as a point-in-time approval gate. It treats it as a lifecycle responsibility that begins before a single line of code is written and continues until a system is fully decommissioned, its data disposed of, and its records closed out.
This matters because AI systems do not behave like traditional software. They drift. They degrade. Their outputs shift as the world changes around them, even when no one touches the underlying code. A model that performed well at deployment can cause real harm twelve months later — not because someone changed it, but because the environment it was designed for changed. ISO 42001 was written with that reality in mind.
This article walks through the full AI system lifecycle as ISO 42001 defines it — all seven stages from inception through retirement — and explains what your AI Management System (AIMS) must document, control, and evaluate at each one. It draws on ISO/IEC 22989:2022, the foundational AI concepts standard that ISO 42001 references, and on the clause and Annex A requirements that govern each stage.
Why AI Lifecycle Management Is Different
Traditional software has a predictable character: you define requirements, write code, test it, ship it, and patch bugs as they surface. The behavior of a deployed application is essentially fixed until someone modifies the source. The same input produces the same output, reliably and indefinitely.
AI systems work differently. A trained model encodes statistical patterns from historical data. When the real world diverges from those patterns — because user behavior shifts, because data distributions change, because the population the model was trained on no longer matches the population it serves — the model's outputs degrade without any modification to the system itself. This phenomenon is called model drift, and it is not an edge case. It is a predictable feature of any model deployed in a changing environment.
Beyond drift, AI systems carry risks at each lifecycle stage that traditional software governance is simply not designed to address: training data bias baked in at development, emergent behaviors that only appear at scale, outputs that cannot be audited the way deterministic code can, and third-party supply chain risks when models are sourced from external providers.
ISO/IEC 22989:2022, the AI concepts and terminology standard that ISO 42001 uses as its normative reference, defines a seven-stage lifecycle model specifically to capture these dynamics. ISO 42001 builds its operational requirements on that model. The implication is clear: an AIMS that only governs AI at deployment is incomplete by definition.
| Dimension | Traditional Software | AI System |
|---|---|---|
| Behavior after deployment | Fixed until modified | Can drift without modification |
| Input/output relationship | Deterministic | Probabilistic |
| Risk profile over time | Stable | Evolves as data distributions shift |
| Primary governance concern | Correctness at release | Ongoing performance and impact |
| Retirement complexity | Shutdown and archive | Data disposal, model purge, residual risk |
The ISO 42001 Lifecycle Framework: Seven Stages
ISO 42001:2023 addresses the complete AI system lifecycle through Clause 8 (Operation) and the Annex A.6 lifecycle controls. Combined with ISO/IEC 22989:2022's lifecycle model, this produces seven distinct stages that your AIMS must address:
- Inception — Scoping and risk framing before development begins
- Design and Development — Architecture decisions, data selection, model building
- Verification and Validation — Confirming the model was built correctly and solves the right problem
- Deployment — Controlled release into production
- Operation and Monitoring — Ongoing performance surveillance and incident management
- Re-evaluation — Triggered or scheduled reassessment of risk, performance, and continued suitability
- Retirement and Decommissioning — Planned, documented system shutdown
The primary operational requirements sit in Clause 8 (Operation), but the lifecycle is supported by Clause 6 (Planning — risk assessment and treatment), Clause 9 (Performance Evaluation — monitoring and review), and Clause 10 (Improvement — corrective action and continual improvement). Annex A controls, particularly A.5 (Assessing Impacts) and A.6 (AI System Lifecycle), provide the specific control objectives that map to each stage.
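As an illustration, the seven stages and their gate transitions can be modeled as a small state machine that an AI inventory could enforce. The standard does not prescribe a transition graph, so the rules below are assumptions — notably, re-evaluation can loop back to design and development (a substantial retrain restarts the lifecycle) or forward to retirement:

```python
from enum import Enum

class Stage(Enum):
    INCEPTION = "inception"
    DESIGN_DEVELOPMENT = "design_development"
    VERIFICATION_VALIDATION = "verification_validation"
    DEPLOYMENT = "deployment"
    OPERATION_MONITORING = "operation_monitoring"
    RE_EVALUATION = "re_evaluation"
    RETIREMENT = "retirement"

# Assumed stage-gate graph. Failed V&V sends a system back to
# development; re-evaluation can return it to operation, restart
# development, or trigger retirement.
ALLOWED = {
    Stage.INCEPTION: {Stage.DESIGN_DEVELOPMENT},
    Stage.DESIGN_DEVELOPMENT: {Stage.VERIFICATION_VALIDATION},
    Stage.VERIFICATION_VALIDATION: {Stage.DEPLOYMENT, Stage.DESIGN_DEVELOPMENT},
    Stage.DEPLOYMENT: {Stage.OPERATION_MONITORING},
    Stage.OPERATION_MONITORING: {Stage.RE_EVALUATION, Stage.RETIREMENT},
    Stage.RE_EVALUATION: {Stage.OPERATION_MONITORING,
                          Stage.DESIGN_DEVELOPMENT,
                          Stage.RETIREMENT},
    Stage.RETIREMENT: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a system to a new stage, rejecting transitions that skip a gate."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target
```

Encoding the graph in the inventory tooling makes gate-skipping (for example, deploying straight from development without a V&V record) a hard error rather than an informal lapse.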
Stage 1: Inception — Defining Scope and Risk Before You Build
The inception stage is where most governance programs fail before they even start. An organization decides to deploy an AI system — a hiring algorithm, a predictive maintenance model, a clinical decision support tool — and the first question asked is "how do we build it?" The governance question — "should we build it, and under what constraints?" — comes later, if at all.
ISO 42001 inverts that sequence. Clause 4 requires organizations to understand their context and the interests of affected parties before the AIMS can operate. Clause 6.1.2 requires a formal AI risk assessment process that identifies potential harms to individuals, groups, and society. Both of these apply at inception — before design begins.
Practically, this means the inception stage must produce documented answers to four questions:
- What is the intended purpose and scope of this AI system?
- Who are the affected stakeholders — including people who will be subject to the system's outputs, not just its operators?
- What is the risk profile? What harms could this system cause, and how severe and probable are those harms?
- Does this system belong in the AI inventory (also called the AI register)? Under ISO 42001, every AI system within the AIMS scope must be catalogued with its purpose, risk classification, data sources, and responsible owner.
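A minimal inventory entry covering the fields listed above might look like the following sketch. The field names, risk-tier scheme, and example values are illustrative assumptions, not prescribed by the standard:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One record in the AI inventory (AI register). Fields mirror the
    inception-stage questions: purpose, stakeholders, risk, ownership."""
    system_name: str
    purpose: str               # intended purpose and scope
    risk_classification: str   # e.g. "high" / "limited" / "minimal" -- scheme is organization-defined
    data_sources: list[str]
    owner: str                 # the accountable AI owner
    affected_stakeholders: list[str] = field(default_factory=list)

# Hypothetical entry for a hiring-algorithm example.
entry = InventoryEntry(
    system_name="resume-screening-v1",
    purpose="Rank inbound applications for recruiter review",
    risk_classification="high",
    data_sources=["ATS historical hiring decisions 2018-2023"],
    owner="head-of-talent",
    affected_stakeholders=["job applicants", "recruiters"],
)
```

Note that `affected_stakeholders` includes the people subject to the system's outputs, not just its operators — the distinction the second inception question insists on.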
The most common mistake at this stage is beginning development without any documented risk assessment. When an auditor asks for the risk documentation that justified a deployment decision, "we didn't have that process yet" is a major nonconformity against Clause 6.1.2 — regardless of how well the system performs.
Stages 2 & 3: Design, Development, Verification and Validation
These two stages are treated together here because in practice they iterate tightly — validation findings feed back into design — and they share a common governance burden: building evidence that the system was constructed responsibly and works as claimed.
Design and Development Controls
Annex A.6 (AI System Lifecycle) contains the controls that govern how AI systems are designed and built. The key obligations at this stage are:
Architecture decisions must be documented, including the rationale for model type selection and the known limitations of the chosen approach. If a model is inherently difficult to explain — a deep neural network, for example — that opacity itself is a risk that must be assessed and treated.
Annex A.7 (Data for AI Systems) governs training data. This includes data quality requirements, data provenance documentation, and bias analysis. If a training dataset over-represents or under-represents a particular demographic, that imbalance must be identified, documented, and addressed — or accepted as a known risk with a documented treatment decision. "We didn't check" is not a treatment option.
Bias testing is an explicit expectation at this stage. ISO 42001 does not define a single testing methodology, but it does require that testing protocols are documented, executed, and recorded — and that results are retained as evidence.
Verification Versus Validation
ISO 42001 distinguishes between verification (did we build the system correctly according to its specifications?) and validation (does the system actually solve the problem it was designed to solve, for the people it was designed to serve?). Both are required.
Verification is largely a technical exercise: did the model achieve its performance targets on the test dataset? Validation is broader: does the model perform acceptably across subgroups? Does it behave as expected in edge cases? Does it produce results that real users can act on responsibly?
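The subgroup check described above can be sketched as a comparison of per-group performance against the overall figure. The metric (accuracy), the tolerance, and the group names here are illustrative assumptions; your validation protocol would define its own:

```python
def validate_subgroups(overall_accuracy: float,
                       group_accuracy: dict[str, float],
                       max_gap: float = 0.05) -> list[str]:
    """Return the subgroups whose accuracy falls more than `max_gap`
    below the overall figure -- each needs a documented treatment
    decision before the stage gate can be passed."""
    return [group for group, acc in group_accuracy.items()
            if overall_accuracy - acc > max_gap]

# Hypothetical validation run for an example model.
failing = validate_subgroups(
    overall_accuracy=0.91,
    group_accuracy={"age_under_30": 0.92, "age_30_50": 0.90, "age_over_50": 0.83},
)
# age_over_50 has a 0.08 gap, above the 0.05 tolerance -> ["age_over_50"]
```

A non-empty result does not automatically block deployment — it forces the choice the standard requires: remediate the gap, or accept it as a documented known risk.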
Model cards — structured documentation of a model's intended use, performance characteristics, limitations, and bias evaluation results — are the practical output that captures both. An auditor reviewing your Clause 8 evidence will expect to see model cards or equivalent documentation for every system in scope.
Stage gates between development, V&V, and deployment must be defined. What must be true before a model is allowed to proceed to deployment? Who has authority to make that call? These decisions belong in documented procedures, not in informal conversations between engineers.
Stage 4: Deployment — The Controlled Handoff
Deployment is the stage most organizations have a process for — even if that process is informal. ISO 42001 formalizes it, and the bar is higher than most teams expect.
Clause 8.4 requires a formal AI system impact assessment before any new AI system is deployed, or before significant changes are made to an existing one. This assessment must evaluate potential effects on individuals, groups, and society — analogous in structure to a GDPR Data Protection Impact Assessment, but scoped specifically to AI-related harms. No pre-deployment impact assessment, no compliant deployment.
Beyond the impact assessment, a deployment decision should produce a documented memo or record covering: the scope and purpose of the deployment, the results of the V&V stage, the outcome of the impact assessment, who authorized the deployment, any conditions or constraints on use, and the monitoring plan that will govern the system in production.
The question of who authorizes deployment matters. ISO 42001 does not mandate a specific governance structure, but it does require that roles and responsibilities are documented (Clause 5.3). In practice, organizations pursuing certification typically designate an AI governance board, an AI lead, and a compliance sign-off authority. At minimum, the authorization must come from someone with the standing to accept the residual risk.
Staged rollouts — releasing to a limited population before full deployment — are good practice and align with the standard's spirit, but they must be treated as formal deployment events with their own documentation, not informal "pilots" that bypass governance controls. Integration with existing IT change management processes is also expected, particularly for organizations already certified under ISO 27001 or operating under ITIL frameworks.
Stage 5: Operation and Monitoring — Where Most Programs Break Down
This is the stage where ISO 42001 diverges most sharply from traditional software governance — and where the majority of AI governance programs fall apart. Deployment is treated as the finish line. Monitoring is treated as an afterthought. Six months in, nobody is watching.
ISO 42001 treats monitoring as a core operational requirement, not an optional best practice. Clause 9.1 requires organizations to determine what to monitor, how to monitor it, when to analyze results, and who is responsible. For AI systems specifically, this translates to a defined set of performance metrics, behavioral indicators, and incident tracking mechanisms that must be actively maintained.
Model Drift Detection
Model drift takes two forms. Data drift occurs when the statistical distribution of inputs shifts away from the training distribution — the model encounters inputs it was not trained to handle well. Concept drift occurs when the relationship between inputs and the correct outputs changes — the world has moved, but the model has not. Both require detection mechanisms: statistical tests on incoming data distributions, regular performance benchmarks against labeled ground truth, and anomaly detection on output distributions.
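One common data-drift signal — widely used in practice, though not mandated by ISO 42001 — is the Population Stability Index (PSI), which compares the binned distribution of a feature at training time against what the model sees in production. A self-contained sketch:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a training-time ("expected")
    sample and a production ("actual") sample of one numeric feature.
    Bin edges are taken from the expected sample's range; production
    values outside that range are clamped into the edge bins."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor at a small epsilon so empty bins don't produce log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

By a common industry rule of thumb — a convention, not a requirement of the standard — PSI below 0.1 is treated as stable, 0.1 to 0.25 as moderate shift, and above 0.25 as significant drift warranting a documented review.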
Behavioral Monitoring and Incident Management
Annex A.9 (Use of AI Systems) requires organizations to control how AI systems are actually used — ensuring they are not applied outside their approved scope or to populations they were not validated for. Monitoring must include incident logging: when a system produces an unexpected or potentially harmful output, that event must be recorded, investigated, and resolved through the corrective action process under Clause 10.2.
Alert thresholds should be defined in advance, not decided reactively after something goes wrong. What level of performance degradation triggers a review? What type of incident triggers an immediate suspension? These thresholds belong in documented operating procedures.
The practical recommendation here is to build a Model Operations (ModelOps) cadence into your AIMS from the start: a scheduled review cycle — monthly for high-risk systems, quarterly for lower-risk ones — where performance data is reviewed, incident reports are assessed, and a documented determination is made about whether the system remains within acceptable operating parameters.
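The cadence-plus-threshold logic above can be sketched as follows. The tier names, the 30/90-day cadences, and the accuracy floor are illustrative choices — your AIMS would define these per system, in advance, in a documented operating procedure:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ReviewPolicy:
    risk_tier: str         # "high" or "standard" -- illustrative tiers
    accuracy_floor: float  # breach triggers an immediate review

    @property
    def cadence(self) -> timedelta:
        # Monthly reviews for high-risk systems, quarterly otherwise.
        return timedelta(days=30 if self.risk_tier == "high" else 90)

def review_due(policy: ReviewPolicy, last_review: date,
               current_accuracy: float, today: date) -> bool:
    """A review is due either on schedule or immediately on a
    threshold breach -- whichever comes first."""
    return (today - last_review >= policy.cadence
            or current_accuracy < policy.accuracy_floor)
```

The point of encoding the policy is that the trigger is decided before the incident, not argued about after it.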
Stage 6: Re-evaluation — The Continuous Improvement Engine
Re-evaluation is the formal loop that connects operational experience back to governance decisions. It is governed primarily by Clause 10 (Improvement) and the Plan-Do-Check-Act (PDCA) cycle that runs through every ISO management system standard.
Re-evaluation can be triggered by a specific event — a significant incident, a material change in the model's operating environment, a regulatory update, or a threshold breach identified in monitoring. It can also be periodic, scheduled as part of the AIMS calendar regardless of whether any specific trigger has occurred. Both forms are legitimate; both are expected.
The distinction that matters most at this stage is between re-evaluation that results in minor adjustments and re-evaluation that determines a system needs to be substantially retrained or redesigned. The latter is not a simple tuning exercise — it effectively restarts the lifecycle from Stage 2, with all the documentation and governance controls that entails. Organizations often resist treating retraining as a new lifecycle event because it feels bureaucratic. That resistance is exactly the kind of governance failure ISO 42001 is designed to prevent.
Risk register updates are mandatory outputs of re-evaluation. The risks associated with a deployed AI system are not static — they change as the model's behavior evolves, as the population it serves changes, and as the regulatory and social context shifts. A risk register that was accurate at deployment but has never been updated is not a governance artifact; it is a false sense of security.
Stage 7: Decommissioning — The Stage Every Organization Skips
If there is one stage where AI governance programs are most systematically unprepared, it is retirement. Organizations plan carefully for deployment. They rarely plan for the end.
The consequences of unplanned decommissioning are not trivial. Training data used to build the model may be subject to retention obligations under GDPR or other data protection laws — or conversely, to deletion obligations under the right to erasure. Model artifacts stored in archives can be retrieved and redeployed without proper controls. Access permissions granted during operation may persist after the system is offline. Output logs may contain personal data that must be managed under data minimization principles.
ISO 42001's Annex A.6 lifecycle controls extend through decommissioning. The specific requirements at this stage include:
- Documented decommissioning decision: Who decided to retire the system, when, and why? This record must exist.
- Data disposition plan: What happens to training data, operational logs, and model artifacts? Which data must be retained (and for how long), and which must be deleted?
- Access revocation: All system access, API connections, and integration points must be formally closed. Undocumented integrations that continue to call a decommissioned model are a compliance risk.
- Stakeholder notification: Affected parties — users, downstream systems, third-party providers — must be informed of the retirement.
- GDPR and data protection intersections: If the system processed personal data, a data protection review of the decommissioning plan is typically required, covering both the right to erasure and data minimization obligations.
The practical tool for managing this stage is a decommissioning checklist — a documented procedure attached to each AI system's record in the AI inventory that specifies what must happen, in what order, before a system can be considered fully retired. Without it, decommissioning happens informally, incompletely, and without evidence.
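Such a checklist can be made machine-checkable, so a system cannot be marked retired until every step carries evidence. The step names below mirror the requirements listed above but are illustrative, not prescribed by the standard:

```python
from dataclasses import dataclass, field

# Checklist items mirroring the decommissioning requirements above.
DECOMMISSION_STEPS = [
    "decision_recorded",      # who decided, when, and why
    "data_disposition_done",  # retained vs deleted, per retention schedule
    "access_revoked",         # credentials, API keys, integration points
    "stakeholders_notified",  # users, downstream systems, providers
    "dp_review_complete",     # data protection review, if personal data
]

@dataclass
class DecommissionRecord:
    system_name: str
    completed: set[str] = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in DECOMMISSION_STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed.add(step)

    @property
    def fully_retired(self) -> bool:
        """A system counts as retired only when every step is evidenced."""
        return self.completed == set(DECOMMISSION_STEPS)
```

Attaching a record like this to each entry in the AI inventory turns "switched off informally" into a state the tooling simply will not accept.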
Governance Infrastructure That Spans All Seven Stages
No individual lifecycle stage is governable in isolation. The infrastructure that makes lifecycle management possible must exist across all seven stages simultaneously.
The AI inventory (or AI register) is the master record for every AI system within the AIMS scope. It must be maintained continuously, updated at every lifecycle stage gate, and reflect the current status of each system. An inventory last updated at deployment is not a governance tool — it is a historical document.
Roles and responsibilities must be assigned for each system across its full lifecycle: an AI owner who is accountable for governance from inception through decommissioning, a data steward responsible for training and operational data, a compliance officer who tracks regulatory requirements, and a risk owner who maintains the risk register. Siloed ownership — where the development team owns the model through deployment and nobody owns it afterward — is the structural failure that most commonly produces Stage 5 and Stage 7 breakdowns.
The Statement of Applicability (SoA) must map each Annex A control to the lifecycle stages where it applies. This mapping is not just an audit artifact — it is a practical tool for determining which controls must be active at any given point in a system's life.
Change control is the final cross-cutting requirement. Model updates, retraining events, and version changes are not routine software patches. Each one must be assessed for its impact on the system's risk profile, its performance characteristics, and its compliance with the original impact assessment. A model that has been retrained substantially on new data is, in meaningful governance terms, a different model — and must be treated accordingly.
Common Lifecycle Governance Failures
Across the organizations we work with at Certify Consulting, the same patterns of lifecycle governance failure appear repeatedly. They are worth naming explicitly:
- Skipping the inception risk assessment. Development starts before any governance structure is in place. By the time compliance enters the conversation, the architecture is fixed and the training data is already selected.
- Treating deployment as the finish line. Post-deployment monitoring is assigned to no one in particular, executed sporadically if at all, and never integrated into the AIMS review cycle.
- No documented decommissioning plan. Systems are switched off informally. Data is left wherever it was. Nobody knows what happened to the training corpus.
- Siloed lifecycle ownership. The development team, compliance team, and operations team each own different parts of the lifecycle with no single owner accountable for the whole thing.
- Monitoring that lapses after six months. Initial enthusiasm produces a well-designed monitoring dashboard. Within half a year, nobody is looking at it and the alert thresholds have never been reviewed.
How to Build Your AI Lifecycle Management Program
If you are building an AI lifecycle management program from scratch or closing gaps in an existing one, the path forward is straightforward:
- Inventory all current AI systems and map their lifecycle stage. You cannot govern what you have not catalogued. Start with a complete register of every AI system in scope, and for each one, determine where it currently sits in the seven-stage lifecycle.
- Identify lifecycle gaps using the ISO 42001 requirements. For each system, assess which stage-gate documentation exists and which is missing. A system in Stage 5 with no monitoring plan and no impact assessment on record has known gaps against Clauses 8.4 and 9.1.
- Build documentation templates for each stage gate. Standardize what must be produced at each transition: an inception risk brief, an impact assessment form, a deployment authorization record, a monitoring plan, a re-evaluation report, and a decommissioning checklist.
- Assign lifecycle owners for every system. One person — not a committee, not a team — must be accountable for each system's governance from inception through retirement.
- Schedule re-evaluation cycles and decommissioning reviews. Put dates on the calendar. High-risk systems should have quarterly review triggers. Every system should have an annual decommissioning review that asks: does this system still belong in production?
If you are preparing for ISO 42001 certification, a lifecycle gap assessment is one of the most valuable pre-audit investments you can make. For support building your lifecycle governance program, contact Jared Clark at Certify Consulting for a complimentary consultation scoped to your organization's AI portfolio.
Conclusion
ISO 42001 treats AI governance as a permanent operational responsibility — not a certification exercise you complete once and file away. The standard's lifecycle framework makes a claim that is both simple and demanding: the hardest governance work begins after deployment, not before it. A model that went live without incident can still cause harm a year later. A system that was well-governed at launch can drift into compliance failure if no one is watching.
Organizations that govern AI well understand this. They build lifecycle infrastructure before they need it. They assign owners, set monitoring cadences, and plan for decommissioning on day one. They treat re-evaluation as a management discipline, not an interruption.
If your current AI governance program treats deployment as the end of the story, it has a structural gap that no amount of policy documentation will close. Lifecycle management is not a feature of a mature AIMS — it is the foundation of one.
To discuss where your organization stands and what a compliant lifecycle management program would look like for your AI portfolio, schedule a free consultation with Jared Clark.
Last updated: April 1, 2026
Jared Clark
Principal Consultant, Certify Consulting
Jared Clark is the founder of Certify Consulting, helping organizations achieve and maintain compliance with international standards and regulatory requirements. He has guided 200+ clients through AI governance engagements and maintains a 100% first-time audit pass rate across ISO 42001 implementations.