Your Company Uses AI — But Who's Responsible When It Goes Wrong?
By Jared Clark, JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC | Principal Consultant, Certify Consulting
Every week I talk to executives who are enthusiastic about deploying AI, and quietly terrified about what happens when it fails. That fear is well-founded. Gartner has estimated that as many as 85% of AI projects fail to deliver expected outcomes or produce unintended negative consequences. Yet in most organizations I encounter, no single person, team, or documented process definitively owns AI risk.
That gap isn't just an operational inconvenience. It's a legal and reputational liability that regulators are increasingly willing to exploit.
The EU AI Act entered into force in August 2024 and reaches full applicability for most high-risk AI systems in August 2026. It imposes fines of up to €35 million or 7% of global annual turnover, whichever is higher, for violations involving prohibited AI practices. The U.S. is moving in the same direction, with the FTC, EEOC, and CFPB all issuing AI-specific enforcement guidance. The question is no longer whether someone will be held accountable when your AI goes wrong. The question is who, and whether your organization has answered that question before a regulator does it for you.
This guide lays out a practical, standards-aligned framework for assigning AI accountability in your organization, with specific reference to ISO 42001:2023, the international standard for AI management systems.
Why AI Accountability Is Uniquely Complicated
Traditional risk ownership is relatively straightforward. If a product fails, the product team owns it. If a contract goes sideways, legal owns it. AI breaks that model in three important ways.
1. AI decisions are often opaque. Even engineers who built the system may not be able to explain why a specific output was generated. This makes post-incident attribution genuinely difficult.
2. AI systems span multiple functions. A single AI deployment typically involves IT (infrastructure), data science (model development), legal (compliance), HR or operations (end users), procurement (third-party vendors), and senior leadership (strategic authorization). When something goes wrong, every one of those groups can point at another.
3. AI harm can be diffuse and delayed. A biased hiring algorithm may discriminate against thousands of applicants over months before anyone notices. Unlike a product recall, there's no single moment of failure — just a gradual accumulation of harm that's hard to trace back to a decision-maker.
These dynamics are exactly why ISO 42001:2023 was developed — and why it represents the most rigorous framework currently available for organizations trying to establish clear AI governance structures.
What ISO 42001:2023 Says About AI Accountability
ISO 42001:2023 is the world's first international standard specifically designed for AI management systems. It doesn't just address technical performance — it explicitly requires organizations to define roles, responsibilities, and authorities for AI governance.
Clause 5.3 requires top management to assign and communicate responsibilities and authorities for AI-relevant roles throughout the organization. This isn't optional language — it's a mandatory requirement for conformance.
Clause 6.1.2 requires organizations to assess AI-specific risks, including risks arising from intended use, foreseeable misuse, and systemic impacts on affected parties. Critically, this risk assessment must be owned — not just documented.
Clause 8.4 addresses AI system impact assessments and requires that organizations evaluate potential harms to individuals and society before deploying AI systems. Again, someone has to be accountable for conducting, reviewing, and acting on that assessment.
Annex A Control A.2.2 specifically calls for the establishment of an AI policy, and Control A.3.2 requires that AI roles and responsibilities be defined and allocated, making explicit who is responsible for AI governance at the organizational level.
The standard doesn't prescribe a single organizational structure. But it requires that whatever structure you choose be documented, communicated, and actually functioning. In my experience with 200+ clients across industries, and a 100% first-time audit pass rate at Certify Consulting, the organizations that succeed with ISO 42001 certification treat accountability as a structural design problem, not a policy checkbox.
The Four Accountability Layers Every AI-Using Organization Needs
Based on ISO 42001 requirements and practical implementation experience, I recommend structuring AI accountability across four distinct layers.
Layer 1: Executive Sponsorship (The "AI Owner")
Someone in the C-suite must own AI risk at the enterprise level. In larger organizations, this is increasingly formalized as a Chief AI Officer (CAIO) or Chief Responsible AI Officer role. In smaller organizations, it typically falls to the CTO, the Chief Risk Officer, or the CEO directly.
This person is accountable for:
- Approving the organizational AI policy (ISO 42001 clause 5.2)
- Ensuring adequate resources for AI governance (clause 5.1)
- Receiving escalated AI risk reports
- Being the named accountable party in regulatory filings
This is not a figurehead role. Under the EU AI Act, deployers of high-risk AI systems must assign human oversight to specific, competent individuals. Under emerging U.S. state laws (Colorado SB 24-205, effective in 2026, is the most comprehensive example), algorithmic decision-making systems used in consequential decisions require documented human oversight.
Layer 2: The AI Governance Committee
No single executive can manage AI risk across every business function. ISO 42001's clause 5.3 contemplates a cross-functional governance structure, and best practice consistently points to a formal AI Governance Committee (or AI Ethics Board) that includes:
- Legal/Compliance (regulatory exposure, liability)
- IT/Security (infrastructure risk, data integrity)
- Data Science/ML Engineering (technical risk, model performance)
- HR (workforce impact, hiring/performance AI use)
- Business Unit Representatives (operational context)
- Privacy/Data Protection Officer (where applicable under GDPR or state privacy laws)
This committee should meet at a defined cadence — monthly for active AI deployments, quarterly at minimum — and should maintain documented minutes that demonstrate active governance. Auditors will ask for this evidence.
Layer 3: The AI System Owner
For each individual AI system or application deployed, there should be a designated AI System Owner — typically a business unit leader or product manager who is accountable for that system's performance, compliance, and impact.
The AI System Owner is responsible for:
- Maintaining the system's risk assessment under clause 6.1.2
- Ensuring the impact assessment under clause 8.4 is current
- Monitoring ongoing system performance and drift
- Escalating anomalies to the AI Governance Committee
- Managing the system's vendor relationships if third-party AI is involved
This is the role most organizations are missing. They have general AI policies but no one specifically accountable for the behavior of their accounts-payable automation tool or their customer churn prediction model.
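To make the gap concrete, here is a minimal sketch of what a per-system registry with named owners could look like. This structure is an illustrative assumption, not something ISO 42001 prescribes; the field names, example system, and owner are all hypothetical. The point is that every record carries exactly one accountable person.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative schema)."""
    name: str                       # e.g., "customer churn prediction model"
    owner: str                      # the named AI System Owner, never a team alias
    risk_tier: str                  # e.g., "high", "limited", "minimal"
    third_party_vendor: str | None  # vendor name if procured, else None
    last_risk_assessment: date      # clause 6.1.2 evidence
    last_impact_assessment: date    # clause 8.4 evidence

registry = [
    AISystemRecord(
        name="customer churn prediction model",
        owner="Dana Whitfield (Director, Customer Success)",  # hypothetical person
        risk_tier="limited",
        third_party_vendor=None,
        last_risk_assessment=date(2026, 1, 15),
        last_impact_assessment=date(2026, 1, 15),
    ),
]
```

A spreadsheet with the same columns works just as well; the discipline is that the owner field names a person, not a department.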
Layer 4: End Users and Operators
ISO 42001 and the EU AI Act both recognize that the humans operating AI systems bear responsibility for appropriate use. Your AI governance framework must include:
- Documented acceptable use policies for AI tools
- Training requirements for employees who use AI in decision-making
- Escalation procedures when AI output seems wrong or unexpected
- Prohibition on using AI outputs without appropriate human review in high-stakes decisions
AI Accountability Across Common Business Functions
| Business Function | Typical AI Use Case | Primary Accountability Owner | Key Risk Category |
|---|---|---|---|
| Human Resources | Resume screening, performance scoring | CHRO + AI System Owner | Discrimination, EEOC exposure |
| Finance | Credit decisioning, fraud detection | CFO + Compliance | Fair lending, CFPB exposure |
| Customer Service | Chatbots, sentiment analysis | CCO + IT | Consumer protection, FTC exposure |
| Marketing | Personalization, ad targeting | CMO + Privacy Officer | Data privacy, FTC/GDPR exposure |
| Healthcare | Diagnostic assistance, triage | Chief Medical Officer / Medical Director | Patient safety, FDA oversight |
| Legal/Contracts | Contract review, e-discovery | General Counsel | Unauthorized practice, privilege waiver |
| Supply Chain | Demand forecasting, vendor selection | COO + Procurement | Operational risk, vendor liability |
| Cybersecurity | Threat detection, anomaly detection | CISO | Security incident liability |
Third-Party AI: The Accountability Gap Most Companies Ignore
Here's a scenario I see constantly: a company deploys a SaaS platform that includes AI features — a CRM that scores leads, an HR platform that flags performance issues, a financial tool that generates forecasts. The vendor built the AI. The company just uses it.
Who's responsible when that AI discriminates against a job applicant or produces a materially wrong financial forecast?
Under ISO 42001, and in particular Annex A's third-party controls (Control A.10), organizations are responsible for AI systems they deploy regardless of whether those systems were developed internally or procured from third parties. The EU AI Act reaches the same conclusion: deployers, the companies using AI, bear legal accountability for the AI they deploy, even if a vendor built it.
This means your vendor contracts need to address:
- What data the AI model was trained on and how bias was assessed
- What performance metrics the vendor guarantees
- What audit rights you have over the model
- Who bears liability for AI-caused harm
- How the vendor will notify you of material model changes
A 2023 McKinsey survey found that only 21% of organizations with formal AI risk programs had extended that governance to their third-party AI vendors. That is a significant exposure that ISO 42001-aligned organizations close systematically.
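One way to close that gap systematically is to treat the five contract provisions above as a hard checklist that procurement runs against every AI vendor. A minimal sketch follows; the clause labels are my own shorthand for the list above, not standard contract language.

```python
# The five contract provisions above, expressed as a machine-checkable checklist.
REQUIRED_AI_CLAUSES = {
    "training_data_and_bias_assessment",
    "guaranteed_performance_metrics",
    "model_audit_rights",
    "liability_for_ai_caused_harm",
    "material_model_change_notification",
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Return which required AI provisions a vendor contract lacks."""
    return REQUIRED_AI_CLAUSES - contract_clauses

# Example: a pre-2023 contract that only grants audit rights.
print(missing_clauses({"model_audit_rights"}))  # four gaps remain
```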
When AI Goes Wrong: The Incident Response Accountability Chain
Even well-governed AI systems fail. What separates organizations that manage AI incidents well from those that face regulatory action and reputational damage is having a pre-defined accountability chain for AI incidents.
ISO 42001 clause 10.2 requires documented nonconformity and corrective action processes. Applied to AI incidents, this means:
Step 1: Detection and Triage (AI System Owner)
The AI System Owner or end users identify anomalous AI behavior or a harmful output. Initial triage determines whether this is a minor performance issue or a potential harm event.
Step 2: Escalation (AI Governance Committee)
Harm events, meaning any output that has caused or could cause harm to individuals, the organization, or third parties, are escalated to the AI Governance Committee within a defined timeframe (24 to 48 hours is standard).
Step 3: Containment Decision (Executive Sponsor)
The executive AI owner, informed by the governance committee, decides whether to suspend, modify, or continue operating the AI system pending investigation.
Step 4: Root Cause Analysis and Remediation (Cross-functional)
A formal root cause analysis is documented; this is required evidence under ISO 42001 and is likely to be requested by regulators. Remediation actions are assigned with owners and deadlines.
Step 5: Regulatory Notification (Legal/Compliance)
Legal assesses whether the incident triggers any mandatory notification obligations under applicable law. This is function-specific (HIPAA for health data, FCRA for credit decisions, GDPR for EU data subjects, and so on).
Step 6: Post-Incident Review (AI Governance Committee)
Within 30 days, the committee reviews the incident, assesses whether the AI policy or risk assessment needs updating, and documents lessons learned.
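For teams that want to operationalize this chain, the first three steps can be sketched as a simple workflow. Everything below is an illustrative assumption layered on the steps above, including the 48-hour escalation window and the hard-coded decision options.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

class Severity(Enum):
    MINOR_PERFORMANCE = "minor performance issue"
    HARM_EVENT = "potential harm event"

@dataclass
class AIIncident:
    system: str
    description: str
    detected_at: datetime
    severity: Severity
    log: list[str] = field(default_factory=list)

ESCALATION_WINDOW = timedelta(hours=48)  # upper bound of the 24-48 hour standard

def triage(incident: AIIncident) -> None:
    """Step 1: the AI System Owner classifies the anomaly."""
    incident.log.append(f"Triaged by AI System Owner as: {incident.severity.value}")

def escalate(incident: AIIncident, now: datetime) -> None:
    """Step 2: harm events go to the AI Governance Committee within the window."""
    if incident.severity is Severity.HARM_EVENT:
        if now > incident.detected_at + ESCALATION_WINDOW:
            incident.log.append("Escalation window missed; document the breach")
        incident.log.append("Escalated to AI Governance Committee")

def containment_decision(incident: AIIncident, decision: str) -> None:
    """Step 3: the executive sponsor suspends, modifies, or continues the system."""
    assert decision in {"suspend", "modify", "continue"}
    incident.log.append(f"Executive sponsor decision: {decision}")

# Usage: a harm event detected now is triaged, escalated, and contained.
incident = AIIncident("churn model", "biased output flagged",
                      datetime.now(), Severity.HARM_EVENT)
triage(incident)
escalate(incident, now=datetime.now())
containment_decision(incident, "suspend")
```

However you implement it, the audit trail (the log, in this sketch) is the artifact regulators and certification auditors will ask to see.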
Building the Business Case for Formal AI Accountability
If you're trying to convince leadership to invest in formal AI governance structures, here are the numbers that cut through:
- The EU AI Act imposes fines of up to €35 million or 7% of global annual turnover for prohibited AI practice violations, and up to €15 million or 3% of turnover for most other violations.
- EEOC Chair Charlotte Burrows stated in 2023 that algorithmic discrimination is a top enforcement priority, and the EEOC has pursued enforcement actions involving AI hiring tools, including its 2023 settlement with iTutorGroup over age discrimination in automated applicant screening.
- IBM's 2024 AI and Automation report found that organizations with mature AI governance programs report 25% higher AI ROI than those without formal governance, because they deploy with more confidence and face fewer costly incidents.
- A 2023 Stanford HAI report found that AI incidents increased 26x over the previous decade, with the majority attributable to commercially deployed AI systems rather than research contexts.
Formal AI accountability isn't just risk management. It's a competitive differentiator — because it enables faster, more confident AI deployment.
The ISO 42001 Certification Path: Making Accountability Auditable
The most powerful thing ISO 42001 certification does is make your accountability structures verifiable. Any organization can claim to have AI governance. Certified organizations can prove it.
Certification requires demonstrating to an accredited third-party auditor that:
- Accountability roles are documented and communicated (clause 5.3)
- Risk assessments are conducted and owned (clause 6.1.2)
- AI impact assessments exist and are current (clause 8.4)
- Incident response processes are defined and practiced (clause 10.2)
- Top management is actively engaged in AI governance (clause 5.1)
At Certify Consulting, we guide organizations through this process with a structured implementation methodology that consistently achieves first-time certification. The typical timeline from engagement to certification is 12-18 months for most organizations, though we have completed expedited implementations in as few as 6 months for organizations with mature quality management systems.
For organizations starting their ISO 42001 journey, our ISO 42001 implementation guide provides a detailed roadmap. If you're assessing your current state, our AI governance gap assessment can identify the specific accountability gaps in your current structure.
Practical First Steps: You Can Start Today
You don't need to wait for a full certification program to begin closing accountability gaps. Here's what any organization can do in the next 30 days:
- Name an executive AI sponsor — even informally. Someone needs to be the escalation point for AI risk right now.
- Create an AI system inventory — document every AI system your organization uses, including third-party platforms with AI features. You cannot govern what you haven't catalogued.
- Conduct a quick accountability audit: for each system in your inventory, ask who is responsible if it produces a harmful output today. If the answer is unclear or disputed, you have a gap (see the sketch after this list).
- Review your top three AI vendor contracts — check for AI-specific liability clauses, audit rights, and model change notifications. Most contracts written before 2023 have none of these provisions.
- Draft a one-page AI incident escalation procedure — even a simple flowchart is better than no documented process. Who calls whom, and when?
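Steps 2 and 3 combine naturally: once the inventory exists, the accountability audit is a single pass over it. Here is a minimal sketch under the assumption that the inventory is a flat list of records; the systems, owners, and field names are hypothetical.

```python
# A minimal accountability audit over a flat AI system inventory (illustrative).
inventory = [
    {"system": "resume screening tool", "owner": "J. Rivera (HR Ops)", "vendor": "TalentAI"},
    {"system": "lead scoring CRM module", "owner": None, "vendor": "CloudCRM"},
    {"system": "demand forecasting model", "owner": "disputed", "vendor": None},
]

def accountability_gaps(inventory: list[dict]) -> list[str]:
    """Flag every system with no clear, undisputed owner."""
    return [
        rec["system"]
        for rec in inventory
        if rec["owner"] in (None, "disputed")
    ]

print(accountability_gaps(inventory))
# ['lead scoring CRM module', 'demand forecasting model']
```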
These five steps won't give you ISO 42001 certification, but they'll meaningfully reduce your exposure and give you a foundation for a formal governance program.
FAQ: AI Accountability and Organizational Responsibility
Q: Who is legally responsible when an AI system causes harm — the vendor or the company using it?
Generally, the organization deploying and operating the AI system bears primary accountability to affected parties and regulators, even if a vendor developed the underlying model. Under the EU AI Act, deployers, the companies using AI, are explicitly responsible for ensuring systems are used appropriately and do not cause harm. Vendor contracts may shift some liability internally between the parties, but they do not protect a deploying organization from regulatory action or third-party claims.
Q: Does every company that uses AI need an ISO 42001 certification?
No — ISO 42001 certification is voluntary. However, certification provides the most defensible evidence that your organization has systematic AI governance in place. For organizations in regulated industries, those using high-risk AI (as defined by the EU AI Act), or those with enterprise AI deployments, certification provides significant legal and competitive advantages. Smaller organizations or those with limited AI use may achieve sufficient governance through a structured implementation of ISO 42001's requirements without pursuing formal certification.
Q: What's the difference between AI accountability and AI ethics?
AI ethics is the set of principles and values that guide AI development and use — fairness, transparency, human dignity. AI accountability is the operational and legal question of who specifically is responsible for ensuring those principles are upheld and who bears consequences when they are not. Ethics without accountability is aspiration. ISO 42001 is specifically designed to operationalize accountability, not just codify ethics.
Q: Can a single person serve as both the executive AI sponsor and the AI governance committee chair?
In small organizations, yes — ISO 42001 does not mandate specific organizational structures, only that responsibilities are clearly defined and documented. However, there is inherent risk in concentrating AI oversight in one person: it creates a single point of failure, may limit cross-functional perspective, and can create conflicts of interest if that person also has operational responsibility for AI deployments being governed. Best practice separates the executive sponsor role from day-to-day governance committee leadership.
Q: What happens to AI accountability when we use generative AI tools like ChatGPT or Microsoft Copilot?
Generative AI tools deployed in a business context are subject to the same accountability requirements as any other AI system. Your organization is accountable for how employees use these tools, what data they input, and how outputs are applied in decision-making. This requires acceptable use policies, training, and, for high-stakes use cases, human review requirements. ISO 42001 applies to all AI systems within the defined scope of your management system, including commercial generative AI platforms.
The Bottom Line
When AI goes wrong in your organization, accountability will be assigned — either by you, proactively and systematically, or by a regulator, retroactively and expensively. ISO 42001:2023 provides the internationally recognized framework for making that accountability assignment deliberate, documented, and defensible.
The organizations that will navigate the coming decade of AI regulation successfully are not necessarily those with the most sophisticated AI — they're the ones that know exactly who is responsible for every AI system they deploy, and can prove it.
If you're ready to build that kind of accountability structure, Certify Consulting has the expertise, methodology, and track record to get you there.
Last updated: 2026-03-04
Jared Clark is the principal consultant at Certify Consulting and lead author at iso42001consultant.com. With credentials spanning law, business administration, project management, quality engineering, regulatory affairs, and pharmaceutical compliance (JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC), he has guided 200+ organizations through management system certification with a 100% first-time audit pass rate.