AI Governance & Risk Management · 13 min read

Shadow AI: What's Running in Your Company Without Oversight


Jared Clark

March 07, 2026

If you think you know every AI tool running inside your organization, you're almost certainly wrong. Shadow AI — the use of artificial intelligence tools, applications, and services without official IT approval or organizational oversight — has become one of the most significant and underappreciated governance risks facing businesses today. And unlike shadow IT of the past (think unauthorized Dropbox accounts), the stakes with AI are categorically higher.

I've spent the last eight-plus years helping organizations build management systems that actually work. In that time, and across more than 200 clients, I've watched the shadow AI problem grow from a niche concern into a boardroom-level crisis. This guide will walk you through exactly what shadow AI is, what tools your employees are likely using right now without your knowledge, and — critically — how ISO 42001:2023 gives you the framework to bring it under control.


What Is Shadow AI?

Shadow AI refers to any artificial intelligence system, tool, plugin, or AI-enabled feature that employees use in the course of their work without formal organizational approval, procurement review, or IT security assessment. It's the AI equivalent of shadow IT, but with compounding risks because AI systems can generate, transform, and exfiltrate sensitive data in ways that traditional unauthorized software cannot.

The defining characteristic of shadow AI isn't malicious intent — in the vast majority of cases, employees are simply trying to work faster and smarter. The problem is that every unapproved AI tool represents an unassessed risk surface that sits completely outside your governance perimeter.

Why Shadow AI Is Different from Shadow IT

Traditional shadow IT (unauthorized SaaS tools, personal cloud storage) carried risks of data leakage and compliance gaps. Shadow AI multiplies those risks in several ways:

  • Training data exposure: Many consumer AI tools use input data to retrain their models. Confidential business data entered into an unapproved tool may literally become part of a third-party AI's training dataset.
  • Output reliability: AI-generated content presented as fact — without oversight — creates liability if that output is wrong.
  • Regulatory accountability: Under emerging AI regulations, organizations are responsible for AI outputs even if the underlying tool was never officially sanctioned.
  • Bias propagation: Unvetted models may carry embedded biases that violate your organization's equity commitments or applicable law.

The Scale of the Problem: Shadow AI by the Numbers

The data on shadow AI adoption is striking, and organizations that dismiss this as a fringe issue are ignoring the evidence:

  • A 2024 Salesforce survey found that 55% of employees who use AI at work are using unapproved tools, with the majority reporting they had never received any formal AI policy guidance from their employer.
  • According to Microsoft's 2024 Work Trend Index, 78% of AI users bring their own AI tools to work, bypassing organizational procurement entirely.
  • IBM's 2024 AI in Action report found that only 24% of AI projects are being properly reviewed, tested, and governed, suggesting the vast majority of organizational AI use — including shadow deployments — goes without meaningful oversight.
  • Gartner projects that by 2027, more than 40% of enterprise data leaks will involve generative AI tools that were not officially sanctioned by IT or security teams.
  • The EU AI Act, which entered into force in 2024 with obligations phasing in from 2025, holds organizations accountable for AI systems deployed in high-risk categories regardless of whether those systems were formally procured — meaning your employees' shadow AI use could trigger regulatory liability you didn't know you had.

What Tools Are Employees Actually Using?

This is the question most organizations struggle to answer, and the honest answer is: more than you think. Here's a practical taxonomy of shadow AI by category:

Category 1: Generative AI Chatbots and Assistants

The most visible layer of shadow AI. Employees are using ChatGPT (free and Plus tiers), Google Gemini, Claude, Perplexity AI, and Microsoft Copilot outside of any enterprise licensing agreement. These tools are used for drafting emails, summarizing reports, generating code, creating presentations, and answering business questions — often with company-confidential information pasted directly into the prompt.

The risk: Depending on the account tier and terms of service, those prompts — and the confidential data within them — may be used for model training or stored on servers outside your data jurisdiction.

Category 2: AI-Embedded Productivity Tools

This is the shadow AI most organizations miss entirely. Many tools your employees already use have quietly shipped AI features that were never part of your original procurement scope:

  • Grammarly's AI writing features can receive full document content
  • Notion AI, embedded in a workspace tool many teams self-provision
  • Canva AI and image generation features
  • Zoom AI Companion summarizing meeting transcripts, including sensitive discussions
  • Slack's AI features surfacing conversation summaries
  • Adobe Firefly generating images from text prompts in existing Creative Cloud subscriptions

These aren't tools employees are sneaking in — they're features that appeared inside approved tools, and most IT departments haven't updated their data processing agreements or usage policies to account for them.

Category 3: Browser Extensions and Plugins

Perhaps the most insidious category because it's hardest to detect. AI browser extensions can intercept and process web content, forms, and even clipboard data. Common examples include the following (a short audit sketch follows the list):

  • AI writing assistants installed as Chrome extensions
  • AI-powered email reply generators integrated into Gmail or Outlook
  • AI summarizers that process every webpage a user visits
  • Code completion tools like GitHub Copilot used outside enterprise licensing
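
To make the audit tactic concrete, here's a minimal sketch in Python that lists the Chrome extensions installed on a single machine by reading each extension's manifest.json. The profile path, filename, and output format are assumptions (the path shown is the Linux default); adapt it to your OS and your endpoint management tooling, which can usually run a check like this fleet-wide.

```python
# extension_audit.py - illustrative sketch, not a product: enumerate Chrome
# extensions on one machine by reading their manifest files.
import json
from pathlib import Path

# Assumed Linux default profile; on macOS the equivalent lives under
# ~/Library/Application Support/Google/Chrome/Default/Extensions
EXT_ROOT = Path.home() / ".config/google-chrome/Default/Extensions"

def installed_extensions(root: Path = EXT_ROOT):
    """Yield (extension_id, name, version) for each installed extension."""
    for manifest in sorted(root.glob("*/*/manifest.json")):
        data = json.loads(manifest.read_text(encoding="utf-8-sig"))
        # Names like "__MSG_appName__" are localization placeholders; a full
        # audit would resolve them from the extension's _locales files.
        yield manifest.parts[-3], data.get("name", "?"), data.get("version", "?")

if __name__ == "__main__":
    for ext_id, name, version in installed_extensions():
        print(f"{ext_id}  {name}  (v{version})")
```

Cross-reference the resulting extension IDs against their Chrome Web Store listings to flag anything AI-enabled that never went through procurement.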

Category 4: Departmental and Vertical AI Tools

Individual departments often procure their own AI tools without engaging IT or legal:

  • Sales teams using AI-powered CRM enrichment tools
  • HR departments using AI resume screening tools that may carry substantial bias and legal risk
  • Marketing teams using AI content generation platforms
  • Finance teams using AI-assisted forecasting tools with access to sensitive financial data
  • Legal teams using AI contract review platforms without bar association guidance review

Category 5: Developer and Technical Shadow AI

Engineering and data science teams often have the highest autonomy and lowest oversight when it comes to AI adoption:

  • Open-source LLMs self-hosted on company infrastructure without security review
  • AI APIs integrated directly into products without architecture review
  • AI-assisted code generation tools with unclear output licensing terms
  • Automated ML pipelines built without model documentation or bias testing

The ISO 42001 Framework for Shadow AI Governance

This is where ISO 42001:2023 — the international standard for AI management systems — provides genuine, actionable structure. The standard doesn't just tell you that AI governance matters; it tells you how to build systems that catch shadow AI before it becomes a liability.

Clause 6.1: Risk Assessment and the Shadow AI Inventory

ISO 42001:2023 clause 6.1 requires organizations to identify risks associated with AI systems within their scope. You cannot assess what you cannot see. A shadow AI risk program must begin with a comprehensive AI inventory — and that inventory must explicitly account for unsanctioned tools.

Practical tactics I've used with clients to surface shadow AI:

  • Anonymous employee surveys asking what AI tools they use weekly (the anonymity is critical for honest responses)
  • Network traffic analysis for known AI endpoint domains (a scripted sketch follows this list)
  • Procurement card review for subscriptions to AI platforms
  • Browser extension audits on managed devices
  • Vendor contract reviews for recently updated terms that introduce AI features
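
For the network-traffic tactic, here's a minimal sketch, assuming you can export proxy or DNS logs to a CSV with user and host columns (an assumed schema; adapt the parsing to your own egress logs). The domain list is illustrative and deliberately incomplete, which is one reason discovery has to be a recurring exercise rather than a one-off.

```python
# shadow_ai_scan.py - illustrative sketch: flag proxy-log entries that hit
# known AI endpoint domains. Domain list and log schema are assumptions.
import csv
from collections import Counter

# Illustrative, non-exhaustive list of AI service endpoints.
AI_DOMAINS = {
    "chatgpt.com", "api.openai.com", "gemini.google.com",
    "claude.ai", "api.anthropic.com", "www.perplexity.ai",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, AI domain) in a CSV log with
    'user' and 'host' columns (assumed schema)."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].strip().lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in scan_proxy_log("proxy_log.csv").most_common(20):
        print(f"{user:<20} {host:<28} {count:>5} requests")
```

Treat the output as a conversation starter with the teams involved, not as evidence for disciplinary action; the goal of discovery is an accurate inventory.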

Clause 6.1.4: Identifying AI System Impacts

For each identified tool — sanctioned or shadow — ISO 42001 clause 6.1.4 requires assessment of potential impacts on individuals, groups, and society. Shadow AI tools are particularly difficult to assess because you often lack visibility into the underlying model, its training data, or its intended use case.

Clause 7.3: Awareness and the Responsible Use Gap

A major driver of shadow AI is the gap between employee capability and organizational policy. Employees know how to use AI tools — they learned on their own. They often don't know what's permitted because their organization hasn't told them. ISO 42001 clause 7.3 requires that persons doing work affecting AI systems be aware of their role in the management system.

This is where policy and training intersect. An acceptable use policy for AI — one that employees actually read and understand — is a non-negotiable control.

Annex A.6 and A.10: AI System Life Cycle and Third-Party Controls

ISO 42001's Annex A controls for the AI system life cycle (A.6) and third-party and customer relationships (A.10) address the acquisition and development of AI systems, including third-party procured tools. These controls provide the framework for a vendor AI assessment process — the mechanism by which IT, legal, and security evaluate a tool before employees are permitted to use it.

Shadow AI, by definition, bypasses these acquisition controls entirely. Building an effective shadow AI program means making the compliant pathway easier than the shadow pathway — approved tools readily available, procurement turnaround fast, and policies clear.


Comparing Shadow AI Risk Levels by Tool Type

Tool Category | Data Exposure Risk | Regulatory Risk | Bias Risk | Detection Difficulty
Consumer Chatbots (ChatGPT free) | High | High | Medium | Low
AI-Embedded SaaS Features | Medium | Medium | Low–Medium | High
Browser Extensions | Very High | High | Low | Very High
Departmental AI Tools (HR/Sales) | High | Very High | High | Medium
Developer/Technical AI | Medium–High | High | High | Medium
Personal AI Subscriptions (mobile) | High | Medium | Low | Very High

Risk levels reflect general assessment; individual tools vary. Table reflects ISO 42001-aligned risk dimensions.


Building a Shadow AI Governance Program: A Practical Roadmap

Step 1: Discover — Run Your Shadow AI Inventory

You cannot govern what you cannot see. Start with a 30-day discovery sprint using the tactics above. The goal is not to punish employees but to understand the actual landscape. In my experience, most organizations discover two to three times more AI tool usage than they expected.

Step 2: Classify — Apply a Risk Tier to Each Discovered Tool

Not all shadow AI is equally dangerous. A grammar checker with AI features poses different risks than an AI tool processing HR records. Classify each discovered tool against your risk criteria (data type processed, regulatory applicability, output use).
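
As a sketch of what that classification can look like when codified: the criteria, field names, and tier logic below are illustrative assumptions, not an ISO 42001-prescribed scheme; substitute the risk criteria your own management system defines.

```python
# risk_tier.py - illustrative tiering logic; criteria are assumptions.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    processes_personal_data: bool          # GDPR/CCPA scope
    processes_confidential_data: bool
    used_in_hr_or_regulated_decisions: bool
    vendor_trains_on_inputs: bool          # per the vendor's terms of service

def risk_tier(tool: AITool) -> str:
    """Map a discovered tool to a review tier (illustrative thresholds)."""
    if tool.used_in_hr_or_regulated_decisions:
        return "Tier 1: full vendor AI assessment before any use"
    if tool.processes_personal_data or (
        tool.processes_confidential_data and tool.vendor_trains_on_inputs
    ):
        return "Tier 2: conditional approval with data-input controls"
    return "Tier 3: approve under the standard acceptable use policy"

print(risk_tier(AITool("ResumeScreenerX", True, True, True, True)))
```

The point of codifying the tiers is consistency: two different reviewers classifying the same tool should land on the same disposition.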

Step 3: Respond — Approve, Restrict, or Prohibit

For each discovered tool, make a formal disposition (a record sketch follows the list):

  • Approve and document: Run it through your full vendor AI assessment and add it to your AI inventory
  • Conditional approval: Permit with controls (e.g., no confidential data input)
  • Restrict: Limit to specific roles or use cases
  • Prohibit: Block access and communicate the rationale clearly
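
However you store it, each disposition should become a documented, reviewable record in your AI inventory so auditors and employees can see what was decided and why. A minimal record sketch follows; the field names are assumptions rather than a prescribed ISO 42001 schema.

```python
# disposition.py - illustrative disposition record for the AI inventory.
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    APPROVE = "approve and document"
    CONDITIONAL = "conditional approval"
    RESTRICT = "restrict"
    PROHIBIT = "prohibit"

@dataclass
class DispositionRecord:
    tool: str
    disposition: Disposition
    conditions: str      # e.g. "no confidential data input"
    owner: str           # role accountable for the decision
    review_date: str     # next scheduled re-assessment (ISO 8601)

record = DispositionRecord(
    tool="Notion AI",
    disposition=Disposition.CONDITIONAL,
    conditions="no customer personal data in prompts",
    owner="IT Security",
    review_date="2026-09-01",
)
print(record)
```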

Step 4: Enable — Build the Compliant Pathway

The single most effective shadow AI control is making approved AI tools excellent. Employees turn to shadow AI when the official options are slow, limited, or unavailable. Enterprise licenses for leading AI platforms, prompt libraries, and fast procurement processes dramatically reduce shadow AI incentives.

Step 5: Monitor — Continuous Detection

Shadow AI is not a one-time problem. New tools emerge constantly, and employee behavior evolves. Build continuous monitoring into your AI management system: quarterly surveys, ongoing network monitoring, and a clear process for employees to request new tool evaluations without bureaucratic penalty.
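
The monitoring loop can start simply: diff what discovery observes against what you've approved, and route anything new into the Step 3 disposition process. A sketch, assuming one-domain-per-line text files produced by your discovery tooling (the filenames are hypothetical):

```python
# monitor_diff.py - illustrative check: observed AI domains vs. approved list.
def load_set(path: str) -> set:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

approved = load_set("approved_ai_tools.txt")
observed = load_set("observed_ai_domains.txt")

for domain in sorted(observed - approved):
    print(f"UNAPPROVED: {domain} -> route to Step 3 disposition")
```

Run it on a schedule and feed the results into the quarterly survey cycle.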


The Legal Exposure of Shadow AI

Beyond operational risk, shadow AI creates specific legal exposure that organizations need to understand:

Intellectual property: Several AI tools generate outputs with contested copyright status. If an employee uses an unapproved AI tool to generate content your organization then publishes or sells, the IP ownership of that content may be legally ambiguous.

Data protection: Under GDPR, CCPA, and similar frameworks, inputting personal data into an AI tool constitutes data processing. Doing so without a valid data processing agreement with the tool provider is a regulatory violation — regardless of whether IT approved the tool.

Employment law: AI tools used in HR decisions (screening, performance evaluation, scheduling) trigger specific regulatory obligations in many jurisdictions. Shadow AI deployments in HR almost certainly violate these obligations.

Professional liability: For organizations in regulated industries (healthcare, financial services, legal), shadow AI use by employees may violate sector-specific professional conduct rules.

For a deeper dive into how ISO 42001 aligns with regulatory requirements in your sector, see our ISO 42001 compliance requirements overview on this site.


Key Takeaways

"Shadow AI refers to any AI system used by employees without organizational approval or IT security assessment, representing an uncontrolled risk surface that bypasses every governance control an organization has built."

"ISO 42001:2023 clause 8.4 requires organizations to assess AI systems acquired from third parties — a control that shadow AI tools, by definition, circumvent entirely, creating compliance gaps organizations may not discover until an incident occurs."

"The most effective shadow AI control is not restriction but enablement: organizations that provide employees with excellent, readily accessible approved AI tools see dramatically lower rates of unauthorized AI adoption."


How Certify Consulting Helps

At Certify Consulting, we've guided more than 200 organizations through AI governance challenges, including shadow AI discovery and remediation programs. Our approach combines ISO 42001:2023 implementation expertise with practical risk management — built for organizations that need real-world solutions, not theoretical frameworks.

We maintain a 100% first-time audit pass rate across all our ISO 42001 clients, and our shadow AI discovery engagements typically surface between two and five times more AI tool usage than clients expected at the outset.

If you're ready to understand what AI is actually running in your organization, our ISO 42001 gap assessment is the natural starting point — it includes shadow AI discovery as a core component.


Frequently Asked Questions

Q: How do I find out what AI tools employees are using without my approval?
A: Start with anonymous employee surveys — anonymity is critical for honest responses. Layer in network traffic analysis for known AI endpoints, procurement card reviews for AI subscriptions, and browser extension audits on managed devices. Most organizations find significantly more shadow AI usage than expected.

Q: Does ISO 42001 specifically address shadow AI?
A: ISO 42001:2023 doesn't use the term "shadow AI," but several of its requirements directly govern it. Clause 6.1 requires comprehensive AI risk identification (which must include unsanctioned tools), clause 7.3 mandates AI awareness programs that reduce shadow adoption, and Annex A's life cycle and third-party controls (A.6, A.10) require assessment of procured AI systems — the very controls shadow AI bypasses.

Q: Is an employee's use of ChatGPT a compliance violation?
A: It depends on what data they input and your applicable regulatory framework. If an employee inputs personal data covered by GDPR or CCPA into a free ChatGPT account without an organizational data processing agreement, that is almost certainly a data protection violation regardless of whether the employee meant any harm. The organizational liability rests with the employer.

Q: What's the difference between shadow AI and bring-your-own AI (BYOAI)?
A: Shadow AI is unauthorized and unknown to the organization. BYOAI is a deliberate organizational policy that permits employees to use personally selected AI tools within defined guardrails. A BYOAI program is a legitimate governance response to shadow AI — it acknowledges the reality of employee AI adoption and channels it through policy rather than ignoring it.

Q: How long does it take to build a shadow AI governance program?
A: A basic shadow AI discovery and initial policy framework can be completed in 60–90 days for most organizations. A full ISO 42001-aligned AI management system that systematically prevents shadow AI adoption, with ongoing monitoring and vendor assessment processes, typically takes 6–12 months to implement properly. The discovery phase almost always pays for itself in identified risk reduction.


Last updated: 2026-03-05


Jared Clark

Certification Consultant

Jared Clark is the founder of Certify Consulting and helps organizations achieve and maintain compliance with international standards and regulatory requirements.

200+ Clients Served · 100% First-Time Audit Pass Rate

Ready to Lead in Responsible AI?

Schedule a free 30-minute consultation to discuss your organization's AI governance needs and ISO 42001 readiness. No pressure, no obligation — just expert guidance.

Or email [email protected]