Shadow AI is the single fastest-growing security risk most organisations are not equipped to handle. It refers to any use of artificial intelligence tools within an organisation that happens without formal approval, oversight, or governance from IT and security teams. That includes the marketing analyst using ChatGPT to summarise customer research, the developer debugging code with an unapproved coding assistant, and the finance team uploading spreadsheets to a generative AI tool to build forecasts.

If you are a CISO, CTO, or IT leader who has recently discovered that employees across your organisation are using AI tools with company data, this guide is for you. It covers what shadow AI is, why it is happening at scale, the concrete risks it creates, how to detect it, and how to build governance infrastructure that manages the problem without killing productivity.

The short answer to "what should I do about shadow AI?" is this: you cannot ban it, you should not ignore it, and you need infrastructure that makes approved AI usage easier than the unauthorised alternative.


Why Shadow AI Is Happening Now

Shadow AI is not a hypothetical future risk. It is a present reality driven by three converging forces.

The capability gap. Large language models like Claude and ChatGPT are genuinely useful for knowledge work. Employees who use them can draft documents faster, debug code more efficiently, analyse data more quickly, and automate repetitive tasks. When organisations do not provide sanctioned AI tools, employees find their own.

The access gap. Consumer AI tools require nothing more than an email address and a browser. There is no procurement process, no IT ticket, no security review. An employee can sign up for ChatGPT during a lunch break and start using it with company data by the afternoon.

The policy gap. Most organisations have not published clear AI acceptable use policies. A Salesforce survey of over 14,000 workers across 14 countries found that more than half of generative AI adopters at work use unapproved tools, with many recognising the need for company-approved programmes but proceeding without them because none exist (Salesforce, 2024).

These three gaps create a predictable outcome: widespread, ungoverned AI adoption that IT and security teams cannot see, measure, or control.


The Scale of the Problem

The numbers are consistent across multiple research sources, and they are large.

Gartner predicts that by 2030, more than 40% of enterprises will experience security or compliance incidents linked directly to shadow AI (Gartner, 2025). That is not a worst-case scenario. That is the baseline expectation from one of the industry's more conservative analyst firms.

IBM's 2025 Cost of a Data Breach Report found that shadow AI incidents now account for 20% of all data breaches, with an average cost premium of $670,000 above standard breaches. The average shadow AI breach costs $4.63 million compared to $3.96 million for breaches without an AI component (IBM, 2025).

On the adoption side, the data is equally stark:

  • 69% of organisations suspect or have evidence that employees are using prohibited generative AI tools
  • 71% of UK employees admitted to using unapproved AI tools at work, with 51% doing so at least once a week
  • Only 30% of organisations have full visibility into employee AI usage
  • 83% of organisations lack even basic controls to prevent data exposure through AI tools

These are not projections. These are current measurements of a problem that is already inside most organisations.


The Five Concrete Risks of Shadow AI

Fear-mongering about AI risk is easy and unhelpful. What follows are the specific, documented risk categories that shadow AI creates, each with real-world evidence.

1. Data Exfiltration

When an employee pastes customer records, financial data, or internal documents into a consumer AI chatbot, that data leaves the organisation's control boundary. It is transmitted to a third-party provider, processed on infrastructure the organisation does not control, and in some cases retained for model training.

Research from Cyberhaven found that over 30% of employees regularly input company data into public AI tools. This is not occasional misuse. It is a systematic data exfiltration channel that most Data Loss Prevention (DLP) tools were not designed to detect.

The Samsung incident in 2023 remains the most widely cited example. Samsung semiconductor engineers used ChatGPT to debug source code, optimise test sequences, and summarise meeting notes. Within 20 days, three separate incidents resulted in proprietary source code, internal meeting transcripts, and chip testing data being uploaded to OpenAI's servers (TechCrunch, 2023). Samsung subsequently banned all generative AI tools on company devices.

2. Compliance Violations

Shadow AI creates compliance exposure that is difficult to quantify because it is difficult to see. When employees send data to AI tools hosted in unknown jurisdictions, they can trigger violations of GDPR, HIPAA, SOC 2, and other regulatory frameworks without anyone in the organisation knowing it occurred.

Under GDPR, sending EU customer personal data to an AI service without appropriate data processing agreements can result in fines of up to 4% of global annual revenue. Under HIPAA, sharing protected health information with an unapproved AI tool is a reportable breach regardless of whether the data is actually compromised.

Research indicates that 76% of shadow AI tools fail to meet SOC 2 compliance standards, and 52% of organisations say shadow AI complicates their regulatory compliance efforts.

3. Intellectual Property Leakage

Code, product designs, strategic documents, and trade secrets entered into consumer AI tools may be used to train future model versions. Even when providers offer opt-out mechanisms, the organisation has no way to verify compliance because it has no visibility into the usage in the first place.

This risk is particularly acute for software companies. Developers are among the heaviest users of AI tools, and the data they work with (source code, architecture documents, database schemas) is among the most sensitive. A developer pasting proprietary algorithms into an AI assistant to get debugging help is simultaneously creating an IP exposure that the legal team cannot assess because they do not know it happened.

4. Loss of Audit Trail

Every interaction with an unauthorised AI tool is an interaction that does not appear in the organisation's security logs, compliance records, or audit trail. When a regulator asks "what data has been shared with AI systems and what controls were in place?", the honest answer for most organisations is "we do not know."

This audit gap compounds every other risk. You cannot assess exposure you cannot measure. You cannot remediate incidents you cannot detect. You cannot demonstrate compliance with controls you never implemented.

5. Output Risk and Liability

Shadow AI usage means AI-generated outputs enter business processes without quality controls, accuracy verification, or liability frameworks. An employee using an unauthorised AI tool to draft a contract clause, generate a financial projection, or produce a customer communication creates potential liability that the organisation may not discover until the damage is done.

AI-generated content can contain hallucinated facts, fabricated citations, biased recommendations, and legal inaccuracies. When this content enters official business outputs through shadow AI channels, there is no review process to catch these errors.


How to Detect Shadow AI

Detection is the prerequisite for governance. You cannot create policy for usage you cannot see. Here are the practical detection methods, ordered from simplest to most thorough.

Network Traffic Analysis

The most straightforward detection method is monitoring DNS queries and network traffic for connections to known AI service domains. This includes:

  • api.openai.com, chat.openai.com (OpenAI/ChatGPT)
  • api.anthropic.com, claude.ai (Anthropic/Claude)
  • gemini.google.com, generativelanguage.googleapis.com (Google Gemini)
  • copilot.microsoft.com (Microsoft Copilot)
  • api.together.xyz, api.fireworks.ai, api.groq.com (inference providers)

This approach has obvious limitations. It does not distinguish between sanctioned and unsanctioned usage, it misses AI tools accessed through VPNs or personal devices, and it cannot tell you what data was shared. But it gives you a baseline measurement of AI tool usage across your network.
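A baseline measurement can start from nothing more than resolver logs. The following is a minimal sketch, assuming DNS query logs are available as plain-text lines of `timestamp client_ip queried_domain`; the log format is an assumption, so adapt the parsing to whatever your resolver or firewall actually emits:

```python
# Sketch: count AI-service DNS queries per client from a plain-text log.
# The three-column log format is an assumption, not a standard.
from collections import Counter

AI_SERVICE_DOMAINS = {
    "api.openai.com", "chat.openai.com",
    "api.anthropic.com", "claude.ai",
    "gemini.google.com", "generativelanguage.googleapis.com",
    "copilot.microsoft.com",
    "api.together.xyz", "api.fireworks.ai", "api.groq.com",
}

def ai_query_counts(log_lines):
    """Return a Counter of (client_ip, domain) for queries to known AI services."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        client_ip, domain = parts[1], parts[2].lower().rstrip(".")
        # Match the domain itself or any subdomain of a listed service.
        if any(domain == d or domain.endswith("." + d) for d in AI_SERVICE_DOMAINS):
            hits[(client_ip, domain)] += 1
    return hits

sample = [
    "2025-06-01T09:14:02 10.0.4.17 chat.openai.com",
    "2025-06-01T09:14:05 10.0.4.17 intranet.example.com",
    "2025-06-01T09:15:11 10.0.7.30 claude.ai",
    "2025-06-01T09:16:40 10.0.4.17 chat.openai.com",
]
counts = ai_query_counts(sample)
```

Even this crude tally answers the first governance question: which machines are talking to which AI services, and how often.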

Cloud Access Security Broker (CASB) Monitoring

A CASB provides deeper visibility into SaaS application usage than network monitoring alone. Modern CASB solutions can identify AI-related traffic patterns, classify AI applications by risk level, and provide real-time coaching to users who access unauthorised tools.

CASB monitoring is particularly effective for detecting browser-based AI usage, which accounts for the majority of shadow AI activity. It can identify not just that an employee accessed an AI tool, but how frequently, for how long, and with what data classification context.

Endpoint Monitoring

Endpoint Detection and Response (EDR) tools and endpoint monitoring agents can detect AI browser extensions, desktop AI applications, and local model deployments that network monitoring misses entirely. This includes:

  • Browser extensions for AI writing assistants, code completers, and summarisation tools
  • Desktop applications like local LLM interfaces
  • IDE plugins and coding assistant integrations
  • API calls from development environments to AI services

DLP Integration

Traditional Data Loss Prevention tools inspect file transfers but were not designed for the AI use case. The challenge is that shadow AI data exfiltration happens through prompt submissions and API-based inference operations, not file uploads.

AI-aware DLP tools extend coverage to detect sensitive data in AI prompts, clipboard operations that precede AI tool usage, and API request bodies. GenAI-related DLP incidents increased more than 2.5x in the past year, now comprising 14% of all DLP incidents.
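To illustrate what prompt-level inspection involves, here is a minimal sketch that flags a few common sensitive-data patterns (an email address, a card-like number, an AWS-style access key) in outbound prompt text. The patterns are deliberately simplified assumptions; production DLP engines use far more robust, context-aware detectors:

```python
import re

# Illustrative patterns only; real DLP detectors are validated and context-aware.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt_text):
    """Return the list of sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt_text)]

findings = scan_prompt(
    "Summarise this complaint from jane.doe@example.com, "
    "card ending 4111 1111 1111 1111."
)
```

In practice this check would sit inline, between the user and the AI service, so that a match can block the submission or redact the offending span before it leaves the network.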

Behavioural Analytics

The most sophisticated detection approach combines all of the above with behavioural analytics that identify patterns suggesting unauthorised AI usage:

  • Volume-based alerts for users whose data transfer patterns to AI services exceed established baselines
  • Temporal analysis to detect AI usage during off-hours or outside normal workflows
  • Cross-correlation between AI tool access and subsequent productivity patterns
  • Anomaly detection for unusual data classification access preceding AI tool usage
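The volume-based alerting in the first bullet can be as simple as a z-score against each user's historical baseline. This sketch flags a day whose AI-bound transfer volume sits well above the user's norm; the threshold and data shapes are assumptions to be tuned against your own traffic:

```python
import statistics

def is_volume_anomaly(daily_bytes_history, today_bytes, z_threshold=3.0):
    """Flag today's AI-bound transfer volume if it exceeds the user's
    historical mean by more than z_threshold standard deviations."""
    if len(daily_bytes_history) < 2:
        return False  # not enough history to establish a baseline
    mean = statistics.mean(daily_bytes_history)
    stdev = statistics.stdev(daily_bytes_history)
    if stdev == 0:
        return today_bytes > mean
    return (today_bytes - mean) / stdev > z_threshold

# A user who normally sends ~1 MB/day to AI services suddenly sends 50 MB.
history = [1_000_000, 1_200_000, 900_000, 1_100_000, 1_050_000]
alert = is_volume_anomaly(history, 50_000_000)
```

Per-user baselines matter here: a data analyst's normal volume would be an anomaly for someone in reception, so a single global threshold produces mostly noise.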

Shadow AI Detection Checklist

Use this checklist to assess your organisation's current detection capabilities.

Network Layer:

  • DNS monitoring configured for known AI service domains
  • Network traffic analysis identifies AI API endpoints
  • SSL/TLS inspection covers AI service connections (where legally permitted)
  • Firewall rules log (not just block) AI service connections

Application Layer:

  • CASB deployed with AI application discovery enabled
  • Browser extension audit completed across managed devices
  • SaaS discovery tool identifies AI applications in use
  • OAuth token audit identifies AI service integrations

Endpoint Layer:

  • EDR covers AI desktop applications and browser extensions
  • Clipboard monitoring for sensitive data preceding AI tool usage
  • IDE plugin inventory across development environments
  • Mobile device management covers AI applications

Data Layer:

  • DLP policies updated for AI prompt submission patterns
  • Data classification labels enforced across AI tool interactions
  • API request body inspection for AI inference calls
  • Sensitive data pattern matching in outbound AI traffic

Behavioural Layer:

  • Baseline AI usage patterns established per role
  • Anomaly detection configured for unusual AI interaction volumes
  • Cross-correlation between data access and AI tool usage
  • Regular shadow AI usage reports generated for security review

Building an AI Acceptable Use Policy

Detection tells you what is happening. Policy tells your organisation what should happen. An AI acceptable use policy is the foundational governance document that transforms shadow AI from an unmanaged risk into a managed programme.

The following framework covers the essential sections. Adapt it to your organisation's regulatory environment, risk tolerance, and operational requirements.

Policy Framework Outline

1. Purpose and Scope

State explicitly that the policy covers all use of AI tools, models, and services for work purposes, regardless of whether they are provided by the organisation or accessed through personal accounts. Define "AI tools" broadly enough to cover chatbots, coding assistants, image generators, data analysis tools, and embedded AI features within existing software.

2. Approved Tools and Platforms

Maintain a living register of approved AI tools, specifying:

  • Which tools are approved for general use
  • Which tools are approved for specific roles or functions
  • Which tools are approved with restrictions (e.g. "no customer data")
  • The approval process for requesting new AI tools

3. Data Classification Rules

This is the most critical section. Map your existing data classification scheme to AI usage permissions:

  • Public: permitted with any approved tool. Examples: published marketing content, public documentation.
  • Internal: permitted with approved enterprise tools only. Examples: internal processes, non-sensitive reports.
  • Confidential: permitted only with self-hosted or contractually protected AI. Examples: customer data, financial records, HR data.
  • Restricted: never permitted with AI tools. Examples: trade secrets, credentials, PII under regulatory protection.
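Expressed as data, a mapping like this becomes directly enforceable rather than merely advisory. A minimal sketch, where the tool-tier names are illustrative assumptions:

```python
# Map each data classification to the AI tool tiers permitted to handle it.
# Tier names ("consumer", "enterprise", "self_hosted") are illustrative.
CLASSIFICATION_POLICY = {
    "public":       {"consumer", "enterprise", "self_hosted"},
    "internal":     {"enterprise", "self_hosted"},
    "confidential": {"self_hosted"},
    "restricted":   set(),  # never permitted with AI tools
}

def is_permitted(data_classification, tool_tier):
    """Deny by default: unknown classifications are treated as restricted."""
    allowed = CLASSIFICATION_POLICY.get(data_classification, set())
    return tool_tier in allowed

decision = is_permitted("confidential", "consumer")
```

The deny-by-default lookup is the important design choice: a document with a missing or unrecognised classification label should fail closed, not open.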

4. Prohibited Activities

Be specific about what is not permitted:

  • Uploading source code to consumer AI tools without security review
  • Sharing customer personal data with any AI service lacking a data processing agreement
  • Using AI-generated outputs in regulated communications without human review
  • Creating accounts on AI services using corporate email addresses without IT approval
  • Disabling or circumventing AI governance controls

5. Monitoring and Audit

Declare transparently that AI usage is monitored. Specify what is logged, how long logs are retained, who has access to monitoring data, and how monitoring data is used. Transparency about monitoring is both an ethical obligation and a legal requirement in many jurisdictions.

6. Incident Response

Define what constitutes a shadow AI incident, how to report one, and what the response process looks like. Include:

  • Self-reporting mechanisms (with appropriate safe harbour provisions)
  • Escalation paths for different severity levels
  • Remediation procedures for data exposure through AI tools
  • Post-incident review process

7. Training and Awareness

Only 23% of organisations currently require staff to be trained on approved AI usage. Your policy should mandate training that covers approved tools and their proper use, data classification rules specific to AI, how to recognise and report shadow AI, and the rationale behind the policy (not just the rules).

8. Review Cadence

AI capabilities change rapidly. Commit to reviewing and updating the policy on a defined schedule, no less than quarterly. Include a mechanism for employees to request policy changes and new tool approvals.


The Governance Maturity Model

Not every organisation can implement full AI governance overnight. The following maturity model provides a staged approach that delivers incremental risk reduction at each level.

Level 1: Visibility (Weeks 1 to 4)

Objective: Know what AI tools are being used and by whom.

Actions:

  • Deploy network monitoring for AI service domains
  • Conduct an employee survey on current AI tool usage
  • Audit browser extensions and SaaS applications for AI tools
  • Establish a baseline measurement of shadow AI prevalence

Outcome: A factual picture of your organisation's current AI usage, including tools, users, frequency, and data types.

Level 2: Policy (Weeks 4 to 8)

Objective: Establish clear rules for AI usage.

Actions:

  • Publish an AI acceptable use policy using the framework above
  • Define the approved AI tool register
  • Implement data classification rules for AI usage
  • Deliver initial training to all employees

Outcome: Every employee knows what AI tools they can use, what data they can share, and what the consequences of policy violations are.

Level 3: Controls (Weeks 8 to 16)

Objective: Implement technical controls that enforce policy.

Actions:

  • Deploy CASB with AI application controls
  • Implement DLP policies for AI prompt submissions
  • Configure endpoint monitoring for AI tools
  • Establish automated alerting for policy violations

Outcome: Technical guardrails that prevent the highest-risk shadow AI activities and alert security teams to policy violations.

Level 4: Infrastructure (Weeks 16 to 24)

Objective: Provide governed AI alternatives that are better than shadow AI.

Actions:

  • Deploy enterprise AI tools with built-in governance (audit trails, data classification, access controls)
  • Implement self-hosted or contractually protected AI for sensitive workloads
  • Build API-level governance for developer AI usage
  • Create role-based AI access policies

Outcome: Employees have access to AI tools that are more capable and more convenient than consumer alternatives, with governance built in rather than bolted on.

Level 5: Optimisation (Ongoing)

Objective: Continuously improve AI governance based on usage data and evolving risks.

Actions:

  • Analyse AI usage patterns to identify high-value use cases
  • Measure shadow AI reduction over time
  • Adjust policies based on incident data and employee feedback
  • Evaluate new AI tools and capabilities against governance requirements

Outcome: AI governance becomes a competitive advantage rather than a cost centre, enabling faster and safer AI adoption than competitors who are still fighting shadow AI.


Why Banning AI Does Not Work

Samsung's response to its ChatGPT data leak was to ban all generative AI tools across the organisation. It is a natural reaction, and it is the wrong one for most organisations.

Bans fail for three reasons.

They drive usage underground. Employees who find AI tools valuable do not stop using them when banned. They switch to personal devices, personal accounts, and mobile networks that the organisation cannot monitor. The shadow AI problem gets worse, not better, because you have lost even the partial visibility that network monitoring provided.

They create competitive disadvantage. Organisations that ban AI tools fall behind competitors that govern them. The productivity gains from AI are real and measurable. A blanket ban trades a manageable security risk for a guaranteed competitive disadvantage.

They erode trust. Employees who are told they cannot use tools that make them measurably more productive feel that the organisation is working against their interests. This erodes the trust that effective security cultures depend on.

The alternative to banning AI is governing it. That means providing approved tools, setting clear policies, implementing technical controls, and building infrastructure that makes governed AI usage the path of least resistance.


Detection Patterns for Common Shadow AI Scenarios

Beyond the general detection methods described above, here are specific patterns to monitor for the most common shadow AI scenarios.

Scenario: Developer Using Unauthorised Coding Assistants

Detection signals:

  • API calls to AI inference endpoints from development environments
  • Unusual outbound data volumes from developer workstations
  • IDE plugin installations that were not in the approved software catalogue
  • Git commit patterns that suggest AI-assisted code generation (large commits with consistent formatting)

Response: Offer an approved coding assistant with enterprise governance. Developers are the most likely population to circumvent blocks, so the approved alternative must be genuinely good.
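The commit-pattern signal above can be roughly approximated from `git log --numstat --format=COMMIT:%H` output. This is a weak heuristic, not proof of AI usage, and the 500-line threshold is an arbitrary assumption:

```python
def large_commit_ratio(numstat_output, line_threshold=500):
    """Parse `git log --numstat --format=COMMIT:%H` output and return the
    fraction of commits whose total changed lines exceed line_threshold."""
    commit_sizes = []
    current = 0
    started = False
    for line in numstat_output.splitlines():
        if line.startswith("COMMIT:"):
            if started:
                commit_sizes.append(current)
            current = 0
            started = True
        else:
            # numstat lines are "added<TAB>deleted<TAB>path"
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                current += int(parts[0]) + int(parts[1])
    if started:
        commit_sizes.append(current)
    if not commit_sizes:
        return 0.0
    big = sum(1 for size in commit_sizes if size > line_threshold)
    return big / len(commit_sizes)

sample = "\n".join([
    "COMMIT:abc123",
    "900\t20\tsrc/generated_module.py",
    "COMMIT:def456",
    "12\t3\tREADME.md",
])
ratio = large_commit_ratio(sample)
```

A rising ratio for one developer is a conversation starter, not an accusation; large commits also come from refactors, vendored dependencies, and generated files.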

Scenario: Business Users Uploading Documents to Consumer Chatbots

Detection signals:

  • Browser-based access to AI chatbot services during business hours
  • Clipboard operations involving sensitive document content preceding AI tool access
  • Increased access to confidential documents without corresponding internal sharing activity
  • New AI service accounts registered with corporate email addresses

Response: Deploy an enterprise chatbot with document upload capabilities and appropriate data handling. Provide training on data classification.

Scenario: Teams Using AI Features in Existing SaaS Tools

Detection signals:

  • Activation of AI features in SaaS applications that were not security-reviewed for AI
  • Increased data processing volumes in SaaS tools with newly enabled AI capabilities
  • SaaS vendor notifications about AI feature adoption

Response: Review AI features in all existing SaaS contracts. Update data processing agreements where necessary. Disable AI features that cannot be adequately governed until appropriate agreements are in place.

Scenario: Employees Using AI on Personal Devices

Detection signals:

  • Decreased use of corporate collaboration tools during normal working hours
  • Documents downloaded to personal devices (if detectable through DLP)
  • Self-reported usage during anonymous surveys
  • Social engineering indicators (employees discussing AI tool usage informally)

Response: This is the hardest scenario to detect and the strongest argument for making governed AI tools available on corporate devices. You cannot monitor personal devices, but you can make the corporate alternative superior.


Measuring Shadow AI Governance Effectiveness

Governance without measurement is security theatre. Track these metrics to determine whether your shadow AI programme is actually reducing risk.

Leading indicators:

  • Number of employees who have completed AI acceptable use training
  • Percentage of AI tool requests processed through the approved evaluation workflow
  • Number of approved AI tools available to employees
  • Employee satisfaction scores for approved AI tools

Lagging indicators:

  • Number of shadow AI incidents detected per month (should decrease over time)
  • Volume of sensitive data detected in outbound AI traffic (should decrease)
  • Percentage of AI usage occurring through governed channels (should increase)
  • Time to detect shadow AI incidents (should decrease)
  • Compliance audit findings related to AI usage (should decrease)

Operational metrics:

  • Mean time to evaluate and approve new AI tool requests
  • Percentage of shadow AI detection alerts that are false positives
  • Employee adoption rate of approved AI alternatives
  • Cost per governed AI interaction versus estimated cost of ungoverned usage

If shadow AI detection volumes are increasing despite your governance programme, it means either your detection is getting better (good) or your governance is not providing adequate alternatives (bad). Use employee survey data to distinguish between the two.
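The "percentage of AI usage through governed channels" metric is trivial to compute once both event streams are counted, but worth standardising so trend lines are comparable month to month. A sketch, with invented counts for illustration:

```python
def governed_share(governed_events, shadow_events):
    """Percentage of observed AI interactions flowing through governed channels."""
    total = governed_events + shadow_events
    return 0.0 if total == 0 else round(100 * governed_events / total, 1)

# Month-over-month trend from (governed, shadow) event counts.
monthly = [(120, 480), (260, 390), (540, 210)]
trend = [governed_share(g, s) for g, s in monthly]
```

Note the denominator only includes *observed* shadow usage, so this metric improves both when governance works and when detection degrades; read it alongside the detection-coverage metrics above.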


The Infrastructure Approach to Shadow AI

The governance maturity model described above culminates in infrastructure, and for good reason. Policy and monitoring are necessary but insufficient. They tell people what not to do and catch them when they do it anyway. Infrastructure solves the problem structurally by making governed AI usage the default.

What does AI governance infrastructure look like in practice?

Centralised AI access layer. A single point through which all AI interactions flow, providing consistent authentication, authorisation, data classification, audit logging, and policy enforcement regardless of which model or tool is being used.

Self-hosted inference for sensitive workloads. When data cannot leave the organisation's control boundary, governance infrastructure must include the option to run inference locally. This eliminates the data exfiltration risk entirely for the most sensitive use cases. For a deeper exploration of self-hosted approaches, see our guide on self-hosted AI governance.

API-level governance for developer workflows. Developers need AI assistance in their IDEs, terminals, and CI/CD pipelines. Governance infrastructure must operate at the API level, intercepting AI interactions in these environments to enforce data classification, credential scanning, and audit logging without breaking developer workflows. For more on preventing credential leaks specifically, see our guide on AI agent secret detection.

Audit trail and SIEM integration. Every AI interaction must produce an audit record that integrates with existing Security Information and Event Management (SIEM) systems. This closes the audit gap that shadow AI creates and provides the evidence trail that regulators expect.

Policy-as-code. AI acceptable use policies encoded as machine-enforceable rules rather than PDF documents that employees may or may not read. When policy is code, it is enforced consistently, updated centrally, and audited automatically.
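Concretely, policy-as-code can be as simple as a rule set expressed as data, evaluated deny-by-default at the point of each AI request, with every decision producing an audit record. The field names and rule schema here are assumptions for illustration, not a standard:

```python
import json

# Illustrative rule set; anything not explicitly allowed is denied.
POLICY_RULES = [
    {"classification": "public", "destination": "any"},
    {"classification": "internal", "destination": "enterprise_gateway"},
    {"classification": "confidential", "destination": "self_hosted"},
]

def evaluate(request, audit_log):
    """Deny by default, and emit an audit record for every decision."""
    verdict = "deny"
    for rule in POLICY_RULES:
        if (rule["classification"] == request["classification"]
                and rule["destination"] in ("any", request["destination"])):
            verdict = "allow"
            break
    # Every decision, allowed or denied, leaves an evidence trail.
    audit_log.append(json.dumps({
        "user": request["user"],
        "classification": request["classification"],
        "destination": request["destination"],
        "verdict": verdict,
    }))
    return verdict

log = []
verdict = evaluate(
    {"user": "a.chen", "classification": "confidential",
     "destination": "consumer_chatbot"},
    log,
)
```

Because the rules are data, updating policy means changing one central artefact under version control, and auditing policy means replaying decisions against it.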

The organisations that solve shadow AI most effectively are those that treat it as an infrastructure problem rather than a policy problem. Policy defines the rules. Infrastructure enforces them.

For a comparison of tools that implement this infrastructure approach, see our guide on AI governance tools compared.


What to Do This Week

If you have just discovered shadow AI in your organisation and need to take immediate action, here is a prioritised five-day plan.

Day 1: Measure. Deploy DNS monitoring for the AI service domains listed earlier in this guide. Run a basic query against your network logs for the past 30 days. You need to know the scale of the problem before you can address it.

Day 2: Classify. Identify the three to five most common shadow AI use cases in your organisation. For each, determine what data is being shared and what regulatory obligations apply to that data.

Day 3: Communicate. Send a clear, non-punitive communication to all employees acknowledging that AI tools are being used, that the organisation wants to support productive AI usage, and that a formal policy is being developed. Ask employees to pause sharing sensitive data with AI tools until the policy is published.

Day 4: Approve. Select at least one AI tool for immediate sanctioned use. The fastest way to reduce shadow AI is to provide a governed alternative. Choose a tool that covers the most common use cases identified on Day 2.

Day 5: Plan. Begin drafting your AI acceptable use policy using the framework in this guide. Set a publication date no more than 30 days out. Assign ownership for each section of the governance maturity model.

Shadow AI is not a problem that can be solved in a week. But you can move from "we do not know what is happening" to "we know what is happening and have a plan" in five days. That shift alone dramatically reduces your risk exposure.


Summary

Shadow AI is the inevitable result of powerful AI tools being freely available to employees while organisations have not yet built the governance infrastructure to manage them. The data is clear: the majority of knowledge workers are already using AI tools for work, and the majority of that usage is happening outside IT's visibility.

The path forward is not prohibition. It is infrastructure. Detect what is happening, publish clear policy, implement technical controls, and then build the governance infrastructure that makes approved AI usage easier, faster, and more capable than the shadow alternative.

The organisations that get this right will not just reduce their shadow AI risk. They will accelerate their AI adoption, because employees will trust that the governed tools are worth using, and security teams will trust that the usage is visible, auditable, and compliant.

Shadow AI is a governance problem. Solve it with governance infrastructure.