What is Shadow AI?

Shadow AI is the use of unsanctioned generative-AI services (ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, Mistral, Grok, GitHub Copilot, and similar) by employees outside their organisation's sanctioned tooling.

Definition

Shadow AI describes the use of generative AI services by employees outside their organisation's sanctioned tooling. The pattern emerged rapidly through 2023–2025 as ChatGPT, Anthropic Claude, Google Gemini, Microsoft Copilot, and others became free or low-cost consumer services. Staff who would never have emailed a CSV of customer records to an outside vendor now routinely paste similar data into a generative-AI chat to "summarise it" or "draft an email about it."

The risk profile differs from traditional shadow IT in three ways. First, the data-leakage vector is conversational rather than file-based — staff paste sensitive content into a chat thinking of it as a temporary working session, often unaware that the conversation is logged, used for training (depending on plan and settings), or discoverable through subsequent prompts. Second, the output can carry licence and copyright issues that the organisation has not assessed. Third, the speed of adoption and the breadth of services involved have outpaced most organisations' acceptable-use policies and DLP tooling.

For MSPs, shadow AI is now a frequent topic in client conversations — particularly in regulated sectors (healthcare, finance, defence, education) where unauthorised disclosure of client data through a generative-AI chat can trigger PIPEDA, HIPAA, or sectoral incident-response obligations.

Core components

  • Generative-AI chat services. ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), Copilot (Microsoft), Perplexity, Mistral, Grok (xAI), and similar consumer-facing or developer-facing chat interfaces.
  • AI-enabled developer tools. GitHub Copilot, Cursor, JetBrains AI Assistant, Cody, Codeium, and similar tools that send code context to a model for completion or refactoring.
  • AI features in mainstream SaaS. Generative-AI features now embedded in Microsoft 365 Copilot, Google Workspace Gemini, Notion AI, Slack AI, and similar — sanctioned or unsanctioned depending on the licence.
  • Model-context-leakage risk. The risk that data pasted into a model is retained, used for training, or surfaced in a subsequent unrelated prompt — varies sharply by service, plan, and user setting.
  • Output-licensing risk. The risk that AI-generated content carries undisclosed licence implications (open-source code suggestions, copyrighted text fragments, watermarked images).
  • Sanctioned-alternative provisioning. The remediation pattern: provide a corporate-licensed AI service (ChatGPT Team / Enterprise, Claude Team, Microsoft Copilot for M365, Google Gemini Workspace) with appropriate data-handling guarantees, and migrate users to it.

Why it matters

For regulated industries, shadow AI represents an active disclosure risk. A healthcare professional pasting a patient note into a consumer ChatGPT account to "make it sound nicer" is a HIPAA / PIPEDA / BC HIA / Alberta HIA disclosure event. A securities firm employee asking an AI to summarise a confidential transaction is potentially a regulated-information disclosure.

For non-regulated organisations, shadow AI still has commercial implications. Customer lists, pricing strategies, and proprietary process documentation regularly find their way into consumer AI services. The data may not be retained — the user may have selected the right setting — but the organisation has no audit trail to prove it.

For MSPs, shadow-AI visibility has become a billable conversation. Clients want to know whether their staff are using AI services, which services, and how often — and they want sanctioned-alternative recommendations they can deploy quickly.

How Lavawall® helps with Shadow AI

Lavawall® reviews email metadata against a curated 1,130+ SaaS application catalog to surface SaaS usage with low false-positive rates. The catalog includes the major generative-AI services and the tools that integrate with them. The result: a list of AI applications genuinely in use, attributed to specific users.
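At a high level, email-metadata discovery matches the sender domains of SaaS notification emails against a known catalog, without ever reading message bodies. A minimal sketch of that matching step, using hypothetical catalog entries and header pairs (the actual Lavawall® catalog and pipeline are not public):

```python
# Hypothetical mini-catalog: sender domain -> (app name, category).
# Illustrative entries only; not the actual Lavawall(R) catalog.
CATALOG = {
    "openai.com": ("ChatGPT", "generative-ai"),
    "anthropic.com": ("Claude", "generative-ai"),
    "github.com": ("GitHub", "developer"),
}

def discover_ai_usage(messages):
    """Attribute AI-service usage to users from email metadata.

    `messages` is an iterable of (recipient, sender_domain) pairs,
    e.g. parsed from message headers -- no message bodies needed.
    Returns {app_name: set of recipients} for generative-AI apps.
    """
    usage = {}
    for recipient, sender_domain in messages:
        entry = CATALOG.get(sender_domain)
        if entry and entry[1] == "generative-ai":
            usage.setdefault(entry[0], set()).add(recipient)
    return usage

headers = [
    ("alice@example.com", "openai.com"),
    ("bob@example.com", "anthropic.com"),
    ("alice@example.com", "github.com"),  # developer tool, not flagged here
]
print(discover_ai_usage(headers))
# {'ChatGPT': {'alice@example.com'}, 'Claude': {'bob@example.com'}}
```

Because matching keys on verified sender domains rather than keyword-scanning content, this style of discovery tends to produce far fewer false positives than generic traffic inspection.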

For MSP and IT teams, the platform supports an authorisation workflow — sanctioning specific AI services by category, flagging unauthorised services whose users should be migrated to the sanctioned alternatives, and producing reports the client's compliance officer can use in their next review.
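The sanction-and-migrate step reduces to a simple set partition: given the discovered apps and a sanctioned list per category, everything else in that category is a migration candidate. A hedged sketch under that assumption (app names and categories are illustrative, not Lavawall®'s actual data model):

```python
# Illustrative data only: discovered apps mapped to a category,
# plus the services the client has sanctioned per category.
discovered = {
    "ChatGPT (consumer)": "generative-ai",
    "Claude (consumer)": "generative-ai",
    "Copilot for M365": "generative-ai",
    "Cursor": "developer-ai",
}
sanctioned = {
    "generative-ai": {"Copilot for M365"},
    "developer-ai": {"GitHub Copilot (business)"},
}

def migration_candidates(discovered, sanctioned):
    """For each category, list unauthorised apps alongside the
    sanctioned alternative(s) users should be migrated to."""
    report = {}
    for app, category in discovered.items():
        allowed = sanctioned.get(category, set())
        if app not in allowed:
            report.setdefault(category, {"migrate": [], "to": sorted(allowed)})
            report[category]["migrate"].append(app)
    return report

print(migration_candidates(discovered, sanctioned))
```

A report in this shape maps directly onto the compliance conversation: "these users are on consumer services; here is the sanctioned service to move them to."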

Because SaaS / shadow-AI discovery is bundled into the Lavawall® platform alongside patching, GRC, and breach detection, the data feeds directly into compliance evidence (CMMC 2.0 SC.L2-3.13.6, NIST CSF DE.CM-7, CIS Control 16, SOC 2 CC6.6, PIPEDA accountability principle) without a separate per-user CASB invoice.

Frequently asked

Is shadow AI just shadow IT?
It is a subset of shadow IT with a distinctive risk profile. The conversational data-paste pattern, the rapid adoption pace, the model-training implications, and the output-licensing concerns are all characteristic of generative-AI usage in particular.
Can I just block ChatGPT and Claude at the firewall?
You can, and some organisations do — but blocking alone tends to push users to mobile data, personal devices, or VPNs. The more sustainable pattern is visibility plus sanctioned-alternative provisioning: see who is using what, deploy a sanctioned corporate service with appropriate data-handling guarantees, and migrate users to it.
Does Microsoft Copilot for M365 solve the shadow-AI problem?
It can solve a portion — providing a sanctioned generative-AI service with M365-tenant data handling — but only if the organisation also has visibility into who is still using consumer services. Copilot deployment without shadow-AI discovery just adds another tool to the mix.
How is this different from a CASB?
A CASB enforces policy at the network or API layer — actively blocking or proxying SaaS traffic. Lavawall® shadow-AI discovery focuses on visibility and attribution. For most MSP clients, visibility plus a sanctioned-alternative conversation is sufficient. CASBs and Lavawall® can coexist for clients that need active enforcement on top of visibility.