By Olivia Dubois · March 23, 2026
Shadow AI — the use of artificial intelligence tools by employees without IT department approval or oversight — has become the number one technology risk for European businesses in 2026. This comprehensive guide covers the phenomenon, its implications, and strategies to manage it.
Shadow AI refers to all AI tools and services used within an organization without the approval, oversight, or even knowledge of IT leadership. It is an extension of Shadow IT — the well-known phenomenon of unauthorized SaaS applications — but with specific and amplified risks.
| Criterion | Shadow IT | Shadow AI |
|---|---|---|
| Nature | SaaS applications (storage, collaboration, project management) | Generative AI tools, assistants, autonomous agents |
| Examples | Trello, Notion, WeTransfer, unauthorized Slack | ChatGPT, Claude, Midjourney, Copilot, Perplexity |
| Detection | Invoices, network traffic, SSO | Harder: often free, web-based, no invoices |
| Data risk | Data stored on third-party servers | Data potentially used to train models |
| Regulatory framework | GDPR, NIS2 | GDPR + EU AI Act + NIS2 + DORA |
| Velocity | Gradual adoption | Explosive adoption (weeks, not months) |
As the comparison table shows, Shadow AI stands apart from traditional Shadow IT in three key ways: it is harder to detect (often free, web-based, leaving no invoices), its data risk is greater (inputs may be used to train models), and its adoption is far faster (weeks, not months).
The scale of the problem is striking: according to an Odoxa study for Microsoft (2024), 75% of French professionals already use generative AI tools at work, mostly without formal approval.
When an employee pastes source code, a confidential contract, or customer data into an unapproved AI tool, that information leaves the company's security perimeter. Worse still, some AI providers use input data to train their models, as the WeTransfer case revealed.
Shadow AI exposes companies to multiple penalties:
| Regulation | Obligation | Maximum Penalty |
|---|---|---|
| GDPR | Consent, data minimization, processing records | 4% of global revenue or €20M |
| EU AI Act | AI system classification, risk assessment, transparency | 7% of revenue or €35M |
| NIS2 | Digital supply chain security | €10M or 2% of revenue |
| DORA | Digital operational resilience (financial sector) | Financial penalties + personal liability for executives |
The CNIL explicitly recommends that companies establish an "AI processing register" and conduct data protection impact assessments (DPIAs) for any large-scale processing of personal data by AI tools.
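An AI processing register of the kind the CNIL recommends can be modeled as a simple structured record. The Python sketch below is purely illustrative: the field names and the DPIA heuristic are assumptions for demonstration, not a legal template.

```python
from dataclasses import dataclass

# Illustrative entry in an "AI processing register", loosely modeled
# on GDPR Article 30 processing records. Fields are hypothetical.
@dataclass
class AIProcessingRecord:
    tool_name: str              # e.g. "ChatGPT"
    vendor: str                 # e.g. "OpenAI"
    purpose: str                # business purpose of the processing
    data_categories: list[str]  # e.g. ["customer data", "source code"]
    data_leaves_eu: bool        # is data transferred outside the EU?
    used_for_training: bool     # does the vendor train on inputs?

def needs_dpia(record: AIProcessingRecord) -> bool:
    """Simplified heuristic: flag a DPIA when personal data is involved
    or data leaves the EU. Real assessments require legal review."""
    return "customer data" in record.data_categories or record.data_leaves_eu
```

A record-keeping structure like this also makes the EU AI Act's documentation duties easier to satisfy, since every tool and its risk profile ends up in one queryable place.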
Uncontrolled AI tools can introduce vulnerabilities: malicious browser extensions, unaudited plugins, exposed APIs. For CIOs, Shadow AI is an additional attack vector that must be factored into the cybersecurity strategy.
Without proper guidance, AI usage becomes fragmented: each team uses different tools, best practices don't spread, and results are inconsistent. AI becomes a source of complexity rather than a productivity lever.
Under NIS2 and DORA, executives are personally liable for digital risk governance. Not knowing that unauthorized AI tools are being used is no longer a valid defense.
Detecting Shadow AI is more complex than detecting traditional Shadow IT. Here are the complementary methods:
Cross-reference OAuth and SAML connections against your approved application directory. Limitation: only detects tools that require authentication.
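In its simplest form, this cross-referencing is a set difference between apps seen in identity-provider logs and the approved directory. A minimal Python sketch, with illustrative app names and a hypothetical log shape:

```python
# Approved application directory (illustrative names).
APPROVED_APPS = {"Microsoft 365", "Salesforce", "GitHub Copilot"}

def find_shadow_apps(idp_connections: list[dict]) -> list[str]:
    """Return app names seen in OAuth/SAML connection logs
    but absent from the approved directory."""
    seen = {conn["app_name"] for conn in idp_connections}
    return sorted(seen - APPROVED_APPS)

connections = [
    {"app_name": "Microsoft 365", "user": "alice"},
    {"app_name": "ChatGPT", "user": "bob"},       # unapproved AI tool
    {"app_name": "Perplexity", "user": "carol"},  # unapproved AI tool
]
print(find_shadow_apps(connections))  # → ['ChatGPT', 'Perplexity']
```

As the source notes, this only surfaces tools that authenticate through the identity provider; free web tools used without SSO stay invisible to this method.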
Deploy a browser extension that detects AI sites and applications visited by employees. This is the most effective method for Shadow AI because it captures web-based usage that leaves no other trace. Avanoo uses this approach through its extensions.
Examine proxy and firewall logs to identify domains associated with AI services. Limitation: HTTPS traffic makes content inspection difficult.
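As a sketch of this method, the snippet below counts hits against a short, illustrative list of AI-service domains from simplified "user domain" log lines. Real proxy logs and domain lists are far richer, and even under HTTPS the hostname is typically visible via SNI or DNS even though the content is not.

```python
# Illustrative, non-exhaustive list of domains tied to AI services.
AI_DOMAINS = {"chatgpt.com", "claude.ai", "www.perplexity.ai",
              "www.midjourney.com"}

def detect_ai_traffic(log_lines: list[str]) -> dict[str, int]:
    """Count hits per known AI domain from 'user domain' log lines
    (a deliberately simplified log format)."""
    hits: dict[str, int] = {}
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits[parts[1]] = hits.get(parts[1], 0) + 1
    return hits

logs = [
    "alice chatgpt.com",
    "bob claude.ai",
    "alice chatgpt.com",
    "carol intranet.example.com",  # internal traffic, ignored
]
print(detect_ai_traffic(logs))  # → {'chatgpt.com': 2, 'claude.ai': 1}
```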
Ask teams about their actual usage. Useful as a supplement but insufficient on its own: employees systematically underreport unauthorized usage.
Avanoo cross-references all four data sources to provide a complete Shadow AI map in under 15 minutes. The platform identifies each AI tool, the number of users, usage frequency, and associated risk level.
The reflexive response is to ban all unauthorized AI tools. This approach consistently fails: employees bypass blocks using personal devices, VPNs, or non-corporate networks, and the company loses all visibility.
The alternative is a "discover, classify, govern" approach in three steps, which we detail in our article on Shadow AI as a strategic opportunity:
Go beyond mere reaction: define a proactive AI strategy with an approved tool catalog, usage guidelines by use case, and rapid evaluation processes for new tools. The SaaS Manager plays a central role in this approach.
Deploy a Shadow AI discovery tool to get a complete inventory. Avanoo provides this mapping in under 15 minutes.
For each tool detected, assess:
Define clear policies by category:
Deploy awareness campaigns based on real usage data. Employees better understand the stakes when they see concrete numbers from their own organization.
Shadow AI is not a one-off project. New tools appear every week. Continuous analytics monitoring is essential to maintain compliance and adapt policies.
What exactly is Shadow AI?
Shadow AI refers to the use of artificial intelligence tools (ChatGPT, Claude, Copilot, Midjourney, etc.) by employees without IT department approval or oversight. Unlike traditional Shadow IT involving SaaS applications, Shadow AI is harder to detect because these tools are often free and accessible from a simple web browser.
How widespread is Shadow AI in France?
According to an Odoxa study for Microsoft (2024), 75% of French professionals already use generative AI tools at work, mostly without formal approval. Avanoo's clients discover on average 8.65 times more AI tools than they thought they had.
Is Shadow AI illegal?
Not in itself, but unmanaged usage can lead to violations of the GDPR (transferring personal data to non-EU servers), the EU AI Act (using AI systems without risk classification), or sector-specific regulations (NIS2, DORA in financial services). Penalties can reach 7% of global revenue.
How can I detect Shadow AI in my company?
The most effective method combines identity provider (SSO) analysis, browser extension deployment, network traffic analysis, and team surveys. Avanoo automates this detection and provides a complete inventory in under 15 minutes.
Should all unapproved AI tools be banned?
No. The "ban and block" approach consistently fails. Employees work around restrictions and the company loses all visibility. The recommended approach is "discover, classify, govern": map usage, classify risks, and manage usage with clear policies and approved alternatives.
What are the EU AI Act's obligations for businesses?
The EU AI Act requires companies to classify the AI systems they use by risk level (unacceptable, high, limited, minimal), document high-risk usage, ensure transparency for users, and implement appropriate governance. Using unmapped AI tools makes compliance with these obligations impossible.
Shadow AI is the most pressing technology governance challenge for European businesses in 2026. Regulations are tightening (EU AI Act, NIS2, DORA), data leak risks are growing, and executives are personally liable.
The good news: companies that take this seriously now are positioning themselves ahead of the curve. By mapping usage, classifying risks, and managing rather than banning, they turn Shadow AI from a blind spot into a performance driver.
Discover how Avanoo can map your Shadow AI in 15 minutes →
Shadow AI Expert & Chief AI Officer
Olivia Dubois is Shadow AI Expert and Chief AI Officer at Avanoo. An HEC Paris graduate and former BCG consultant, she helps enterprises detect and govern Shadow AI and Shadow IT.
See how Avanoo can map your SaaS and AI landscape, reduce risk, and optimize costs. A reliable platform with dedicated human support.