By Olivia Dubois · November 26, 2025
Shadow AI is rapidly becoming a major challenge for enterprises. As artificial intelligence finds its way into every tool and workflow, employees adopt AI solutions without informing IT or seeking authorization. The result: a massive blind spot that organizations struggle to control.
According to an Odoxa study for Microsoft (2024), 75% of French professionals already use generative AI tools at work, and the majority do so without formal approval from their IT department. The phenomenon is global: Gartner estimates that more than 50% of enterprises are using generative AI in 2025, up from just 5% in 2023.
Three main factors explain this silent explosion:
1. **Productivity pressure.** Teams are under pressure to deliver faster. Generative AI tools (ChatGPT, Claude, Copilot, etc.) promise immediate gains, so employees adopt them spontaneously, without waiting for an official strategy.
2. **Insufficient official tools.** Many organizations have yet to deploy internal or approved alternatives. The gap is filled by consumer-grade solutions, free or paid, used outside any governance framework.
3. **The failure of "ban and block" policies.** Simply banning access to AI tools is counterproductive: employees work around restrictions (VPNs, personal devices, non-corporate networks), and the organization loses all visibility.
The EU AI Act, which has been progressively entering into force since 2024, requires enterprises to map and classify the AI systems they use — making Shadow AI not only risky, but potentially illegal.
The lack of Shadow AI control creates concrete, quantifiable, and often underestimated risks. The IBM Cost of a Data Breach 2024 report puts the average cost of a data breach at $4.88 million, an all-time high.
| Risk Type | Potential Impact | Mitigation |
|---|---|---|
| Data leaks | Client data, contracts, and source code sent to third-party servers outside the security perimeter | Real-time mapping of all AI tools in use |
| GDPR non-compliance | Fines up to €20M or 4% of global annual revenue (whichever is higher), notification obligations | Automatic risk classification per tool |
| EU AI Act non-compliance | Penalties up to €35M or 7% of global annual revenue | AI system registry and risk assessment |
| NIS2/DORA non-compliance | Financial penalties and personal liability for executives | Governance and oversight of the digital supply chain |
| IP loss | Code, strategies, and content feeding third-party models | Usage policies and approved alternatives |
| Chaotic productivity | Fragmented usage, absent best practices, reliance on unsupported tools | Standardization and usage analytics |
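To make the table's "automatic risk classification per tool" concrete, here is a minimal sketch of rule-based tiering. The signals, weights, and thresholds below are illustrative assumptions, not Avanoo's actual scoring model.

```python
# Illustrative only: coarse rule-based risk tiering for AI tools.
# Signals and thresholds are assumptions, not Avanoo's actual rules.
from dataclasses import dataclass

@dataclass
class AiTool:
    name: str
    handles_personal_data: bool   # GDPR exposure
    processes_outside_eu: bool    # cross-border transfer risk
    trains_on_inputs: bool        # IP-leakage risk
    approved_by_it: bool

def risk_tier(tool: AiTool) -> str:
    """Assign a coarse risk tier from a handful of yes/no signals."""
    score = 0
    if tool.handles_personal_data:
        score += 2
    if tool.processes_outside_eu:
        score += 1
    if tool.trains_on_inputs:
        score += 1
    if not tool.approved_by_it:
        score += 1
    return "high" if score >= 3 else "medium" if score >= 2 else "low"

# "GenChat" is a hypothetical tool name used for illustration.
print(risk_tier(AiTool("GenChat", True, True, True, False)))  # -> high
```

In practice such rules would be one input among several; the point is that a handful of per-tool signals is enough to triage an inventory.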
The CNIL explicitly recommends that enterprises establish an "AI processing registry" and conduct data protection impact assessments (DPIA) for any large-scale processing of personal data by AI tools.
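For illustration, a minimal entry in such a registry could look like the following; the field names are assumptions, not an official CNIL template.

```python
# Sketch of one entry in an AI processing registry (illustrative fields,
# not an official CNIL schema).
registry_entry = {
    "tool": "ChatGPT",
    "purpose": "Drafting customer support replies",
    "data_categories": ["customer names", "ticket contents"],
    "legal_basis": "legitimate interest (GDPR Art. 6(1)(f))",
    "recipients": ["third-party AI provider, non-EU transfers possible"],
    "dpia_required": True,
    "dpia_completed": False,
}
```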
Avanoo offers a structured approach to transform Shadow AI from a blind spot into a managed opportunity.
Avanoo maps, in real time, all AI tools used across the organization. Through a discreet browser extension and advanced detection mechanisms, the platform identifies actual usage, including tools that escape traditional monitoring.
Where classic Shadow IT concerns unapproved SaaS applications, Shadow AI adds an extra dimension: AI tools are often free, accessible from a simple browser, and leave no trace in traditional billing systems.
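To illustrate the principle behind this kind of detection (a sketch of the general technique, not Avanoo's implementation): match hostnames observed in browser or proxy traffic against a curated list of known AI endpoints. The domain list below is a small, assumption-laden sample, not an exhaustive catalogue.

```python
# Illustrative domain-matching detection: compare observed hostnames
# against a curated list of known AI-tool domains.
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def detect_ai_usage(observed_hosts: list[str]) -> dict[str, int]:
    """Count visits per recognized AI tool from a stream of hostnames."""
    counts: dict[str, int] = {}
    for host in observed_hosts:
        tool = KNOWN_AI_DOMAINS.get(host.lower())
        if tool:
            counts[tool] = counts.get(tool, 0) + 1
    return counts

print(detect_ai_usage(["claude.ai", "example.com", "chatgpt.com", "claude.ai"]))
# -> {'Claude': 2, 'ChatGPT': 1}
```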
Once usage is visible, Avanoo enables pragmatic governance: classifying each tool by risk level, setting clear usage policies, steering employees toward approved alternatives, and standardizing practices instead of banning them outright.
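As a sketch of what such per-tool governance could look like in practice (a hypothetical format, not Avanoo's policy engine):

```python
# Hypothetical per-tool governance policy; statuses, conditions, and the
# "UnvettedSummarizer" tool name are illustrative assumptions.
AI_TOOL_POLICY = {
    "Microsoft Copilot": {"status": "approved"},
    "ChatGPT": {
        "status": "restricted",
        "conditions": ["no personal data", "no source code"],
    },
    "UnvettedSummarizer": {
        "status": "blocked",
        "alternative": "Microsoft Copilot",
    },
}

def check(tool: str) -> str:
    """Return the governance status for a tool; unknown tools go to review."""
    policy = AI_TOOL_POLICY.get(tool, {"status": "review"})
    return policy["status"]

print(check("ChatGPT"))        # -> restricted
print(check("SomeNewAiTool"))  # -> review
```

Defaulting unknown tools to "review" rather than "blocked" is deliberate: it preserves visibility instead of pushing usage back underground.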
Shadow AI is also a cultural issue. Avanoo supports training and awareness messaging through its engagement campaigns, relying on real usage data. Employees better understand the stakes and adopt best practices.
How do I detect Shadow AI in my organization?
Detecting Shadow AI requires a combination of signals: network traffic analysis, browser extensions, and integrations with identity providers (Azure AD, Google Workspace, Okta). Avanoo automates this detection and provides a complete inventory in under 15 minutes.
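As one concrete illustration of the identity-provider angle, the sketch below lists OAuth consent grants via the Microsoft Graph API to spot third-party apps users have consented to. It assumes you already hold an access token with directory read permissions, and it skips pagination for brevity.

```python
# Sketch: list OAuth consent grants from Azure AD (Microsoft Entra ID)
# via Microsoft Graph. Assumes ACCESS_TOKEN holds a valid token with
# Directory.Read.All; pagination via @odata.nextLink is omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."  # obtain via your OAuth client credentials flow
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

grants = requests.get(f"{GRAPH}/oauth2PermissionGrants", headers=headers).json()
for grant in grants.get("value", []):
    # Resolve the client service principal to a human-readable app name.
    sp = requests.get(
        f"{GRAPH}/servicePrincipals/{grant['clientId']}", headers=headers
    ).json()
    print(sp.get("displayName"), "->", grant.get("scope"))
```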
What's the difference between Shadow IT and Shadow AI?
Shadow IT refers to any SaaS application used without IT department approval. Shadow AI is a specific subset: it concerns artificial intelligence tools (ChatGPT, Claude, Midjourney, etc.) used without governance. Shadow AI is harder to detect because these tools are often free and web-based.
Is Shadow AI illegal?
Not per se, but ungoverned usage can lead to violations of GDPR (personal data transfers to servers outside the EU), the EU AI Act (using AI systems without risk classification), or sector-specific regulations (NIS2, DORA in the financial sector).
Shadow AI is not a technology problem: it's a visibility problem. As long as the organization doesn't know which AI tools are being used, by whom, and for what, it can neither secure its data nor steer its AI strategy. Avanoo provides this visibility and the levers to turn it into an asset rather than a source of risk.
Organizations that take this seriously now position themselves not only for compliance with upcoming regulations, but also ahead of their competitors in AI governance. Discover how Avanoo can help.
Shadow AI Expert & Chief AI Officer
Olivia Dubois is Shadow AI Expert and Chief AI Officer at Avanoo. An HEC Paris graduate and former BCG consultant, she helps enterprises detect and govern Shadow AI and Shadow IT.
See how Avanoo can map your SaaS and AI landscape, reduce risk, and optimize costs. A reliable platform with dedicated human support.