By Olivia Dubois · May 4, 2026
The EU AI Act turns generative AI into an operational governance topic. For CIOs, CISOs, DPOs, and business leaders, the question is no longer only which AI tools can accelerate work, but which uses are visible, documented, secure, and compliant.
The European regulation follows a risk-based approach. Most generative assistants are not automatically high-risk systems, but their actual context of use can move them into a more demanding category: hiring, credit scoring, healthcare, education, access to essential services, or automation of decisions that affect individuals.
The EU AI Act classifies AI systems into four risk levels:
| Level | Example Use Case | Enterprise Consequence |
|---|---|---|
| Minimal risk | Internal drafting support without sensitive data | Few or no specific obligations |
| Limited risk | Chatbot, content generation, customer assistant | Transparency, user information, prevention of illegal content |
| High risk | Hiring, credit, education, critical infrastructure | Risk management, documentation, human oversight, logging, continuous compliance |
| Unacceptable risk | Social scoring, manipulation, prohibited biometric use | Prohibited |
This approach requires organizations to assess each use case in its real context. The same model may be low risk when summarizing an internal memo and much more sensitive when it influences an HR decision or processes personal data at scale.
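To make the point concrete, here is a minimal sketch of context-based classification, assuming the four tiers from the table above. The context labels, fields, and rules are illustrative, not taken from the Act:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Simplified, Annex III-style high-risk contexts and prohibited practices.
HIGH_RISK_CONTEXTS = {"hiring", "credit", "education", "critical_infrastructure"}
PROHIBITED_CONTEXTS = {"social_scoring", "manipulation"}


@dataclass
class UseCase:
    tool: str              # the same assistant can appear in several use cases
    business_context: str  # the process the AI output feeds into
    affects_individuals: bool


def classify(use_case: UseCase) -> RiskTier:
    """Classify by context of use, not by vendor or model."""
    if use_case.business_context in PROHIBITED_CONTEXTS:
        return RiskTier.UNACCEPTABLE
    if use_case.business_context in HIGH_RISK_CONTEXTS:
        return RiskTier.HIGH
    if use_case.affects_individuals:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# The same model lands in different tiers depending on context:
print(classify(UseCase("assistant-x", "internal_memo_summary", False)))  # MINIMAL
print(classify(UseCase("assistant-x", "hiring", True)))                  # HIGH
```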
General-purpose AI models, often called GPAI, have their own obligations: technical documentation, a copyright-compliance policy, a summary of the content used for training, and, for models posing systemic risk, stronger evaluations and cybersecurity measures.
For an enterprise using AI, however, the main challenge is controlling usage. Employees can access dozens of AI tools without procurement, SSO, or IT approval. This is the core of Shadow AI: the company becomes accountable for uses it may not yet be able to see.
Preparing for the EU AI Act is not just about writing another policy. The regulatory obligations need to become controls that can be verified.
Start by inventorying the AI tools in use, the users involved, the data exposed, and the business purpose. The inventory should cover approved tools, SaaS subscriptions, browser extensions, AI agents, APIs, and personal accounts used in a professional context.
Without this mapping, it is impossible to know whether a use case is limited risk, high risk, or prohibited.
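As a sketch of what one register entry could capture, assuming a simple Python schema (the field names are hypothetical, not a prescribed format):

```python
from dataclasses import dataclass, field


@dataclass
class RegisterEntry:
    """One row of the AI usage register: who uses what, on which data, and why."""
    tool: str                  # product, extension, agent, or API name
    category: str              # "approved SaaS", "browser extension", "personal account", ...
    users: list[str] = field(default_factory=list)         # teams or roles involved
    data_exposed: list[str] = field(default_factory=list)  # e.g. "customer PII", "source code"
    business_purpose: str = ""
    approved: bool = False     # went through procurement, SSO, or IT review


entry = RegisterEntry(
    tool="gen-ai-assistant",
    category="personal account",
    users=["marketing"],
    data_exposed=["campaign drafts"],
    business_purpose="content ideation",
    approved=False,  # a Shadow AI candidate until reviewed
)
```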
Classification should start with the business process, not just the vendor. AI used to generate marketing ideas does not carry the same obligations as AI used to rank job candidates.
For each use case, document:
- the business process and purpose the AI output feeds into
- the data exposed, especially personal or sensitive data
- the degree of automation and the impact on individuals
- the human oversight in place
- the resulting risk level and the obligations that follow from it
This mirrors the principles of enterprise AI governance: accountability, transparency, data security, and continuous improvement.
The EU AI Act quickly intersects with GDPR, NIS2, and DORA. Teams need to verify where data is processed, whether prompts can be used to train models, which logs are retained, and which contractual guarantees exist.
The challenge does not stop with the direct provider. An AI tool may depend on a cloud host, third-party model, observability service, payment processor, or other technical subprocessors. Mapping that subprocessor chain becomes a prerequisite for assessing both compliance and digital resilience.
For critical AI providers, the assessment should cover, as sketched below:
- where data is processed and stored
- whether prompts can be used to train models
- which logs are retained, and for how long
- which contractual guarantees exist
- the subprocessor chain behind the tool: cloud host, third-party model, observability service, payment processor
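Keeping that assessment as a structured record makes gaps visible at a glance. The sketch below uses hypothetical field names drawn from the questions above:

```python
from dataclasses import dataclass, field


@dataclass
class ProviderAssessment:
    """Assessment record for a critical AI provider (illustrative fields only)."""
    provider: str
    data_location: str              # where prompts and outputs are processed
    trains_on_prompts: bool         # can customer data feed model training?
    log_retention_days: int | None  # which traces the provider keeps, and for how long
    contractual_guarantees: list[str] = field(default_factory=list)  # e.g. DPA, audit rights
    subprocessors: list[str] = field(default_factory=list)  # cloud host, third-party model, ...


def open_questions(a: ProviderAssessment) -> list[str]:
    """Flag gaps that block a compliance sign-off."""
    gaps = []
    if a.trains_on_prompts:
        gaps.append("prompts may train models: needs contractual opt-out")
    if a.log_retention_days is None:
        gaps.append("log retention unknown")
    if not a.subprocessors:
        gaps.append("subprocessor chain not mapped")
    return gaps
```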
Human oversight cannot be theoretical. It should define who can validate, correct, suspend, or escalate an AI-generated result. For high-risk use cases, teams need to explain how a decision is controlled and when a human takes over.
Documentation and incident reporting require reliable traces: who uses which tool, for what purpose, with what type of data, and under which policy. Logs should help identify abnormal behavior, potential leaks, and deviations from internal rules.
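Structured events make those traces easy to query. A minimal sketch, assuming a JSON-over-logging format and hypothetical field names:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_usage")
logging.basicConfig(level=logging.INFO)


def log_ai_event(user: str, tool: str, purpose: str, data_class: str, policy: str) -> None:
    """Emit one structured AI-usage event: who, which tool, why, what data, which policy."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
        "data_classification": data_class,  # e.g. "public", "internal", "personal data"
        "policy": policy,                   # the internal rule this use falls under
    }
    logger.info(json.dumps(event))


log_ai_event("j.doe", "gen-ai-assistant", "contract summary", "internal", "ai-usage-v2")
```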
The prohibitions on unacceptable-risk practices have applied since February 2025. Other obligations, especially those for high-risk systems and transparency, phase in through 2026 and 2027. Organizations that wait until the final deadline may discover too late that they have no complete inventory, no designated owners, and no proof of control.
The right starting point is a governance foundation: an AI usage register, a classification method, risk-based policies, and continuous discovery.
Before deploying or approving a generative AI tool, verify that:
- the tool is inventoried in the AI usage register
- the use case has been classified against the risk levels above
- data processing locations and training on prompts are contractually controlled
- human oversight is defined: who can validate, correct, suspend, or escalate
- usage logs are retained and reviewable
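As a toy illustration, that checklist can become a gate that blocks approval while any check fails; the check names below are hypothetical:

```python
def ready_to_approve(checks: dict[str, bool]) -> bool:
    """Return True only if every pre-deployment check passes; print the failures."""
    failures = [name for name, ok in checks.items() if not ok]
    for name in failures:
        print(f"blocked: {name}")
    return not failures


print(ready_to_approve({
    "inventoried_in_register": True,
    "risk_tier_classified": True,
    "data_location_verified": True,
    "prompt_training_opt_out": False,  # still an open contractual point
    "human_oversight_defined": True,
    "usage_logging_enabled": True,
}))
```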
Avanoo helps enterprises discover the AI and SaaS usage that actually exists across the organization, including unapproved tools. The platform helps teams map applications, identify risk, structure usage policies, and guide employees toward compliant practices.
Avanoo also goes beyond application inventory: teams can map providers, subprocessors, and critical dependencies through digital supply-chain mapping. This visibility supports compliance and digital resilience workflows: audit evidence, provider risk tracking, data location, business criticality, and remediation plans. To frame sovereignty risk across AI and SaaS providers, teams can also use the dedicated sovereign footprint resource.
The EU AI Act is not just a legal obligation. It is an opportunity to regain control over generative AI: know what is used, understand the risk, secure data, and give teams a clear framework to innovate without exposing the business.
Shadow AI Expert & Chief AI Officer
Olivia Dubois is Shadow AI Expert and Chief AI Officer at Avanoo. An HEC Paris graduate and former BCG consultant, she helps enterprises detect and govern Shadow AI and Shadow IT.
See how Avanoo can map your SaaS and AI landscape, reduce risk, and optimize costs. A reliable platform with dedicated human support.