How Avanoo revealed the massive, invisible use of generative AI in a regulated environment and helped secure it.
Financial institutions face strict regulatory requirements around data security and governance. The rise of generative AI tools has created a new challenge: employees using ChatGPT, Claude, Copilot, and other solutions without declaration or approval. This case study describes how Avanoo enabled a bank to discover 214 undeclared AI tools and secure its environment in under 140 days.
The bank had officially approved, or was evaluating, 7 AI tools. But across more than 9,000 workstations, actual usage remained unknown. The risks were significant: client data exposure, regulatory non-compliance, and intellectual property loss.
Avanoo deployed a discreet browser extension across all workstations to detect actual AI tool usage.
The platform detected access to generative AI applications (ChatGPT, Claude, Copilot, Midjourney, etc.) in real time, as well as to hundreds of third-party tools with embedded AI capabilities.
Each tool was classified by risk level based on criteria such as data hosting, privacy policy, and sector-specific compliance. The 37 highest-risk tools were blocked immediately.
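The classification step above could be sketched as a simple weighted scoring rule over the stated criteria. The tool names, criteria fields, weights, and threshold below are illustrative assumptions for the sketch, not Avanoo's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    # Hypothetical criteria mirroring the ones named in the case study:
    # data hosting, privacy policy, sector-specific compliance.
    name: str
    data_hosted_in_approved_region: bool
    has_clear_privacy_policy: bool
    meets_sector_compliance: bool

def risk_score(tool: AITool) -> int:
    """Higher score = higher risk. Each failed criterion adds an assumed weight."""
    score = 0
    if not tool.data_hosted_in_approved_region:
        score += 3
    if not tool.has_clear_privacy_policy:
        score += 2
    if not tool.meets_sector_compliance:
        score += 5
    return score

def tools_to_block(tools: list[AITool], threshold: int = 5) -> list[str]:
    """Return the names of tools whose risk score meets or exceeds the threshold."""
    return [t.name for t in tools if risk_score(t) >= threshold]

inventory = [
    AITool("tool-a", True, True, True),    # score 0 -> allowed
    AITool("tool-b", False, False, True),  # score 5 -> blocked
    AITool("tool-c", True, False, False),  # score 7 -> blocked
]
print(tools_to_block(inventory))  # ['tool-b', 'tool-c']
```

In practice such a rule would be one input among several; the point is only that a small set of compliance criteria can rank hundreds of discovered tools and surface the highest-risk ones for immediate blocking.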
Training and targeted messaging were deployed to inform employees about usage rules and approved alternatives.
The bank now has full visibility into Shadow AI and has secured its environment while managing legitimate usage.
In a regulated sector, Shadow AI poses a major operational and regulatory risk. Avanoo revealed the scale of the phenomenon and enabled structured action. In under 140 days, the bank took control of its AI usage and significantly reduced its risk exposure.
See how Avanoo can map your SaaS and AI landscape, reduce risk, and optimize costs. A reliable platform with dedicated human support.