Managing Shadow AI Through Leadership and Governance
The biggest risk from unsanctioned AI in your organisation probably isn't what you think it is. It's not a rogue engineer plugging a language model into production systems. It's not someone building an autonomous agent in a broom cupboard. It's someone in procurement using ChatGPT to summarise a contract. Someone in HR asking Claude to flag inconsistencies in a policy document. Someone on your leadership team pasting board minutes into a chatbot to get a quick thematic analysis.
Grace Trinidad, research director in IDC's Security and Trust practice, puts it bluntly: "The biggest risk is actually in this very benign, though misguided, usage."
That's the uncomfortable bit. The people creating your shadow AI problem are, almost without exception, trying to do good work faster. They're not being negligent. They're being resourceful. And that makes this a much harder problem to solve than if they were simply breaking rules for the sake of it.
The gap is probably wider than you think
IDC estimates that between 50% and 70% of all AI usage in organisations occurs through unsanctioned tools. Sit with that number for a moment. If you've spent months building an AI governance framework, it covers at most half of the actual AI activity happening under your roof, and possibly as little as 30%.
And here's the kicker: without a governance solution already in place, you can't even measure the true rate. You're governing what you can see, while the majority of usage happens where you can't.
Jayesh Chaurasia, a senior analyst at Forrester, traces the root causes to three gaps: no AI usage policies, no inventory tracking, and no simple risk assessment workflows. But he also flags a subtler problem. Even when organisations do have technical controls, those controls become useless the moment employees shift to personal devices or tools the company doesn't provide. When people feel blocked from using AI at work, they find workarounds. And workarounds, by definition, sit outside your line of sight.
This isn't a technology failure. It's a design failure. If your governance approach starts with "no" and doesn't offer a credible alternative, you're not reducing risk. You're just pushing it somewhere you can't monitor.
A four-phase framework worth thinking with
The source article outlines a practical governance approach built around four phases. It's not the only way to tackle this, but the structure is sensible enough to be worth walking through. Not as a template to copy wholesale, but as a set of questions to hold up against your own approach.
1. Discovery
You can't govern what you haven't mapped. This phase is about finding out what AI tools employees are actually using, how they're using them, and what data is flowing where. For most organisations, this step alone will surface surprises. If you haven't done a proper discovery exercise, the distance between your governance documentation and your operational reality is almost certainly larger than you'd like.
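To make that concrete, here's a minimal sketch of one discovery tactic: tallying traffic to well-known AI tool domains in an exported proxy log. Everything here is an assumption for illustration: the CSV export format, the `host` column, and the domain list. A real exercise would draw on your own gateway's exports and a much fuller domain inventory.

```python
import csv
from collections import Counter

# Illustrative, not exhaustive: domains associated with popular AI tools.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def discover(log_path: str) -> Counter:
    """Tally requests per AI tool from a CSV proxy log with a 'host' column."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row.get("host", ""))
            if tool:
                hits[tool] += 1
    return hits

if __name__ == "__main__":
    for tool, count in discover("proxy_export.csv").most_common():
        print(f"{tool}: {count} requests")
```

Even a crude count like this tells you which tools to prioritise. Remember Chaurasia's caveat, though: network logs only see corporate devices, so treat the output as a floor, not a census.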
2. Policy
Once you know what's happening, you can write rules that reflect the real world rather than the one you assumed you were operating in. Policies created without discovery tend to be either too broad (blocking useful tools people need) or too narrow (missing the tools people are actually using).
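One way to keep policy honest is to encode it as data rather than prose, so it can actually be checked. The sketch below is hypothetical, not any vendor's schema: the tool names and data tiers are illustrative choices, and the deny-by-default rule is a deliberate design decision, not a given.

```python
# Deny by default: a tool not in the policy gets no data classes at all.
POLICY = {
    "ChatGPT": {"public"},              # consumer plan: public info only
    "Claude":  {"public", "internal"},  # e.g. under an enterprise agreement
}

def is_allowed(tool: str, data_class: str) -> bool:
    """Check a tool/data-class pair against the policy; unlisted tools fail closed."""
    return data_class in POLICY.get(tool, set())

assert is_allowed("Claude", "internal")
assert not is_allowed("ChatGPT", "confidential")
assert not is_allowed("BrandNewTool", "public")  # unknown tool: denied
```

The fail-closed default matters: a tool nobody has assessed yet should be denied automatically, not quietly permitted until someone notices it.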
3. Monitoring
Policies without visibility are just wishes. This phase builds the ongoing capability to track AI usage across the organisation, catching new tools as they appear and flagging data flows that fall outside acceptable boundaries.
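At its simplest, monitoring is discovery run continuously: compare what traffic shows today against the inventory you approved yesterday. A minimal sketch, with hypothetical inputs fed from whatever discovery pipeline you already run:

```python
def flag_new_tools(observed: set[str], inventory: set[str]) -> set[str]:
    """Return AI-related domains seen in traffic but missing from the approved inventory."""
    return observed - inventory

inventory = {"chat.openai.com", "claude.ai"}     # what you've assessed
observed_today = {"claude.ai", "perplexity.ai"}  # what traffic shows

for domain in sorted(flag_new_tools(observed_today, inventory)):
    print(f"Unreviewed AI tool in use: {domain}")
```

The useful part isn't the set difference; it's having a routine that surfaces new tools weekly rather than discovering them in an incident report.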
4. Protection
Real-time safeguards that prevent sensitive data from leaving the building. This is where technical controls earn their keep, but only if the previous three phases have laid the groundwork.
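To show the shape of such a safeguard, here's a hypothetical pre-send gate with a handful of illustrative regex patterns. Real data-loss prevention goes far beyond three regexes; this sketch only shows where a real-time control sits in the flow, between the employee's prompt and the external tool.

```python
import re

# Illustrative patterns only; production DLP needs far more than regexes.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-shaped number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-shaped run
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # classification marker
]

def check_outbound(prompt: str) -> bool:
    """Return True only if the prompt contains none of the flagged patterns."""
    return not any(p.search(prompt) for p in SENSITIVE)

print(check_outbound("Summarise this public press release."))      # True
print(check_outbound("Board minutes (CONFIDENTIAL): Q3 themes."))  # False
```

Note what this depends on: the classification markers come from policy, and knowing which traffic to inspect comes from discovery and monitoring. Protection bolted on without those phases just blocks the wrong things confidently.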
The order matters. Jumping straight to protection without doing discovery first is like installing a burglar alarm without checking which doors are unlocked.
What this actually means for senior leaders
The instinct in many organisations is to treat shadow AI as an IT security problem. Hand it to the CISO, write a policy, block a few URLs, move on. But that misses the point. Shadow AI is a signal that your people want to use these tools and your organisation hasn't given them a sanctioned way to do it. That's a leadership problem, not a firewall problem.
The organisations getting this right are the ones treating governance not as a barrier but as infrastructure. They're asking: how do we make the approved path easier and faster than the shadow path? Because if the sanctioned tools are slower, clunkier, or harder to access, people will keep finding their own way. Every time.
If you're a senior leader reading this and you haven't yet run a proper discovery exercise, that's the place to start. Not with policy. Not with technology procurement. With an honest look at what's actually happening across your teams. The framework described in the VKTR article is one useful lens for structuring that work.
One thing to try this week: ask five people across different functions whether they've used any AI tool in the last month that wasn't provided or approved by the company. Don't make it an inquisition. Make it a conversation. The answers will tell you more about your governance gap than any audit report.