The Different Levels of Shadow AI Prevention

Shadow AI is one of the biggest issues in AI governance and safety for any organization. As a quick definition, shadow AI is simply the use of AI in the workplace, either in violation of an established AI policy or in the absence of one.

Before we get into remediation, let me say a couple of words about the sources of shadow AI. A year ago I spoke to a lot of organizations whose AI use policy simply banned the use of AI in the workplace. I rarely have those conversations now, as virtually every organization is trying to figure out how to deploy AI, not limit its use. There are two places where shadow AI is cropping up in 2026:

The first is when the “official AI implementation” is locked down so hard by the cybersecurity team that off-the-shelf AI products simply work better.

The prototypical case, which I’ve heard dozens of times, is an organization that buys Microsoft Copilot licenses, provides very little training on its safe or effective use, and then lets the security team lock it down so tightly that employees end up just using the free version of ChatGPT, or whatever they have access to on their phones.

The second big source of shadow AI is when policies for approving or disapproving new AI systems are too slow. Let’s say someone on the marketing team hears about a great new AI product on Twitter. It costs $20 a month. They can quite easily put it on their corporate card or, if we’re being honest, even on their personal card if it makes their work that much better. Now, this person definitely knows there’s an approvals process, but if that process drags on for months, they’re simply going to circumvent it. The gold standard for approving or disapproving new AI systems in the organization has to be two weeks. If your organization can’t get AI systems approved that quickly, you need to change your processes.


So what are most organizations doing to prevent shadow AI? 

  1. Heavy-handed IP tracking. Many organizations, particularly larger organizations with large privacy and security teams, are using tools to track the IP addresses and domains of popular AI systems and flag any that are not approved for use. I won’t say this is a particularly bad idea, and we have many clients doing exactly this. That said, in my opinion it’s never a great situation when compliance has to be done this way. 
  2. Reviewing low-level spend. As mentioned previously, one of the most common ways shadow AI enters the organization is when a very low-cost AI tool is simply bought on a corporate card. Clearly, large investments in organization-wide AI systems like Microsoft Copilot are going to go through procurement, but most organizations have a threshold under which procurement is not necessarily involved. It’s a fairly simple matter to have your CFO run a monthly report that pulls out any new AI subscriptions. 
  3. Better and faster approval processes. Now we’re getting to the carrots rather than the sticks and the more effective ways of dealing with the challenge. Shadow AI is functionally a policy problem where the approvals process can’t keep up with the speed of technological change. That’s a governance issue, and it’s one that leading organizations are addressing by creating clear lines of responsibility for quickly evaluating and approving new AI systems. As mentioned before, the gold standard for this is two weeks. If you have people waiting longer than that for their systems to be approved, you’re adding risk. 
  4. Training. If you’re a regular reader, you know that this is a consistent theme. The most sophisticated IP tracking system simply isn’t going to be able to keep up with how this technology evolves and how it’s used in the workplace. We have to have great policies for AI use that specify not only the dangers we’re trying to avoid, but the benefits we’re trying to get. And we have to train on that consistently throughout our organization. If the only training that your organization has received is how to use Microsoft Copilot, your organization is both falling behind and creating risks of shadow AI. I hate to always pick on them, but Microsoft Copilot is not enough. 

About the Author

John Rood is the founder of Proceptual, where he helps organizations build practical AI governance systems that actually work. He has taught AI governance at Michigan State University and the University of Chicago, and his writing has appeared in HR Brew and Tech Target. He has spoken at the national SHRM conference and works with organizations ranging from startups to private equity portfolio companies on AI implementation and governance.

