One year ago, HR leaders I spoke to usually did not have internal AI use policies and were rightly concerned. Today, HR leaders I speak to have a policy – but they know that many individuals in their company are circumventing it, and they’re still rightly concerned.
This troubling evolution represents the newest challenge facing businesses as they navigate AI adoption: the rise of shadow AI.
From No Policy to Shadow AI
The conversation with HR leaders has shifted dramatically over the past 12 months. The initial concern – having no AI governance framework whatsoever – has been largely addressed. Most organizations now have policies in place. However, a new problem has emerged: employees are simply working around those policies.
Shadow AI refers to employees using artificial intelligence tools or applications without formal approval or oversight from the IT department, often by turning to popular platforms like ChatGPT to boost productivity.
The Numbers Behind Shadow AI Are Staggering
Recent research reveals the scope of this challenge:
- 27% of employees have worked with AI tools that were not authorized by their company
- 50% of employees are using unauthorized AI tools, with nearly half indicating they would continue even if explicitly banned
- By 2030, more than 40% of global organizations will suffer security and compliance incidents due to the use of unauthorized AI tools
Even more concerning, executives have the highest levels of regular shadow AI use, suggesting that this isn’t just a problem with junior employees – it’s company-wide.
The Microsoft Copilot Paradox
In my experience, a clear pattern has emerged: 80% of the time, the approved system is Microsoft Copilot, which infosec, privacy, and governance teams have locked down within an inch of its life. This creates a perfect storm for shadow AI adoption.
Shadow AI takes hold when the team realizes that free ChatGPT is, for many everyday tasks, simply better than the corporate-approved system.
The challenge isn’t theoretical. When organizations sign large-scale enterprise Copilot adoption deals, there is often internal resistance because employees prefer ChatGPT to Copilot for day-to-day AI assistance, undermining the effectiveness of the enterprise mandate.
Why does this happen? Several factors contribute to employee frustration with enterprise AI tools:
1. Restrictive Guardrails Limit Functionality
Copilot’s orchestration layers, regulatory guardrails, and limited context windows improve compliance, but they also reduce reasoning capability relative to the underlying base models. This “safety tax” means that while the enterprise tool is more compliant, it is less powerful for everyday tasks.
2. Complex Deployment Creates Friction
Copilot’s effectiveness depends heavily on an organization’s Microsoft 365 ecosystem maturity, data quality, and operational practices. Many companies rush deployment without proper preparation, leading to poor user experiences.
3. Adoption Challenges Go Unaddressed
Only 50% of companies have rolled out Copilot to all employees, and even fewer report widespread adoption. When employees struggle with approved tools, they naturally seek alternatives.
4. Value Perception Gap
Some customers question whether they’re getting sufficient value for the per-user, per-month licensing cost, which has held back further adoption and pushed employees toward free alternatives.
The Security Risks Are Real
Employees may turn to unauthorized AI tools for legitimate productivity reasons, but many generative AI tools store user inputs to improve their models, which means the AI provider could retain and access sensitive corporate data.
The risks include:
- Data Loss: 11% of data employees paste into ChatGPT is considered confidential
- Compliance Violations: Using AI without proper oversight can lead to violations of regulations like GDPR or HIPAA
- IP Exposure: Proprietary business information, source code, or trade secrets submitted to GenAI platforms may be retained indefinitely
- Lack of Audit Trails: Unauthorized tools don’t provide the compliance documentation companies need
Why Shadow AI Happens: The Root Causes
Understanding why employees circumvent corporate AI policies is essential to solving the problem.
Employees Prioritize Productivity
Employees often turn to AI tools to enhance productivity and expedite processes. When corporate tools create friction rather than eliminating it, employees find workarounds.
The Corporate Version Falls Short
Organizations often take too long to enable new technologies, and by the time they do, the corporate version of the product has been hardened and restricted beyond usefulness. This is exactly what I’ve observed – free ChatGPT is objectively better for many use cases than heavily locked-down Copilot.
Lack of Understanding About Risks
Employees are focused on getting the job done, so if organizations try to restrain them, they will find a way to do what they need to do. Often, they don’t fully understand the security implications of their choices.
Insufficient Training
Only 1 in 3 organizations offer structured generative AI training, despite 48% acknowledging it’s essential for adoption success. Without proper training, employees can’t effectively use approved tools or understand why restrictions exist.
The Path Forward: From Restriction to Enablement
Simply banning AI tools isn’t feasible. Employees will always find ways to use the tools they believe make them more productive, whether they’re sanctioned or not. Instead, organizations need a comprehensive approach that balances security with productivity.
1. Develop Clear AI Governance
Businesses need practical AI governance frameworks that employees can actually follow. This includes:
- Clear policies on approved tools and use cases
- Transparent explanations for why certain restrictions exist
- Regular reviews to ensure policies keep pace with technology
2. Provide Approved Tools That Actually Work
If the approved system is objectively worse than free alternatives, employees will bypass it. Companies should:
- Evaluate whether locked-down enterprise tools meet actual user needs
- Consider whether security restrictions are proportionate to risks
- Balance compliance requirements with usability
3. Invest in Comprehensive AI Training
Effective shadow AI management requires incremental governance, employee engagement, cross-department collaboration, and regular auditing. Training is the foundation that makes all of this possible.
Training should cover three critical disciplines:
- AI Literacy: Understanding what AI is, how it works, and its limitations
- Effective AI Use: Learning to prompt effectively and integrate AI into workflows
- AI Safety and Security: Understanding risks, compliance requirements, and best practices
4. Create Champions and Communities
Organizations that successfully deploy AI tools focus on:
- Identifying champions who can evangelize effective use
- Building communities where employees share tips and prompts
- Celebrating success stories that demonstrate value
5. Monitor and Measure Thoughtfully
Rather than creating a culture of surveillance, companies should:
- Use analytics to understand usage patterns
- Identify where approved tools fall short
- Continuously improve based on employee feedback
Take Action: Build AI Literacy Across Your Organization
The shadow AI challenge reveals a fundamental truth: policies alone aren’t enough. Organizations need to empower their workforce with the knowledge and skills to use AI safely, securely, and effectively.
Training is the missing link between AI adoption and AI governance. When employees understand both the capabilities and the risks of AI tools, they’re better equipped to make responsible choices that serve both productivity and security goals.
Ready to address shadow AI at your business? Start by building foundational AI literacy across your workforce. We offer a free AI literacy certification course that covers essential concepts every employee should understand about working with AI safely and effectively.
By investing in education and creating practical governance frameworks, organizations can transform shadow AI from a security risk into an opportunity for responsible innovation. The question isn’t whether employees will use AI – it’s whether they’ll use it in ways that protect your organization while driving real business value.
Sources/Additional Reading:
- IBM – “What is shadow AI?”
- 1Password – “1Password Annual Report 2025: The Access-Trust Gap”
- Software AG – “Shadow AI: The Hidden Risk in Your Organization”
- Gartner – AI Governance Predictions
- Cyberhaven – “Data Exposure in ChatGPT Study”
- Infosecurity Magazine – “One in Four Employees Use Unapproved Tools”
- CIO.com – “Managing Shadow AI Risk”
- McKinsey – “The State of AI in 2024”
- UNLEASH – “Half of workers use unauthorized AI at work and don’t want to quit: Software AG”
