You’re a privacy professional. Or maybe you’re in compliance. Either way, last week you got called into a meeting and told you’re now responsible for AI governance at your organization.
Congratulations?
Here’s what probably happened: Your CEO read something about the EU AI Act, or your board asked pointed questions about AI risk, or a major client started asking about your AI governance framework. And now you—the person who already has a full-time job managing data privacy or regulatory compliance—are supposed to figure this out.
No additional headcount. No clear mandate. Definitely no training.
I’ve worked with dozens of privacy and compliance professionals in exactly this position. The good news is that you already have many of the skills you need. The challenging part is that AI governance is different enough from traditional privacy and compliance work that you can’t just dust off your existing playbooks and call it done.
Here’s what actually works: a structured 90-day plan that gets you from “I have no idea where to start” to “I have a functioning AI governance system and the relationships to make it stick.”
Why 90 Days?
This timeline assumes a few things about your situation:
This is your organization’s first formal AI governance project, not an update to existing documentation. You’re either new to the organization or new to this role. Your organization has what I’d call a medium risk profile and tolerance—you’re not a healthcare provider dealing with patient data, but you’re not a startup that can move fast and break things either.
If you’re in a heavily regulated industry or dealing with tight customer deadlines, you might compress this timeline. If you have a year to get this right and a small staff, you might expand it. But the sequence of activities stays roughly the same.
I’ve run this 90-day plan multiple times with clients. It’s a significant amount of work. It’s also absolutely doable.
Days 1-15: Understand the Landscape
Your first two weeks are about understanding what you’re actually dealing with.
Start by clarifying your mandate. What does senior management actually want from you? What decisions are you empowered to make versus what needs to go through a committee? Are you supposed to be the person who says “no” to risky AI projects, or are you supposed to be the person who figures out how to make them safer? These are very different roles, and you need to know which one you’re in.
Create your inventory of AI systems. This serves multiple purposes beyond just knowing what AI tools exist in your organization. It introduces you across functional areas if you’re new. It reveals shadow AI use—the teams using ChatGPT or Claude or other tools without permission because your approved system is locked down to the point of uselessness. And it’s work you can start immediately without waiting for approvals or committee formations.
The inventory will also help you understand where risk is most likely to concentrate. If you’re in healthcare, PHI and HIPAA compliance are obvious high-risk areas. If you’re an HR software vendor, bias in AI systems should be keeping you up at night. If you’re in financial services, model explainability and fair lending are your pressure points. You’ll probably end up with two or three critical risk areas—write them down and keep them front of mind.
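If it helps to make the inventory concrete, here’s a minimal sketch of the fields I’d capture for each system. The field names and example values are illustrative, not a standard; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative fields, not a standard)."""
    name: str                # e.g., "ChatGPT Team", "internal forecasting model"
    business_owner: str      # who is accountable for its use
    function: str            # Marketing, Sales, Engineering, Finance...
    vendor: str              # supplier, or "internal" for home-built systems
    use_case: str            # what the system actually does day to day
    data_types: list[str] = field(default_factory=list)  # e.g., ["PHI", "customer data"]
    approved: bool = False   # went through formal approval, or shadow AI?
    risk_notes: str = ""     # bias, explainability, regulatory exposure

# A shadow-AI entry surfaced during interviews might look like this:
record = AISystemRecord(
    name="ChatGPT (personal accounts)",
    business_owner="Unassigned",
    function="Marketing",
    vendor="OpenAI",
    use_case="Drafting campaign copy",
    data_types=["customer data"],
    approved=False,
    risk_notes="Unvetted tool; confirm no confidential data in prompts",
)
```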
Start scheduling interviews with functional leaders. Marketing is probably using AI for content generation. Sales might be using it for email drafting or lead scoring. Engineering could be using it for code completion. Finance might have it embedded in forecasting tools. You need to talk to all of them, and you need to start building those relationships now.
Use these early conversations to understand shadow AI use throughout the organization. Some of this requires direct interviews where people feel safe being honest with you. Some of it requires looking at logs that your cybersecurity team can pull. Shadow AI isn’t really a crisis-level issue at most organizations, but you want to get a handle on it before it becomes one.
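If your cybersecurity team can export web proxy or DNS logs, even a crude scan for known AI tool domains gives you a first map of shadow AI use. Here’s a minimal sketch, assuming a CSV log with "user" and "domain" columns; the file name, column names, and domain list are all assumptions you’d adapt to your environment.

```python
import csv
from collections import Counter

# Domains to flag -- extend with whatever tools matter in your environment.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def scan_proxy_log(path: str) -> Counter:
    """Count visits to known AI tool domains, per user, from a CSV proxy log."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects "user" and "domain" columns
            domain = row["domain"].lower()
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], domain)] += 1
    return hits

for (user, domain), count in scan_proxy_log("proxy_log.csv").most_common(10):
    print(f"{user} -> {domain}: {count} visits")
```

The point isn’t surveillance; it’s knowing which teams to talk to first.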
Days 16-30: Build Relationships and Create Your First Policy
Your second two weeks are about producing something tangible while continuing to build organizational relationships.
Conduct those interviews with functional leaders. You’re filling out your system inventory, yes. But more importantly, you’re learning how people actually think about AI in your organization. Who sees it as a competitive advantage? Who’s terrified of it? Who’s already tried to implement something and failed? These conversations will tell you where your allies are and where resistance will come from.
Create an internal AI use policy. Every organization needs this. Most either don’t have one or have one that’s so vague it’s useless. This is your chance to produce a real, usable policy quickly—usually in collaboration with HR, senior management, and your legal or compliance team.
The AI use policy should answer basic questions: What AI tools are employees allowed to use? What data can and cannot be put into AI systems? What approvals are needed before adopting new AI tools? What happens when someone violates the policy?
This policy will probably evolve over the next 60 days as you learn more, and that’s fine. Version 1.0 doesn’t need to be perfect. It needs to exist and provide clear guidance.
Start understanding organizational dynamics for your AI Steering Committee. You’re probably not going to make the final decision about who sits on this committee—that’s usually a senior leadership call. But you can start identifying who needs to be at the table. Who controls the budget for AI initiatives? Who owns the technical implementation? Who’s responsible for regulatory compliance? Who has credibility across functional areas and can help drive adoption?
Write down these names. Start building relationships with them. You’re going to need their buy-in.
Days 31-60: From Regulations to Controls to Policies
Month two is when the real governance work begins.
Specify roles and responsibilities throughout the organization. Again, you’re not making these decisions in a vacuum—you’re working with senior management and functional leaders. But someone needs to drive this process, and that someone is you. Who approves new AI tool purchases? Who’s responsible for conducting impact assessments? Who monitors for bias in AI outputs? Who handles vendor relationships?
These questions don’t have obvious answers in most organizations because AI governance is new. Traditional IT governance structures don’t quite fit. Data governance structures are close but not identical. You’re building something new, which means you have influence over how it’s designed.
Get buy-in for your final list of regulations and frameworks. The regulations side should be a discussion with legal counsel—they’ll tell you about the EU AI Act, various state-level AI laws, and industry-specific requirements. The frameworks side is a conversation with senior management about risk tolerance and how comprehensive your governance system needs to be.
Are you implementing ISO 42001? The NIST AI Risk Management Framework? Something lighter? This decision shapes everything that follows, so don’t rush it. But also don’t let perfect be the enemy of good. Most organizations are better served by a simpler framework they’ll actually use than a comprehensive one that sits on a shelf.
Create one source of truth for all your controls. Take all those regulations and frameworks and map them to specific controls. This is tedious work, but it’s essential. You need to know exactly what your organization is required to do and what it’s choosing to do beyond those requirements.
Map controls to policies—both new ones you’ll create and existing ones you’ll modify. Many organizations already have vendor management policies, data retention policies, and privacy policies. Some of those just need AI-specific addendums. Others need complete rewrites. Figure out which is which.
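The tooling here can be as simple as a spreadsheet, but the shape of the mapping matters: every control should trace back to at least one source and forward to at least one policy. Here’s a sketch of what two rows in that source of truth might look like; the control IDs, policy names, and owners are made up for illustration.

```python
# One entry per control: where it comes from, what it requires, and which
# policy implements it. IDs, policy names, and owners are illustrative.
controls = [
    {
        "control_id": "AIG-001",
        "requirement": "Human oversight for high-stakes AI decisions",
        "sources": ["EU AI Act (high-risk obligations)", "NIST AI RMF: GOVERN"],
        "implemented_by": ["AI Management Policy"],
        "owner": "AI Steering Committee",
        "status": "draft",
    },
    {
        "control_id": "AIG-002",
        "requirement": "Due diligence review before onboarding AI vendors",
        "sources": ["ISO/IEC 42001", "internal risk appetite"],
        "implemented_by": ["Vendor Management Policy"],
        "owner": "Procurement",
        "status": "existing policy, needs AI addendum",
    },
]

# A check you'll run constantly: which controls have no policy behind them yet?
unmapped = [c["control_id"] for c in controls if not c["implemented_by"]]
print("Controls without a policy:", unmapped or "none")
```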
Create your core AI management policy. This is usually the second policy you’ll create, after the internal use policy. The AI management policy is the umbrella document that explains how AI governance works at your organization—the committee structure, the approval processes, the risk assessment methodology.
Develop your vendor questionnaire. You’ll eventually have a formal third-party AI vendor policy, but vendor relationships are too important to delay. Start creating the due diligence questions you’ll ask vendors: How is their model trained? What data are they using? How do they handle your organization’s data? What are their bias testing procedures? Do they have their own AI governance framework?
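The questionnaire itself can start life as a simple structured checklist you reuse for every vendor. Here’s a sketch using the questions above; the structure and field names are mine, not an industry standard.

```python
VENDOR_QUESTIONS = [
    "How is your model trained, and on what data?",
    "How do you handle our organization's data (retention, reuse for training)?",
    "What are your bias testing procedures?",
    "Do you have your own AI governance framework?",
]

def new_vendor_review(vendor: str) -> dict:
    """Blank due-diligence record for one vendor."""
    return {
        "vendor": vendor,
        "answers": {q: None for q in VENDOR_QUESTIONS},
        "reviewer": None,
        "decision": "pending",  # pending / approved / rejected
    }

review = new_vendor_review("ExampleAI Inc.")  # hypothetical vendor
review["answers"][VENDOR_QUESTIONS[0]] = "Public web data; no customer data reused."
```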
Days 61-90: Risk Assessments and Policy Approvals
Your third month is about formalizing everything you’ve built.
Create your gap assessment. This is the honest accounting of the difference between how your organization behaves today and how it needs to behave after your governance system is in place. Maybe you discover that three different departments are using AI tools that haven’t been vetted. Maybe you find that nobody’s doing impact assessments before deploying AI systems. Maybe you realize that vendor contracts don’t include AI-specific language.
Write it all down. Prioritize the gaps by risk level. Start figuring out how to close them.
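Prioritizing by risk level doesn’t need anything fancier than a likelihood-times-impact score. A minimal sketch follows; the gaps echo the examples above, and the 1-to-5 scales are an assumption you’d calibrate to your own risk methodology.

```python
# Score each gap on likelihood and impact (1 = low, 5 = high), then sort.
gaps = [
    {"gap": "Unvetted AI tools in three departments", "likelihood": 4, "impact": 3},
    {"gap": "No impact assessments before deployment", "likelihood": 5, "impact": 4},
    {"gap": "Vendor contracts lack AI-specific language", "likelihood": 3, "impact": 4},
]

for g in sorted(gaps, key=lambda g: g["likelihood"] * g["impact"], reverse=True):
    print(f"{g['likelihood'] * g['impact']:>2}  {g['gap']}")
```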
Write formal risk management documentation for your highest-risk systems. If you have internally developed AI systems, they need impact assessments. If you’re using AI in high-stakes decisions—hiring, lending, healthcare—you need documentation of how you’re managing bias risk, explainability, and human oversight.
This doesn’t mean you need perfect documentation for every AI system in your organization by day 90. It means you need solid documentation for the systems that pose the most risk.
Draft your remaining policies and get them approved. You’ve probably created 4-6 policies at this point: internal use, AI management, vendor management, maybe data governance updates, maybe a bias testing policy. Now you need to socialize them with senior management and the steering committee.
This is where those relationships you’ve been building pay off. If functional leaders trust you and feel like they’ve had input into these policies, approval will be relatively smooth. If they feel like policies are being imposed on them, you’ll face resistance.
Day 90: Convene Your AI Steering Committee
On day 90, bring everyone together for the first official meeting of your AI Steering Committee.
The agenda should include strategic discussion about AI’s role in the organization, approval of the policies you’ve created, and planning for how the governance system rolls out to the broader organization. This meeting isn’t the finish line—it’s the starting gun for ongoing governance work.
But it’s also a milestone worth celebrating. Ninety days ago, you had nothing. Today, you have a functioning governance system with documented policies, identified risks, and organizational buy-in.
What We Deliberately Left Out
A few things didn’t make it into the 90-day plan, and that’s intentional.
Building the formal business case for AI governance. Depending on where you sit in the organization, you might need this early on. If you’re in cybersecurity or privacy and need to convince senior leadership that AI governance is necessary, put together a brief document highlighting regulatory requirements and market realities. But many organizations already know they need AI governance—they just don’t know how to build it. If that’s your situation, skip the business case and get to work.
Rolling out training on AI governance and AI literacy. This is critical work, but it doesn’t need to happen in your first 90 days. Get the governance structure in place first. Then worry about training the organization on how to use it.
Completing risk assessments on all AI systems. You should have documentation on your highest-risk systems by day 90. The lower-risk or lower-visibility systems can be backfilled over the next few months. Perfect is the enemy of done.
The Thing Nobody Tells You
Here’s what I can’t emphasize enough: forging collaborative relationships throughout your organization is more important than getting every policy technically perfect.
The biggest risk I see for AI governance professionals is earning a “Doctor No” reputation. If you become known as the person who just blocks AI initiatives and adds bureaucracy, you’ll lose influence quickly. People will work around you. Your policies will be ignored.
In your first 90 days, you want to be known as highly collaborative. You want to be the person who helps teams figure out how to use AI safely, not the person who just says it’s too risky. You want to be the person who understands business objectives and finds paths forward, not the person who cites regulations without understanding context.
This doesn’t mean being a pushover. It means understanding that your job is to enable AI use while managing risk, not to prevent AI use entirely.
What Happens After Day 90
Your governance system on day 90 won’t be perfect. It shouldn’t be.
What you’re building is a system that improves continuously over months, quarters, and years. You’ll discover gaps you didn’t anticipate. You’ll need to update policies as regulations evolve. You’ll refine your risk assessment processes as you learn what actually predicts problems.
That’s all normal. The goal isn’t perfection on day 90. The goal is having a functional system with room to grow.
Ready to Start Your 90 Days?
If you’re one of those privacy or compliance professionals who just got AI governance dropped on your desk, I know this looks like a lot. This is the kind of work the phrase “easier said than done” was made for.
But it’s doable. I’ve seen it work dozens of times across organizations of different sizes, industries, and risk profiles.
The key is starting with a plan, building relationships as you go, and remembering that done is better than perfect.
And if you want help actually executing this plan—or if you realize your timeline is compressed and you need to move faster—that’s exactly the kind of work we do at Proceptual. We can help you build your governance system, create your policies, and train your organization to actually use them.
But whether you work with us or do this yourself, the most important thing is to start. Day 1 is today.
About the Author
John Rood is the founder of Proceptual, where he helps organizations build practical AI governance systems that actually work. He has taught AI governance at Michigan State University and the University of Chicago, and his writing has appeared in HR Brew and TechTarget. He has spoken at the national SHRM conference and works with organizations ranging from startups to private equity portfolio companies on AI implementation and governance.
