The U.S. Department of the Treasury released two AI governance resources on February 19, 2026: an AI Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). You’ve probably seen the coverage. Here’s a tighter read on what’s actually in them.
What’s in the Framework
The FS AI RMF was built through a public-private collaboration involving more than 100 financial institutions, the Financial Services Sector Coordinating Council (FSSCC), and the Cyber Risk Institute (CRI). It’s part of a six-resource series Treasury is rolling out this year covering governance, data integrity, fraud, and operational resilience.
The framework has four components:
AI Adoption Stage Questionnaire. A self-assessment that places your institution on the AI adoption spectrum before assigning control expectations. The framework doesn’t treat a community bank and a multinational the same way, which is one of its more sensible design choices.
Risk and Control Matrix (RCM). 230 control objectives, organized by adoption stage, covering governance, data management, model development, validation, monitoring, third-party risk, and consumer protection. Controls are tiered, so you’re not expected to implement production-scale oversight if you’re still running pilots (a rough data-model sketch follows this list).
Guidebook. Step-by-step implementation guidance for operationalizing the controls. The “how do we actually do this” document.
Control Objective Reference Guide. 400-plus pages of specific evidence examples that examiners and auditors would expect to see. If your institution gets examined on AI, this is the document your compliance team needs to know cold.
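To make the tiering concrete, here is a minimal sketch of how a risk team might represent RCM-style control objectives internally, with evidence examples attached the way the Reference Guide suggests. Everything below is hypothetical illustration: the stage names, control IDs, domains, and field names are invented for this example, not the framework’s actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class AdoptionStage(Enum):
    """Hypothetical stage labels -- the real RCM defines its own tiers."""
    EXPLORING = 1
    PILOTING = 2
    SCALING = 3
    EMBEDDED = 4


@dataclass
class ControlObjective:
    """One RCM-style control objective, as a risk team might track it."""
    control_id: str
    domain: str                      # e.g. "governance", "monitoring"
    minimum_stage: AdoptionStage     # earliest stage at which it applies
    objective: str
    evidence_examples: list[str] = field(default_factory=list)
    status: str = "not_assessed"     # e.g. "implemented", "in_progress"


def applicable_controls(
    controls: list[ControlObjective], stage: AdoptionStage
) -> list[ControlObjective]:
    """Filter the matrix down to controls in scope for your adoption stage."""
    return [c for c in controls if c.minimum_stage.value <= stage.value]


if __name__ == "__main__":
    rcm = [
        ControlObjective(
            control_id="GOV-01",
            domain="governance",
            minimum_stage=AdoptionStage.EXPLORING,
            objective="Board-approved AI use policy exists",
            evidence_examples=["Signed policy document", "Board minutes"],
        ),
        ControlObjective(
            control_id="MON-07",
            domain="monitoring",
            minimum_stage=AdoptionStage.SCALING,
            objective="Production models have drift monitoring",
            evidence_examples=["Monitoring dashboards", "Alert runbooks"],
        ),
    ]
    # A piloting institution is not yet on the hook for production monitoring.
    for c in applicable_controls(rcm, AdoptionStage.PILOTING):
        print(c.control_id, "-", c.objective)
```

The design point the tiering encodes: scope is a function of stage, so the same matrix serves a community bank running one pilot and a multinational with models in production.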
Treasury released the AI Lexicon alongside the framework to standardize terminology across institutions, regulators, and vendors. The goal: get legal, risk, engineering, and product teams actually speaking the same language when they discuss AI risks.
How It Differs from the NIST AI RMF
The NIST AI Risk Management Framework is the foundational AI governance framework in the U.S. But NIST was built to apply across every sector simultaneously, which means it can’t go deep on any one of them.
The FS AI RMF fills that gap in three concrete ways.
Principles become controls. NIST tells you to manage bias. The FS AI RMF gives you specific, auditable control objectives with evidence requirements an examiner can verify. That’s a meaningful difference when you’re preparing for a supervisory review.
Built for examination. Financial institutions operate under regulatory examination. The Control Objective Reference Guide was designed with that reality baked in. NIST wasn’t, and it shows.
Sector-specific risk coverage. Credit decisioning bias with fair lending implications, model-driven fraud detection, algorithmic trading with systemic market effects, third-party model dependencies that can create concentration risk across the whole financial system. The FS AI RMF addresses these risks directly. A generic framework doesn’t have to, and doesn’t.
The two aren’t in competition. The FS AI RMF is structurally aligned with NIST and also designed to interoperate with the NIST Cybersecurity Framework, SOC 2, and existing enterprise risk programs. Prior NIST AI RMF work is a foundation to build on.
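As a sketch of what that interoperability can look like in practice, an institution might maintain a crosswalk from FS AI RMF control objectives to the entries in frameworks it already satisfies, so existing evidence gets reused rather than regenerated. The control IDs and pairings below are illustrative placeholders, not official mappings.

```python
# Hypothetical crosswalk: FS AI RMF control IDs mapped to related entries in
# frameworks an institution already uses. IDs and pairings are placeholders.
CROSSWALK: dict[str, dict[str, list[str]]] = {
    "GOV-01": {
        "nist_ai_rmf": ["GOVERN 1.1"],
        "nist_csf": ["GV.OC-01"],
        "soc2": ["CC1.2"],
    },
    "MON-07": {
        "nist_ai_rmf": ["MANAGE 4.1"],
        "nist_csf": ["DE.CM-01"],
        "soc2": ["CC7.2"],
    },
}


def related_entries(control_id: str, framework: str) -> list[str]:
    """Entries in an existing program that may already cover this control."""
    return CROSSWALK.get(control_id, {}).get(framework, [])


print(related_entries("GOV-01", "soc2"))  # ['CC1.2']
```

The payoff is practical: evidence gathered for a SOC 2 audit or a NIST CSF assessment can be surfaced automatically when the corresponding AI control comes up in an exam, instead of being rebuilt from scratch.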
What to Do With This
The instinct will be to treat this as compliance guidance to file and revisit later. That’ll hurt you.
State regulators look to frameworks like this to define emerging best practice, even where federal enforcement is lighter. The institutions that build these controls into their governance programs now will be in a materially better position when examinations catch up to actual AI deployment.
Start with the AI Adoption Stage Questionnaire. It’s freely available through the Cyber Risk Institute at cyberriskinstitute.org. Getting honest about where you actually are is the step most organizations have been skipping.
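If you want a feel for what stage-based self-assessment looks like before you pull the real questionnaire, here is an illustrative sketch. The questions and stage labels are stand-ins invented for this example; the actual questionnaire and stage definitions come from CRI.

```python
# Illustrative self-assessment only -- not the CRI questionnaire.
QUESTIONS = [
    "Do we have any AI use cases in production?",
    "Do customer-facing decisions depend on model output?",
    "Do we rely on third-party models or AI vendors?",
    "Is there a named owner for AI risk?",
]


def adoption_stage(yes_answers: int) -> str:
    """Map a count of 'yes' answers to a coarse, hypothetical stage label."""
    if yes_answers == 0:
        return "exploring"
    if yes_answers <= 2:
        return "piloting"
    if yes_answers == 3:
        return "scaling"
    return "embedded"


answers = [True, False, True, True]  # example responses
print(adoption_stage(sum(answers)))  # scaling
```

The mechanics are trivial on purpose. The hard part is the honesty of the answers, which is exactly where the real questionnaire earns its keep.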
Questions about what the FS AI RMF means for your AI governance program? Proceptual helps organizations build practical governance systems that hold up under real scrutiny.
About the Author
John Rood is the founder of Proceptual, where he helps organizations build practical AI governance systems that actually work. He has taught AI governance at Michigan State University and the University of Chicago. His writing has appeared in HR Brew and TechTarget, and he has spoken at the national SHRM conference.
