- The National Institute of Standards and Technology (NIST) is a Federal agency that provides technology, measurement, and standards that are used in everything from power grids to computer chips
- The framework they provided is relatively simple in terms of direction, but it emphasizes the substantial risk AI tools present if not managed properly
- The framework was designed to be flexible because we do not yet know all of the potential risks and hazards around AI tools
- The framework focuses on including a large community of stakeholders (actors) in the process and gives everyone a role
What is NIST, and why is this important?
The National Institute of Standards and Technology (NIST), the Federal agency responsible for promoting “U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology,” officially released its AI Risk Management Framework on January 26, 2023. Before that, it had written several drafts, held multiple workshops, and collected public input over a process that began in 2021.
New and emerging laws around automation will require more stringent evaluation of the automated tools HR professionals use, both before and after they are adopted. NYC Local Law 144, for example, will require employers to obtain independent, third-party AI bias audits of their automated employment decision tools (AEDTs). California, Colorado, and other states are weighing similar requirements.
Key takeaways from the AI Risk Management Framework (RMF) text:
- The framework is designed to be flexible, so it can adapt to emerging risks over time
- Test, evaluation, verification, and validation (TEVV) processes throughout an AI lifecycle are key
- “Actors” across the AI lifecycle span a tremendous range, from executives to affected communities, and their diverse perspectives are key
- “Trustworthy” AI is considered “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed”
- AI risk management is an ongoing process across all phases of using AI
- The appendix outlines tasks for actors to carry out while implementing the RMF
What HR professionals need to know
Overall, the framework is simple in terms of direction but targeted in its message: any organization using AI should manage the risk of using those tools. Failing to do so could cost them productivity and expose them to liability and fines.
It also reemphasizes what many of the new and emerging AI laws are stating: we do not yet know all of the risks and hazards of AI tools. At the speed those tools are evolving, it is impossible to predict what they will look like in the future.
The unique thing about this framework is its focus on including a large community of stakeholders (what NIST calls actors) in the process. From internal employees to external regulators, NIST’s framework invites everybody to the table, which makes sense: with all the regulations emerging, companies will have to satisfy regulators on one hand while reassuring the people processed through these tools on the other.
HR professionals will play an important role in the risk management process as end users of the tools being evaluated. By becoming familiar with the TEVV process proposed by NIST, they can help test and evaluate these risks themselves.
Curious about how to begin your AI Risk Management Plan? Contact Ken to learn more at: email@example.com