A recent McKinsey report finds that companies are underprepared for the risks associated with using AI tools. It breaks those risks down by category, and at the top of the list is regulatory risk.

The report also finds that high performers in AI adoption proactively mitigate the risks around the tools they use.

So how do you get the most out of your AI tools while mitigating the risks around them? By deeply understanding how they work and the outputs they produce.

Regulatory risk is becoming one of the most immediate concerns companies face with AI. New York City has passed Local Law 144, which mandates independent audits for automated employment decision tools (AEDTs), and more and more states are investigating how to ensure AI and machine learning aren't introducing bias into hiring; companies could soon face a patchwork of regulations to comply with.

The stakes are substantial. Under NYC LL 144, for example, employers face civil penalties of up to $1,500 per violation for failing to comply, and each day a tool is used in violation counts as a separate violation.

Luckily, there are options. And for a company to be truly prepared, action should be proactive.

Step 1 – Independent audits

It all starts with understanding how your tools have delivered outcomes in the past. Under NYC LL 144, an independent bias audit is mandatory in many cases (see our other blog posts for more details); in many others it is not yet required. Either way, it is an extremely valuable step.

An independent audit can examine factors ranging from race and ethnicity to region and disability; it all depends on the data you are collecting. Regulations may dictate the factors you choose, you may decide to go broader than that, or, in the absence of regulation, you may select the ones that matter most to your goals.
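
To make that concrete, here is a minimal sketch of the selection-rate and impact-ratio arithmetic a bias audit typically involves. The applicant records and group names are hypothetical stand-ins, not drawn from any real dataset or from the text of any regulation.

```python
# Hedged sketch: compute selection rates and impact ratios per group.
# The applicant data below is hypothetical; a real audit uses your
# actual hiring records and the categories your regulations require.
from collections import Counter

# (group, was_hired) pairs -- stand-ins for real applicant records
applicants = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False),
]

totals = Counter(group for group, _ in applicants)
hires = Counter(group for group, hired in applicants if hired)

# Selection rate: share of each group's applicants who were hired.
selection_rates = {g: hires[g] / totals[g] for g in totals}

# Impact ratio: each group's rate relative to the highest-rate group
# (the kind of metric NYC LL 144 bias audits report per category).
best = max(selection_rates.values())
impact_ratios = {g: rate / best for g, rate in selection_rates.items()}

for g in sorted(selection_rates):
    print(f"{g}: selection rate {selection_rates[g]:.2f}, "
          f"impact ratio {impact_ratios[g]:.2f}")
```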

Either way, “explainability” is key: you should understand how your AI tools are making recommendations. Once you know that, you can use your audit data to identify opportunities to improve.
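
There are many ways to probe explainability. As one hedged illustration, the sketch below uses permutation importance to see which inputs a hypothetical screening model leans on; the model, feature names, and data are all invented for the example, and a vendor's tool would need its own instrumentation.

```python
# Hedged sketch: a generic explainability check using permutation
# importance. The model and features are hypothetical stand-ins for
# whatever screening tool you are auditing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["years_experience", "skills_score", "interview_score"]
X = rng.normal(size=(500, len(features)))
# Synthetic labels that depend mostly on the second and third features
y = X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```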

There are other elements that should be considered prior to an audit, such as “traceability”. You should understand the data that was used to train your AI tools, as well as the source and integrity of the data your team has been feeding them. If you don't, establishing that understanding should be an expected output of the audit.

“Accountability” also needs to be documented and maintained: who is accountable for which aspects of the AI tools you are using?

Lastly, consider “discoverability”. By the end of the audit process, you should know how recommendations are made, how your tools were trained and programmed, and who is responsible for maintaining them. This lets you identify where to focus when changes or improvements are needed.

Step 2 – Analyze and normalize the data

Okay, so you now understand how your AI tools work and how they have produced outputs in the past. You can now identify trends in that historical data and assess whether there are opportunities for change or improvement.

There are many ways to do this, and the right one will depend on your industry, location, and more. But the core task is to flag trends that seem off and normalize them against relevant benchmarks to test whether they really are off.

For example, suppose your data shows that 90% of the employees you hired hold bachelor's degrees, while only 10% of your applicants with master's degrees were hired. At first glance that might look off. But if you dig deeper and find that only 10 people with a master's degree applied, versus 900 with a bachelor's degree, you can see how a disparity like that can easily happen.

In that case, it comes down to statistical significance: with only 10 applicants in a group, a single hire swings that group's rate by 10 percentage points. Many other scenarios will require a similarly deep look.
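
To make the significance check concrete, here is a minimal sketch using Fisher's exact test on numbers like those above. The hired counts are hypothetical, chosen only to complete the example.

```python
# Hedged sketch: test whether a gap in hiring rates between a small
# group and a large one is statistically meaningful. The hired counts
# below are hypothetical.
from scipy.stats import fisher_exact

# 1 of 10 master's applicants hired (10%) vs 135 of 900 bachelor's (15%)
table = [
    [1, 9],      # master's: hired, not hired
    [135, 765],  # bachelor's: hired, not hired
]

_, p_value = fisher_exact(table)
print(f"p-value: {p_value:.3f}")
# A p-value well above 0.05 means a gap this size is easy to get by
# chance when one group has only 10 applicants -- the disparity is not
# statistically significant on its own.
```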

Also, be sure to include members of your team involved in diversity, equity and inclusion (DEI) initiatives.

Step 3 – Act on the data

After you have analyzed the results and understand where there are opportunities to act or improve, it is time to set goals. For example, if you find bias in your data, it may be time to retrain your AI. Or if you find that you hired a lower proportion of applicants from a particular group than that group's share of your region's population, you might retool your recruitment efforts to be more inclusive.
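
As a hedged illustration of turning audit numbers into goals, the sketch below flags any group whose impact ratio falls below a chosen threshold. The 0.8 cutoff echoes the EEOC's four-fifths rule of thumb, and the ratios themselves are hypothetical.

```python
# Hedged sketch: flag groups whose impact ratio falls below a chosen
# threshold so they become explicit goals. 0.8 echoes the EEOC
# "four-fifths" rule of thumb; the ratios below are hypothetical.
THRESHOLD = 0.8

impact_ratios = {"group_a": 1.00, "group_b": 0.72, "group_c": 0.91}

flagged = {g: r for g, r in impact_ratios.items() if r < THRESHOLD}
for group, ratio in flagged.items():
    print(f"{group}: impact ratio {ratio:.2f} is below {THRESHOLD} -- "
          "set a goal (retraining, sourcing changes) and re-audit")
```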

All that said, it’s important to look first to any diversity, equity and inclusion (DEI) initiatives your team may have, as new goals will likely align with their efforts.  In fact, the data you gain from your audit could provide them with a trove of valuable insights.

Above and beyond

It makes sense to peg your AI efforts as much to risk mitigation as to value maximization. The two go hand in hand, especially as regulations around these tools mount: avoiding penalties and reputational damage is part of the value.

The NIST Risk Management Framework, for example, details an approach of first preparing your organization for new tools, then establishing processes to regularly monitor and assess those tools for risk (NIST has also been developing a dedicated risk management framework for AI).

Your approach to AI should be no different, as these tools are ever evolving.  As with most activities, it is better to be proactive than reactive.

Want some help with this? Reach out to Ken Hellberg at ken@proceptual.com for more tools.