- The Equal Employment Opportunity Commission (EEOC) has identified bias in automated hiring systems as one of its top three priorities for 2023. Today, it held five hours of public testimony, advancing the regulatory process
- Independent, third-party audits are a core part of mitigating the risk of using algorithmic tools in the employment process, and experts urged the EEOC to require them
- The ability to express why an AI system reached a particular decision, recommendation, or prediction (i.e., explainability) is also crucial
- Algorithmic tools, including those using AI, are ever-evolving – and the governance plans companies have in place for them must evolve too
Today the Equal Employment Opportunity Commission (EEOC) held a meeting titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier”. With three panels composed of a dozen industry experts, the session provided a broad view of what regulators should consider as they grapple with the emerging risks of AI-driven automated hiring systems.
Three main messages, however, spanned the sessions:
- Third-party, independent audits of automated tools, performed on a periodic basis, are necessary to ensure these evolving technologies remain unbiased
- The data used to train these tools is just as important as the tools themselves, as biased data can perpetuate or even exacerbate inequity
- “AI made me do it” is not a defense. Businesses must have governance plans in place to mitigate risk and benefit most from their algorithmic tools
The panels of experts presented a cross-disciplinary view of these topics.
While only NYC has mandated independent, third-party audits of the automated employment tools employers use, this session left the impression that such audits may become the de facto strategy for mitigating the risks of algorithmic tools, including those using AI and/or machine learning.
Explainability and validation were also mentioned multiple times. Although the two may seem synonymous, there is a big difference. According to this McKinsey article, explainability is the “capacity to express why an AI system reached a particular decision, recommendation, or prediction.” Validation, on the other hand, is a “method of ensuring that a system works as intended and designed, fulfilling its objectives in the context.”
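To make the distinction concrete, here is a minimal sketch in Python using scikit-learn on synthetic data; the model, features, and metrics are illustrative assumptions rather than how any particular hiring tool works. Validation asks whether the system performs as intended on data it has not seen, while explainability asks which inputs actually drove its predictions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a hiring tool's training data; a real assessment
# would use the tool's actual inputs and outcomes.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Validation: does the system work as intended on held-out data?
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Explainability: which inputs actually drive the model's predictions?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("Mean feature importances:", result.importances_mean.round(3))
```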
In sum, a well-rounded risk mitigation plan could include:
- Upfront validation tests to make sure that tools can deliver the expected results
- Upfront data analysis, to ensure that the data used to train these tools is in line with expected outcomes and free of embedded bias
- Periodic reviews whenever anything about the tools changes, to ensure explainability can be maintained
- Regular audits of automated tools to make sure their outcomes remain unbiased (one simple check is sketched below)
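As an illustration of what such an audit might check, the sketch below computes selection rates by demographic group and compares them under the four-fifths rule, a long-standing benchmark in adverse-impact analysis. The data, column names, and 0.8 threshold here are assumptions for the example; an actual audit would follow the methodology prescribed by the auditor and applicable regulations.

```python
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "group",
                          selected_col: str = "selected") -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical audit data: one row per applicant, 1 = advanced by the tool.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(applicants)
print(ratios)

# Under the four-fifths rule, ratios below 0.8 warrant closer review.
flagged = ratios[ratios < 0.8]
print("Groups to review:", list(flagged.index))
```

Running a check like this on a schedule, rather than only at deployment, is what keeps an audit responsive as models are retrained and the underlying data drifts.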
Of course, as these tools become more sophisticated and take on more tasks, the ways we mitigate those risks will likely evolve as well. The meeting made clear that companies will not be able to blame the automated hiring tools or their vendors, and that they will be held accountable for outcomes. This is why it is critical that employers not only assess these tools up front but continue to evaluate them over time.
NOTE – These are takeaways from the hearing and are not intended to be a framework for algorithmic tool governance. Please contact us for more information on how to implement a structured plan that will help you mitigate future risk.