- Fair Employment & Housing Council Draft Modifications to Employment Regulations Regarding Automated-Decision Systems could profoundly change how automated HR tools are regulated while expanding the scope of liability
- California Consumer Privacy Act (CCPA) / California Privacy Rights Act (CPRA) indirectly regulates AI use in HR and requires businesses to create new compliance processes – effective 01/01/2023
- California employers must be able to explain how their tools make recommendations, and they may also be required to perform risk assessments or cybersecurity audits
Now that New York City has passed NYC Local Law 144, California may be one of the next places to regulate the use of automated hiring and employment tools. Although NYC refers to them as automated employment decision tools (AEDTs), the definition and scope of the tools falling under the umbrella of these regulations remain ambiguous.
Below is a list of the current and pending California legislation on AI, and the implications of each for the human resources field:
Pending legislation:
Fair Employment & Housing Council Draft Modifications to Employment Regulations Regarding Automated-Decision Systems
The California Fair Employment and Housing Council, which enforces California’s employment non-discrimination laws, is floating updates to its current rules to include the regulation of “automated decision systems” (ADS). Specifically, the modifications would place the responsibility for ensuring an ADS is in compliance on the employer, much like NYC LL 144.
Although the council has said that adopting these modifications isn’t imminent, it may be spurred to act as regulations in other states pass or as a particularly important court case, Raines v. U.S. Healthworks Medical Group, makes its way through the court system.
Like CPRA (below), these rules would require employers to understand how their tools make recommendations. However, they would also require employers to ensure that these tools aren’t biased against any protected class.
What’s interesting in this proposed legislation is that liability extends to “agents” of the employer, such as outsourced recruiters. This implies that employers would not only have to verify that their own tools aren’t biased but also ensure that their partners and contractors have proper AI monitoring and governance processes in place.
Legislation in place:
California Consumer Privacy Act (CCPA) / California Privacy Rights Act (CPRA)
Why the double name? Because CPRA, AKA CCPA 2.0, essentially piggybacked onto the CCPA in 2020 when it was voted into law as a ballot measure.
Together, they are the first comprehensive consumer privacy legislation in the US, and have had a tremendous impact.
According to Morgan Lewis’s 2023 Checklist for CPRA Compliance, companies impacted by this legislation could need to create processes for correcting consumers’ personal information, allowing consumers to opt out of advertising and sharing, providing disclosed information if requested, processing sensitive information, and more. They may also need to perform risk assessments, independent cybersecurity audits, and regular data purges.
CPRA regulates AI indirectly through the use of the term “automated decision-making technology”. In terms of how it applies to human resources, it could include allowing consumers to opt out or even providing them with the logic behind the decision that the automated tool made. That last bit is extremely important because it has a lot to do with the explainability of the tools being used. It also puts more emphasis on employers having internal AI governance in place.
CPRA went into effect on 01/01/2023.
California Senate Bill 1216
California Senate Bill 1216 (CA SB 1216) proposes an evaluation of the risks that content forgery or deepfake technologies pose to the government, businesses, or residents of California. That evaluation would also include an assessment of the feasibility of mitigating these risks, and a report on the dangers those risks pose.
Reports are due by 10/01/2024.