
August 25, 2023
John Rood

How Should HR and Compliance Leaders Think About ALLLLLLL the Emerging AI Regulation?

Well, it’s really happened: 2023 is the year of proposed AI regulation in hiring and HR. That “proposed” is a big qualifier; right now, only New York City Local Law 144 specifically regulates the use of AI and automated hiring tech.

But we have seen a ton of proposed legislation introduced in the last six months in California, DC, Connecticut, New Jersey, Maine, Vermont, Washington, New York state, and, of course, at the federal level.

While we here at Proceptual track these proposed laws, it’s overkill for HR departments to track bills that are very likely to either not pass, be substantially delayed, or be changed significantly.

The federally proposed “No Robot Bosses Act” is a great example. It is sponsored by two Democratic senators plus Bernie Sanders, but it’s not clear which Republicans, if any, will be on board.

So, what should HR and compliance leaders be watching for? What we’re ultimately looking for is a set of protocols and behaviors that is likely to substantially satisfy most or all of the legislation that actually goes into effect. Here’s what we think that will look like, based on what has been proposed. A company that is ready to take these steps will be 99% of the way to complying with substantially every law we’ve seen proposed.

Requirement for annual independent audit of automated systems

New York City Local Law 144, by virtue of being first, has set the table stakes for regulation of AI systems by requiring an annual independent audit of these systems. Most proposed regulations require some assessment conducted annually by a third party.

While NYC 144 requires the production of disparate impact comparison charts and candidate notification, other proposed laws would add:

• An “impact assessment” that would describe the uses of a tool and, often, the potential harms that may arise from it
• A “pass/fail” mechanism by which tools that show disparate impact are not allowed to be deployed (one of the primary criticisms of NYC 144 has been that even a tool that shows gross disparate impact doesn’t technically need to be taken out of use)
• A description of what mechanisms have been used to reduce bias and mitigate disparate impact

For examples, see NJ A4909 or California AB 133.
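
To make the disparate impact comparison chart concrete, here is a minimal sketch in Python of the underlying arithmetic, using made-up applicant data (the categories and counts are hypothetical, not drawn from any real audit). For each demographic category you compute a selection rate, then divide it by the rate of the most-selected category to get an impact ratio; an NYC 144-style bias audit reports a table of those ratios. This is a simplification for illustration, not a substitute for the audit the law requires.

    from collections import defaultdict

    # Hypothetical applicant records: (demographic category, whether selected).
    applicants = [
        ("Group A", True), ("Group A", False), ("Group A", True), ("Group A", True),
        ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
    ]

    totals, selected = defaultdict(int), defaultdict(int)
    for category, was_selected in applicants:
        totals[category] += 1
        selected[category] += was_selected

    # Selection rate per category, then impact ratio relative to the most-selected category.
    rates = {c: selected[c] / totals[c] for c in totals}
    top_rate = max(rates.values())
    impact_ratios = {c: rate / top_rate for c, rate in rates.items()}

    for category in sorted(rates):
        print(f"{category}: selection rate {rates[category]:.2f}, impact ratio {impact_ratios[category]:.2f}")

On this sample data, Group A ends up with an impact ratio of 1.00 and Group B with 0.33, exactly the kind of gap an auditor would flag.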

Requirements surrounding employee tracking

The previously mentioned No Robot Bosses Act is a good example of a trend we have seen in the past few months: legislation that focuses at least in part on protecting employee data and restricting data collection.

New York S07623 is another good example. This is a statewide bill that requires an independent audit much like NYC 144, but it also adds pretty significant restrictions on electronic monitoring tools (EMTs). This proposed law:

• Defines a narrow set of allowable purposes for EMTs
• Requires that the EMT be “strictly necessary” and the “least invasive means” of accomplishing that goal
• Requires that the EMT collect as little data as possible on as few employees as possible to accomplish the goal

Companies will face a pretty significant burden in certifying their systems for compliance, particularly under the “prove a negative” requirement to show that there is no less invasive way to track employees.

Draft privacy legislation in, for example, Maine and Colorado has similar provisions.

And don’t forget the EEOC

The EEOC has been very clear in its recent guidance that new AI tools carry the same legal obligations as all other hiring tools; it has also been clear that the responsibility falls on the employer, not the vendor of the tool.

That being the case, companies should be prepared to produce an independent audit certifying that the tools in use do not violate the 80% rule of thumb (the “four-fifths rule”) that has governed EEOC thinking for many decades.
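
As a rough illustration only (the threshold is a rule of thumb, not a bright-line legal test), the 80% or four-fifths rule flags any group whose selection rate is less than 80% of the most-selected group’s rate. A minimal Python sketch, reusing the hypothetical impact ratios from the earlier example:

    # Flag categories whose impact ratio falls below the four-fifths (80%) threshold.
    def below_four_fifths(impact_ratios, threshold=0.8):
        return [category for category, ratio in impact_ratios.items() if ratio < threshold]

    # With the earlier hypothetical numbers, Group B is flagged (0.33 < 0.8).
    print(below_four_fifths({"Group A": 1.0, "Group B": 0.33}))  # ['Group B']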

Proceptual provides compliance services for the evolving tapestry of regulations related to AI and HR.

John Rood

John is a sought-after expert on emerging compliance issues related to AI in hiring and HR. He has spoken at the national SHRM conference, and his writing has appeared in HR Brew, Tech Target, and other publications. Prior to Proceptual, John was founder at Next Step Test Preparation, which became a leader in the pre-medical test preparation industry before selling to private equity. He lives in the Chicago area and is a graduate of Michigan State University and the University of Chicago.
