Last week we got the chance to sit down with attorney Mark Girouard of law firm Nilan Johnson Lewis. Mark has 20+ years of experience focused on legal aspects of pre-hire assessments, and he is a thought leader on how AI will change the landscape.

We discuss:

  • The longstanding legal frameworks surrounding use of pre-hire assessments
  • How AI is changing the landscape and how new regulation will create new compliance obligations
  • What HR leaders and execs can do now to prepare — and what to do if they find disparate impact in AI-driven tools

In particular, HR leaders and executives looking to stay compliant with emerging regulation of automated hiring tools will benefit — especially with NYC Local Law 144 (AEDT) going into effect on July 5.

Transcript

0:00:00
Welcome, everybody. This is John Rood, founder and CEO of Proceptual. I am happy to be joined today by Mark Girouard, an attorney at Nilan Johnson Lewis. Hi, Mark. Hi, John. Well, Mark, thanks for joining me today. We’re planning to talk for, you know, maybe 20 or 30 minutes. And our discussion today is going to be a wide-ranging and casual conversation around the world of HR regulation, specifically around artificial intelligence, which I know is something that has been of interest to both of us. I’d love it, Mark, if you’d maybe just give us a little bit of your background and a little bit of your particular interest areas.

0:00:44
Yeah, certainly. So, John, thanks for the opportunity to chat today. I’m an employment attorney here in Minneapolis at Nilan Johnson Lewis, as you mentioned. I’ve been practicing in this area for about 20 years now and primarily represent employers, so what we would call management-side employment law. One of my areas of specialty is pre-employment selection and screening: everything from personality tests to cognitive tests to other types of tools that employers use to assess potential talent, as well as screening tools.

0:01:21
So that’s things like background checks and those kinds of processes. And I’d say over the last probably five years, maybe six years, as I’ve been advising clients on their use of pre-employment selection tools in particular, I’ve increasingly been getting questions about artificial intelligence-based selection tools as those have become, I’d say, more prevalent in the marketplace. So, I guess, with those questions, I began to have an interest in this area. And then over the last maybe three or four years, I’ve begun to see a number of municipalities, states, and then federal enforcement agencies begin to also take an interest in the use of AI in hiring and selection and beginning to issue some regulations and laws in this space as well.

0:02:15
Gotcha. That’s an amazing background. I think what I’d like to do is, I want to talk about AI in a second, but you’ve got such a wealth of experience going back, as you mentioned, decades. So as you think about your practice before the last three years, when AI became a big part of this, I wonder if you could give just a quick overview: what are the core issues with pre-hire assessments that an HR practitioner who isn’t a legal expert should know about?

0:02:43
Yeah, so I’d say there are two bundles of issues, and both of them stem from federal legal requirements. The first set is around the potential for any selection tool, and that can be anything from a structured interview to a pre-employment assessment, to have adverse impact against candidates based on race or gender. The idea there is that if a tool selects people from one race or ethnic group or one gender group at a statistically significantly different rate than members of another group, that is considered prima facie evidence of disparate impact discrimination. Once that evidence is present, the burden shifts to the employer to show that the tool is job-related and consistent with business necessity, which is the term of art we use in the legal setting.

0:03:45
In the selection science space, I think validity is the term that’s used. But basically, it’s an opportunity to document that your tool is assessing for characteristics that are important for success in the job, that it is doing so with some degree of accuracy, and that there’s not another tool out there that would be equally valid but less likely to have adverse impact. So that’s the first bundle of issues. What it means for employers in a practical sense is that they should be assessing whether their selection tools have adverse impact based on race and gender at least once a year, and if so, make sure that they have documentation that the tools they’re using are actually valid for the purposes for which they’re using them. The other set of issues comes up under the Americans with Disabilities Act.

0:04:41
The ADA has certain requirements regarding when and how employers can use medical examinations or make medical inquiries in the hiring process. And there’s been a fair amount of attention and litigation, in particular on personality assessments and whether those are effectively functioning as medical exams or screening for disability. So there are stages in the hiring process during which medical exams can’t be used. In particular, they can’t be used until after a conditional job offer has been made. And then if they’re used, they have to be job-related. So there are timing rules and, again, validity rules around selection tools that might function effectively like a medical exam because they reveal information about disability.
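
To make the adverse-impact math concrete, here is a minimal sketch of the kind of annual check described above, using hypothetical applicant counts. The four-fifths rule and the pooled two-proportion z-test shown are standard analyses under the federal Uniform Guidelines on Employee Selection Procedures; the specific numbers and function names are illustrative, not drawn from any particular vendor or case.

```python
# A minimal adverse-impact check: selection rates, the four-fifths rule,
# and a pooled two-proportion z-test for statistical significance.
# All counts below are hypothetical.
from math import sqrt

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Pooled z statistic for the difference between two selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    p_pool = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: group A, 120 of 400 applicants selected; group B, 60 of 300.
rate_a, rate_b = 120 / 400, 60 / 300           # 0.30 and 0.20
impact_ratio = rate_b / rate_a                 # lower rate over higher rate
z = two_proportion_z(120, 400, 60, 300)

print(f"impact ratio: {impact_ratio:.2f} (four-fifths flag below 0.80)")
print(f"z statistic:  {z:.2f} (roughly 1.96 is the two-sided .05 cutoff)")
# Here the ratio is 0.67 and z is about 3.0, so both checks would flag the
# tool, and the employer would need validity documentation to defend it.
```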

0:05:42
Gotcha. Okay. That’s very interesting. I want to go back to something that you said. You were talking about how, if a tool has disparate impact, the employer has to show both that the tool is job-related and that nothing else could be used in its place. How can an employer prove that? Yeah, so I would say it’s not unusual. And if an employer is using, so let’s say it’s a cognitive test, or maybe make it a little more complicated than that.

0:06:19
So it’s a pre-employment test that includes some cognitive components, maybe an inbox exercise or some other kind of test like that, and then maybe some personality scales. The vendor who developed that test should be able to prepare what’s called a validation study report to say: we assessed your job, we determined what characteristics are important for that job, we developed a tool or put together components of a tool that predict for those important characteristics. And then in that discussion, you would say, and we explored other options. And the question is, what’s that “other”? I know that’s the part that I think is troubling for some employers, because they say, does that mean we also have to explore whether an interview would be as effective? Would a completely different type of tool be as effective and equally valid?

0:07:19
The law doesn’t require employers to go that far. What it does require is that, when you’re looking at, let’s say, the combination of those different components, you ask: is there a way we can weight the importance of the different parts of the test? Are there changes we can make to the scoring algorithm that leads to the eventual score on the test that are equally valid, or close to it, but less likely to have adverse impact? So it’s pretty typical in one of these validation study reports to have a section where the vendor talks about the efforts they made to explore other mixes or flavors of the same test in a way that minimizes the potential for adverse impact without sacrificing the validity and the predictiveness of the tool.
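
Here is a rough sketch of that alternative-weighting exercise on simulated data: search over component weightings for a composite score that stays valid, measured here as correlation with a job-performance criterion, while clearing the four-fifths threshold. The simulated scores, the 25% selection rate, and the weighting grid are all assumptions for illustration; a real validation study would use actual applicant and criterion data.

```python
# Explore alternative component weightings: keep validity high while
# reducing adverse impact. All data here are simulated.
from itertools import product
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)        # 0 = reference, 1 = focal group
components = rng.normal(size=(n, 3))      # cognitive, inbox exercise, personality
performance = components @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)
components[group == 1, 0] -= 0.5          # construct-irrelevant gap on one component

def impact_ratio(scores: np.ndarray) -> float:
    """Four-fifths-style ratio when selecting the top 25% of scorers."""
    selected = scores >= np.quantile(scores, 0.75)
    rates = [selected[group == g].mean() for g in (0, 1)]
    return min(rates) / max(rates)

best = None
for raw in product(np.linspace(0.1, 0.8, 8), repeat=3):
    w = np.array(raw) / sum(raw)          # normalize weights to sum to 1
    scores = components @ w
    validity = np.corrcoef(scores, performance)[0, 1]
    # Keep the most valid weighting that clears the four-fifths threshold.
    if impact_ratio(scores) >= 0.8 and (best is None or validity > best[1]):
        best = (w, validity, impact_ratio(scores))

if best is not None:
    w, validity, ir = best
    print(f"weights={np.round(w, 2)} validity={validity:.2f} impact ratio={ir:.2f}")
else:
    print("no weighting cleared four-fifths; document the tradeoff instead")
```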

0:08:15
Gotcha. Okay, that makes sense. That’s great background. Let’s move over to a discussion of how your practice has changed over the last three years as you’ve gotten more questions about AI. And I think that as our discussion progresses, we’ll probably dive into specific legislation. But one thing that was interesting to me about what you said is that we are now, I mean, we’re recording this in early 2023, and some of the very early laws are only now going into effect.

0:08:43
But over the last couple of years, what questions have you been getting from vendors and from employers specific to AI, and how have you been addressing them? Yeah, and I may go back a little further than the last couple of years, because I probably started getting these questions maybe six or so years ago. It was when clients were looking at implementing a new selection tool, and they put it out for a request for proposals, and they started to get proposals from new players in the field, which were data sciences firms that had an artificial intelligence-based tool.

0:09:24
And I’d say in the early days these were fairly rudimentary tools: resume-scraping tools and other things like that. So the questions were kind of, what is this thing? The vendor is telling us they can develop and implement it much more cheaply than a traditional assessment, and our HR folks, or our procurement folks, are very excited about that. So what is this thing, and should we be exploring it? I’d say over a couple of years I had a number of conversations with those vendors where I would say, you know, this seems like a great tool.

0:10:08
Can you show me what you can do to prepare a validation study report that’s compliant with federal standards? And they would sort of scratch their heads and say, well, our tool is valid. But “valid” has a really different meaning in the data sciences space than in the selection science space, where it’s a legal term of art rather than a question of whether the tool is performing the way you expect it to perform. I would say over a period of a few years, I saw the AI tool providers get much more sophisticated in this space. I think they were hearing these questions not just from me but from other in-house and outside legal counsel, and many of them began to bring industrial-organizational psychologists in-house who could do the work of preparing a validation study report to show that their tool was valid in a way that stood up under those federal legal standards. So that was the first shift. And then I’d say the second shift has really been since about 2019, as we’ve started to see more legislation and regulation in this space.

0:11:21
Okay. Yeah, that background makes a lot of sense. Let’s then dive into the world of legislation and regulation as we see it today. I wonder if you can talk through, and this is a pretty broad question, so take it whichever way you want, some of the legislation that you’re watching carefully. Then what I’d love to do is ultimately get to the trends or common threads that vendors, and especially employers and HR leaders, can be watching for. Yeah, and you know, John, that’s a great question about the trends and threads, because it really is something that has been evolving. I’d say the first step into this space was the state of Illinois, which adopted what I believe is called the Artificial Intelligence Video Interview Act. I may be butchering the exact name there, but basically it was a law that regulated the use of artificial intelligence to score video interviews. And I’d say this came out of a lot of press at the time about AI-based selection tools that were using facial recognition to score candidates and, really, were using characteristics where it was very hard to understand how they were actually related to performance on a job.

0:12:45
So the tool developer may have shown that particular micro facial expressions in a large set of data might be associated with job performance. But it was very hard for people to understand: how does that actually have anything to do with my ability to do the job? And so I’d say this Illinois Act was a bit of a reaction to the press coverage, because there was a significant amount at that time about these types of tools. But I think it laid out some characteristics that we’ve seen carry through in many of these laws. The first is just that there be a disclosure to the candidate that artificial intelligence is being used in the selection process. The second, and this really gets back to that point about people not understanding how these tools were actually job-related, is that there has to be a disclosure about the types of characteristics for which the tool is screening. And under the Illinois statute, there’s a consent requirement, so the individual has to consent to the use of AI to score their video interview.

0:13:56
And so I’d say that was initially the bundle of issues we saw: this idea of transparency, so that there’s notice that AI is being used, and then some sense of informed consent. You’re giving individuals enough information, both that the tool is being used and what it’s being used for, that they can make an informed decision whether to opt out or not. More recently, we’ve seen the state of New York, I’m sorry, the city of New York, adopt a local law around the use of what they call automated employment decision tools in hiring. And frankly, it has many of the same characteristics that we saw in the Illinois law, except that it applies more broadly: not just to video interviews being scored by artificial intelligence, but to a whole range of uses of artificial intelligence to make hiring or promotion decisions. But again, like the Illinois law, it requires disclosure that AI is being used, and it requires information about the characteristics that are being screened for. It’s a little bit different from the Illinois law in that, rather than requiring consent, it requires that there be an opt-out, so that an individual can opt out of, or request accommodation from, the use of artificial intelligence. And what was new about this law is that it also has some reporting obligations. Employers who use an automated employment decision tool are required to do an analysis of whether the tool has adverse impact based on race and gender. And then before they deploy the tool, they have to post the results of those analyses so that members of the public can go online and see whether this is a tool that’s likely to have adverse impact based on race or gender.
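
For reference, here is a minimal sketch of the impact-ratio arithmetic behind one of these Local Law 144 bias audits, assuming a simple selected/not-selected outcome and hypothetical counts. Under the published rules, the impact ratio for each category is its selection rate divided by the rate of the most-selected category; the rules also cover scored (non-binary) outputs and intersectional categories, which this sketch omits.

```python
# Local Law 144-style impact ratios for a binary selection outcome.
# Counts are hypothetical: {category: (selected, total applicants)}.
audit_data = {
    "sex": {
        "Male":   (200, 500),
        "Female": (150, 500),
    },
    "race_ethnicity": {
        "White":                     (180, 400),
        "Black or African American": (90, 300),
        "Hispanic or Latino":        (80, 300),
    },
}

for category_set, counts in audit_data.items():
    rates = {c: sel / total for c, (sel, total) in counts.items()}
    top_rate = max(rates.values())     # rate of the most-selected category
    print(category_set)
    for c, rate in rates.items():
        print(f"  {c:28s} rate={rate:.2f} impact ratio={rate / top_rate:.2f}")
```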

0:16:11
I’d say that law started off with a kind of nebulous definition of artificial intelligence. It’s narrowed since the law was first drafted, through a couple of sets of proposed rules, and I’d say it’s narrowed in a couple of ways. The first is that, on an initial read, it looked like the law applied basically any time math was being used in a decision, so just a simple scoring algorithm might be enough to come under the law. The proposed rules made clear that we’re really talking about the use of machine learning and similar tools, where there’s a machine, not a person, deciding what the inputs are, deciding what the outputs are, and refining those, with the machine doing that rather than a person.

0:17:02
The second piece of the definition that has become more clear is that it’s focused not on any use of AI in the selection process, but on the use of AI to either actually make the decision or substantially influence the decision. So if AI is the tiebreaker, or if the majority of the decision is made by AI, then it’s covered. If the AI tool is one data point in a much larger overall process and it’s not the predominant data point, then it’s not covered. What the law says is, you’ve got to tell us what your tool is screening for, or rather tell the world what your tool is screening for, and tell the world whether it has disparate impact based on race and gender. But it doesn’t embrace this concept that’s been around in the law of pre-employment selection since the late 70s, that a tool can have adverse impact but still be lawful if it’s valid.

0:18:08
And so I’d say that’s something that’s sort of oddly missing from that law. At the same time, it doesn’t require affirmative proof of validity. So, at least in terms of the burden on employers, they’re not required to make that affirmative showing, although frankly, they’ve already got that obligation under federal law. Yeah, and that’s interesting. So part of the work that we do here at Proceptual is that audit and compliance work related to New York 144.

0:18:39
And one of the questions that we get a lot before we start working with clients is, you know, we’re going to do this audit and we’re going to put the results on our website. One of the funky things about New York 144 is that it doesn’t really have a pass-fail feature, right? To “pass,” you do the audit and you put it on the website, regardless of what the data shows. But then of course, the next question they’ll have is: what happens if we do this audit and it shows that there is adverse impact? We put it on our website, we satisfy the city of New York, but what happens then?

0:19:12
How do you counsel a client in that situation? Yeah, so in those situations, I point back to their obligations under the federal law, and I say, technically, under the federal law, your obligations aren’t triggered unless the tool is shown to have adverse impact. But my advice is that if you’re using a tool, and especially using a tool at scale, it’s better to have done the validity study before you roll the tool out, rather than rolling it out, finding out it has adverse impact, and then scrambling after the fact to show that it was valid all along. So what I’ve been advising clients is: now that you’re going to have to post these data online, you had better have done that validity study work in advance, even though the New York ordinance doesn’t require it. I am certain that plaintiffs’ attorneys will be scraping employers’ career sites to see what they’re saying about adverse impact and gearing up to bring claims under federal law, because now they will have evidence of adverse impact that they otherwise wouldn’t have had. So if the employer is in a place where, when the plaintiff’s attorney comes knocking, they can say, yes, and here’s our validation study report showing that this is a valid tool, that allows them to hopefully head off those claims, rather than, again, having to scramble to demonstrate that the tool is valid after they’ve already gotten a claim.

0:20:42
Okay. Yeah, that’s super helpful. With the last couple of minutes that we have, let’s keep looking ahead. We’ve talked about this New York law, we’ve talked about the Illinois law. What else is on your radar? What are you watching closely? Yeah, I’d say there are maybe three areas I’m watching. The first is, as I mentioned before, the federal enforcement agencies have started to take an interest in this area. I know the EEOC and the OFCCP are both looking into developing guidelines around the use of artificial intelligence in hiring and in other talent management practices. So I wouldn’t be surprised to see those coming out soon.

0:21:25
In fact, the EEOC late last year already came out with some guidance around the potential of artificial intelligence to disparately impact people with disabilities. And kind of in line with that general idea of transparency, they are suggesting that employers provide information about the AI-based tools that they use that would allow someone with a disability to know whether or not they should request a reasonable accommodation. So we’ve already got the EEOC starting to move into that space.

0:21:59
I’m certain they’re going to move more fully into that space with additional guidance. The second piece, I’d say, is what I call copycat legislation. I know there has been proposed legislation in New Hampshire and New Jersey, for example, that tracks the New York City ordinance almost word for word. So we’re going to see similar standards there. The third, and this is the one I’m probably paying the most attention to, is a law currently proposed in California. I think it’s being heard by a committee next, well, I guess the 11th, so in less than a week. And that one is interesting, I’d say, for two reasons.

0:22:41
Well, three reasons, actually. The first is that it applies both to employers and assessment providers, or AI tool providers, so it doesn’t just put the onus of compliance on the employer. And I should say, I think of it in terms of employment, but it actually applies across the board to all kinds of things: housing, criminal justice, you name it, every aspect of life where AI is used, and one of those aspects is employment. So, let me take a step back. The things I find interesting about it are, first, that unlike these other statutes, it actually requires some discussion of validity. The second, which is the one I mentioned a moment ago, is that it applies both to employers and assessment providers. And the third is that it has, I would say, broader application in terms of how it defines artificial intelligence, although, like the New York City ordinance, it is still limited to tools that are really outcome-determinative rather than being a single data point.

0:24:06
But it does have a broader definition of artificial intelligence sitting behind that. And, I know I’m probably going over three, but there are actually a lot of interesting aspects to the law. Another is that it doesn’t focus just on the potential for disparate impact based on race and gender; it includes a number of other protected characteristics for which employers will have to show that their tool doesn’t have the potential for adverse impact. I think that may be difficult for many employers to do, because these are characteristics that aren’t normally tracked in the application process, and in fact, in many states, it would be unlawful to track them in the application process. So I think that’s going to be an interesting conundrum for employers to work through: how do they show that their tool doesn’t have potential for disparate impact based on religion when they’re not tracking information about applicants’ religion at the time of application?

0:25:07
And then the last thing, and probably the most worrisome, is that the law will take effect in 2025 with respect to the kind of compliance and reporting obligations that are very similar to the New York ordinance. But then in 2026, if it is adopted, it will create a private right of action. So unlike these other ordinances, where it’s a civil enforcement scheme with a government agency overseeing it, the California law will allow private lawsuits against employers for not complying with its requirements.

0:25:40
And the last one is… Go ahead. Yeah, sorry, go ahead, John. Well, that last one is really interesting. So in your experience, not just with AI, but over the last 20 years, how big of a deal is that private right of action? Does that really give the legislation more teeth, or is it not as important? I’d say that’s a huge deal. So, some time ago there was a U.S. Supreme Court decision, Wal-Mart v. Dukes, and I promise this will come back around. What the Dukes decision said is that there are many types of employment decisions that don’t lend themselves to treatment in a class action, because they’re individualized decisions involving the discretion of individual decision makers. But certain things, like a background check or a pre-employment assessment, because they’re across-the-board practices or policies, do lend themselves to big statewide or nationwide class actions. And I’d say after the Dukes decision, I saw a heightened focus from plaintiffs’ class action firms on things like pre-employment assessments and criminal background check policies, and big nationwide class actions being brought around them.

0:27:04
Because after Dukes, you could still bring a big statewide or nationwide class action on those bases. So I see this proposed California law as a place where we are going to see a ton of litigation, because it’s going to open the door for California plaintiffs’ firms to bring statewide class actions against employers who are using AI-based selection tools. And I think in terms of takeaways, I know we talked earlier about potential takeaways for folks in HR.

0:27:43
I have been surprised to learn how many of our clients may be using AI somewhere in their sourcing, recruiting, selection, screening, or post-hire talent management processes without actually realizing they’re using it. And that’s where I worry: many employers may be doing this already without realizing it, which is going to make it very hard for them to get into compliance before they get that lawsuit from someone saying, I was screened by this tool, as were tens of thousands of other people. So I’d say if there’s one cautionary note from all of this, it’s for business leaders and HR folks to really scrutinize the tools they’re using and reach out to their vendors, if they don’t already know, to make sure that AI is not being used, or, if it is being used, that it’s being used in a way that’s compliant with these new legal regimes.

0:28:44
Well, Mark, I think we’ll let that advice be the last word. That’s super helpful. We covered a lot of ground today: past, present, and future. So I want to thank you for your time. And I always love to ask: who are the right clients for you? If someone’s listening to this at their organization, how do they know that Mark’s the right person to call? I would say I have clients ranging from small single-site entities to multinational, Fortune 50 companies.

0:29:18
So I’d say anyone that is using selection tools at scale is probably the right client. Awesome. Well, we’ll include your contact information and your LinkedIn, Mark. Thank you so much for your time. Really appreciate it. Yeah, John, thank you. Really appreciate the opportunity to talk about this.