
EEOC Issues New Guidance on Artificial Intelligence In Hiring

on Thursday, 8 June 2023 in Labor & Employment Law Update: Sarah M. Huyck, Editor

The U.S. Equal Employment Opportunity Commission (“EEOC”) recently issued guidance for employers that use, or are considering using, algorithmic artificial intelligence (“AI”) tools in recruitment. While these tools can save time, the guidance answers employers’ questions about how their use could lead to employment discrimination under Title VII of the Civil Rights Act of 1964 (“Title VII”).

Artificial intelligence is a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1] Employers are increasingly using AI tools such as resume scanners, virtual assistants, and video interviewing software, which can evaluate candidates’ applications, responses, and even facial expressions against keywords and criteria entered by the employer.

Title VII prohibits discrimination in employment based on race, color, religion, sex, or national origin. The EEOC may evaluate selection procedures for an adverse impact on individuals who share one of these protected characteristics. A company’s selection procedures should be tied to job-related skills, and employers should avoid processes that would disproportionately exclude members of these protected groups.

In 1978, the EEOC issued its original guidance for employers on how to determine whether certain tests or selection procedures are lawful under Title VII: the Uniform Guidelines on Employee Selection Procedures (“Guidelines”). The agency’s new guidance addresses how those Guidelines apply to today’s technology-assisted hiring tools. We summarize some of the most pressing questions below.

  • Can use of an algorithmic tool be a “selection procedure” under the Guidelines?
    • The Guidelines still apply when an algorithmic tool is used to inform decisions to hire, promote, terminate, or take similar actions regarding applicants and employees.
  • Can employers assess their use of AI recruitment tools for adverse impact the same way they have for traditional selection procedures?
    • Employers should compare the proportions of applicants hired or promoted across different groups, and even across combinations of characteristics, to determine whether use of the tool may violate Title VII.
  • What is the “four-fifths rule,” and how can it help a company evaluate its selection procedures and tools?
    • The four-fifths rule is a rule of thumb for evaluating whether the selection rate for one group is substantially different from the rate for another. Under the rule, the selection rate for any group should be at least four-fifths (80%) of the rate for the group with the highest selection rate. For example, if group one is selected at a 30% rate and group two at a 60% rate, the ratio is 30 / 60 = 50%, which falls short of the 80% threshold (a minimal code sketch of this check follows this list). Because the four-fifths ratio is only a rule of thumb and is not appropriate in all situations, employers should confirm whether their software vendor relied on the four-fifths rule in assessing the tool for adverse impact.
  • Can an employer be liable even when an outside company created the software?
    • Even when an outside vendor created the recruitment tool, employers may still be responsible for selection procedures that violate Title VII. Employers should ask vendors what steps they have taken to prevent discriminatory selection procedures, evaluate whether any tool shown to have a disparate impact is job related and consistent with business necessity, and seek out alternative software that may have less of a disparate impact.
  • What should an employer do if a decision-making tool is found to have an adverse impact?
    • If a tool is found to have an adverse impact on a group protected by Title VII, employers should take steps to reduce or eliminate that effect, such as adjusting the algorithm or selecting an entirely new tool. Failure to adopt a less discriminatory alternative may give rise to liability. Employers should also routinely conduct self-analyses to proactively identify and address any disparate impact on protected groups under Title VII.
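
To make the four-fifths arithmetic above concrete, here is a minimal sketch in Python. The group labels, applicant counts, and the four_fifths_check helper are hypothetical illustrations, not part of the EEOC guidance or any vendor’s software, and a real adverse-impact analysis should involve counsel and appropriate statistical methods.

```python
# Minimal sketch of a four-fifths (80%) rule check on hypothetical hiring
# data. Group names and counts are illustrative only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Compare each group's selection rate to the highest group's rate.

    Returns True for groups whose rate is at least 80% of the highest
    rate (the four-fifths rule of thumb); False flags a group for review.
    """
    highest = max(rates.values())
    return {group: (rate / highest) >= 0.8 for group, rate in rates.items()}

# Example mirroring the article: group one selected at 30%, group two at 60%.
rates = {
    "group_one": selection_rate(30, 100),  # 30% selection rate
    "group_two": selection_rate(60, 100),  # 60% selection rate
}

print(four_fifths_check(rates))
# {'group_one': False, 'group_two': True}
# 30% / 60% = 50%, below the 80% threshold, so group_one flags for review.
```

As the guidance cautions, passing this rule of thumb does not guarantee compliance; courts and the EEOC may rely on other statistical measures of adverse impact.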

In sum, employers should not rely on a vendor’s assertions that its software or app is lawful. Confirm whether the software or app has been validated (i.e., evaluated to determine whether it actually measures what it purports to measure or predict). Even if it has been validated, employers should conduct their own analysis to determine whether use of the tool may nevertheless have a disparate impact on a protected group.


Kelli P. Lieurance
Matt A. Robinson, Summer Associate


[1] National Artificial Intelligence Initiative Act of 2020 § 5002(3), https://www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210
