
AI in the Workplace Part 1: Avoiding Title VII Discrimination Liability

By: Abhishek Ramaswami on behalf of Cadogan Law | September 2024

In May 2023, the Equal Employment Opportunity Commission (“EEOC”) released a guidance publication titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” which is focused on preventing discrimination against job seekers and workers. According to the EEOC guidance, AI can discriminate in violation of Title VII of the Civil Rights Act of 1964 (Title VII).

Title VII Concerns under the EEOC Guidance:

Employers now have a plethora of AI tools at their disposal to streamline hiring, firing, recruiting, discipline, retention, and performance evaluations, among other processes. This increased efficiency also has the potential to introduce bias and unintentional discrimination, creating liability for employers who use these tools. As such, employers must be careful not to let AI programs run afoul of Title VII. Title VII prohibits employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin; age discrimination against workers 40 or older is separately prohibited by the Age Discrimination in Employment Act. The EEOC makes clear that these laws apply to the use of AI and other new technologies in employment just as they apply to other employment practices.

The concern with AI is generally not intentional discrimination, but rather whether a facially neutral AI program that an employer uses has a disparate impact on protected groups. Examples of such AI tools include:

  • facial expression evaluation technology;
  • resume screening software;
  • virtual assistants or chatbots;
  • employee keystroke monitoring software;
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game, test, or quiz; and
  • employee monitoring or “rating” software, among others.

The EEOC guidance contained the following notable points:

Selection Procedures: The EEOC’s guidance makes clear that the EEOC treats employer use of algorithmic decision-making tools as an employment selection procedure under Title VII. A “selection procedure” is defined as any “measure, combination of measures, or procedure” used as a basis for an employment decision. Consequently, the use of algorithmic decision-making tools to “make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees” is subject to the EEOC’s Uniform Guidelines on Employee Selection Procedures under Title VII.

As a result, employers must pay careful attention to how they utilize algorithmic decision-making and other AI tools to verify that they do not result in a disparate impact under Title VII. The EEOC settled its first AI case in August 2023 against iTutorGroup on the basis that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (“ADEA”) because the AI hiring system it used “automatically rejected female applicants age 55 or older and male applicants age 60 or older,” screening out over 200 applicants because of their age.

However, even where an AI tool produces an adverse impact that would otherwise violate Title VII, an employer can potentially avoid liability if it can establish that the use of the algorithmic decision-making tool is “job-related and consistent with business necessity.” If the employer makes that showing, the burden shifts back to the employee or the EEOC to demonstrate that an equally effective, less discriminatory alternative was available and that the employer refused to adopt it.

The “Four-Fifths” Rule: As a rule of thumb, employers can use the “four-fifths rule” as a self-assessment when analyzing adverse impact with respect to algorithmic decision-making tools. Under the four-fifths rule, one selection rate is generally considered substantially different from another, indicating adverse impact, if the ratio of the lower rate to the higher rate is less than four-fifths (80%).

For example, if a personality test tool selects 25 of 50 White candidates and 10 of 30 African American candidates, the selection rate for White candidates is 50% (25/50) and the selection rate for African American candidates is 33% (10/30). The ratio of the two selection rates is 33%/50%, or 66%. Because 66% is less than 80% (four-fifths), the four-fifths rule indicates that the selection rate for African American applicants is substantially different from the selection rate for White applicants, which could be evidence of discrimination against African American applicants. It should be emphasized that the four-fifths rule is only a rule of thumb, and the EEOC cautions that it may not be appropriate in every circumstance.
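For readers who want to automate this self-check, the arithmetic is straightforward. Below is a minimal Python sketch using the numbers from the example above; the function names (selection_rate, four_fifths_ratio) are illustrative and are not drawn from any EEOC tool or guidance.

    # Minimal sketch of the four-fifths rule self-assessment described above.
    # The figures mirror the example in the text; all names are illustrative.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants in a group who were selected."""
        return selected / applicants

    def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
        """Ratio of the lower selection rate to the higher one."""
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    white_rate = selection_rate(25, 50)             # 0.50
    african_american_rate = selection_rate(10, 30)  # ~0.33

    ratio = four_fifths_ratio(white_rate, african_american_rate)  # ~0.66

    # A ratio below 0.80 is the rule-of-thumb indicator of adverse impact.
    if ratio < 0.8:
        print(f"Ratio {ratio:.0%} < 80%: possible adverse impact; investigate further.")
    else:
        print(f"Ratio {ratio:.0%} >= 80%: no adverse impact indicated by this test.")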

Employer Liability for Third-Party Vendors. The EEOC guidance makes clear that an employer can still be liable for any adverse or disparate impact of AI tools, even if it uses a program administered or developed by a third-party vendor, such as a software developer. Employers may be liable for the actions of their agents if the employer has given them authority to act on the employer’s behalf. For example, if an employer relies on the results of a selection procedure administered by a third-party vendor on the employer’s behalf, the employer might be responsible even if the vendor represented that its AI programs do not result in discriminatory practices. Though no guarantee against liability, the EEOC recommends that employers ask third parties what metrics they have used to assess whether their algorithmic decision-making tools result in adverse impact.

As a side note, vendors themselves could be liable under Title VII as well, under a theory that vendors are acting as an employment agency, indirect employer, or agent of the employer. The extent of vendor liability in these circumstances is currently being litigated in Mobley v. Workday.

Mitigating an Adverse Impact. If an employer discovers that its algorithmic decision-making tool results in a disparate impact that violates Title VII, the EEOC indicates that the employer can take steps to potentially avoid liability. The EEOC recommends that the employer take proactive steps to reduce the discriminatory impact, discontinue use of the tool altogether, select a different tool that does not produce the discriminatory result, or modify the tool during the development stage. Failure to adopt a less discriminatory algorithm that was considered during the development process may therefore give rise to liability.

Some of the more conspicuous examples provided by the EEOC of how AI can unintentionally discriminate include:

  • Video interviewing software analyzes applicants’ speech patterns to reach conclusions about their ability to solve problems and scores an applicant low when the applicant has different speech patterns due to a disability.
  • Monitoring software includes facial recognition that is less accurate for darker skin tones, leading to Black employees being more likely to be terminated.


Key Takeaways:

  1. Title VII applies to AI employment-related tools the same way it applies to other employment practices.

  2. AI does not eliminate the need for human oversight and intervention. If a selection process that uses AI could be found to have a disparate impact based on a protected characteristic in violation of Title VII, employers can be held liable regardless of intent.

  3. Employers must diligently vet and audit their third-party AI vendors and the tools they use. Employers can be held liable for any adverse impact caused by AI programs that are utilized or designed by third-party AI vendors, and employers cannot rely on a vendor’s own predictions or analysis of whether its AI tools will cause an adverse impact on protected groups. If a potential adverse impact is detected, employers should stop using the tool or redesign it; a failure to do so could result in employer liability under the EEOC guidance.

  4. Employers should frequently assess the impact of AI programs that make or inform employment decisions and, where there is a disparate or adverse impact, follow the EEOC guidelines; a rough sketch of such a periodic self-check appears after this list. An employer can potentially avoid liability if it can establish that the use of the algorithmic decision-making tool is “job-related and consistent with business necessity,” though a challenger may still prevail by showing an equally effective, less discriminatory alternative that the employer refused to adopt.
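As a rough illustration of what such a periodic self-assessment could look like in practice, the Python sketch below applies the four-fifths comparison across several applicant groups at once, measuring each group’s selection rate against the group with the highest rate, which is how the Uniform Guidelines frame the comparison. The group names and numbers are hypothetical.

    # Hypothetical periodic audit: compare each group's selection rate to the
    # highest group's rate and flag ratios below the four-fifths threshold.

    counts = {
        # group: (selected, applicants) -- illustrative numbers only
        "Group A": (25, 50),
        "Group B": (10, 30),
        "Group C": (18, 40),
    }

    rates = {group: sel / total for group, (sel, total) in counts.items()}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.0%} [{flag}]")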


As more and more businesses and individuals utilize AI, and the laws governing the use of AI in the workplace rapidly evolve, it is important to ensure that you are up to date on, and compliant with, the latest laws and regulations in this sphere. We are happy to assist you if you have any questions about AI in the workplace, or any other employment law or business matter. Please contact us to schedule a free consultation.

Biography: Abhishek Ramaswami is an Associate Attorney at Cadogan Law, practicing in all aspects of Labor and Employment Law from the agency level through trial, as well as Business Litigation. He is licensed to practice law in Florida, New York, and New Jersey, with pending licensure in Illinois. In his spare time, he enjoys sports, cooking, the outdoors, and traveling with his wife.