Artificial Intelligence in the Workforce

On June 7, 2022, Conn Maciel Carey LLP partners Kara Maciel and Jordan Schwartz interviewed EEOC Commissioner Keith Sonderling about the EEOC’s recent focus on Artificial Intelligence (AI) and its impact on workplace discrimination. 

AI refers to a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1]  AI can be embedded in software that performs tasks previously handled by human beings.  Relevant to the discussion with Commissioner Sonderling, employers can use AI in most employment and hiring decisions, such as whom to inform about a new position, whom to interview, and whom to select for a position.

When making those decisions, employers may face liability if they discriminate against an individual based on race, color, religion, sex, national origin, age, pregnancy, disability status, or genetic information.[2]  Unlawful discrimination can occur in two ways: disparate treatment and disparate impact.  Disparate treatment occurs when an employer intentionally discriminates against individuals, whereas disparate impact refers to unintentional discrimination, where an employer’s facially neutral policies or procedures disproportionately harm individuals in a particular protected class.
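One common screen for disparate impact is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the highest group’s rate, that disparity is generally treated as evidence of adverse impact. The sketch below illustrates the arithmetic; the applicant counts are hypothetical and not drawn from the interview.

```python
# Illustrative sketch of the four-fifths (80%) rule used to screen for
# disparate impact. All applicant and selection numbers are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical hiring outcomes for two applicant groups.
rate_a = selection_rate(selected=48, applicants=80)  # 0.60
rate_b = selection_rate(selected=12, applicants=40)  # 0.30

# Compare each group's rate against the highest group's rate.
ratio = rate_b / max(rate_a, rate_b)  # 0.50

# A ratio below 0.80 is generally regarded as evidence of adverse
# (disparate) impact warranting closer scrutiny.
flagged = ratio < 0.80
print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")
```

Even if the screening tool that produced these selections was facially neutral, a ratio this far below 0.80 is the kind of statistical disparity that can expose an employer to disparate impact liability.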

Employers should be aware, as Commissioner Sonderling stressed in his remarks, that AI technologies are only as good as the data and training used to develop them.  There have been numerous instances in which employers that used AI tools to assist with employment or hiring decisions were left with discriminatory results, and potential disparate impact liability, as a direct result of the technology.

Commissioner Sonderling offered some examples of ways that AI could unintentionally produce discriminatory results in employment decisions: