The Equal Employment Opportunity Commission (EEOC) announced a program in October 2021 to ensure that artificial intelligence (AI) used in the workplace does not produce anti-discrimination violations. The EEOC intends to collect information about the design, adoption, and impact of hiring and employment-related technologies through an internal working group; hold listening sessions with key stakeholders; and issue technical assistance on algorithmic fairness and the use of AI in employment decisions. The EEOC's press announcement of the program may be seen here.
The news should come as no surprise to anyone who follows the EEOC’s activities. The EEOC’s interest in AI may be traced back to a public EEOC meeting in October 2016 that discussed the use of big data in the workplace.
At that meeting, employment lawyers, EEOC commissioners, and computer scientists agreed that AI is no panacea for eliminating workplace discrimination. If not properly deployed and managed, the technology has the potential to introduce, and even worsen, unlawful bias. This is because most algorithms rely on a set of human inputs, such as the resumes of high-performing employees. If those inputs are not diverse enough, the algorithm can quickly reinforce existing institutional bias.
EEOC commissioners recently stated that the agency is concerned about undisciplined AI implementation, which could perpetuate or exacerbate bias in the workplace. As a result, the EEOC may consider using commissioner charges (independent investigations initiated by the agency, unrelated to any discrimination charge filed by an employee) to ensure that employers are not using AI in a manner that violates Title VII of the Civil Rights Act (Title VII) or the Americans with Disabilities Act (ADA).
Given the EEOC's increased focus on AI, retailers who use the technology should take precautions to mitigate risks while maximizing benefits. This article presents an overview of the benefits and potential hazards of employment-related AI technology, a refresher on commissioner charges, and steps retailers may take to limit their chance of being investigated by the EEOC.
AI’s Advantages for Retailers
The field of artificial intelligence (AI) is constantly evolving. Some retailers examine social media profiles using automated candidate-sourcing technology to identify which job postings should be offered to specific applicants. Others employ video interview software to determine whether a candidate exhibits desired attributes by analyzing facial expressions, body language, and tone. The applications are not limited to hiring, however. Some retailers use AI software to optimize their workforces, allowing the program to build employee schedules based on a variety of factors, such as employee availability, local or regional wage and timekeeping laws, and business initiatives and seasonal swings.
Regardless of the specific tool, AI is pitched to retailers as a technical innovation that simplifies the hiring process, improves candidate quality, boosts efficiency, and promotes diversity.
Time is perhaps the most obvious of these advantages. For example, AI can save hiring managers endless hours of sifting through resumes in search of suitable applicants. This is especially true for larger retailers, which receive tens of thousands of applications every year. That time saved can be put toward more productive endeavors.
AI may also expose retailers to previously untapped talent pools, and with a broader pool of candidates to choose from, retailers should expect more diverse and qualified new hires. Furthermore, eliminating or limiting human decision-making can help remove unconscious, or even intentional, human biases from hiring, scheduling, and other employment-related decisions.
The Potential for Discrimination
Although AI promises big rewards, it also carries significant risk. And although AI tools are unlikely to discriminate intentionally, that does not absolve employers of liability. This is because the law covers both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The greater danger for AI is disparate impact claims. In such cases, the employer's intent is irrelevant. The question is whether a facially neutral policy or practice (such as the use of an AI tool) has a disproportionate adverse effect on a protected group, defined by characteristics such as race, color, national origin, gender, or religion.
Because AI tools are so varied, each type of technology carries its own potential for bias. One recurring theme, however, is the potential for input data to have a discriminatory effect. Many algorithms rely on a set of inputs to define their search parameters. A resume-screening tool, for example, is frequently set up by uploading sample resumes of high-performing employees. If the tool is asked to identify resumes that resemble those samples, the technology will most likely reinforce the existing homogeneity.
Some examples are more subtle. The sample resumes may come predominantly from employees living in zip codes where one race or color is the majority. An AI program could then favor applicants from those zip codes while excluding applicants from zip codes with a different racial makeup. Similarly, an algorithm's preference for ".edu" email addresses may disqualify older candidates. In other words, if a company's workforce is predominantly made up of one race or gender, relying on previous hiring decisions could hurt candidates of a different race or gender.
Commissioner Charges as a Tool for Investigating Discriminatory Effects of Artificial Intelligence
The EEOC is concerned about the possibility of AI rejecting hundreds or thousands of employment applications based on biased inputs or defective algorithms. Because job applicants are frequently unaware that they were rejected for certain positions due to faulty or improperly calibrated AI software, the EEOC may turn to commissioner charges to uncover unlawful bias under Title VII and the ADA, most likely under the heading of disparate impact discrimination.
The EEOC is authorized under 42 U.S.C. § 2000e-5(b) to investigate alleged discrimination in charges "filed by or on behalf of a person claiming to be aggrieved, or by a member of the Commission" (emphasis added). Unlike employee-initiated charges, commissioner charges can be prompted by information from "any person or group." What distinguishes commissioner charges from employee-initiated charges, in other words, is their origin.
Commissioner charges, according to the EEOC, arise when: 1) a field office learns of possible discrimination from local community leaders, direct observation, or a state-run fair employment office; 2) a field office learns of a possible pattern or practice of discrimination while investigating an employee charge; or 3) a commissioner learns of discrimination and requests an investigation.
Regional EEOC field offices submit proposed commissioner charge requests to the EEOC's Executive Secretariat, which distributes them among the commissioners on a rotating basis. A commissioner then decides whether or not to sign a proposed charge, which allows the field office to open an investigation. Alternatively, a commissioner can bypass the referral process and file a charge with a regional field office directly.
Commissioner charges are processed in the same way as employee-initiated charges. The EEOC notifies the respondent of the charge and requests documents and/or interviews with company employees. The agency can use its administrative subpoena power to obtain evidence and seek judicial enforcement if necessary. EEOC regulations provide that the commissioner who signed the charge abstains from deciding the matter.
If the EEOC concludes that there is reasonable cause to believe discrimination occurred, the agency will usually try to reach a conciliation agreement with the company. Aggrieved individuals have the same remedies available in Title VII disparate impact claims: equitable relief in the form of back pay and/or injunctive relief.
Steps to Reduce the Risks of Discrimination
Retailers should be aware of the EEOC's focus on this issue, as well as the agency's ability to use commissioner charges to uncover disparate impact without an employee-filed charge. Retailers should take the following actions to avoid becoming the target of such investigations:
First, any retailer considering AI should require vendors to provide enough information to explain how the tool makes employment decisions. Vendors frequently refuse to reveal proprietary information about how their tools work or interpret data. Retailers, however, may ultimately be held accountable for the outcomes, so they must understand how candidates are chosen. At the very least, a retailer should secure significant indemnity rights.
Second, even after obtaining assurances and indemnity, retailers should consider auditing the AI tool before relying on it for decisions. To do so, retailers must be able to identify not just the candidates the tool accepted, but also those it rejected. Retailers should therefore confirm with vendors that the underlying data is retained, so they can properly audit the tool and review the results for any detrimental impact on members of protected groups. This auditing should be done regularly, not just when the system is first put in place.
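One common starting point for such an audit is the "four-fifths rule" of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the highest-scoring group is generally regarded as evidence of adverse impact. The sketch below illustrates the arithmetic only; the group names and numbers are hypothetical, and a real audit would involve counsel and appropriate statistical testing.

```python
# Illustrative sketch (not legal advice): a simple adverse-impact check
# using the four-fifths rule of thumb. All data below is hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a group advanced by the screening tool."""
    return selected / total

def four_fifths_flags(rates):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes: (applicants advanced, total applicants)
outcomes = {
    "group_a": (120, 400),  # 30% selection rate
    "group_b": (45, 250),   # 18% selection rate
}

rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
flags = four_fifths_flags(rates)
print(flags)  # group_b is flagged: 0.18 / 0.30 = 0.6, which is below 0.8
```

A flag under this rule is not proof of discrimination; it is a signal that the tool's outcomes warrant closer review, which is why retaining data on rejected applicants matters.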
Third, and perhaps most importantly, retailers should ensure that the tool's input or training data (e.g., resumes of model employees) does not represent a homogeneous group. If the input data reflects a diverse workforce, a properly functioning algorithm should, in theory, replicate or even enhance that diversity.
Finally, because this is a developing area, retailers must keep up with legal developments. When in doubt, retailers should seek legal advice before deciding whether and how to use AI to boost their workforce's efficiency, diversity, and competence.