Employers Beware: “There is No AI Exemption to the Laws on the Books”… 4 Steps to Consider When Using AI in the Workplace
Given the myriad ways artificial intelligence is now being used to streamline business processes, it’s no surprise that federal agencies are scrutinizing potential employment-related biases that can arise from using AI and algorithms in the workplace. Indeed, the Equal Employment Opportunity Commission (EEOC) Chair, Charlotte Burrows, recently called AI advancements a “new civil rights frontier.”

Earlier this week, she joined the leaders of several other agencies to announce their joint position on the use of AI, as well as their commitment to education and enforcement efforts. The agency leaders made clear that while we may see new laws and regulations addressing AI use, existing civil rights laws already govern how these new technologies are used. “There is no AI exemption to the laws on the books,” noted Lina Khan, Chair of the Federal Trade Commission (FTC).

So, what does this mean for employers? The time is ripe for you to review your policies and practices and consider performing an AI audit to flag and address potential biases in your systems. Here are four important steps you should consider taking when using AI in the workplace.
1. Recognize the Uptick in Workplace AI Use and Possible Enforcement Efforts
First, let’s define a few terms. Artificial intelligence (AI) generally refers to a computer performing tasks that would typically be handled by an employee. Algorithms are pattern-based rule sets for computers, which may be used, for example, in software programs for sales and marketing, as well as in recruiting efforts.
In fact, a recent FP Flash Survey showed that employers most commonly use AI for HR recruiting (48%) followed by sales and marketing (46%). Other popular uses include operations and logistics (32%), customer service (24%), and finance and accounting (24%).
The survey results showed that a solid number of employers are using AI – and that number will grow in 2023. The overwhelming majority of employers said their use of AI has been effective — and by far, the key benefit has been increased efficiency. Businesses also report improved accuracy and cost savings. You can read more about the FP Flash Survey results here.
You should note, however, that federal enforcement agencies are keenly aware of the uptick in AI popularity. “We have come together to make clear that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws,” said EEOC Chair Burrows.
2. Consider Conducting an Audit to Identify and Address Potential Bias
While many excellent tools are available for streamlining recruiting and other workplace processes, relying on such technology to make employment decisions might unintentionally lead to discriminatory employment practices.
Although we’re sure to see new laws at the federal, state, and local level regarding the use of AI in the workplace, federal authorities have stated that existing laws already apply to potential AI biases. "There are very important discussions happening now about the need for new legal authorities to address AI,” Burrows said on a press call, as reported by Law360. “But in the meantime, I want to be absolutely clear that the civil rights laws already on the books govern how these new technologies are used in the meantime.”
So, you should be sure to review your recruiting and other workplace tools for possible bias before you use them, and continue to do so periodically thereafter. Like a hiring manager, an AI algorithm may not intentionally screen out candidates based on a protected category, yet it may still unintentionally screen out a disproportionate number of qualified candidates in a protected category. This could happen, for example, if the screening criteria are modeled on the qualities of the employer’s top-performing employees and those workers come primarily from a specific demographic group.
As another example, if your system automatically rejects candidates that live more than 20 miles from the worksite, you may be unintentionally limiting the ethnic and racial diversity of the candidates you consider, depending on the demographics of the area.
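To make the disparate-impact concern concrete, here is an illustrative (and decidedly non-legal) sketch of the kind of selection-rate comparison a bias audit often starts with. It applies the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80% of the highest group’s rate is commonly treated as evidence of adverse impact. The group names and pass rates below are invented for illustration:

```python
# Illustrative four-fifths rule check. All data is hypothetical;
# a real bias audit involves statisticians and counsel, not a script.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: impact_ratio} for groups whose selection rate
    falls below `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Invented example: results of an automated 20-mile distance screen.
screened = {
    "Group A": (45, 100),  # 45% pass the automated screen
    "Group B": (30, 100),  # 30% pass -- impact ratio ~0.67, flagged
}
flagged = four_fifths_check(screened)
print(flagged)  # Group B falls below the 0.8 threshold
```

Note that passing a mechanical check like this does not make a tool “bias-free,” and failing it does not automatically establish liability; it is simply one screening metric among several that auditors and regulators look at.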
Although many technology vendors may claim that the tool they have is “bias-free,” you should take a close look at what biases the technology claims to eliminate. For example, it may be focused on eliminating race, sex, national origin, color, or religious bias, but not necessarily focused on eliminating disability bias. You should also review the vendor’s contract carefully (specifically the indemnification provisions) to determine whether your company will be liable for any disparate impact claims.
Additionally, employers that use third-party vendors to conduct background investigations have certain obligations under the Fair Credit Reporting Act (which is enforced by the Consumer Financial Protection Bureau). “Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” said CFPB Director Rohit Chopra in a statement. He added that the CFPB “will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.”
Thus, now is a good time to consider conducting an independent audit of your AI and algorithm-based tools to help eliminate any bias.
3. Review the White House Blueprint for an AI Bill of Rights
Last year, the White House Office of Science and Technology Policy released its “Blueprint for an AI Bill of Rights,” a non-binding, 73-page whitepaper intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems. This blueprint includes five key principles to help protect Americans in the age of artificial intelligence, and employers would be wise to consider them when developing their own policies and practices.
The blueprint is designed to support policies and practices to protect individuals’ rights in the development and use of automated systems. For businesses, however, this is a strong sign from the White House that it is taking artificial intelligence seriously. It is also an indication that future – and significant – legislation surrounding artificial intelligence will likely be proposed at the federal and state levels. Businesses should stay abreast of these developments to ensure that their practices are in compliance with applicable rules and regulations governing artificial intelligence. For more on this point, read our Insight, here, about the five key principles you should incorporate.
4. Monitor for New State and Local Laws
You should also note that states and localities are beginning to scrutinize the use of AI in the workplace. For example, a law that went into effect on January 1, 2023, in New York City (with enforcement delayed until July 5, 2023) requires employers to get a “bias audit” for all automated employment decision tools. This is an impartial evaluation by an independent auditor that tests, at minimum, the tool’s disparate impact upon individuals based on their race, ethnicity, and sex. This law also contains strict notice and disclosure requirements. You can read our Insight about this law here.
Additionally, there are currently proposed laws in California, Washington, D.C., and Colorado. You should anticipate that many other states and cities will adopt similar requirements for bias audits.
If you have questions about the best ways to maximize the value of AI in your workplace while reducing legal, ethical, and reputational risks, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney on our Artificial Intelligence Practice Group. We will continue to monitor further developments and provide updates on this and other workplace law issues, so make sure you are subscribed to Fisher Phillips’ Insight System to gather the most up-to-date information.