7 Key Considerations When Using Artificial Intelligence In Recruitment And Hiring
In the new age of remote work and social distancing, more and more employers are showing an interest in artificial intelligence (AI) when it comes to recruiting and hiring new talent. This includes using AI to automate the sourcing of potential candidates and screen candidates from an existing candidate pool, and utilizing video interviewing tools that can measure a candidate’s strengths based on factors such as facial expression, speech patterns, body language, and vocal tone. Such tools are designed to identify and advance candidates who meet certain job-related criteria.
By applying AI in pre-employment assessments and interviews, employers are able to streamline their recruiting processes and screen through a seemingly unmanageable pool of candidates while maintaining social distancing practices and fostering a safe work environment. This practice, however, needs to be carefully managed to minimize legal risks. This article reviews those risks and some recent developments at the state level before concluding with seven key considerations you should take into account when implementing AI-based recruiting and hiring systems.
What Are The Risks?
While there are certain benefits to using AI in hiring and recruitment, there are various risks employers should be aware of when considering this technology. For instance, there are privacy concerns which vary depending on the technology employed and the data collected. While legal protections have expanded to limit employers from overreaching in collecting, storing, and using personal data, you must be careful to navigate state privacy and electronic surveillance laws, as well as potential HIPAA concerns, when implementing this technology.
There is also a risk of potential bias and discrimination. While no reasonable employer would intentionally use an AI program to illegally discriminate against a segment of job applicants, system limitations could lead to inadvertent employment law dangers. AI software is only as good as the data and algorithms that it uses, and data sets can contain implicit racial, gender, or ideological biases, which inherently make the AI system unreliable.
One of the concerns with using AI is that hiring algorithms are often trained on the resumes and backgrounds of job seekers who were successfully hired in the past. Therefore, if a company has a history of hiring only a certain type of individual – i.e., white males or younger individuals – the AI tool may prioritize candidates with profiles similar to the company’s current employees. This could obviously put women, minorities, or older individuals at a disadvantage. Moreover, resume scanning tools that evaluate an applicant’s past experience could discriminate against women who are returning to the workforce after an extended absence. Thus, there is a justifiable concern that AI could disadvantage groups of people who do not fit the pre-established criteria underpinning the algorithms responsible for predicting who will be the most successful performer for the job in question.
In addition, AI tools that evaluate candidates based on their word choice or expressions could unlawfully discriminate against applicants. For instance, voice recognition programs utilized to screen oral interviews might not be attuned to sort through speech impediments, native accents, or nervous-sounding answers caused by mental impairments.
What Is Being Done To Address These Risks At The State Level?
While using AI in recruitment is not yet regulated on a federal level, there are several states which have enacted or proposed legislation regulating AI in employment. Illinois is the first state to regulate an employer’s use of AI in the hiring process. The Artificial Intelligence Video Interview Act, which has been in effect since January 1, 2020, requires organizations hiring for jobs “based in” Illinois that use “artificial intelligence analysis” of video interviews to comply with certain requirements. These requirements include:
- Notifying the applicant that AI may be used to analyze the video;
- Providing the applicant with information about how the AI works and evaluates general characteristics;
- Obtaining consent from the applicant to be evaluated using AI;
- Limiting the distribution and sharing of the video to only those persons whose expertise is necessary to evaluate the applicant; and
- Destroying the applicant’s video within 30 days upon request by the applicant.
While this statute does not provide for a private right of action or damages, it is certainly something which employers hiring for jobs based in Illinois should be aware of. In addition to Illinois, Maryland has also recently enacted a statute, which takes effect on October 1, 2020, prohibiting the use of facial recognition services without an applicant’s consent. Although Illinois and Maryland are leading the charge in this area, other states such as New York and California have proposed legislation governing the use of AI software in employment decisions. You should make sure to monitor these developments in the jurisdictions in which you do business.
What Should You Do? A 7-Step Plan
Employers interested in using AI technology for recruitment and hiring should proceed with caution. To minimize exposure to liability, you should consider the following seven steps:
- Make sure there are reasonable and appropriate systems in place to prevent unauthorized access to personal data, and develop audit protocols to re-evaluate your cybersecurity procedures on a regular basis.
- Be transparent with candidates if AI is going to be used in the recruitment and hiring process, including letting candidates know from the outset exactly how the AI will be used. Make sure to also obtain the candidate’s express written consent.
- Ensure that the AI does not present any discriminatory barriers to hiring. This includes working with the third-party vendors providing the AI technology to understand the algorithm, auditing the system before it is deployed, and developing internal processes to assess and remediate any biases that may develop over the course of implementing the tool.
- On a related note, make sure to provide accommodations to candidates who are unwilling or unable to use AI during the recruitment and hiring process.
- Limit the distribution and sharing of any recordings to only those whose review is necessary to evaluate potential applicants, and keep a record of who has access to each recording to demonstrate reasonableness.
- Consult existing state laws and continue to monitor for developing legislation to ensure compliance with applicable law.
- Finally, you should seek advice from counsel before implementing a program based on the use of AI software.
As AI technology in recruitment and hiring continues to develop, it may provide a viable solution for many employers to maintain social distancing measures and scale their search processes. The use of these technologies presents both promise and potential privacy and discrimination concerns. To maximize the benefit of using AI technology in hiring and recruitment, you should take steps to ensure that you have a strong understanding of the technology employed and the data collected, along with how it is maintained and secured. As noted above, it is critical that you be transparent with candidates if you intend to use such technology when evaluating them. You should also train your management-level employees on how to use such data in order to reduce any resulting risks.
A comprehensive understanding of these issues, coupled with an appropriate disclosure and notification to candidates, will help to maximize potential benefits and reduce legal risks.