The Future Is Now: Robots And Artificial Intelligence In The Workplace
While it may be some time before we commute to work in flying cars or seek a transfer to our company’s lunar outpost, another concept once thought outside the realm of modern reality is now increasingly ordinary in the contemporary workplace: working side-by-side with robots and machines capable of artificial intelligence. This article provides an overview of some of the ways in which these once-futuristic technologies are being integrated in today’s work environment, and offers best practice suggestions for human resources professionals and in-house counsel adapting to these developments.
Artificial Intelligence At Work
We have reached the point of “minimum viability” when it comes to artificial intelligence (AI) – we can now count on the reliable use of AI products to perform meaningful work. Long past are the days when AI was little more than a novelty (remember asking iPhone’s Siri whether it was raining outside?). The technology to integrate AI into necessary functions is now available, the data needed to power AI has been accumulated, and investors are pouring money into AI systems to make them a worthwhile part of everyday life.
Having reached minimum viability, we now stand on the cusp of revolution. The last such revolution took place over the past decade as business moved to mobile platforms; if your organization cannot conduct business over a smartphone or mobile device in today’s world, you are practically a dinosaur. The next such revolution will involve AI. It’s not a stretch to say that if you are not meaningfully integrating AI into your business within the next five years, you could become similarly obsolete.
General Principles Of Artificial Intelligence
For those unfamiliar with AI, or unaware of the ways in which it could impact the modern workplace, it’s helpful to define some terms and concepts. AI refers to any algorithm designed to respond to particular inputs in a way that maximizes the likelihood of a “success condition.” The old approach to AI required a programmer to identify certain data points that would best guide the algorithm toward whatever had been defined as “success.” For example, an old spam filter would search emails for a select list of words or phrases (such as “free” or “limited time”) or even a combination of ordinarily innocuous words (such as “prince,” “Nigeria,” and “transfer”) to determine whether an email is unwanted spam.
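The “old approach” described above can be sketched in a few lines. This is a purely hypothetical illustration – the word lists and matching logic are invented for the example:

```python
# Hypothetical old-style spam filter: flag an email if it contains any
# programmer-selected suspicious term, or a combination of ordinarily
# innocuous words the programmer decided is suspicious together.
BLOCKED_TERMS = {"free", "limited time"}
SUSPICIOUS_COMBO = {"prince", "nigeria", "transfer"}

def is_spam(email_text: str) -> bool:
    text = email_text.lower()
    # Flag on any single blocked term or phrase...
    if any(term in text for term in BLOCKED_TERMS):
        return True
    # ...or only when ALL words of the suspicious combination appear.
    return all(term in text for term in SUSPICIOUS_COMBO)

print(is_spam("Claim your FREE gift today"))                    # True
print(is_spam("The prince of Nigeria requests a transfer"))     # True
print(is_spam("Meeting notes attached"))                        # False
```

Note that every rule here had to be hand-chosen by a programmer in advance – which is exactly the limitation the next paragraphs describe.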
Basic systems were limited in the number of data points they could consider; they were not flexible. Precision was similarly limited because they were bound by the amount of unique data that could be provided for training, and the programmer only had finite processing power available to train the algorithm. Accuracy had to be sacrificed, to some extent, to deliver a level of practical efficiency.
But with current advances in computing power, programmers are able to take a “brute force” approach and consider virtually all available data points. In the context of a primitive spam filter, an algorithm may be capable of assigning a predictive value to every word it encounters, as opposed to a few select words the programmer determined to be useful. For AI to be valuable, though, the algorithm has to “learn” and improve with each encounter. Increased processing power and near-universal access to data have improved AI algorithms so significantly that they can now do exactly that.
Human programmers might set starting values for evaluating certain data points, but what makes the algorithm “intelligent” is its ability to change those values as it is exposed to failure conditions and success conditions relating to those data points. To return to our primitive spam filter, a programmer might initially code the algorithm to treat the term “prince” as a signal that an email should be flagged and blocked. Using modern AI, however, that starting point could change after the algorithm is exposed to thousands of non-spam emails containing the term “prince” and only two spam emails containing it. The human programmer’s initial presumption is thus supplanted by machine learning. The more data the algorithm can learn from, the more refined the result – but this also makes human involvement, beyond defining success and failure conditions, less significant.
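The learning mechanism just described can be illustrated with a toy filter. Everything here is hypothetical: the starting counts stand in for the programmer’s initial presumption, and they shift as the filter observes labeled emails:

```python
from collections import defaultdict

class LearningFilter:
    """Toy sketch: per-word spam estimates that start from a programmer's
    presumption and are revised as labeled emails are observed."""

    def __init__(self):
        # Programmer-set starting values: "prince" presumed spammy (9 of 10).
        self.spam_counts = defaultdict(int, {"prince": 9})
        self.ham_counts = defaultdict(int, {"prince": 1})

    def observe(self, word: str, is_spam: bool) -> None:
        # Each success/failure condition adjusts the stored counts.
        if is_spam:
            self.spam_counts[word] += 1
        else:
            self.ham_counts[word] += 1

    def spam_probability(self, word: str) -> float:
        spam, ham = self.spam_counts[word], self.ham_counts[word]
        total = spam + ham
        return spam / total if total else 0.5  # unknown words: neutral

f = LearningFilter()
print(f.spam_probability("prince"))            # the human presumption: 0.9
for _ in range(1000):                          # thousands of legitimate emails
    f.observe("prince", is_spam=False)
f.observe("prince", is_spam=True)              # ...and only two spam emails
f.observe("prince", is_spam=True)
print(round(f.spam_probability("prince"), 3))  # presumption supplanted: 0.011
```

The point of the sketch is that no human rewrote the rule – exposure to data did.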
Reductions In Force Associated With AI And Robotics Integration
When human involvement becomes less significant, fewer jobs are needed. In the short term, the AI revolution will lead to a loss of jobs. For example, a recent report indicated that Goldman Sachs replaced 600 traders with 200 computer engineers to support new and improved automated trading programs.
As businesses move forward with the implementation of transformative technologies like AI and robotics, unions and individuals will undoubtedly pursue legal challenges to save or protect as many jobs as possible. If you desire smooth implementation of any new technology in the workplace, you should examine all potential legal challenges and develop a strategic implementation plan. Most importantly, you should strongly encourage your organization to invite human resources and in-house counsel to the table when discussing how to integrate robotic and AI automation into the workplace. You can assist with developing legally defensible implementation plans to minimize associated transition costs.
Before developing an actual reduction in force (RIF) plan, you should first consider a voluntary termination plan that complies with the Age Discrimination in Employment Act. An offer of monetary payment in exchange for a full release of claims should be presented to employees in a measured and thoughtful manner to minimize later claims of fraud, duress, or coercion.
Once any releases are secured, you can determine how to meet applicable statutory and contractual obligations triggered by the anticipated RIF. The Worker Adjustment and Retraining Notification Act (WARN) requires certain larger employers (generally those with 100 or more full-time employees) to follow statutorily prescribed procedures in the event of a plant closure or “mass layoff” – a reduction at a single site of employment during any 30-day period of at least 500 employees, or of at least 50 employees who make up at least 33 percent of the workforce at that site. This includes serving advance written notice on any union representative, affected workers, and the appropriate state dislocated worker rapid response agency at least 60 calendar days before the first individual termination. Many state laws impose additional WARN-like obligations.
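As a rough sketch only, the federal “mass layoff” thresholds described above can be expressed as a simple check. Actual WARN coverage turns on additional statutory details, exceptions, and state-law analogues, so treat this as an illustration of the arithmetic, not a compliance test:

```python
def warn_mass_layoff(site_workforce: int, laid_off: int) -> bool:
    """Simplified federal WARN 'mass layoff' arithmetic for a single site
    of employment over a 30-day period (illustration only)."""
    # 500 or more laid off always meets the threshold.
    if laid_off >= 500:
        return True
    # Otherwise: at least 50 laid off AND at least 33% of the site workforce.
    return laid_off >= 50 and laid_off >= 0.33 * site_workforce

print(warn_mass_layoff(site_workforce=300, laid_off=120))   # True: 120 >= 50 and >= 33% of 300
print(warn_mass_layoff(site_workforce=1000, laid_off=100))  # False: under 500 and under 33%
```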
If any reduction involves unionized workers, you will have additional obligations. You will most likely be obligated to bargain regarding the matter, as the NLRB views implementation of new technologies that impact union jobs as a mandatory subject of bargaining. Consequently, you should provide ample notice of the anticipated operational change and a meaningful opportunity for the union to negotiate over the effects of the new technology on the workforce – and perhaps even the initial decision to implement the change itself. Make sure to memorialize all communications with the union to build a record that you met the good faith bargaining obligation.
Some may resist the AI revolution, believing robots will take all of the jobs at your organization. Be ready to assure your workforce that RIFs are not the natural conclusion to all technological advances. Introducing AI to the workplace and putting certain jobs on “auto-pilot” does not always replace jobs; it could simply augment them. For example, AI could help your organization develop “smart” systems to automate repetitive manual operations (such as data entry), thereby freeing junior workers to handle higher-value, strategic work.
The Pitfalls Of Using Artificial Intelligence For Hiring
Your organization may soon take advantage of AI by using it to screen a seemingly unmanageable pool of employee candidates. A well-developed AI system could effectively eliminate candidates from consideration without any human involvement – but it could also result in claims of employment discrimination. While no competent employer would intentionally develop an AI program to illegally discriminate against a segment of job applicants, system limitations could lead to inadvertent employment law dangers.
Last year, the White House published a 22-page report detailing occurrences of unintentional discrimination resulting from “Big Data”-enabled screening processes. The report explained how a poorly selected data set used for machine learning could result in a system that inherits bias from human decisions, considers factors that disproportionately correlate with applicants’ protected status, or disproportionately represents certain populations over others. It is thus important to ensure that whatever AI system your organization chooses to screen initial candidates is trained on appropriate data.
Just as your company spam filter might screen unwanted work emails well but be poorly trained to do the same job on your personal account, a personnel record filter originally built to find candidates for an engineering position in Silicon Valley may be ill-suited for hiring for a manufacturing position in Nashville. The candidate groups likely have different education levels, different word usage in their resumes, different patterns of employment, and other differences in non-protected categories that can result in poor hiring decisions. Further, any voice-recognition programs you develop to receive and screen oral interviews might not be attuned to sort through speech impediments, native accents, or nervous-sounding answers caused by mental impairments.
In some respects, the old approach is still better: personal screening of applications ensures properly calibrated treatment. At the very least, a decision-maker within your organization can attest that the screening process was fair and unbiased. An old-fashioned probabilistic filter, designed around parameters an HR manager can easily understand or even request, is also preferable to a Big-Data-sourced screening service that is not specifically calibrated to the employer’s needs. This way, if an employment decision is ever scrutinized for alleged discrimination, your screening process can be easily described and defended.
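A transparent filter of the kind described above might look something like the following sketch. The criteria, weights, and passing score are invented for illustration – in practice they would come from, and be explainable by, the HR manager:

```python
# Hypothetical transparent screening filter: every parameter is something
# an HR manager chose and can explain if the process is later scrutinized.
SCREENING_CRITERIA = [
    # (human-readable description, weight, test applied to the record)
    ("holds required certification", 40, lambda c: c["certified"]),
    ("3+ years relevant experience", 35, lambda c: c["years_experience"] >= 3),
    ("within commuting distance", 25, lambda c: c["distance_miles"] <= 50),
]
PASSING_SCORE = 60

def screen(candidate: dict) -> tuple[int, list[str]]:
    """Return a score plus the human-readable reasons behind it."""
    score, reasons = 0, []
    for description, weight, test in SCREENING_CRITERIA:
        if test(candidate):
            score += weight
            reasons.append(description)
    return score, reasons

score, reasons = screen(
    {"certified": True, "years_experience": 5, "distance_miles": 120}
)
print(score >= PASSING_SCORE, reasons)
# True ['holds required certification', '3+ years relevant experience']
```

Unlike an opaque Big Data model, every accepted or rejected candidate here comes with a defensible, plain-English explanation of the decision.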
Safety Issues Surrounding Robots And AI
The use of robotics and AI in the workplace introduces some additional novel questions for human resources professionals and in-house counsel. One of the more significant questions: what can be done to improve employee safety when integrating new technologies?
Generally, employers are required to review all working environments to identify risks of employee exposure to occupational hazards. The Occupational Safety and Health Administration (OSHA) refers to this procedure as “hazard assessment.” Once you have identified a risk of employee exposure, you must implement some sort of hazard control: elimination, engineering controls, administrative controls, or personal protective equipment, listed in order of general preference.
If you implement a sophisticated robot with complicated programming that only a small handful of engineers in the world understand, it may be difficult to determine whether a hazard exists at all. It may be even more difficult to identify controls that adequately address the hazard. Before introducing robotic technology to your workplace, you should ensure your organization has a basic understanding of how the robot’s capabilities are constrained: namely, what data it considers when determining its next action, what actions it is capable of taking, and what failsafe conditions will trigger protective measures such as a shutdown.
More uncertainty could arise in manufacturing or production settings, where employers must determine an acceptable response to an incident or accident. Ordinarily, when robots are not involved, an employer’s script is well established: (1) identify the root cause of the incident, (2) determine what adjustments or repairs are necessary in practice or procedure, (3) issue discipline to employees if appropriate, and (4) retrain employees as necessary.
But when the root cause involves the logic of a robot, what can you do? You can’t discipline a robot. Unless you have the proper resources, you can’t retrain a robot. And without permission from the manufacturer, it may be unlawful or unsafe to attempt to adjust a robot’s learning. It might take highly specialized engineering skills to understand why the robot did not behave as expected. And completely replacing a manufacturing line may be more of a nuclear option than a realistic solution.
When incorporating newer robotics into manufacturing lines, you should engage outside counsel trained to assist with workplace safety as early in the process as possible. They can help ensure employee safety with appropriate contingencies and develop a defensible record should an incident occur despite your best efforts.
Coupling Wearable Technology And AI
Employers are starting to recognize the value of wearable technology at work. Wearable technology collects various types of data from an individual, allowing the data to be catalogued and used for various purposes. When coupled with AI, the potential is limitless. For example, an AI system could identify systematic inefficiencies by tracking worker movements, and then use advanced algorithms to suggest improved workflow in many workplace environments.
Privacy issues abound, however, when tracking the location of employees using this type of technology. With respect to employer-owned property and vehicles, courts in several states continue to honor employers’ right to gather information when the individual employee has no reasonable expectation of privacy. However, a few states have regulated electronic tracking in general – some outlawing the practice entirely – while others provide exceptions for certain persons or require you to secure worker consent. You should determine the appropriate legal limitations before collecting any such information.
Other employers use information gathered from wearable technology to improve wellness support for employees, accumulating data to learn how to most effectively allocate resources for wellness programs. Obviously, individuals expect the privacy of this information to be honored. Legal protections have expanded to limit employers from overreaching in collecting, storing, and using personal data. Therefore, you must be careful to navigate state privacy and electronic surveillance laws, as well as potential HIPAA concerns, when implementing this technology.
In most situations, consent is key. When collecting data from wearable technologies for a company wellness program, you must notify participating employees what personal information is being collected, how it will be used, and to whom it will be disclosed. It is also important to inform employees that personal data will be collected by the wearable tech at times they are not working. This could, of course, result in employees removing the technology outside work, which might defeat the overall objective.
Data Security Concerns
Finally, any time you gather data, you are susceptible to cybersecurity threats and data breaches. This area of the law continues to develop rapidly, requiring near-constant attention from human resources professionals and in-house counsel. You should examine whether your company has reasonable and appropriate systems in place to prevent unauthorized access to personal data, and develop regular audit protocols to re-evaluate those systems on an ongoing basis.
This article touches but a few ways in which AI and robotics could intersect with your organization’s mission in the near future. Be sure to demand a seat at the decision-making table and offer your human resources or legal expertise when your organization takes the next step in integrating these modern technologies into the workplace.