Are Autonomous Vehicles Racist?
3.12.19
According to a recent study, people with darker skin are more likely to be hit by autonomous vehicles than people with lighter skin. Researchers from the Georgia Institute of Technology concluded that object detection systems of the type used in autonomous vehicles had uniformly poorer performance when detecting pedestrians with darker skin types.
The study used the Fitzpatrick skin type scale, which classifies people by physical attributes such as hair and eye color and their tendency to freckle, burn, or tan when exposed to UV light. The researchers divided the scale's six categories into two groups, one for lighter and one for darker skin tones, then tested pedestrian detection in road scenes for each group. Detection rates were lower for the darker-skinned group.
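For illustration only, here is a minimal sketch of how such a grouping might be encoded. The study's code is not public, and the split of types I-III versus IV-VI shown below is an assumption based on the conventional lighter/darker division of the Fitzpatrick scale, not a detail confirmed by the article.

```python
# Hypothetical mapping of the six Fitzpatrick skin types into the two
# groups the study compared. The I-III vs. IV-VI cutoff is an assumption;
# the source only says the six types were split into lighter and darker.
FITZPATRICK_TO_GROUP = {
    1: "lighter", 2: "lighter", 3: "lighter",  # types I-III: burn easily, tan poorly
    4: "darker",  5: "darker",  6: "darker",   # types IV-VI: tan easily, rarely burn
}

def skin_group(fitzpatrick_type: int) -> str:
    """Map a Fitzpatrick type (1-6) to the study's two-group comparison."""
    return FITZPATRICK_TO_GROUP[fitzpatrick_type]

assert skin_group(2) == "lighter"
assert skin_group(5) == "darker"
```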
According to Vox, the study “should be taken with a grain of salt” because it has not been peer reviewed and did not test the pedestrian-detection models or training datasets actually used by autonomous vehicle manufacturers, who do not make their data available for outside study.
But this is not the first time image recognition systems have demonstrated higher accuracy for whites. For example, last year, Amazon’s facial recognition system was tested by the ACLU and incorrectly matched 28 of the 535 members of Congress to mugshots of arrestees. The ACLU reported that “Nearly 40 percent of the false matches were people of color, even though they make up only 20 percent of Congress.”
The Georgia Institute of Technology researchers also investigated potential causes of the discrepancy found in their study. They were able to rule out occlusion by other objects or people, as well as varying lighting conditions. They then examined the dataset's composition: it contained roughly three times as many light-skinned pedestrians as dark-skinned ones. They found that correcting for this imbalance, by reweighting examples during machine learning, could improve detection of dark-skinned pedestrians.
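As an illustration of the reweighting idea, here is a minimal sketch using a PyTorch sampler. The study's actual models, data, and code are not public, so the toy dataset, the 3:1 split, and the sampler-based approach below are assumptions for demonstration only.

```python
# A minimal sketch of per-example reweighting during training, assuming a
# PyTorch pipeline. The 300/100 split mirrors the roughly 3:1 imbalance
# the researchers described; everything else here is synthetic.
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Toy dataset: 300 examples from group 0 (lighter) and 100 from group 1 (darker).
features = torch.randn(400, 8)
group = torch.cat([torch.zeros(300, dtype=torch.long),
                   torch.ones(100, dtype=torch.long)])
dataset = TensorDataset(features, group)

# Weight each example inversely to its group's frequency so both groups
# contribute to training batches at similar rates.
counts = torch.bincount(group).float()   # tensor([300., 100.])
weights = (1.0 / counts)[group]          # one weight per example
sampler = WeightedRandomSampler(weights, num_samples=len(dataset),
                                replacement=True)

loader = DataLoader(dataset, batch_size=32, sampler=sampler)
batch_features, batch_groups = next(iter(loader))
# Batches now draw from both groups roughly equally.
print(batch_groups.float().mean())  # close to 0.5 rather than 0.25
```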
This raises a further question: what led to the imbalance in the dataset in the first place?
Some have suggested that when the engineers assembling a dataset are not diverse, they tend to select images of people like themselves and fail to notice skewed representation when it occurs. This unconscious bias might be corrected by increasing the number of women and minorities in STEM jobs and, in the nearer term, by testing explicitly for bias, as in the sketch below.
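One simple form such explicit testing could take is comparing a model's detection rate across demographic groups. The numbers below are synthetic stand-ins, not figures from the study.

```python
# A hypothetical bias check: compare a pedestrian detector's hit rate
# across the two skin-tone groups. All results here are illustrative.
from collections import defaultdict

# (group, detected) pairs recorded during evaluation.
results = ([("light", True)] * 90 + [("light", False)] * 10
           + [("dark", True)] * 75 + [("dark", False)] * 25)

hits = defaultdict(int)
totals = defaultdict(int)
for grp, detected in results:
    totals[grp] += 1
    hits[grp] += detected

for grp in ("light", "dark"):
    rate = hits[grp] / totals[grp]
    print(f"{grp}-skin detection rate: {rate:.0%}")
# A material gap between the two rates flags the model for review.
```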
These same principles apply to every workforce. Employers should keep the following in mind:
- Where the workforce is not diverse, there is a risk that decisions may be based on, or influenced by, unconscious bias. Plaintiffs’ lawyers frequently argue that seemingly neutral rules or practices can have a disparate impact on a protected group. You must not only strive to diversify your workforce but also be on guard against implicit bias.
- Raise managers’ awareness of implicit bias through surveys and tests, which HR consulting companies often provide.
- Offer diversity training for managers and employees, as it can further awareness of implicit bias and how it can affect workplace decisions.
- Establish objective criteria and unambiguous, consistent procedures for use when making employment-related decisions.
- After giving managers these tools, you should assess their performance and hold them accountable for hiring, performance evaluation, and other employment decisions.
Failing to follow these steps in the employment setting may not endanger life and limb, but it can certainly increase financial and personal risks.
Related People
- Michael R. Greco, Regional Managing Partner
- Susan M. Schaecher, Senior Counsel