Study Finds Racial Bias In Facial Recognition Software Used By Police

Earlier this week, the Center on Privacy & Technology at Georgetown Law released a study examining the unregulated facial recognition technology used by police departments across the country. While the report has a total of 11 “key findings,” one in particular is getting the most attention: “Police face recognition will disproportionately affect African Americans.” While this doesn’t necessarily suggest a conscious racial bias by the developers of the software, there are a number of factors at work that may make facial recognition programs less reliable at identifying black men and women.

Studies have long found that “cross-racial identification,” in which someone distinguishes the facial features of a person from another ethnic group, is less reliable than identification within one’s own group. Facial recognition systems are still designed by humans, after all, so the “training” of those systems can be flawed. Much of the time, the final choice among the possible matches a system turns up is left to a human operator, so even if the software is working the way it should, there’s plenty of room for human bias. Two different facial recognition companies interviewed for the study admitted that their software had not been tested for racial bias.

The most well-known previous study on this topic, co-authored by an FBI expert, concluded that “The female, Black, and younger cohorts are more difficult to recognize for all matchers used in this study (commercial, non-trainable, and trainable).” A 2011 study found that facial recognition systems were most accurate on populations that reflected where they were developed, like Caucasians for programs developed in Western Europe and East Asians for those developed in East Asia. From the new study, citing past research:

All three of the algorithms were 5 to 10% less accurate on African Americans than Caucasians. To be more precise, African Americans were less likely to be successfully identified—i.e., more likely to be falsely rejected—than other demographic groups. A similar decline surfaced for females as compared to males and younger subjects as compared to older subjects.

In one instance, a commercial algorithm failed to identify Caucasian subjects 11% of the time but did so 19% of the time when the subject was African American—a nearly twofold increase in failures. To put this in more concrete terms, if the perpetrator of the crime were African American, the algorithm would be almost twice as likely to miss the perpetrator entirely, causing the police to lose out on a valuable lead.
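For a sense of how the quoted 11% and 19% figures translate into “almost twice as likely,” here is a quick back-of-the-envelope check (the calculation is ours, not part of the study):

```python
# Failure (false-rejection) rates quoted above for the commercial algorithm.
caucasian_failure_rate = 0.11          # missed Caucasian subjects 11% of the time
african_american_failure_rate = 0.19   # missed African American subjects 19% of the time

# Relative increase in failures: 0.19 / 0.11 is roughly 1.73, i.e. nearly twofold.
ratio = african_american_failure_rate / caucasian_failure_rate
print(f"African American subjects were {ratio:.2f}x as likely to be missed.")
```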

Training facial recognition systems on a racially homogeneous sample of photographs is clearly a problem for accuracy. Other factors cited as making facial recognition more difficult include variations in makeup on women and darker skin tones, which fare poorly with programs that rely on color contrast to help read facial features. The study also mentions that, obviously, intentional bias could be a factor if a programmer really wanted it to be.

The study recommends that “The FBI should test its face recognition system for accuracy and racially biased error rates, and make the results public.” This would be part of a larger process involving strict policies for facial recognition and more widespread testing in general.
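The report doesn’t spell out a testing protocol, but the kind of audit it recommends could be sketched roughly as follows. This is an illustrative sketch only: the data format, field names, and match threshold are our assumptions, not anything used by the FBI or the study’s authors. It shows how false-rejection rates might be broken out by demographic group from a set of test comparisons where the true identity is known:

```python
from collections import defaultdict

def false_rejection_rates(results):
    """Compute the false-rejection (false non-match) rate per demographic group.

    `results` is a list of dicts like:
        {"group": "African American", "is_true_match": True, "score": 0.42}
    where `score` is the similarity score returned by the face recognition system
    and `is_true_match` says whether the compared photos really show the same person.
    The structure and threshold here are illustrative assumptions.
    """
    MATCH_THRESHOLD = 0.5  # hypothetical decision threshold

    genuine_attempts = defaultdict(int)   # true-match comparisons per group
    false_rejections = defaultdict(int)   # true matches the system rejected

    for r in results:
        if r["is_true_match"]:
            genuine_attempts[r["group"]] += 1
            if r["score"] < MATCH_THRESHOLD:
                false_rejections[r["group"]] += 1

    return {
        group: false_rejections[group] / genuine_attempts[group]
        for group in genuine_attempts
    }

# Toy example: compare error rates across groups to spot a gap.
if __name__ == "__main__":
    sample = [
        {"group": "Caucasian", "is_true_match": True, "score": 0.8},
        {"group": "Caucasian", "is_true_match": True, "score": 0.7},
        {"group": "African American", "is_true_match": True, "score": 0.4},
        {"group": "African American", "is_true_match": True, "score": 0.9},
    ]
    for group, rate in false_rejection_rates(sample).items():
        print(f"{group}: false-rejection rate {rate:.0%}")
```

Publishing numbers like these for each group, as the study urges, would make it possible to see at a glance whether one population is being falsely rejected at a meaningfully higher rate than another.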

[Photo: Shutterstock]

David Bixenspan is a writer, editor, and podcaster based in New York.