
Amazon needs to come clean about racial bias in its algorithms

We shouldn’t be using unaudited facial recognition systems on public streets

Illustration by James Bareham / The Verge

Yesterday, Amazon’s quiet Rekognition program became very public, as new documents obtained by the ACLU of Northern California showed Amazon partnering with the city of Orlando and with police camera vendors like Motorola Solutions on an aggressive new real-time facial recognition service. Amazon insists that the service is a simple object-recognition tool and will only be used for legal purposes. But even if we take the company at its word, the project raises serious concerns, particularly around racial bias.

Facial recognition systems have long struggled with higher error rates for women and people of color — error rates that can translate directly into more stops and arrests for marginalized groups. And while some companies have responded with public bias testing, Amazon hasn’t shared any data on the issue, if it’s collected data at all. At the same time, it’s already deploying its software in cities across the US, its growth driven by one of the largest cloud infrastructures in the world. For anyone worried about algorithmic bias, that’s a scary thought.

“It doesn’t make communities safer.”

For the ACLU-NC’s Matt Cagle, who worked on yesterday’s report, the possibility for bias is one of the system’s biggest problems. “We have been shocked at Amazon’s apparent failure to understand the implications of its own product on real people,” Cagle says. “Face recognition is a biased technology. It doesn’t make communities safer. It just powers even greater discriminatory surveillance and policing.”

The most concrete concern is false identifications. Police typically use facial recognition to look for specific suspects, comparing suspect photos against camera feeds or photo arrays. But white subjects are consistently less likely to generate false matches than black subjects, a bias that’s been found across a number of algorithms. In the most basic terms, that means facial recognition systems pose an added threat of wrongful accusation and arrest for non-white people. The bias seems to come from the data used to train the algorithm, which often skews white and male. It’s possible to solve that problem, but there’s no public evidence that Amazon is working on the issue.
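
To make the matching step concrete, here’s a minimal sketch of the kind of one-to-one comparison described above, using the publicly documented CompareFaces operation in the AWS SDK for Python (boto3). The file names, region, and 80 percent similarity threshold are illustrative assumptions, not details of any actual police deployment.

```python
import boto3

# Minimal sketch of a one-to-one face comparison with Amazon Rekognition's
# CompareFaces API. File names, region, and threshold are assumptions.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("suspect_photo.jpg", "rb") as source, open("camera_frame.jpg", "rb") as target:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80.0,  # matches below this similarity are discarded
    )

# Each entry in FaceMatches is a face in the camera frame that cleared the
# threshold. A false match at this step is where a wrongful stop can begin,
# and the threshold alone says nothing about whether error rates differ by race.
for match in response["FaceMatches"]:
    print(f"Possible match, similarity: {match['Similarity']:.1f}%")
```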

So far, Amazon isn’t sharing that data for Rekognition, and that’s not a good sign. The project has extensive developer documentation on everything from swimsuit detection to celebrity privacy requests, but there’s nothing on the possibility of racial bias. After the ACLU’s report broke yesterday, I asked Amazon directly whether the company has any data on bias testing for Rekognition, but so far, it hasn’t produced any.

It’s a glaring oversight, in part because public testing is relatively common among other facial recognition companies. For more than a year now, dozens of companies have been measuring their systems in public as part of a government project called the Face Recognition Vendor Test (FRVT). Run by the National Institute of Standards and Technology, the FRVT is one of the most systematic studies of algorithmic bias we have. Vendors submit algorithms, and NIST runs them through a set of controlled tests, reporting the results with as little spin as possible.

The resulting report can show you, among other things, how racial and gender bias plays out across the error rates for 60 different algorithms. It looks like this:

A chart of error rates by race and gender from the Face Recognition Vendor Test.

This is basically what bias looks like. The higher the line, the higher the error rate. So anywhere the red line is higher than the blue line, you’re seeing racial bias in action. Wherever the solid line is higher than the dotted line, you’re seeing gender bias. (Technically, the X and Y axes are rates for false positives and false negatives, but essentially, higher is worse.) Looking through the chart, you can see that bias is an industry-wide problem, but it’s also a solvable one. The best algorithms, from NEC and Tongyi Trans, show hardly any gap between the lines, presumably because the companies kept refining their training data until the gap closed.
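
In code, the bias the chart shows is just a gap between per-group error rates. Here’s a rough sketch of how you might compute the two rates on the chart’s axes for different demographic groups; the comparison results and group labels below are invented for illustration and have nothing to do with the FRVT’s actual data.

```python
from collections import defaultdict

# Hypothetical comparison results: (group, same_person, predicted_match).
# Invented for illustration only; this is not FRVT data.
results = [
    ("group_a", False, False), ("group_a", False, True),
    ("group_a", True, True),   ("group_a", True, False),
    ("group_b", False, False), ("group_b", False, False),
    ("group_b", True, True),   ("group_b", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0, "fn": 0, "positives": 0})
for group, same_person, predicted_match in results:
    stats = counts[group]
    if same_person:
        stats["positives"] += 1
        stats["fn"] += int(not predicted_match)  # missed a true match
    else:
        stats["negatives"] += 1
        stats["fp"] += int(predicted_match)      # two different people flagged as one

for group, stats in counts.items():
    false_match_rate = stats["fp"] / stats["negatives"]      # false positives (x axis)
    false_non_match_rate = stats["fn"] / stats["positives"]  # false negatives (y axis)
    print(f"{group}: false match rate {false_match_rate:.2f}, "
          f"false non-match rate {false_non_match_rate:.2f}")
```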

The other important thing about this chart: Amazon isn’t on it. The FRVT is a strictly voluntary process, geared toward federal contractors trying to make a good impression, so it’s not entirely surprising that Amazon isn’t on the list. Still, it’s worth asking why not. Amazon could plead trade secrets, but with 60 algorithms already ranked, it’s hard to argue that this kind of testing carries much of a competitive penalty. Without some outside power forcing Amazon into it, though, there’s just no reason for the company to sign on.

We often talk about algorithmic bias as if it’s evil wizardry, but in these terms, the problem is straightforward. Amazon and companies like it need to make bias reviews public before systems like these are rolled out. There are lots of ways to do that, whether it’s voluntary industry participation like FRVT or state-level transparency laws, which are becoming popular for criminal justice algorithms. It could be strict regulation or a light-touch norm. But by the time an algorithm like this is being used in public, we should have some sense of how well it works and who loses out.

Amazon’s massive cloud infrastructure makes it a daunting competitor in the facial recognition industry, and its real-time pilot shows it’s already starting to outpace its more transparent competitors. If you’re worried about biased algorithms slipping out into the wild, that’s a troubling thought. We know bias is possible, even likely, in these kinds of systems. The question is what Amazon will do about it.