New documents obtained via a Freedom of Information Act request by the Project on Government Oversight (POGO) have revealed that Amazon met with US Immigration and Customs Enforcement (ICE) and pitched the agency on adopting its Rekognition facial recognition technology.
In a June 15 email, an Amazon sales representative thanked ICE officials for meeting at the offices of the McKinsey consulting firm in Redwood City, CA, according to The Daily Beast. “We are ready and willing to support the vital HSI mission,” the representative wrote.
Now, to be clear, HSI — Homeland Security Investigations — is separate from ICE’s ERO (Enforcement and Removal) operation, the division responsible for many of the activities that have most angered Americans, including enforcing the Trump Administration’s family detention plan. Nonetheless, there are serious concerns about Amazon licensing its Rekognition software to any law enforcement agency — concerns the company’s own employees raised six days after this email was sent. The ACLU published the results of an investigation in May demonstrating how aggressively Amazon was marketing its products to law enforcement. The company has long been in the surveillance business indirectly — the Palantir surveillance system runs on an AWS backend. Getting into surveillance directly must have seemed like a natural step.
But while HSI and ERO may be different divisions of DHS, there’s a much more immediate, simple reason to oppose the deployment of these programs or their sale to law enforcement: They don’t work well. If you’re white, a program like Rekognition is up to 99 percent accurate. If you aren’t, accuracy craters. According to tests performed by the MIT Media Lab, facial recognition software from IBM, Microsoft, and Face++ misidentified darker-skinned women as men 35 percent of the time. Darker-skinned men were misgendered in up to 12 percent of cases, lighter-skinned women in up to 7 percent, and lighter-skinned men just 1 percent of the time. As I’ve written before, human beings are far too inclined to treat computers as infallible to be handed software that misidentifies or mistakenly tags somewhere between 1 in 3 and 1 in 14 of the people it evaluates.
While these tests didn’t include Rekognition, the ACLU tested Amazon’s offering in July by running photos of every member of Congress against a database of arrest mugshots. Rekognition falsely matched 28 members of Congress with people who had been arrested for crimes. People of color make up 20 percent of Congress but accounted for 40 percent of the false positives the system kicked back.
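The skew in those ACLU numbers is easy to quantify. A minimal sketch, using only the figures reported above (the variable names are mine):

```python
# Figures from the ACLU's July test of Rekognition, as reported above.
false_positives = 28           # members of Congress falsely matched to mugshots
poc_share_of_congress = 0.20   # people of color as a share of Congress
poc_share_of_matches = 0.40    # their share of the false positives

# Roughly how many of the 28 false matches were people of color:
poc_false_positives = false_positives * poc_share_of_matches

# How overrepresented that group is among errors relative to its
# share of the population being scanned:
overrepresentation = poc_share_of_matches / poc_share_of_congress

print(f"~{poc_false_positives:.0f} of {false_positives} false matches "
      f"were people of color ({overrepresentation:.0f}x their share of Congress)")
```

In other words, members of color were falsely matched at twice the rate their numbers would predict if errors were evenly distributed.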
It’s as crystal-clear a demonstration of how supposedly neutral algorithms can produce racist outcomes as you could ask for. Because facial recognition training data sets are overwhelmingly white and male (one popular set is more than 75 percent male and more than 80 percent white), the system learns to read white, male faces far more reliably than any others. Because it reads other faces poorly, its error rates are vastly higher when applied to anyone else. Because that information isn’t disclosed or made apparent when law enforcement deploys these systems — and Rekognition is already being used by law enforcement across the country — you have a supposedly neutral algorithm making blatantly racist decisions by virtue of having been trained to recognize white faces well and black faces poorly. And while there’s absolutely no evidence that Amazon did this intentionally, tell that to someone who has been arrested because a law enforcement computer says they were at the scene of a crime they were, in actuality, nowhere near.
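The mechanism described above can be illustrated with a toy simulation. All numbers here are synthetic and do not come from any real system; the point is only that a single match threshold tuned on one well-represented group produces much higher error rates on a group whose embeddings the model handles noisily:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embedding distances" between pairs of face photos.
# Group A is well represented in training: genuine pairs (same person)
# cluster tightly near 0.2, impostor pairs (different people) near 1.0.
genuine_a = rng.normal(0.2, 0.05, 1000)
impostor_a = rng.normal(1.0, 0.05, 1000)

# Group B is underrepresented, so its embeddings are noisier: genuine
# pairs drift out toward 0.5 and spread much more widely.
genuine_b = rng.normal(0.5, 0.15, 1000)
impostor_b = rng.normal(1.0, 0.15, 1000)

# A single match threshold, tuned so group A separates almost perfectly.
threshold = 0.6

def false_nonmatch_rate(genuine, thr):
    """Share of genuine pairs wrongly rejected as non-matches."""
    return float(np.mean(genuine >= thr))

fnmr_a = false_nonmatch_rate(genuine_a, threshold)
fnmr_b = false_nonmatch_rate(genuine_b, threshold)

print(f"Group A error rate: {fnmr_a:.1%}")
print(f"Group B error rate: {fnmr_b:.1%}")
```

The same threshold that is essentially error-free for group A misfires on a large fraction of group B — without anyone having written a single line of intentionally biased code.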
Separately, former ICE officials have told The Daily Beast there’s a good chance the system would be used by ERO, even though there’s supposedly a policy requiring special circumstances for ICE to seize people at “sensitive” locations like hospitals, churches, and schools. In reality, the number of seizures at such locations has gone up markedly in recent years. Amazon employees have protested the potential sale of Rekognition to law enforcement, both on human rights grounds and on technical concerns about whether the product can actually perform the tasks the company’s marketing materials claim it can.
Inaccurate facial recognition software that correctly identifies white people but incorrectly accuses people of color of crimes they didn’t commit is racist software. It honestly feels a bit odd to write it that way, but that will be the inexorable impact of putting any such product into use. People with lighter skin will be accurately identified as persons of interest (or ruled out) in crimes, while people with darker skin won’t be. That Amazon was willing to start selling this product to law enforcement without proactively ensuring its accuracy says nothing good about the company’s priorities or its commitment to ensuring its products are not used to terrorize law-abiding citizens of any gender or ethnicity.
Until and unless Amazon (or any other vendor) can deploy a facial recognition technology with a minimum accuracy rate of 99 percent when tested against individuals of all colors, shapes, sizes, genders, and physical descriptions, including in difficult analysis scenarios, it has no business being commercially marketed as a tool to law enforcement. This is literally the type of tool that could be used to justify kicking in a suspect’s door in a no-knock raid and wind up getting an innocent person killed. The stakes are high. The requirements for deployment should be extremely high as well.