The Variety of ‘Facial Recognition’ Technologies Necessitates Nuanced Legislation vox.com

John Oliver:

This technology raises troubling philosophical questions about personal freedom, and, right now, there are also some very immediate practical issues. Even though it is currently being used, this technology is still very much a work in progress, and its error rate is particularly high when it comes to matching faces in real time. In fact, in the U.K., when human rights researchers watched police put one such system to the test, they found that only eight out of 42 matches were ‘verifiably correct’ — and that’s even before we get into the fact that these systems can have some worrying blind spots, as one researcher found out when testing numerous algorithms, including Amazon’s own Rekognition system:

At first glance, MIT researcher Joy Buolamwini says that the overall accuracy rate was high, even though all companies better detected men’s faces than women’s. But the error rate grew as she dug deeper.

“Lighter male faces were the easiest to guess the gender on, and darker female faces were the hardest.”

One system couldn’t even detect whether she had a face. The others misidentified her gender. White guy? No problem.

Yeah: “white guy? No problem” which, yes, is the unofficial motto of history, but it’s not like what we needed right now was to find a way for computers to exacerbate the problem. And it gets worse. In one test, Amazon’s system even failed on the face of Oprah Winfrey, someone so recognizable her magazine only had to type the first letter of her name and your brain autocompleted the rest.

Oliver covers a broad scope of different things that fit under the umbrella definition of “facial recognition” — everything from Face ID to police databases and Clearview AI.

Today, the RCMP and Clearview suspended their contract; the RCMP was, apparently, Clearview’s last remaining client in Canada.

Such a wide range of technologies raises complex questions about regulation. Sweeping bans may prohibit the use of something like Face ID or Windows Hello, but even restricting use based on consent would make it difficult to build something like the People feature in Photos. Here’s how Apple describes it:

Face recognition and scene and object detection are done completely on your device rather than in the cloud. So Apple doesn’t know what’s in your photos. And apps can access your photos only with your permission.

Apple even put together a lengthy white paper (PDF) that, in part, describes how iOS and macOS keep various features in Photos private to the user. In this case, however, the question is not about the privacy of one’s own data, but whether it is fair for someone to use facial recognition privately at all. It is a question of agency. Is it fair for anyone to have their face used, without their permission, to automatically group pictures of themselves? Perhaps it is; but is it then fair to do so more publicly, as Facebook does? Where is the comfortable line?

I don’t mean that as a rhetorical question. As Oliver often says, “the answer to the question of ‘where do we draw the line?’ is somewhere”, and I think there is a “somewhere” in the case of facial recognition. But the legislation to define it will need to be very nuanced.

Rebecca Heilweil, Vox:

So it seems that as facial recognition systems become more ambitious — as their databases become larger and their algorithms are tasked with more difficult jobs — they become more problematic. Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told Recode that facial recognition needs to be evaluated on a “sliding scale of harm.”

When the technology is used in your phone, it spends most of its time in your pocket, not scanning through public spaces. “A Ring camera, on the other hand, isn’t deployed just for the purpose of looking at your face,” Guariglia said. “If facial recognition was enabled, that’d be looking at the faces of every pedestrian who walked by and could be identifying them.”

[…]

A single law regulating facial recognition technology might not be enough. Researchers from the Algorithmic Justice League, an organization that focuses on equitable artificial intelligence, have called for a more comprehensive approach. They argue that the technology should be regulated and controlled by a federal office. In a May proposal, the researchers outlined how the Food and Drug Administration could serve as a model for a new agency that would be able to adapt to a wide range of government, corporate, and private uses of the technology. This could provide a regulatory framework to protect consumers from what they buy, including devices that come with facial recognition.

This is such a complex field of technology that it will take a while to establish ground rules and expectations. Something like Clearview AI’s system should not be allowed; it is a heinous abuse of publicly visible imagery. Real-time recognition is extremely creepy and, I believe, should also be prohibited.

There are further complications: though the U.S. may still be sorting out its comfort level, those boundaries have already been breached elsewhere.