
After more than a decade of controversy, Facebook will
halt its use of facial recognition technology.
“We’re shutting down the Face Recognition system on Facebook. People who’ve opted in will no longer be automatically recognized
in photos and videos and we will delete more than a billion people’s individual facial recognition templates,” Jerome Pesenti, vice president of artificial intelligence, said Tuesday in a
blog post.
He added that the company needs to “weigh the positive use cases for facial recognition
against growing societal concerns, especially as regulators have yet to provide clear rules.”
Facebook first began rolling out its facial recognition system in late 2010, prompting a
nearly instantaneous backlash.
“Facial recognition technology will ultimately culminate in the ability to search for people using just a picture. And that will be the end of
privacy as we know it -- imagine, a world in which someone can simply take a photo of you on the street, in a crowd, or with a telephoto lens, and discover everything about you on the
internet,” PC World wrote in 2011.
Since then, controversy surrounding the
technology has only increased.
Six years ago, Illinois residents brought a class-action lawsuit against Facebook for allegedly violating a state biometrics privacy law. (The company eventually
agreed to settle that matter for $650 million.)
In 2018, a coalition of 14 advocacy groups including the Electronic Privacy Information Center, Center for Digital Democracy, Consumer
Federation of America and the Southern Poverty Law Center alleged in a
Federal Trade Commission complaint that Facebook “deceptively” enlisted users to build out its facial recognition database by asking them to tag friends in photos.
“This
unwanted, unnecessary, and dangerous identification of individuals undermines user privacy, ignores the explicit preferences of Facebook users, and is contrary to law in several states and many parts
of the world,” the groups wrote.
Last year, the city of Portland, Oregon, passed an ordinance prohibiting private companies from using facial-recognition
technology in stores, parks, and other places of public accommodation. That measure also broadly prohibits the police from using facial-recognition technology.
Other cities -- including Boston
and San Francisco -- have also banned the police from using the technology.
Civil rights advocates are especially concerned that facial-recognition technology will effectively end people's
ability to appear in public -- at protests, political events, or other public spaces -- without revealing their identity to the government.
“The threat that facial recognition poses to
human society and basic liberty far outweighs any potential benefits,” Evan Greer, deputy director of digital rights group Fight for the Future, wrote in a 2019 column for BuzzFeed. “It’s on a very short list of technologies -- like nuclear and biological
weapons -- that are simply too dangerous to exist, and that we would have chosen not to develop had we had the foresight.”
Of course, Facebook isn't the only company to amass a facial
recognition database, but it's among the largest.
But even after Facebook deletes its facial templates, other companies can theoretically compile their own faceprint databases by scraping the
publicly available information on the social media site -- or any other sites that display people's names and photos.
Clearview AI has famously done so. The company, which reportedly sells its
faceprint data to police departments (and other agencies and private companies), compiled its faceprints by scraping billions of photos from Twitter, Facebook and other companies.
Facebook and
other social media companies, including YouTube, have said scraping violates
their terms of service, but whether web companies can legally prevent anyone from gathering publicly available data remains unclear.
It's worth noting that the company's move comes several
weeks after President Joe Biden nominated privacy expert Alvaro Bedoya to the Federal Trade Commission.
Bedoya, the founding director of the Center on Privacy & Technology at Georgetown
Law, is best known for proposing curbs on the use of facial-recognition technology.
Under his leadership, the Center on Privacy & Technology published the influential 2016 report
“The Perpetual Line-Up,” which found that the use of facial-recognition technology by law enforcement disproportionately affects African Americans, and that the technology may be least
accurate for them.