By all appearances, Facebook is trying to clean up its act and take a hard line against various platform abuses. A day after suspending some 200 suspicious apps, the company is showing off sizable moderation figures.
In the first quarter alone, Facebook says it disabled about 583 million fake accounts, most of which were disabled within minutes of registration. This is in addition to the millions of fake-account attempts Facebook says it blocks daily before they ever register with the social network.
During the quarter, Facebook also took down 837 million pieces of spam, nearly 100% of which it found and flagged before any of its more than 2.1 billion active users reported it. In the same period, the tech titan also took down 21 million pieces of content depicting adult nudity and sexual activity, 96% of which Facebook's technology found and flagged before it was reported.
Facebook estimates that out of every 10,000 pieces of content viewed via its network, seven to nine views were of content that violated its adult nudity and pornography standards.
For graphic violence, Facebook took down or applied warning labels to roughly 3.5 million pieces of violent content during the quarter, 86% of which its technology identified.
Facebook executives aren't quite ready to do victory laps, however, conceding that the platform and its moderation technology remain far from flawless.
For starters, Facebook believes that between 3% and 4% of its active accounts were "fake" during the period. To put that estimate into perspective, 4% of 2.1 billion is 84 million. That's a lot of fake accounts.
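A quick back-of-envelope calculation makes the scale concrete (a sketch using the article's figures; the variable names and exact user count are illustrative, since Facebook reports "more than" 2.1 billion active users):

```python
# Rough estimate of fake accounts from the article's figures.
active_users = 2_100_000_000  # "more than 2.1 billion active users"

# "between 3% and 4%" of active accounts estimated fake;
# integer arithmetic avoids floating-point rounding.
low = active_users * 3 // 100   # 63,000,000
high = active_users * 4 // 100  # 84,000,000

print(f"Estimated fake accounts: {low:,} to {high:,}")
```

Even the low end of that range, 63 million accounts, is roughly the population of a large country.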
However, Facebook can't rely solely on its moderation technology to keep hate speech off its platform, according to Guy Rosen, vice president of product management.
On the subject of hate speech shared by its many members, Rosen admits: "Our technology still doesn't work that well, so it needs to be checked by our review teams."
While Facebook removed 2.5 million pieces of hate speech in the first quarter of this year, only 38% of it was flagged by its automated system.
“It’s partly that technology, like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important,” Rosen notes in a new blog post.
For example, Facebook's AI isn't yet good enough to determine whether someone is promoting hate or describing something that happened to them in order to raise awareness of the issue, he noted.
Rosen also said that AI technology requires lots of training data before it can begin to recognize meaningful patterns of behavior, and that smart people are still working to circumvent Facebook's controls.
These are all fair points, but they are unlikely to inspire much sympathy from users, advertisers, and lawmakers.