In fact, as sources tell Recode, Facebook is quietly looking for a new leader to cure its growing content issues. The company is reportedly scouring the technology and media industries for the right executive, but the search is proving more difficult than expected.
By its own reckoning, Facebook is getting better at spotting spam, bogus accounts, fake news, con jobs, and other types of misinformation.
“We’ve made improvements to recognize these inauthentic accounts more easily by identifying patterns of activity … without assessing the content itself,” Shabnam Shaik, a technical program manager on Facebook’s Protect and Care Team, noted in a recent blog post.
For Facebook, red flags include repeated posting of the same content, and an increase in messages sent.
In tests, this evolving strategy is already showing results, said Shaik. “In France, for example, these improvements have enabled us to take action against over 30,000 fake accounts,” he wrote.
Going forward, Shaik said he and his team were focused on sidelining the biggest and most prolific offenders. “Our priority … is to remove the accounts with the largest footprint, with a high amount of activity and a broad reach,” he noted.
In partnership with top third-party fact-checking organizations, Facebook recently launched a full-frontal attack on “fake news.”
Not everyone has applauded Facebook’s approach to fighting fake news, however.
Among other critics, Mike Caulfield, director of blended and networked learning at Washington State University Vancouver, recently took to Medium to air his grievances.
Among other gripes, Caulfield argued that Facebook’s proposed process takes too long, deals only with “surface issues,” “targets fake news but not slanted claims,” and fails to make effective use of Facebook’s own powerful network.