Facebook users have complained about the low-quality content surfacing on its platform, the social giant said Thursday.
Users are linking to Web pages “containing little substantive content and that [are] covered in disruptive, shocking or malicious ads,” in the words of Facebook engineers Jiun-Ren Lin and Shengbo Guo.
In response, the company is rolling out an update so users see fewer posts and ads in News Feed that link to these low-quality Web pages. “Similar to the work we’re already doing to stop misinformation, this update will help reduce the economic incentives of financially-motivated spammers,” Lin and Guo note in a new blog post.
For Facebook, the update is part of a broader war on bad actors and their efforts to flood the platform with spam, phishing expeditions, “false news,” and other unwanted fare.
To that end, Facebook recently released a broad plan to stop the spread of misinformation on its platform.
Facebook has had a policy since last year barring advertisers with low-quality Web pages from advertising on its platform. Now, however, the tech titan is promising to ramp up enforcement on ads and to apply the same scrutiny to organic posts in News Feed.
To build the update, Lin and Guo said, they and their colleagues reviewed “hundreds of thousands” of Web pages linked to from Facebook in order to identify the bad ones.
The team said it then used artificial intelligence to understand whether new Web pages shared on Facebook have similar characteristics.
“So, if we determine a post might link to these types of low-quality Web pages, it may show up lower in people’s feeds and may not be eligible to be an ad,” they explain.
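In rough outline, that is a two-stage pipeline: score a linked page's quality with a model trained on the human-reviewed examples, then demote low-scoring links in feed ranking and bar them from ads. The sketch below illustrates the idea only; the feature names, weights, and thresholds are assumptions for illustration, not Facebook's actual model.

```python
# Illustrative sketch of the demotion pipeline described above.
# All features, weights, and thresholds are hypothetical.

def quality_score(page):
    """Toy linear score: fewer ads and more substantive text => higher."""
    ad_density = page["ad_count"] / max(page["word_count"], 1)
    shocking = 1.0 if page["shocking_ads"] else 0.0
    # Hand-picked illustrative weights; a real system would learn them
    # from the "hundreds of thousands" of human-reviewed pages.
    return 1.0 - 5.0 * ad_density - 0.5 * shocking

def rank_and_filter(posts, demote_below=0.5):
    """Mark low-quality links ad-ineligible and push them down in ranking."""
    for post in posts:
        score = quality_score(post["page"])
        post["ad_eligible"] = score >= demote_below
        # Demoted posts get a negative ranking boost, i.e. appear lower.
        post["rank_boost"] = 0.0 if score >= demote_below else -1.0
    return sorted(posts, key=lambda p: p["rank_boost"], reverse=True)

posts = [
    {"id": 1, "page": {"ad_count": 2, "word_count": 800, "shocking_ads": False}},
    {"id": 2, "page": {"ad_count": 30, "word_count": 120, "shocking_ads": True}},
]
ranked = rank_and_filter(posts)
# The ad-heavy, low-content page ends up demoted and barred from ads.
```

Running the sketch, the substantive page (id 1) keeps its ranking and ad eligibility, while the ad-covered page (id 2) is demoted and marked ineligible, mirroring the behavior Lin and Guo describe.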
The broader effort was outlined in a white paper, in which Facebook explained: “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.”