Facebook Slammed For Spanish-Language Monitoring Gap, Adds Penalties For Groups That Spread Harmful Content

Facebook’s content dilemmas just keep on coming.

This week, as the platform scrambles to address mounting waves of dangerous hate-group messaging and vaccine misinformation within the “Groups” that are now the centerpiece of its model, it is also being excoriated for allowing “extreme disinformation” to spread among Spanish-speaking communities in the U.S.

On Wednesday, a coalition of racial justice and internet accountability organizations joined members of Congress to demand action from Facebook on what they term its “Spanish-language content moderation gap.”

“To address the rampant Spanish-language disinformation in the U.S., we call on Facebook to publicly identify an executive-level manager to oversee U.S. Spanish-language content moderation policy and enforcement, to publicly explain the translation process of the algorithm and content moderation, and to share the training materials used to review whether content violates existing policy,” the organizations said in a statement. 

Participating groups, which include Center for American Progress, Free Press, National Hispanic Media Coalition, and the Real Facebook Oversight Board, cite examples ranging from translation issues — failing to account for slang, dialects and context — to poor fact-checking of Spanish-language news sites and extensive misinformation targeting Latinos. The problem has contributed to harassment and even loss of life among members of the Latinx community, they assert. 

Facebook misinformation targeting Hispanics was rampant during January’s runoffs for Georgia’s two U.S. Senate seats, despite a promise from Facebook CEO Mark Zuckerberg to address the issues, the coalition says.

The coalition has launched a campaign dubbed #YaBastaFacebook! (“Enough already, Facebook!”), and released a “Spanish Language Accountability Action Plan.” 

Rep. Tony Cardenas (D-CA) vowed during yesterday’s presentation to ask Zuckerberg “very direct questions” about Facebook’s commitment to “protecting Spanish-speaking users” when he appears before the House Energy and Commerce Committee on March 25. “We will expect real and specific answers,” he added.

Meanwhile, Facebook’s own internal research recently found that 70% of its top 100 most active U.S. Civic Groups are “considered non-recommendable for issues such as hate, misinfo, bullying and harassment,” according to an internal document quoted by The Wall Street Journal.

Facebook concluded that top Groups act as megaphones for “hate bait”: racially and politically charged content meant to elicit calls for violence, WSJ reports.

And although Facebook supposedly banned outright false and misleading messages about COVID-19 starting back in December, and expanded the list of false claims it removes as of February, such content continues to mushroom.

The Associated Press reported having recently identified “more than a dozen Facebook pages and Instagram accounts, collectively boasting millions of followers, that have made false claims about the COVID-19 vaccine or discouraged people from taking it.” Some of these pages have existed for years, AP said.

“Facebook and Instagram still do not remove the vast majority of posts reported to them for containing dangerous misinformation about vaccines,” Imran Ahmed, CEO of the nonprofit Center for Countering Digital Hate, told AP. “The main superspreaders of anti-vaccine lies all still have a presence on Instagram or Facebook, despite promises to remove them.”

In addition, Facebook’s own research showed that a small percentage of users are generating large volumes of gray-area, “vaccine hesitancy” content on the platform, according to The Washington Post.

As for labeling, “the evidence suggests that the way Facebook applies labels to misinformation posts has minimal impact,” Ahmed reported.

In one of its latest responses to these crises, Facebook CEO Mark Zuckerberg this week announced that the company is launching a global campaign with information and tools that aim to “bring 50 million people a step closer to getting COVID-19 vaccines.”

Facebook is also adding labels to vaccine discussions that offer “credible information about the safety of COVID-19 vaccines from the World Health Organization.”

On the Groups front, on Wednesday, Facebook VP of Engineering Tom Alison announced new steps to “make it harder for certain groups to operate or be discovered, whether they’re public or private,” and new penalties for groups and group members that break the platform’s rules.

For example, in addition to existing controls on “organizations and movements that have demonstrated significant risks to public safety” (Facebook has been heavily criticized for allowing various Groups to play significant roles in organizing the violent Jan. 6 attack on the U.S. Capitol), Facebook will “now limit the spread of these groups by removing them from recommendations, restricting them from search, and soon reducing their content in News Feed,” Alison wrote.

He added: “We also remove these groups when they discuss potential violence, even if they use veiled language and symbols. For example, we removed 790 groups linked to QAnon under this policy.”

Facebook is also adding various warnings to Groups that allow violations, such as a prompt urging users to review a Group before joining it, and blocking members with repeated violations, at least “for a period of time.”

In another apparent nod to the vaccine misinformation crisis, Facebook will also stop showing “health groups” in recommendations, although people can still invite friends to health groups or search for them.
