
The Meta Safety Advisory Council has written a letter to
the tech giant listing its concerns over the company's major policy changes -- including its decision to end fact-checking across its family of social media apps --
and what Meta should do to prioritize the safety of its 3.5 billion active users around the globe.
“We understand that evolving policies are part of Meta's approach,”
the Council writes. “However, the perception of these changes as finalised and ideologically driven has caused concern worldwide.”
In its letter, the
independent advisory board -- originally founded in 2009 -- states that it was not consulted before Meta made the “sweeping changes,” and argues that this “sets a troubling precedent for
how safety considerations are weighed in major policy decisions.”
At the start of this year, shortly after President Donald Trump was re-elected, Meta
CEO Mark Zuckerberg announced that the company would be eliminating its fact-checking program and integrating Community Notes -- a user-based content-moderation approach -- across Instagram, Facebook
and Threads.
Zuckerberg also said Meta would be ending “a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream
discourse.”
Not long after, the company altered its hateful conduct policy to “allow allegations of mental illness or abnormality when based on gender or sexual orientation,”
while also removing a long-held policy that barred users from referring to women as household objects or property, or to transgender or non-binary people as “it.”
The
Council argues that removing traditional proactive detection methods for “borderline” harm -- “harm that may not meet the threshold of illegality but nonetheless affects significant
numbers of young people, women, and other groups” -- will put an “unreasonable burden” onto users, especially groups most vulnerable to long-term, cumulative harm.
With its
massive global reach, the Council argues, Meta sets a standard not only for online behavior but for societal norms, and could normalize harmful behaviors by dialing back safeguards for
protected communities, ultimately “undermining years of social progress.”