Hoping to regain users' trust and shed more light on its “community standards,” the company is publishing worldwide the internal guidelines it uses to define hate speech, violence, nudity, terrorism and other unacceptable content.
“You have told us that you don’t understand our policies,” Monika Bickert, vice president of global policy management, notes in a new blog post. “It’s our responsibility to provide clarity.”
In another first for Facebook, the company is establishing a new appeals process so people and organizations can challenge decisions about particular posts.
Officially, Facebook’s content policy team is responsible for developing its community standards, which it says evolve over time.
Spanning 11 offices around the globe, the team includes subject matter experts on issues such as hate speech, child safety and terrorism.
Facebook also works with outside experts, including academics, non-governmental organizations, researchers, and legal practitioners.
On hate speech, for example, Facebook has sought the counsel of Timothy Garton Ash, an Oxford University professor. Ash is credited with creating the “Free Speech Debate,” which looks at these issues on a cross-cultural basis.
For Facebook, the changes are part of a broader effort to earn back user trust following the Cambridge Analytica controversy.
Earlier this week, the embattled tech titan once again reiterated its promise not to sell users’ information or disclose their personal identities to ad partners.
Ahead of a comprehensive European data law set to take effect next month, Facebook also began offering users more control over their profiles and personal information.
To comply with the new law, known as the General Data Protection Regulation (GDPR), the company plans to explicitly ask consumers whether their data can be used to power targeted advertising and facial-recognition technology.