Twitter Expands Prohibited Content To Include Violent Threats

Continuing its war on haters, trolls and all manner of aggressors, Twitter on Tuesday expanded its definition of prohibited content to include “threats of violence against others or promot[ing] violence against others.”

Previously, under its violent threats policy, the definition was limited to “direct, specific threats of violence against others.”

“Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior,” Shreyas Doshi, director of product management at Twitter, explains in a new blog post.

To enforce the new policy, the social giant is giving its support team the power to lock abusive accounts for specific periods of time.

“This option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people,” according to Doshi.

Twitter already had a number of enforcement procedures in place, such as requiring offending users to delete content or verify their phone number. The microblogging leader is also testing a new product feature to help it identify suspected abusive Tweets and limit their reach, Doshi said.

The experimental feature takes into account various signals and context that frequently correlate with abuse, including the age of the account and the similarity of a Tweet to other content that Twitter’s safety team has previously pegged as abusive.
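Twitter has not published how those signals are combined, but a minimal sketch suggests how such signal-based filtering might work in principle. The weights, threshold, and the word-overlap similarity helper below are assumptions for illustration, not Twitter’s actual method.

```python
from datetime import datetime, timezone

# Hypothetical illustration only: Twitter has not disclosed its scoring logic.
# Signal weights, the threshold, and the similarity helper are assumptions.

ABUSE_SCORE_THRESHOLD = 0.7  # assumed cutoff for limiting a Tweet's reach


def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Rough text similarity between two Tweets based on word overlap."""
    words_a, words_b = set(text_a.lower().split()), set(text_b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)


def abuse_score(tweet_text: str, account_created: datetime,
                known_abusive_tweets: list[str]) -> float:
    """Combine two signals the post mentions: account age and similarity
    to content previously flagged as abusive."""
    age_days = (datetime.now(timezone.utc) - account_created).days
    # Newer accounts contribute a higher score (assumed weighting).
    age_signal = 1.0 if age_days < 7 else 0.3 if age_days < 30 else 0.0

    similarity_signal = max(
        (jaccard_similarity(tweet_text, flagged) for flagged in known_abusive_tweets),
        default=0.0,
    )
    return 0.4 * age_signal + 0.6 * similarity_signal


def should_limit_reach(tweet_text: str, account_created: datetime,
                       known_abusive_tweets: list[str]) -> bool:
    """Flag a Tweet for reduced distribution rather than removal,
    mirroring the 'limit their reach' behavior described in the post."""
    return abuse_score(tweet_text, account_created,
                       known_abusive_tweets) >= ABUSE_SCORE_THRESHOLD
```

The key point the sketch captures is that nothing is deleted outright; content crossing the threshold simply gets less distribution, which matches Doshi’s framing below.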

“It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content,” Doshi assured on Tuesday. “This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.”

Twitter has recently taken a number of steps to make users feel safer and more secure. Last month, it officially prohibited the posting of “revenge porn” and “excessively violent media” on its platform.

Earlier this year, Twitter began streamlining the process of reporting various content issues, including impersonation, self-harm and the sharing of private and confidential information.

Acknowledging Twitter’s shortcomings, CEO Dick Costolo recently sided with critics who accused the company of failing to curb untoward behavior. “We suck at dealing with abuse and trolls,” Costolo admitted in an internal memo. “I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO. It’s absurd.”

To promote a more positive discourse, Twitter recently helped Dove target nasty comments directed at celebrities on Oscar night. Based on keywords and other tracking analysis, Twitter worked with the Unilever brand to identify negative tweets before and during the awards show.

After the malicious tweets were posted, Dove’s team tweeted non-automated responses, including “constructive and accessible advice” to encourage more positive online language and habits.

The #SpeakBeautiful initiative built on Dove’s Campaign for Real Beauty, which has been challenging popular perceptions of feminine appeal for more than 10 years.
