Twitter Tests New 'Safety Mode' Feature To Block Harmful Content In Replies

On Tuesday, Twitter announced that over the next few months it will begin testing its long-awaited “Safety Mode” feature and expand its pool of beta testers.

Users granted access to “Safety Mode” will find it under the “Privacy and Safety” tab in the app. Once enabled, the feature lets them automatically block potentially problematic or harmful accounts for up to seven days.

The example Twitter gives, in a short video, is of a young woman dressed in sportswear, commenting to her friend that she thought the referee made a great call. Suddenly, looming faces of strangers descend from the sky to bash her statement, disagreeing with her take on the call. With “Safety Mode,” the woman silently banishes the torrent of replies from “haters” and returns to peace and quiet.

This is the kind of relief and peace of mind Twitter hopes the tool gives its users –– many of whom have likely been on one end or the other of a hateful, judgmental thread.

But while “Safety Mode” will shield users from negative replies and mentions, it could also undercut accountability: nothing stops users from posting hateful comments themselves once they know they can simply tune out the backlash.

On Twitter, many users commenting on the announcement have instead voiced their preference for a post-tweet editing button –– a feature the Twitter community has long requested and one that, according to former CEO Jack Dorsey, will likely never happen.

It may be too early to tell how “Safety Mode” will affect users' well-being, but it is a step toward greater privacy and safety in an industry plagued by hate speech and misinformation. It follows the “Warning Prompts” feature the social network has been testing, in which users posting potentially offensive or harmful material are shown a prompt asking whether they would like to edit or delete their intended post.

A Cornell University study on these prompts found that they significantly reduce harmful content.
