YouTube announced its decision to turn off comments on millions of videos showing minors just days before social-media site Gab released software that creates independent comments sections for
websites.
Both companies have stirred controversy in handling user-generated content. Publishers are all too familiar with the pitfalls of giving anonymous internet users a digital soapbox.
Brands like Disney, Hasbro and AT&T pulled their ads from Google’s YouTube video-sharing platform after a vlogger showed how alleged pedophiles had posted inappropriate comments below
videos of children. Those posts included timestamps that pointed to compromising segments of little girls doing gymnastics poses, demonstrating their “morning routines” or licking
Popsicles.
YouTube responded by disabling comments on most videos featuring children (a small number of high-profile creators can keep their comments sections if they actively moderate them) and
fast-tracked the development of new technology to identify and block predatory comments.
Free-speech advocates who don’t want to see a handful of tech giants limit civil discourse are
right to fret over YouTube’s broad power to restrict online discussions that a few trolls have hijacked.
Steven Rosenbaum, a senior adviser at investment bank Oaklins
DeSilva+Phillips, offers an excellent analysis in his Media Insider column, discussing YouTube’s decision and the history of websites that have disabled comments sections to stifle
trollish behavior.
Any company that seeks ad dollars must ensure that its platform is “brand-safe” or face advertiser boycotts. While traditional publishers have editors to
maintain standards, younger platforms like Facebook and YouTube, with vast troves of user-generated content, have struggled to prevent objectionable posts from pedophiles, terrorists and hate
groups.
Gab, a social network that champions free speech, this week introduced a browser extension that lets web users read and post comments on sites that have disabled reader comments. Its
“Dissenter” extension has drawn worried responses from websites such as Engadget, Vice News, The Daily Dot and CNet.
Most of the commentary focuses on how Gab attracts
right-wing extremists who have been banned from other social platforms. Gab entered the national spotlight after reporters found that Robert Bowers posted anti-Semitic remarks on the site before
allegedly murdering 11 innocent people in a Pittsburgh synagogue in October.
Joe Mandese, the
editor in chief of MediaPost, wrote a column about Gab and its
unfortunate power to organize hate speech.
I’ve had a Gab account for several years and use it mostly to post links to stories I’ve written, not to seek out posts from hate groups
and people advocating violence.
I also like the idea of having a competitor to Twitter, which has generated its own controversies for hosting remarks from anti-Semites like Nation of Islam
leader Louis Farrakhan.
Twitter also hosts anti-Semitic tweets from Rep. Ilhan Omar (D-Minn.) without banning her. She now faces a backlash in the House after refusing to apologize for yet another hurtful remark that
propagates painful stereotypes.
Twitter and Gab let their users block other accounts so they don’t have to see remarks from unhinged extremists, stalkers or trolls. As a reporter,
I’m reluctant to block users of either platform, because doing so creates a blind spot in monitoring what people are saying.
I also love reader comments, and I’ve heard enough
criticism over the years not to take it personally when readers describe me as “an idiot,” “a smug elitist” or “a total waste of protoplasm.” For that reason,
I’m generally in favor of keeping reader comments turned on.