If you read about the latest YouTube controversy and just skimmed the headlines, it could seem like the Google unit was acting to protect children from predators. Certainly, that's what we want it to do.
But the decision to block users from commenting on most videos that feature kids younger than 13 isn’t a solution. It’s far worse than that. It’s part of a problem that web platforms have been ignoring and dancing around almost since the open web began.
The line between free and objectionable speech isn't one easily drawn by algorithms. And as platforms like YouTube and Facebook remain steadfast in their role as platforms, not publishers, they walk an increasingly complicated line: trying to keep their sites safe for advertisers while not becoming active editors of the content or comments that drive the traffic those advertisers are paying for.
It may be that this tightrope walk is about to become impossible.
Certainly, they’re not alone in facing this dilemma.
A decade ago, NPR introduced its reader commenting system, noting, "We are providing a forum for infinite conversations on NPR.org. Our hopes are high. We hope the conversations will be smart and generous of spirit. We hope the adventure is exciting, fun, helpful and informative."
What followed was, by any reasonable measure, a disaster.
After eight years of toxic, hateful, troll-driven posts, NPR announced the end of user comments. "After much experimentation and discussion, we've concluded that the comment sections on NPR.org stories are not providing a useful experience for the vast majority of our users," wrote Scott Montgomery, former managing editor for digital news, in his 2016 farewell-to-comments address.
Simply put, the trolls won. Just to be clear, trolls are the loudest voices in the room, the ones who post controversial, abusive, hateful things — looking to get people angry, with little concern for whom they hurt or the ramifications of their words.
Comments have been under attack across the web.
Motherboard dropped its comments section in 2015. Bloomberg, The Verge, and The Daily Beast have all removed comments since then. Even Huffington Post, which in its prime had a passionate and engaged commenting community, has disabled community conversation.
So, what happened to HuffPost? Was it a conspiracy to silence trolls and conservative commentators? Or a conspiracy to silence what one poster called “the lefties”?
Paige Harmes, a former community manager at HuffPost (2008-2017), posted this on Quora: "The actual, real answer is that on June 14th of 2017, huffpost, after the merger with Yahoo and rebranding as Oath, cut approximately 2,100 of its staff… including the entirety of its moderation and community team. Therefore, their live comment model was no longer sustainable, and thus terminated after this was realized by senior staff. That's it. That's all. It had nothing to do with right or left wing content."
So why does what YouTube does really matter now?
YouTube’s decision to ban comments on videos that feature kids under the age of 13 teaches the trolls an important lesson. Flooding a site with hate speech is certain to get a predictable response. Unable to create a technology to facilitate civil conversations, tech giants like Google will simply make a business decision. What’s worth more to them, the First Amendment — or the happiness and ad dollars of big consumer brands?
It’s not a hard call. Google isn’t a publisher, and it has no obligation to operate in the public good. Its clients don’t want to be near objectionable content. So the expeditious solution is to shut off discussion and dialogue.
The real solution is painfully simple, and highly unlikely. It’s time for Google to stop hiding behind the safe-harbor provisions of the 1996 Communications Decency Act. The act provided immunity from liability for providers and users of an "interactive computer service" who publish information provided by third-party users, saying in part: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
No liability. But as Facebook and Google continue to siphon away the advertising revenue that has for generations funded the creation and publication of news, readers are left in a painfully untenable position: the institutions receiving the lion’s share of the revenue take no responsibility for the content they distribute.
Shutting down comments is the right decision to protect revenues, but it’s the wrong decision if your goal is to foster and benefit from a robust and engaged community with conversation, dialogue, and debate.
Algorithms can’t set or manage editorial standards. But human editors and community managers most certainly can.
YouTube had a decision to make, and its public comments make the decision clear: "Over the past week, we disabled comments from tens of millions of videos that could be subject to predatory behavior. Over the next few months, we will be broadening this action to suspend comments on videos featuring young minors and videos featuring older minors that could be at risk of attracting predatory behavior."
These steps give a road map to trolls who want to shut down civil conversation. All you need to do is post objectionable content on any thread or channel, and if it’s done at large enough scale, YouTube will remove comments. What about the #MeToo movement, or the kids from Marjory Stoneman Douglas High School and their March For Our Lives posts? Will trolls be able to get those voices silenced and discussion removed from commercial websites?
Why not hold the people publishing predatory speech responsible, rather than limit the speech of reasonably civil conversation-makers?