Can Curated Conversations Save Civil Web Discussion?

Illustration: @MagnifyMedia

The current state of conversation on the web is a wild and discordant mess of name-calling, threats, and humiliation.

So how did this happen? Is conversation on the web dead forever, or just resting?

Let’s start with a few things that happened simultaneously. The fast growth of mobile content consumption made short, text-style interactions the new normal. At the same time, the political world became more toxic, handing the winning advantage to players willing to cross what had been the line of civil conversation. So while Donald Trump didn’t invent name-calling and Twitter trolling, he modernized and popularized both practices at once.

For me, the moment the tone of modern conversation changed was the final presidential debate of the 2016 election. The question was about Vladimir Putin. “He has no respect for her,” said Trump, pointing to Hillary Clinton. “Putin — from everything I see, has no respect for this person.”



To which Clinton responded: “Well, that’s because he’d rather have a puppet as President of the United States.”

And Trump fired back: “You’re the puppet!”

This wasn’t highbrow debate jousting. It was schoolyard bullying — and the world noticed.

As the web exploded in a burst of profanity, sexism, racism, and religious intolerance, site managers and technologists paid attention.

Aja Bogdanoff and Christa Morgan thought they had a solution. In 2015 they co-founded Civil Comments, with some new thinking about how to crowd-source civility. They found early acceptance. But today Civil is gone, which Bogdanoff explains this way: “As much as everyone might like to see higher-quality, less-toxic comments on their favorite news sites, the reality is that the number of sites willing and able to pay for comments software of any quality is not large, or growing.”

That appears to be the issue at the heart of it: this is a problem no one wants to pay to solve. Twitter sits at the red-hot center of incivility, and while it bans some bad behavior, trolling of every kind remains on the throne. Facebook, similarly, is trying to build better algorithmic reviews and hire more human curators, but toxic speech is very hard to define when you’re also measured by the growing size of your user base and its engagement with the platform. “Haters gonna hate” becomes the unstated frustration of anyone trying to moderate for civility.

Google is focused on building an algorithm that flags messages likely to be perceived as toxic. The tool, called Perspective, has been kicking around for almost two years, and so far there’s no sign it has solved the problem of the web’s dark side taking over core conversations.

But what if we’re looking at things upside down?

The current crop of solutions focuses on “toxicity,” defined as the likelihood that a particular comment will push others out of a conversation for obvious reasons: it’s disrespectful or rude.

But what if the problem isn’t the civility of the conversation, but the people who start it? The open nature of the web and the increasingly abbreviated nature of web comments tend to drive news comments into polarized camps.

What if conversations started among friends rather than strangers? What if conversations were, by design, smaller, and a real identity was asked for at the door? Then you’d have real people meeting at human scale, exploring interesting topics. No one would come in looking to change minds or prove a point, as in today’s conflict-oriented model. Instead, you’d enter to learn, to share, to query others, to be part of a curated community.

Some would have you believe that the state of the web today is what we’re sentenced to tolerate forever.

But history says that the web is always changing, and that new platforms replace old ones. Instagram, with its social contract of warm, positive posts, is gaining traction, offering a glimpse of other ways communities can behave.

At the end of the day, the web was made by humans, not algorithms.  And we have the ultimate currency to provide new platforms with rocket fuel. We pay with our clicks.

Are we entering the era of curated conversations? I’m betting we are.

2 comments about "Can Curated Conversations Save Civil Web Discussion?".
  1. Jenny Mirken, June 25, 2018 at 3:35 p.m.

    Great article, thanks! As the founder and CPO of a kids' social platform, I have probably thought about this issue as much as anyone. We did two things on our platform to protect users and foster positive interactions, features that I believe could translate well to grownup products, especially Twitter: 1) No comments. Why are they necessary? For kids, we removed comments because we could not guarantee that kids wouldn't reveal identifying, therefore risky, information in their public posts. For grownups, I feel like comments are just noise when it's the account owner's content/perspective/opinion I'm following. If someone interests you, follow them. See what they say, engage with their posts directly, re-post their posts even. Comments have become the playground of amateurs, bullies and fame-seekers. 2) Also, on Jet, we only show the number of post likes TO THE POSTER. This helps diminish peer pressure on kids around the number of likes they get, but, more importantly, it encourages them to post more! They take more risks being creative, goofy and silly because they aren't publicly judged for their content. I like that Twitter and Instagram are places for pundits and media outlets and celebrities and tastemakers to instantly share their lives and thoughts with the rest of us. I just think we can keep the interaction more about ME and YOU without all the noise of the peanut gallery. If it's a good idea for kids, chances are it's the higher ideal for everyone. (By the way, both of these Jet features came from conversations with kids about what THEY wanted in a better platform.)

  2. Ed Papazian from Media Dynamics Inc, June 26, 2018 at 8:08 a.m.

    Steven, as I'm sure you appreciate, the main problem is potential bias in a "monitor" who deletes posts deemed offensive or overly provocative based on the monitor's own biases. Who is going to monitor the monitors? Jenny's second solution, albeit a partial one, is a very good idea. By not indicating the number of replies, except to the poster, you probably minimize the pile-on factor and dissuade many people from joining in just to bully or harass the poster and others who have commented.
