Commentary

Dangerous Speech: When Should Platforms Draw A Line?

Speech is free. But that doesn’t mean it comes without cost.

NRA spokeswoman Dana Loesch said journalists were “the rat bastards of the earth” and “need to be curb-stomped.”

What does that last phrase mean? Forcing someone’s mouth onto a cement curb, then stomping on the back of their head to break their teeth. It is clearly a call for violence.

Loesch has since said her remarks were taken out of context. Yet a video featuring Loesch on the NRA’s YouTube channel issues another threat to journalists: “Your time is running out.”

Milo Yiannopoulos, the former Breitbart blogger who was forced out after comments appearing to endorse pedophilia, told a reporter he “can’t wait for the vigilante squads to start gunning journalists.” After a gunman opened fire in the Capital Gazette newsroom in Annapolis, Md., last week, killing five people and seriously injuring several others, Yiannopoulos said his words had been a “joke.”


In its current incarnation, the internet publishes and amplifies speech with impunity. That was the idea: to connect voices and share ideas.

But platforms are now facing the reality that they may be fostering hate speech, bots, and trolls without barriers or bumpers. And so, technologists are growing concerned about the misuse of what they built.

What if free speech crosses the line drawn by Supreme Court Justice Oliver Wendell Holmes Jr. in the 1919 case Schenck v. United States: “The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic”? The Court ruled unanimously that the First Amendment, though it protects freedom of expression, does not protect dangerous speech.

Last week, I posted my weekly podcast, "Future Forward," with my editorial partner Gene DeRose. As I do each week, I set up an ad on Facebook promoting the podcast. But the ad was rejected, and I was told I was not approved to run political advertisements.

The podcast explores many topics, but one of them was the “I really don’t care, do u?” message emblazoned on Melania Trump’s jacket as she flew to see children being held in cages. You can listen to that podcast by clicking here.

Facebook wanted me to apply for permission to run political ads, and asked for my name, address, phone number, and either a passport or a photo ID. I provided all of them. A week later, I still can't advertise the podcast. Is it a political ad, or a piece of journalism? How was this podcast flagged? And where is the line between algorithmic review and human editorial judgment?

Then, as the week progressed, another terrible headline arrived with the Annapolis shootings. Years ago, working with the Committee to Protect Journalists, I'd directed a documentary about journalists being targeted and killed around the world. The film, "Journalists Killed In The Line of Duty," had been a painful and personal effort. It aired on Trio, and honestly, the stories seemed disturbing but far away.

Last week, I posted the documentary online, sharing the segments on YouTube.

Again, the material was flagged. I was told that the segment about Raffaele Ciriello, an Italian freelance photographer who was killed in 2002, violated YouTube's Community Guidelines.

I sent in an appeal and got back this response:

"Thank you for submitting your video appeal to YouTube. After further review, we've determined that while your video does not violate our Community Guidelines, it may not be appropriate for a general audience. We have therefore age-restricted your video. For more information please visit the YouTube Help Center.

Sincerely,  

The YouTube Team"

Twice in one week, pieces of serious, professional, timely journalism hit an algorithmic wall.

Meanwhile, the NRA continues to share its video "Curb Kicked," and Yiannopoulos defends his suggestion that vigilante squads should “take out” journalists.

Platforms like Facebook and YouTube struggle to balance free speech with the need to limit and take down hateful speech and dangerous content.

Free-speech robots: we can't know whether they're succeeding or failing, because we can't judge what we don't see. But in my experience, the review process, the advertising restrictions, and the age limits are working more as limits on legitimate and relevant speech.

It’s not hard to find deeply objectionable videos on YouTube. They appear without warning and without age limits. Videos about the wildly popular game Skyrim feature and promote domestic violence and clearly adult sexual themes.

For example, the video “Domestic Abuse in Skyrim” has been viewed 2,171,509 times. Another, “Skyrim Top 10 Sex Mods,” has been viewed 808,552 times. Neither has tripped a YouTube restriction.

If YouTube is restricting a news video while presenting domestic abuse, violence, and sex without limitation, then what is really being blocked? It may be that the story of Raffaele Ciriello, a journalist killed in the West Bank, crosses the line. Can real-world death be censored while simulated violence remains in full view?

How does this NRA video not cross the Supreme Court’s “dangerous speech” line? Watch for yourself here.
