I’ve been in the media business, one way or another, my whole career.
So I know how ratings work. When you make a television program, you want lots of people to watch. But if you’re doing journalism, you balance that need with some other important things. You want to make shows that are factual, honest, accurate and relevant. Making wild claims or accusations that have no basis in fact simply doesn’t meet the standard that content creators should abide by.
But we’re living in a new world. Platforms like Facebook, YouTube, Twitter, and Reddit have been adamant that they are not publishers or makers of content. In the U.S., the safe harbors of the Digital Millennium Copyright Act protect them against copyright damages -- but also blur the line around what they are responsible for.
The DMCA, though, was signed into law by President Clinton in October 1998. It was drafted to protect copyright holders and guard against the theft and republication of stolen media. The current state of the web was hardly imagined.
Interestingly, the copyright problem the DMCA was written to address has mostly been solved with software. If you post a video to Facebook with music in the background, you’ll get an instant warning. The music may be muted, or, in the case of YouTube, the rights holder may claim the ad revenue on your video -- algorithmic review and automated takedown notices have made most manual copyright disputes obsolete.
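At its core, this kind of automated matching compares a fingerprint of uploaded audio against a database of registered works. The sketch below is purely illustrative -- real systems like YouTube's Content ID use robust acoustic fingerprints that survive re-encoding and background noise, not a simple hash, and every name here is hypothetical:

```python
# Toy sketch of fingerprint-based copyright matching.
# Illustrative only: a real system uses acoustic fingerprinting,
# not an exact hash of the raw bytes.
import hashlib
from typing import Optional

# Hypothetical reference database: fingerprint -> rights holder
REFERENCE_DB: dict[str, str] = {}

def fingerprint(audio: bytes) -> str:
    """Stand-in for an acoustic fingerprint of an audio segment."""
    return hashlib.sha256(audio).hexdigest()

def register_track(audio: bytes, rights_holder: str) -> None:
    """A rights holder registers a track for monitoring."""
    REFERENCE_DB[fingerprint(audio)] = rights_holder

def check_upload(audio: bytes) -> Optional[str]:
    """On upload, return the matching rights holder, or None.
    A match could trigger muting, a takedown, or an ad-revenue claim."""
    return REFERENCE_DB.get(fingerprint(audio))

register_track(b"copyrighted-song-audio", "Example Records")
print(check_upload(b"copyrighted-song-audio"))  # matched: "Example Records"
print(check_upload(b"original-home-video"))     # no match: None
```

The point of the design is scale: once the lookup is automated, the platform never has to read a takedown letter for the common cases -- the policy (mute, block, monetize) fires automatically on a match.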
Today, the era of anti-social media is upon us.
The issues facing the platforms are far more complicated than black-and-white questions about stolen music or movies.
Almost no matter what they do, the platforms find themselves in an editorial no-win situation. If they take a post down, they’re attacked for policing speech. If they leave it up, they’re pilloried for allowing their platform to be used by extremists.
“There’s a thin line between disgusting and offensive speech, and political speech you just don’t like. People are blurring the lines,” Jerry Ellig, a professor at George Washington University’s Regulatory Studies Center, told the Associated Press.
Now there’s growing concern that Washington will pile new rule-making on top of this framework, further confusing what already seems an unmanageable and conflicting set of demands. Sen. Josh Hawley (R-Missouri), an outspoken conservative critic of the platforms, has proposed legislation that would require them to prove to Washington that they moderate content without political bias. Failure to certify themselves “bias-free” would cost them the lawsuit immunity they now enjoy under Section 230 of the Communications Decency Act, exposing them to litigation and financial penalties.
Ordinarily, a threat from a relatively unknown senator would be little more than a footnote. But today, he’s got a big supporter: the President of the United States.
At a “social media summit” held at the White House last July, Trump promised to explore “all regulatory and legislative solutions to protect free speech and the free-speech rights of all Americans.” If you think that sounds a lot like Hawley’s bill, you’re not alone.
The complications in this era of anti-social media can’t be overstated. On one hand, you have a White House using the phrase “fake news” to assail virtually any content that doesn’t directly support administration policies or deify the President himself. At the same time, you’ve got a President who purposely publishes information he knows to be provably false -- including a hurricane forecast map altered with a black Sharpie to show a falsified storm track, a stunt that put the National Weather Service at the center of a political firestorm.
Even if Washington’s concerns about bias, Russian trolls, or fake news were an honest take on a complicated issue, having the government legislate content would be a profoundly dangerous attack on the First Amendment. When you layer that with the clear willingness of the President to attack journalists, media organizations, and platforms that don’t appropriately genuflect to his whims and commands, the danger is abundantly clear.