Commentary

Shades Of Grey In Platforms' Content Accountability

“The only thing necessary for the triumph of evil is for good men to do nothing.” —Edmund Burke

Here’s a quick thought experiment: if you are driving a car with four underage drinkers in it, are you responsible for them breaking the law? What if one of the people in the car was selling liquor to the others? What if it was a ride-share system, like a shuttle, where one of your customers regularly sold alcohol to other passengers, serving as a de facto liquor store? At what point do you become responsible for what happens on your platform?

The above may sound like a ridiculous scenario, but it’s not all that far off from some of the thornier issues facing Internet communities. Is BitTorrent responsible for rampant piracy on its platform? Is Twitter responsible for ISIS users? Is Facebook responsible for groups buying and selling illegal drugs?

I asked a friend. “No way,” he said. Why should platforms be accountable for everything people put on their sites?

And maybe they shouldn’t be. In the above scenario, it certainly wouldn’t make sense for Facebook to be prosecuted for drug dealing. But it could make a certain kind of sense for the company to be prosecuted for facilitating the dealing of drugs, if its representatives knew about such activities and had the means to stop them.

Somewhat unsurprisingly, the folks who suffer harm from bad behavior on the Internet tend to believe the platforms should be held responsible. Earlier this week, the Recording Industry Association of America (RIAA) sent a strongly worded letter to BitTorrent, claiming that the latter “facilitated approximately 75% of the over 1.6 million torrent based infringements of our members’ works in the United States upon which a notice was sent in 2014,” and that “[b]ased on a random sample of 500 torrents… 82.4% were found to be… highly likely to be protected by copyright.”

There’s also legal precedent for some level of platform accountability: U.S. law, for example, already requires companies to report child pornography once it’s identified.

Platforms themselves seem to have conflicting opinions on the matter. On the one hand, they defend the idea that they’re not responsible for the bad apples in their barrels. BitTorrent’s chief content officer, for example, has said, “If you’re using BitTorrent for piracy, then you’re doing it wrong.” This week also saw tech companies including Google, Facebook, and Twitter band together to fight a proposed bill that would require them to report suspected terrorist activity.

But while BitTorrent’s objections may be disingenuous (surely the company must know everybody’s using its software for piracy?), the objection to the anti-terrorist bill is founded on a more substantive argument: namely, that “terrorist activity” has no clear definition, which makes compliance difficult, encourages over-reporting, and potentially infringes upon free speech.

Companies don’t always resist the idea of policing. Two days ago, Reddit decided to ban offensive groups like Coontown. Mashable read Reddit’s motives as pretty mercenary: “Reddit has struggled to balance the free speech imperative that defined it with the need to eliminate the most offensive posts and communities that received plenty of negative press and likely scared away potential advertisers.” In other words, Reddit only did it because it was losing money. But isn’t it possible that the company also did it to create a better environment for its users?

It’s a delicate dance, this: between providing a platform and ensuring its responsible use, between allowing free speech and cultivating community culture. Navigating it requires intent, openness, engagement in conversation, and an exploration of nuance. Like so many things in this life, it’s never just black and white.
