I really love the Marvel universe, and especially “Guardians of the Galaxy.” I watched GOTG 1 and 2 on a long-haul flight in the Before Times and had to be shushed for laughing so hard. But I’m not particularly loyal to Marvel. Right now, for example, I’m churning through “Jupiter’s Legacy” on Netflix.
These movies and shows scratch an itch in me: the itch for clarity. Good, evil, great power, great responsibility, etc. No question about who I’m cheering for. Superhero movies -- even when they try to create complex, layered, conflicted characters -- always make it abundantly clear who is the good guy and who is the bad guy.
Here in the real world, I often find myself wishing for that same level of clarity. With all the complexity and uncertainty in the world, it’s a huge relief when something, anything, can be readily identified as friend or foe.
One of the few things that fit that bill in the past decade was Citizens United, the 2010 Supreme Court ruling that allowed corporations to spend unlimited private money in support of political causes. It’s an easily detestable decision, one which had the effect of “weaken[ing] political parties while strengthening single-issue advocacy groups and Super PACs funded by billionaires with pet issues,” according to a study done this year.
Which was why I found it so frustrating, a few years ago, to listen to an episode of the “More Perfect” podcast in which they broke down how we got to Citizens United, and the arguments for and against. I wanted to know who the bad guy was so I could hate cleanly -- but I couldn’t. Based on the outcomes, I still believe it was a terrible decision, but I understand the rationale.
I’ve got the same internal struggle when it comes to social media and its ability to control who gets to speak and who doesn’t. It’s a topic that’s back in the news this week, with Facebook realizing it might not be the smartest policy to allow only really important or famous people to incite violence.
It seems obvious that if someone is really important or famous, they should be held to a higher standard, not a lower one. But Facebook’s take has always been to weigh the “newsworthiness” of a post or action against its propensity to cause harm.
Facebook is now abandoning the newsworthiness piece of the puzzle. If your content has sufficient propensity to cause harm, you should find yourself on the outside of the walled garden no matter who you are.
Simple. Clean. An unequivocal good move. Why don’t I feel more elated?
Maybe it’s because it shouldn’t come down to one young, unelected tech billionaire to decide what has the propensity to cause harm -- to effectively be the Supreme Court of the World.
Maybe it’s because it is patently obvious that Facebook doesn’t yet have the ability to credibly distinguish what has the propensity to cause harm. Just last week, for example, it deleted archival images of people in traditional dress in Papua New Guinea, citing “nudity.”
I’m cherry-picking, of course. Facebook strategists will tell you they successfully deal with the vast majority of objectionable material, that their AI scanners pick it up and prevent it from going viral. And yet they can’t tell someone in traditional dress apart from pornography.
Removing the newsworthy exemption may be a much-needed step in the right direction, but Facebook is definitely not a safety superhero.