Found something Sunday on my Facebook feed from my cousin Leah. It was a clip from a 1988 BBC special about Sir Nicholas Winton, who had
saved 669 mostly Jewish Czech children in 1938 and 1939. The then 79-year-old humanitarian is seen in an audience among hundreds of others, and when the host asks if anyone in the theater owes his or her life to Sir
Nicholas, they all rise. Taken by surprise, the old man comes to tears.
Me too.
Two years ago, I might not have seen that post. At the time, Facebook was conducting its now infamous
experiment in mood manipulation by filtering out certain emotionally charged content, positive or negative. The test on 689,003 unwitting users employed sentiment-analysis software to
locate emotionally telling words -- "tears," for example -- and to create a baseline against which to measure activity around emotion-triggering content.
The goal was not to help
society come to grips with its emotions. The goal was to learn how to increase activity on Facebook. And the oblivious 689,003 were the guinea pigs. Naturally, a number of Facebook users/emotion-havers
have freaked out at this revelation.
Now, filtering out heartwarming and infuriating social-media messages isn't exactly Tuskegee. No harm came to anyone, unless lost sniffles
rise to the level of harm. And there's no evidence that the subjects were chosen based on powerlessness, marginalization or any other criteria bespeaking special vulnerability. But that misses
the point.
The point is that they were subjects of a scientific experiment without their informed consent -- which, simply put, is unethical.
Facebook first responded to the outrage,
predictably, by saying, essentially: “Nuh uh! Using user data for research is covered in the terms of service!” -- which, simply put, was a lie. A double lie. When the experiment was
conducted, that language had not yet been unilaterally and ex post facto inserted into the terms of service. Also, there is a great difference between crunching data based on organic user
experience and surreptitiously altering user experience to see what happens.
One of the researchers also had the gall to observe that the 689,003 were anonymous to the researchers -- as if
this were a privacy matter. It is not a privacy matter. It is a tyranny matter. But it gets worse. COO Sheryl Sandberg, who has spent the past three years circling the globe telling women the right
way to live, leaned back and dismissed the controversy as an endeavor “poorly
communicated.”
Uh huh -- perhaps in the way Pearl Harbor was poorly communicated. The thing is, when one is doing something sneaky and objectionable, the essence of the thing is, ahem,
zero communication. In other words, simply put, she is full of shit.
And entirely in character for her and the company she and Mark Zuckerberg lead. Again and again, the company makes unilateral decisions that affect users and advertisers, often obscuring them
in terms-of-service boilerplate and owning up to them only when they are discovered, whereupon it rescinds the decision and congratulates itself for its heroic ability to take criticism to
heart.
Of course, we only know about the stuff they get caught at. We don’t know what lurks undetected, do we? You are, no doubt, familiar with the truism “It is better to ask for
forgiveness than permission.” The Facebook motto might be “What can we get away with?” Or, perhaps, “Users? Fuck them.”
Pretty ironic, actually. Throughout
the corporate universe, and among all other public-facing institutions, the stewards are learning that they must be transparent lest they face the wrath of the public on social media. Yet Facebook --
a social-media behemoth -- operates with a mentality of impunity that is as retrograde as it is breathtaking.
Because it can. So far.
History records the calamities that befall the world
when too much power is concentrated in too few hands: Nazi Germany, Standard Oil, Kevin Costner.
We should be afraid of a company that serves 15% of mankind and treats them like lab rats...or
worse. The Wall Street Journal reported that, concurrent with the emotions experiment two years ago, Facebook -- in designing anti-fraud measures -- informed thousands of users that they would
be blocked from the site unless they could prove they were humans, not bots -- this despite Facebook knowing all along that the users were indeed human.
Yes -- the researchers intentionally
alarmed and frightened their subjects. I mean, I guess those people were alarmed and frightened. If they sent out posts to their friends describing those emotions, the content may have been
blocked altogether.