Facebook once again found itself at the center of a controversy this weekend, when industry watchers learned that the company conducted a social experiment on some 700,000 unsuspecting users.
For the test, Facebook manipulated users' news feeds to deliberately filter out some positive or negative posts. Researchers then observed people's reactions and concluded that mood was “contagious,” with users' responses matching the tone of the posts they saw. That is, people shown more negative posts began posting more negative material themselves, while those shown more positive ones responded with happier posts. Facebook gathered the data, which was analyzed by researchers from Facebook, Cornell, and the University of California, San Francisco. The findings were published earlier this month in the Proceedings of the National Academy of Sciences.
The findings in themselves aren't particularly controversial. But Facebook's methodology, which required the company to
manipulate its users, is another matter. This weekend, as news of the study hit the mainstream media, numerous commentators criticized Facebook for conducting psychological experiments on people
without their informed consent.
Facebook takes the position that its users consent to virtually any and all use of their data by signing up for the service. But agreeing to a service's terms
of use isn't the same as giving “informed consent” to participate in an experiment -- at least, not the way that term is used by researchers.
“This study is a scandal because
it brought Facebook’s troubling practices into a realm -- academia -- where we still have standards of treating people with dignity and serving the common good,” University of Maryland law
professor James Grimmelmann wrote this weekend in a widely circulated blog post. “The sunlight of academic
practices throws into sharper relief Facebook’s utter unconcern for its users and for society. The study itself is not the problem; the problem is our astonishingly low standards for Facebook
and other digital manipulators.”
In general, researchers must describe a study in detail to its subjects and outline any potential risks in order to obtain their informed consent.
News of the test also underscores how little people actually know about Facebook's news feed algorithm. Some users probably know that Facebook filters the stories that appear in their feeds, but don't know exactly how.
It's safe to assume, however, that few people thought Facebook would pick and choose the stories to display in order to
manipulate users' moods. In fact, until this study was published, the suggestion that Facebook weeded out certain posts from users' news feeds for the express purpose of making them feel bad would
have sounded implausible at best.
Yesterday, one of the study's authors, Facebook's Adam Kramer, said that “the research benefits of the paper may not have justified all of this
anxiety.” He added that the experiment was conducted two years ago. “We have come a long way since then,” he wrote on his Facebook page.
But that statement just
raises other questions, including how many other social experiments Facebook has been conducting on its users. After all, it took two and a half years for this one about mood contagion to come to
light.
News of the study also raises the question of whether Facebook will once again find itself facing legal action. Even though the company's terms of service allow it to draw on users'
data for “research” purposes, those types of vague clauses don't always scuttle litigation.
The Federal Trade Commission also could potentially accuse Facebook of deceiving users,
or of engaging in unfair practices. Of course, it's too soon to know whether regulators are inclined to get involved, let alone whether they would have a good case. But it wouldn't be surprising for
watchdogs to come forward this week and ask the FTC to investigate.