
Was Facebook's Little Experiment Illegal?

by Lulu Chang

Back in 2012, researchers at Facebook conducted a 700,000-person psychological experiment and didn't tell anyone about it — including their subjects. Over the course of a week, data scientists manipulated what some users saw in their news feeds in an attempt to determine whether positive or negative posts altered their emotional states. But was Facebook's little experiment illegal?

Many, many people are infuriated by what they consider to be a breach of trust, privacy, and general corporate decency. Critics have accused Facebook of turning its users into lab rats and guinea pigs, running tests and experiments without their subjects' knowledge. Several are calling Facebook's news feed manipulation an ethical disaster, with Kate Crawford, visiting professor at MIT's Center for Civic Media and principal researcher at Microsoft Research, telling the Wall Street Journal, "It's completely unacceptable for the terms of service to force everybody on Facebook to participate in experiments."

Others have voiced their agreement, with David Holmes of PandoDaily calling Facebook "more unethical than we thought." Even Clay Johnson, who co-founded Blue State Digital, the firm that used social media and the Internet to help Barack Obama win the presidency in 2008, took to Twitter to express his concern, saying, "The Facebook 'transmission of anger' experiment is terrifying."

And Johnson raises even more pressing questions about just how far such "experimentation" could go. If data scientists are able to manipulate emotions by way of news feeds, where are the upper boundaries of this capability? Could revolutionary sentiments be spread through Facebook to cause political unrest in countries like Sudan or Iraq? Or, closer to home, could Facebook advance its own political agendas by suppressing or promoting posts concerning one party or another, all in the name of experimentation?

But perhaps the most salient concern about Facebook's emotional experiment came from Linda Holmes of NPR, who called Facebook's actions "gross." Some of the people who unwittingly participated in the study might have been living with serious mental or psychological illnesses, which could have been exacerbated if their news feeds were doctored to display an unnaturally high number of sad posts. This, Holmes points out, could have been the difference between an okay day and a very bad day, particularly considering Facebook's finding that people are indeed affected by the emotions of their friends' posts.

Ironically, Facebook's own advertising policy hints at why the experiment could have unintentionally harmed individuals with mental health issues, as its guidelines state:

While Facebook claims that its experiment was meant to create a better user experience for the rest of us, there does seem to be something slightly sinister about the assumption that the social media giant can, quite literally, toy with our emotions on a whim.

This is where the law comes in. According to Susan Fiske of Princeton, one of the Facebook study's editors, "People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty." When it comes to psychological experiments, informed consent is key, and while Facebook is pointing at its blanket data use policy, which broadly grants the company permission to do practically anything with user information once it's on the platform, several experts say that this is insufficient.

James Grimmelmann, a University of Maryland law professor, points out that Facebook did not comply with federal regulations when carrying out its experiment, particularly these four informed consent requirements of the Common Rule:

(1) A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject’s participation, a description of the procedures to be followed, and identification of any procedures which are experimental;

(2) A description of any reasonably foreseeable risks or discomforts to the subject; …

(7) An explanation of whom to contact for answers to pertinent questions about the research and research subjects’ rights, and whom to contact in the event of a research-related injury to the subject;

(8) A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled.

Facebook's Data Use Policy does not specifically address a single one of these issues. If you ever manage to find this page, this is what it says:

Notice that nowhere in there does the policy explain any procedures for experimentation, identify what counts as an experiment, or describe any associated risks. Other than giving you the option not to have a Facebook account at all, the policy also doesn't allow users to opt out of participating in any experiment.

Ultimately, the real kicker in the whole situation is the unparalleled selfishness Facebook displayed by conducting its experiment. As Jacob Silverman, author of the upcoming Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, told The Wire, "Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there's little reason to think that they won't do just that." Facebook won't keep out sad posts to keep you happy; it will keep out sad posts to keep you on Facebook.

With all the ruckus that the experiment has caused, even Facebook is backing away from it and calling it a mistake, with lead data scientist Adam Kramer issuing an apology of sorts on his own page. Said Kramer, "In hindsight, the research benefits of the paper may not have justified all of this anxiety." To which we say, gee, ya think?