Facebook Experimented on 600,000+ Users Without Explicit Permission


What if someone ran an experiment on you without your knowledge?  And in that experiment, what if they manipulated your emotions to see how far they could push that manipulation?  Would you be okay with it?  And what if that experiment was justified because, at some point, you agreed to a Terms of Service that said the company could use your data any way it saw fit? Congratulations, you’ve just been experimented on. Now, to be clear, it was only about 600,000 users out of the more than a billion Facebook has, so the scope of the experimentation was relatively small, but that notwithstanding, the ethical problem is overwhelming and something we really need to consider. Researchers wanted to know whether emotional contagion, the idea that the emotions of the people around you can spread to and shape your own, works online the way it does in real life.  From New Scientist:

A team of researchers, led by Adam Kramer at Facebook in Menlo Park, California, was curious to see if this phenomenon would occur online. To find out, they manipulated which posts showed up on the news feeds of more than 600,000 Facebook users. For one week, some users saw fewer posts with negative emotional words than usual, while others saw fewer posts with positive ones.

Digital emotions proved somewhat contagious, too. People were more likely to use positive words in Facebook posts if they had been exposed to fewer negative posts throughout the week, and vice versa. The effect was significant, though modest.

Interesting, right?  I’m not going to lie; the conclusion is genuinely interesting, because it shows that emotional contagion is just as real online as it is in the physical world.  But do the methods used to arrive at that conclusion seem ethical?  Absolutely not.

It’s clear that, at the very least, this was unethical.  The legality isn’t really in question: using the service doesn’t give you a right to have all your information presented equally, or to have nothing changed when the company sees fit, and even the most heated reactions I’ve seen to this story aren’t claiming it was illegal.

The ethics, however, are shocking.  Facebook let people believe they were having one experience when they were actually having another.  On top of that, it let people feel manufactured emotions, some of them detrimental, for the sake of its “research.”  How do we let such a thing happen without a revolution of some kind?  Or have we gotten so complacent that being manipulated this way doesn’t even register on our anger meter anymore?

The implications are huge.  Facebook, through these unethical experiments, has basically learned that it can program you emotionally.  What if it always wants you to be happy?  Then it shows you happy posts.  What if it wants you to feel warm and fuzzy about an advertiser?  It can highlight posts praising that advertiser, and you will feel warmth toward them.  It could, theoretically, show you an ad on the right side of the screen, manipulate your feed to show that advertiser in a good light, and get you to buy a product, all without your knowledge.

Scared yet?  Because I sure as hell am.

In fact, in a subtle nod to the level of manipulation Facebook can pull off, and in spite of the fact that everyone I know is talking about this “experiment,” here are the trending topics Facebook is showing me as I write this:

[Screenshot: my Facebook trending topics at the time of writing]

Funny, but something strikes me as missing from that list…

I should note, because I think it’s important, that I don’t mind this personally as much as I probably should.  While I’m shocked that they would engage in something so clearly unethical, I understand that things like this can happen when you hand so much of your data to one company and trust it to keep that data relatively safe.  We do it with lots of online services.  Think of the profile Google has on you.  Or Microsoft.  They could probably reconstruct who you are and what you’re about, too.

But that doesn’t mean I excuse Facebook for what is a clear violation of every accepted ethical norm for research on human subjects.  The authors of the paper argue that because the text of messages and communications was read by an algorithm rather than by human eyes, they didn’t run afoul of the privacy policy Facebook has in place to protect users (notice, however, that they never address the expectation of data integrity, which tells me they understand their study is ethically problematic).

They even go so far as to name the people at Facebook who worked with them, making it that much more convenient for people to vent their outrage.

We thank the Facebook News Feed team, especially Daniel Schafer, for encouragement and support; the Facebook Core Data Science team, especially Cameron Marlow, Moira Burke, and Eytan Bakshy; plus Michael Macy and Mathew Aldridge for their feedback. Data processing systems, per-user aggregates, and anonymized results available upon request.

Make note of those names, people. The most frightening thing is the technicality-based argument about informed consent.  I understand that their definition of informed consent is the barest of the bare, and that it would probably pass legal muster should they get sued, but when you consider what could have resulted from this study, you realize this was a really bad idea.

We now know the outcome: emotional contagion does happen on a social network.  So what if one of the 600,000+ subjects in this study wasn’t mentally stable and drew the “negative” card?  What if they were ready to put a gun in their mouth and pull the trigger, and the onslaught of negative emotion in their News Feed pushed them over the edge? Someone informed that the study was happening could have opted out, knowing they couldn’t handle the kind of experimentation that was about to take place.  But the subjects weren’t informed, and the candidates weren’t vetted (and we know for a fact they weren’t), so how ethical can you call experimenting on people without knowing their mental state, their ability to cope, or anything else about them?

That’s the scariest part of the whole thing in my eyes: Facebook could literally have experimented someone into killing themselves.  We can’t operate purely on hypotheticals, but hypotheticals can act as a warning and help us make better choices, and no matter how you slice it, Facebook made a bad one here. I’ve requested the anonymized data so I can look into it, and I’ll report back once I’ve had a chance to go over it.  I also told one of the authors that I may have some follow-up questions about the paper if they’re open to that, so we’ll see whether they respond.  I’m hoping they do, because I’d really like to explore the mindset of the people behind this study a little further. In the meantime, Adam Kramer, the lead author of the experiment, explained himself on Facebook:

OK so. A lot of people have asked me about my and Jamie and Jeff‘s recent study published in PNAS, and I wanted to give a brief public explanation. The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.

Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody’s posts were “hidden,” they just didn’t show up on some loads of Feed. Those posts were always visible on friends’ timelines, and could have shown up on subsequent News Feed loads. And we found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.

And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.

The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.

Do with that what you will.  One thing’s for sure: that doesn’t strike me as an apology, just an explanation.
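
For what it’s worth, here’s a rough sketch of what the kind of filtering Kramer describes might look like, just to make the mechanics concrete.  This is purely my own illustration, not Facebook’s code: the word lists, function names, and omission probability are placeholder assumptions, and the actual study reportedly relied on the much larger LIWC dictionaries to classify posts.

import random

# Toy emotional word lists standing in for a real sentiment lexicon
# (the published study reportedly used the LIWC dictionaries).
POSITIVE_WORDS = {"happy", "great", "love", "excited", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "hate", "awful"}

def contains_emotion(post_text, word_set):
    """Return True if the post contains at least one word from word_set."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return bool(words & word_set)

def filter_feed(posts, suppress="negative", omission_chance=0.5, seed=None):
    """Return one load of the feed with some emotional posts left out.

    posts           -- post texts, in ranking order
    suppress        -- "negative" or "positive": which class to deprioritize
    omission_chance -- probability a matching post is skipped on this load
                       (an illustrative value, not the study's actual parameter)
    """
    rng = random.Random(seed)
    target = NEGATIVE_WORDS if suppress == "negative" else POSITIVE_WORDS
    shown = []
    for post in posts:
        if contains_emotion(post, target) and rng.random() < omission_chance:
            continue  # omitted from this load; still visible on the friend's timeline
        shown.append(post)
    return shown

if __name__ == "__main__":
    sample = [
        "Had a wonderful day at the beach!",
        "Traffic was terrible this morning.",
        "Lunch was fine, nothing special.",
    ]
    # With omission_chance=1.0, every post matching the negative list is dropped.
    print(filter_feed(sample, suppress="negative", omission_chance=1.0, seed=1))

The unsettling part is how little machinery it takes: a word list and a per-post coin flip, and the emotional tone of what you see shifts without anything ever being deleted.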


Header Image via Robert Scoble on Flickr
