It's fair to say I've been on the fence about Facebook for quite some time. Over the past year I've constantly struggled to balance my desire to stay connected with friends against the constant slog through the garbage and emotional baggage that fills my news feed.

I've continually rationalized to myself that Facebook was worth it. It's a great way to get news out quickly for the blog, be it tracking-system bug fixes, server up and down times, or just sharing blog posts. It's also pretty fantastic for organizing groups, like our weekly frisbee group, or for settling when and where some online games are going to be played. In its purest function, Facebook is fantastic.

But then I read a study in PNAS, and I've decided that I'm out.

[paper title]

At first glance, the title makes it sound like they ran a simple observational study, looking at how what your friends posted changed how you interact on Facebook. Makes sense, right? If you see that one of your friends got hit by a semi while riding a tricycle, that would be sad, and you would probably post some sad things too.

But then I read more, and I can honestly say I got a bit more annoyed. Here is a quick excerpt describing what they did.

[excerpt]

Essentially, they controlled what was in your news feed. However, rather than the normal filtering out of mundane, boring, uninteresting things based on your like history... or whatever... they selectively suppressed positive or negative posts based on a word-detection algorithm. In effect, this gave users a distorted perception of the emotional states of their friends.
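The paper doesn't publish its code, but the general idea of word-detection plus feed filtering can be sketched roughly like this. The word lists, threshold logic, and drop probability below are all invented for illustration; the real system used much larger dictionaries:

```python
import random

# Toy word lists standing in for the real (much larger) dictionaries.
POSITIVE_WORDS = {"happy", "great", "love", "awesome", "fantastic"}
NEGATIVE_WORDS = {"sad", "terrible", "hate", "awful", "depressing"}

def classify_post(text):
    """Label a post 'positive', 'negative', or 'neutral' by counting
    how many of its words match each list."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress="negative", drop_prob=0.5, rng=random):
    """Return a feed with posts of the targeted emotion randomly
    omitted with probability drop_prob."""
    return [p for p in posts
            if classify_post(p) != suppress or rng.random() >= drop_prob]
```

Run over a whole feed, a filter like this skews the emotional mix a user sees without them ever knowing posts were withheld, which is exactly what makes the manipulation invisible.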

This study raises some pretty large ethical issues. First, did we really agree to let Facebook run any sort of study on us without our knowledge? Second, messing with people's emotional well-being, especially without their knowledge, is a serious offense. Unless they hand-selected every single participant (which I somehow doubt), how did they control the experiment and screen out seriously unstable people who couldn't afford to hear any more sad/bad/depressing news? I'm not saying they caused people to snap, but can they prove they didn't?

The lack of true consent, despite the paper's claim that it was sufficiently obtained, shows an extreme disregard for responsible research.

[excerpt]

So, does any of this matter? Is it significant? They just tampered with a few people's feeds and documented whether their posting habits changed... nothing too crazy. Surely I'm just getting worked up over nothing.

[significance figure]

Except they did it to over half a million people. In my opinion, even if just one person was bummed out as a direct result of this tinkering, that's one too many for me. Thanks, but no thanks.

What do you guys think? Does this cross the line for you? Is this just research?