
Facebook's 'creepy' psych experiment is just a beat-up

Despite the outrage in the media, a little bit of context suggests the social network's research around manipulating its users' emotions is neither controversial nor unethical.
By Martin Kihn · 1 Jul 2014

During a single week in January 2012, Facebook conducted an experiment involving almost 700,000 users. By scaling back the number of posts containing positive and negative words, Facebook ended up validating its hypothesis that emotional states “can be transferred to others via emotional contagion” -- or, as my mother used to say, smiles last for miles and a frown can get you down.

Quietly releasing the results of its two-year-old study in the Proceedings of the National Academy of Sciences last week, Facebook could hardly have anticipated the outrage.

The tempest reminded me of a similarly short-lived furore at the end of last year, when outgoing Senator Jay Rockefeller held hearings into the practices of 'data brokers' who maintain databases of consumer information. Those hearings inspired heated op-eds and a 60 Minutes segment that hinted that our personal medical records are available for sale online -- a practice that is absolutely illegal.

Do words matter?

Back to Facebook. The study itself strikes me as being routine, legal, ethical and unsurprising. It’s actually more interesting for what it gives away between the lines than in its widely-reported findings. We’ll get to the outrage and academic defenses in a moment, as well as what the study really tells us. For now, here are the between-the-seams “tells” I detected.

First, Facebook says it analysed three million posts containing 122 million words, of which four million were positive and 1.8 million were negative. A bit of basic maths (sketched in the short snippet after these bullets) tells us two useful tidbits:

  • The average post is roughly 40 words long
  • Positive words are roughly twice as common as negative words
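
For the curious, here is that back-of-envelope arithmetic written out as a small Python snippet. The input figures are the ones Facebook reported; the per-post average and the positive-to-negative ratio are simply my division, nothing more.

```python
# Back-of-envelope check of the figures quoted above.
total_posts = 3_000_000        # posts analysed
total_words = 122_000_000      # words across those posts
positive_words = 4_000_000     # words counted as positive
negative_words = 1_800_000     # words counted as negative

avg_post_length = total_words / total_posts       # ~40.7 words per post
pos_neg_ratio = positive_words / negative_words   # ~2.2, i.e. roughly twice as many positive words
pos_share = positive_words / total_words          # ~3.3% of all words are 'positive'

print(f"Average post length: {avg_post_length:.1f} words")
print(f"Positive-to-negative ratio: {pos_neg_ratio:.1f}")
print(f"Share of words flagged positive: {pos_share:.1%}")
```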

Second, why did Facebook do the study at all? I suspect the answer is embedded in the report itself. Consider the context: early 2012. Instagram and Pinterest are sizzling new social networks that are almost entirely visual. Facebook content is a mix of visual and verbal. Facebook is wondering: is the future photo-only? In academic terms, the study addresses this question as an attempt to determine whether 'nonverbal cues' (e.g., images, tone of voice) are necessary to elicit an emotional response.

In other words: do words matter? The study is purely text-based and concludes, yes, they do. Words alone can make us feel emotions. (This is a relief to those of us who are authors.)

I also think Facebook was reacting to a popular book. In 2011, Sherry Turkle published Alone Together: Why We Expect More from Technology and Less from Each Other, which made an oft-repeated claim that seeing our friends’ good times stream past us all day on our social feeds actually makes us depressed. We compare our insides to other people’s outsides and it brings us down, man.

Facebook data scientists probably realised they could take this calumny head on. And in fact, the final point made by the current study’s authors reads as a direct rebuttal to Sherry Turkle:

“The fact that people were more emotionally positive in response to positive emotion updates from their friends, stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.”

So there. As I said, the reaction to these rather common-sensical findings ranged from a legalistic squib in Slate (conclusion: “It’s made all of us more mistrustful”) to a more measured but still ominous dissection in The Atlantic titled “Everything We Know About Facebook’s Secret Mood Manipulation Experiment”. After which came a series of defenses from academics, who made the point that everything we see on all our social networks -- and on most websites, period -- is designed to manipulate us somehow into engaging, sharing, buying, shopping, staying, liking, and so on. And it’s only going to get worse, believe me.

One of the academic defenses, from Tal Yarkoni, strummed a refreshingly cynical chord:

“Everybody you interact with -- including every one of your friends, family, and colleagues -- is constantly trying to manipulate your behaviour in various ways.”

In other words: man up, people.

A couple of final points that I think weren’t stressed enough, before this controversy too fades to black and we go back to being happily manipulated by our social feeds.

  1. How did Facebook determine whether a post was 'positive' or 'negative'? -- It used something called the Linguistic Inquiry and Word Count software (LIWC2007), which simply counts words (i.e., 'hate' is bad, 'love' is good, etc.). However, this method is notoriously unreliable -- especially on social networks, where posts are too short and usage too quirky and ironic for such methods to work. I’d be surprised if this automated assessment were 50 per cent correct. A toy version of this counting approach is sketched after this list. (This issue wasn’t raised in the study, and others have taken notice.)
  2. The impact is extremely small -- Surprisingly small, in fact. Eliminating emotionally laden content (positive or negative) tended to shift response by about 1/50th of a standard deviation, which is almost trivial. So emotional words may affect us, but not very much. The study also had no way of measuring impact on people’s minds, unless they happened to express that impact in a post later. (Silent sulking went unnoticed.)
  3. What is 'informed consent'? -- In case you don’t already know it, let me be clear: if you are online, someone is trying to manipulate you. You are being served experiments continually and aggressively -- different versions of ads, web content, 'Click Here!' buttons, images, background colours, offers, products... anything. Your reactions are watched and that information is used to improve the manipulation. Marketers call this 'targeting' and it is the whole reason so much of your content is 'free' anyway. You get nothing for nothing. Advertisers pay and they want something in return.
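
To make the first point concrete, here is a deliberately naive word-counting sentiment scorer. To be clear, this is not LIWC2007 itself -- just a sketch of the general lexicon-counting idea, with invented word lists and example posts -- but it shows why counting words alone stumbles on short, sarcastic or negated social posts.

```python
# A toy word-count 'sentiment' scorer in the spirit of lexicon counting.
# NOT the LIWC2007 software -- just an illustration of the general method.
POSITIVE = {"love", "great", "happy", "awesome", "smile"}
NEGATIVE = {"hate", "awful", "sad", "terrible", "frown"}

def classify(post: str) -> str:
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Where pure counting goes wrong on short, ironic posts (examples invented):
print(classify("Love my new phone!"))           # 'positive' -- fine
print(classify("Oh great, another Monday."))    # 'positive' -- sarcasm missed
print(classify("Not happy about this at all.")) # 'positive' -- negation missed
```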

If we start demanding an academic standard of 'informed consent' for routine A/B and multivariate tests run online, we’re skirting the boundaries of absurdity. What do you think?

Martin Kihn is a research director at Gartner Research. This post was first published on Gartner’s blog platform. Republished with permission.
