Social Media, Emotions and Deception: The Facebook Experiment [Part 1]

The recent response to the Facebook experiment with users’ newsfeeds can tell us something about how people understand three topics that are of interest to me: first, the way in which social media figure in human relations; second, how emotions and affect are conceptualised; and third, how we understand lies and deception as part of everyday life. In this post I’ll mostly be dealing with social media and emotions, and will get back to lying and deception in future posts.

The Facebook experiment, conducted in collaboration with researchers from the University of California, San Francisco and Cornell University, was designed to test whether emotional states could be transmitted by ‘emotional contagion’. The site removed either positive or negative stories from a sample of users’ feeds, which meant that some people saw feeds unusually skewed towards positive stories and others feeds unusually skewed towards negative ones. The site didn’t invent or add any material; it simply changed how items were selected from the pool of possible newsfeed stories. The researchers then monitored those users’ own posts to determine whether they were more likely to post positive or negative material as a result of this subtraction from their feeds.

The results show a small but statistically significant effect (Cohen’s d = 0.001): an increase in positive newsfeed items led to an increase in users’ own positive statements on Facebook, and vice versa. The tiny effect size is perhaps misleading, for when viewed in light of Facebook’s scale it certainly looks more noteworthy, as the authors point out: “…given the massive scale of social networks such as Facebook, even small effects can have large aggregated consequences… an effect size of d = 0.001 at Facebook’s scale is not negligible: In early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.” So it seems that the results do show something, though what exactly I’ll get to in a later post. For now, let’s focus on the way in which the experiment has been received and what this might tell us about how we feel about advertising and social media.
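To make the authors’ scale argument concrete, here is a minimal back-of-the-envelope sketch in Python. Only the effect size d = 0.001 comes from the paper; the baseline rate, standard deviation and daily word volume are illustrative assumptions of my own, not figures from the study or from Facebook.

```python
# Reading "d = 0.001 is not negligible at scale" as arithmetic.
# Only d = 0.001 comes from the paper; every other number below is an
# illustrative assumption, not a figure from the study or from Facebook.

def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    """Standardised mean difference between two groups."""
    return (mean_a - mean_b) / pooled_sd

# Assume ~5% of words in status updates are 'positive' words, with a
# pooled standard deviation of 5 percentage points (both made up).
baseline_rate = 0.05
pooled_sd = 0.05

# An effect of d = 0.001 corresponds to a raw shift of d * sd:
shift = 0.001 * pooled_sd   # = 0.00005, i.e. 5 extra positive words
                            # per 100,000 words written

print(f"d = {cohens_d(baseline_rate + shift, baseline_rate, pooled_sd):.3f}")

# Spread over an assumed platform-wide volume of words per day, the
# tiny per-word shift becomes a large absolute count of expressions:
words_per_day = 5_000_000_000   # hypothetical volume, for illustration
print(f"~{shift * words_per_day:,.0f} extra positive words per day")
```

On these made-up inputs the shift works out to roughly 250,000 extra positive words a day, which is the shape of the authors’ ‘hundreds of thousands’ claim: a per-person effect far too small to notice becomes very visible in aggregate.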

It is unsurprising that social networking websites are interested in the degree to which they can shape human emotions and self-expression. Indeed, individuals regularly try to do this to each other in everyday life. But it’s obviously a different issue when one of the most powerful corporations in the world is experimenting with how we feel about ourselves and others.

Of course, the advertising presented on Facebook is already targeted to an individual’s expressed interests and the kinds of topics that they post about, like, share and so on. I suspect that most people already know this. One only has to notice how looking at a few pages for upcoming gigs in your area suddenly increases the number of ads you are shown for bands you like. In one case I briefly flirted with the idea of buying a juicer; having decided they were too expensive and that I was unlikely to really use one, I found that Facebook was quite insistent I’d made the wrong choice, and it proceeded to show me juicers of various kinds for at least a week afterwards. So people may not be familiar with the technical details or aware of the scale of data involved, but they probably do have the sense that ads are being targeted to them. I think this is all part of the contemporary neoliberal contract online: we get a plethora of information and can make choices about consumption in new ways, at the expense of that information being shaped by our previous choices, expressed interests and social relationships. However, the experiment has caused significant consternation online.

I think this tells us that people placed some degree of trust in Facebook not to manipulate how they feel in a negative way. Indeed, much of the backlash seems to have been about how Facebook could make us feel bad about ourselves and the world around us. Some have pointed out that the research didn’t take ethics terribly seriously, and I agree. Most significantly, people were selected for the trial at random and without informed consent, so there is a strong chance that at least some of the ~155,000 people subjected to feeds skewed towards negative statements were vulnerable in one way or another. Seeing a ‘negative’ news story on Facebook can seriously affect my day, particularly if some god-awful politician has said something vainglorious, homophobic, racist, sexist, and so on and so on. Which they do on a practically hourly basis. Fortunately, I’m reasonably stable as regards my emotions, but some people may not be, and the study should at least have reflected on the importance of this.

But why do we trust Facebook with our emotions at all? I think part of it has to do with what we’re already willing to allow companies to do and what we expect them not to be doing. Most of us notice that the newsfeed changes when we look at it repeatedly throughout the day, and so we understand that some kind of automated selection and prioritisation is probably going on. But I think we largely assume that this is based on selecting the content that is most interesting to us. So we all end up with cat videos clogging the page. And we’re mostly fine with that. But we have clearly been surprised to find not only that the algorithm is capable of doing more than this, but that it has actively been used to test how changes to our newsfeeds might affect us personally. In this regard, we have balked not at Facebook trying to alter the items we see on our newsfeeds but at its actively trying to alter how this makes us feel about ourselves. And, as I’ve said, it is the negative side of this manipulation that has caused most distress.
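For a sense of what that manipulation looks like mechanically, here is a hedged sketch of the kind of filter the paper describes: posts containing words from a sentiment word list are dropped from the pool of candidate stories with some probability before display. The word lists below are stand-ins (the study classified posts using the LIWC2007 dictionaries), and the real newsfeed ranking is of course far more elaborate than this.

```python
import random

random.seed(1)  # reproducible output for this sketch

# Stand-in word lists; the study used the LIWC2007 dictionaries rather
# than tiny hand-picked sets like these.
POSITIVE_WORDS = {"great", "happy", "love", "wonderful"}
NEGATIVE_WORDS = {"awful", "sad", "hate", "terrible"}

def is_emotional(post: str, word_list: set) -> bool:
    """Mirror the paper's criterion: a post counts as positive/negative
    if it contains at least one word from the relevant list."""
    return any(word in word_list for word in post.lower().split())

def filter_feed(candidates: list, word_list: set, drop_rate: float) -> list:
    """Drop each matching post with probability `drop_rate`.

    In the experiment the omission chance varied between 10% and 90%
    per user; posts matching no list word always pass through.
    """
    return [
        post for post in candidates
        if not (is_emotional(post, word_list) and random.random() < drop_rate)
    ]

feed = [
    "had a wonderful day at the beach",
    "the commute was awful again",
    "meeting moved to 3pm",
]

# A user in the 'positivity reduced' condition with a 60% drop rate:
print(filter_feed(feed, POSITIVE_WORDS, drop_rate=0.6))
```

The point the sketch makes is how slight the intervention is in software terms: nothing is written or injected; a probability is simply attached to one class of items in an already-filtered feed.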

The ostensible neoliberal contract with advertisers is that they make us feel good about ourselves in order to get us to buy stuff. Of course, we recognise that in some ways advertising is bad for us, particularly when it comes to body image. But more often than not, and particularly with video advertisements, the message is about how wonderful we are, how great the world is, how brilliant this product is, and how we should buy it to make sure we keep feeling so wonderful all the time. People are savvy about this, and we can generally see how advertisers are trying to manipulate us. For example, there’s nothing hidden in how Foster’s tries to appeal to contemporary notions of masculinity and male camaraderie. But we don’t expect Foster’s to try to make us feel shit about ourselves in order to sell us beer. That’s not on. And in this regard, Facebook broke the deal and now we’re angry about it. So the answer certainly seems to be that yes, changing our newsfeeds can change our emotions, but perhaps not in the way they expected. The backlash against Facebook offers an excellent case from which to begin to unpick how we understand our emotional lives and how this figures in the organisation of contemporary economic exchange.