The Rise and Fall of Post-Truth Politics

Young people have voted for an end to post-truth politics. To do them justice we must now bring about a broader, cross-party commitment to a more authentic political discussion.

In 2016 Oxford Dictionaries selected their annual ‘Word of the Year’ for the way in which it captured the ethos, mood, and preoccupations of those 365 days. The word was ‘post-truth.’  It was the year in which the obvious lies told by the Leave and (to a lesser extent) the Remain campaigns resulted in a narrow win for a Brexit that was never supposed to happen. This was followed swiftly by Trump’s unprecedented though marginal election on a platform of utter bullshit and recalcitrant nationalism. The result of these events was a significant increase in usage of the term post-truth (by over 2000% in 2016 compared to 2015) and a deluge of journalists’ reflections on the status of politics and of their profession.

In making their selection, the OD team explained the significance of the ‘post’ in post-truth:

The compound word post-truth exemplifies an expansion in the meaning of the prefix post- that has become increasingly prominent in recent years. Rather than simply referring to the time after a specified situation or event – as in post-war or post-match – the prefix in post-truth has a meaning more like ‘belonging to a time in which the specified concept has become unimportant or irrelevant’.

To many it seemed that truth had become irrelevant only recently. They argued for a renewed commitment to facts and a return to some kind of political sphere which it seemed we had lost. However, the monsters of post-truth were not newly arrived at our doors, but had been living amongst us for some time. The key themes of post-truth society seemed to be an increased emphasis on emotion over reason in politics, a decreased importance and often outright suspicion of technocracy, and the banalization of bullshit. However, these were not new phenomena but rather new emphases in long-standing patterns, which were – by most commentators – badly articulated or completely ignored.

For example, emotion has long been important in politics. The real problem with the emotions that have been characterized as post-truth feeling, I would argue, is that they were the wrong sort of emotions: fear, anger, hatred and selfishness. The journalists, scientists and other professionals who called out the media and politicians for their post-truth practices were not so much troubled by the new entrance of feeling into politics – though they often claimed to be shocked by its rise – but rather by the forms which it took and by how it had been deliberately channeled by parties and people who should know better. It smacked of fascism and we were rightly troubled by it. Emotion, then, was not the problem, but rather how it was being used.

Second, the suspicion of experts and technocracy has been long in the making. It was at the heart of campaigns against Europe in the first referendum (on whether to remain in what was then the European Economic Community) and in all the subsequent scaremongering by the right-wing press and far-right political groups: think curvy bananas and Brussels bureaucrats. Powerfully manipulated by the Mail, the Sun and so on in the run-up to Brexit, and then embodied by Hillary Clinton, it was no surprise that technocratic expertise played a starring role in these startling events. It was bewildering to those who were the very subjects of the criticism, for they could not recognize in themselves the claims being made about them.

And this leads to the third issue: lies, spin and bullshit, and the differences between these terms. Whereas lies involve the hiding of a truth and the production of a deliberate falsehood, bullshit, best characterized by the endless, stinking output of Trump, is marked by a complete indifference to truth. Spin is also concerned with the truth, either in softening hard and inconvenient truths or in elevating irrelevant ones. At its worst, spin turns into bullshit. Whereas liars and spinners often know the truth (or at least what they think is true), bullshitters care only that what they’re saying serves their present purposes, regardless of whether it is true or not, and do not care to know the facts. Spin and lying have been around in politics for quite some time, and bullshit has been quietly on the rise. What we’ve probably seen is a rapid increase in the amount of bullshit in politics and the media, but more importantly, we have seen the bullshit – if you’ll forgive the imagery – swallowed. This is because the rise of post-truth has coincided with a yearning for authenticity. In a realm of politics and media in which every truth is spun and lies are told so frequently that no one can be trusted, what people found in Trump was a feeling of honesty, often in the face of factual incorrectness. The expert bullshitter performs truthfulness in its complete absence, and so Trump was well placed to take advantage of a growing resentment of the features of post-truth society. He was not the cause of post-truth, merely a result of it.

So what I think made ‘post-truth’ such a popular term in 2016 was not the sudden emergence of a distinctly modern phenomenon nor its accuracy of description but rather the implicit feeling that something needed to be done about the causes of a pattern of events which had defied expectations. It was not so much that the problems were new, but that they had been exacerbated and their effects had become wildly unpredictable. It was this latter issue that was most intolerable.

In some regards, Corbyn’s significant gains in seats and percentage share of the vote in this week’s election mark a continuation of the pattern. The election was widely predicted to go the other way, for Theresa May had called it at the peak of her polling popularity under the assumption – and near promise – of a landslide. The loss of seats and of her majority thus went against the expectations of political experts, upended journalistic speculation and challenged the designs of our corporate masters in the media conglomerates.

But more profound than its fit with this sequence of events was its anomalousness: Corbyn’s campaign was a direct assault on the politics of post-truth society. Here is a man who did not speak in soundbites, refused to engage in fearmongering, challenged the media at every turn rather than spin things for them, spoke authentically about his beliefs and ran a campaign that was in line with those ideals rather than with electoral dogma. He did not win the election but he won the battle for truth and authenticity.

As the expert analysis begins, it is clear that a key part of his success has been to animate the youngest generation of voters and – more importantly – to get them to turn out and vote on the day. Tireless work from others in the party, campaigners and volunteers has galvanised the young vote and – dare I say it – begun the transformation of social media from a bubble which only exacerbated post-truth politics into the beginnings of what could be a long-term, politically-aspirational social movement.

Young people voted for policies which offered hope against austerity, against the crippling effects of an economy skewed towards a wealthy elite who caused (and then ignored) the problems they promised to fix. But they also voted for a new kind of politics. The prefix of ‘post-truth’ might lead some to claim that we are witnessing a return to truth, for this is the commitment that most scientists and political experts have called for. But there is no truthful politics to which we can return. What we need is a new way of conducting political discussion, of reporting on news and of holding the press to account, and of getting people to vote for policies and principles, not pandering and propaganda. Young people have voted for an end to post-truth politics; we must now bring about a broader, cross-party commitment to a more authentic political discussion that embraces something new, and does not fall for the false hope of a return to something which we never had. Hope has been a deciding factor, but let us hope for the future and not for the false comfort of an unsalvageable past.



Dementia, Sexuality and the Brain


I am currently recruiting for a fully-funded ESRC CASE 1+3 MSc. & PhD studentship in partnership with Manchester Carers Forum. The studentship is available to outstanding candidates wishing to commence their MSc. in September 2017 before moving on to the PhD studentship in September 2018. You will be based in the University of Manchester’s Morgan Centre for the Study of Everyday Lives, which is a world-leading institute for the study of personal life and for the development of creative qualitative methods. This is a great opportunity for a talented student to become part of a thriving research community, in one of the UK’s largest and most successful sociology groups.

So why this project? In recent decades dementia has grown in significance as a health condition, brought about by an ageing population, and presents challenges for understanding the complex changes which occur in the lives of people with dementia. For example, some people with dementia experience changes in sexuality and sexual activity. Such changes are characteristic of one form of dementia in particular, ‘behavioural variant frontotemporal dementia’ (bvFTD), and form one of the possible diagnostic criteria for this condition. This doesn’t always mean an increase in sexual activity; sometimes it can mean a reduction, as Robin describes in this video about her life with her husband. Robin’s video is a powerful example of a carer negotiating her experiences of changes in romance and love.

These kinds of changes are often explained to carers and people with dementia, whether in the medical context of diagnosis and treatment, or in the support literature provided by charities, as having been caused by chemical and structural differences in the brains of people with dementia. Of course, dementia does have important effects on the brain and it remains a terminal illness. So it is bound to cause changes in people’s capacities. However, our everyday understandings of love, sex and sexual identity do not always align with neurological explanations. This project explores how people with dementia, their carers, family and partners, make sense of changes in sexual and romantic lives by reference to the brain, or not. It examines the consequences for carers and people with dementia of explaining more of their lives through neurological evidence and ideas.

The project will use the personal life approach in sociology. This means understanding changes in the lives of carers and people with dementia as being fundamentally entangled phenomena, and exploring the ways in which the meaningfulness of everyday life is negotiated through interaction. The project will use creative qualitative methods to examine these issues. For more information on the kinds of methods the Morgan Centre works with, see some of our research projects. For example, you can see some of the sketches that Lynne Chapman has been doing as part of various research projects in the Centre, here, and in the picture below, which Lynne made as part of her work with me on my ‘Facets of Dementia’ project.


As an ESRC CASE studentship, the PhD scholarship will also involve close work with Manchester Carers Forum. The successful candidate will volunteer at the Carers Forum as a member of their team, working directly to support carers of people living with dementia. The PhD student will work there for 3 months of the year, broken down into a certain number of hours per week. They will also produce materials which are of use to the Carers Forum as part of the project’s impact, meaning that the PhD research will help to support carers in negotiating changes in sexuality, sex and romance.

Studentship Details: The successful candidate will be supervised by Dr Andrew Balmer and Prof. Brian Heaphy in the department of Sociology. This ESRC CASE 1+3 studentship will cover tuition fees for the 1-year MSc. Sociology and the 3-year PhD Sociology courses at the University of Manchester. It will also pay a stipend of approximately £14,057 per annum during these four years. Continuation of the award is subject to satisfactory performance.

Entry Requirements: Applicants must hold a First Class (or, in exceptional cases, an Upper Second Class Honours) UK Bachelors degree in Sociology (or a closely allied discipline such as Anthropology). Degrees in Psychology and Health Sciences will not be considered acceptable. The successful student will register first for the ESRC-recognised MSc. Sociology course before proceeding on to the PhD course. You must satisfy ESRC UK residential criteria to qualify for this studentship (see page 4 of the document here).

Candidates meeting the following criteria will generally be given preference: above 70% in their Bachelors; some demonstrable knowledge of the sociological literature on sexuality; demonstrable interest in qualitative research methods, and the ‘relational’ approach to sociology.

How to apply

Applicants should email Dr Andrew Balmer, with:

1) a full CV (including your most up-to-date grade transcripts); and

2) a covering letter explaining why you think the project is interesting and how you are qualified to conduct it.

Please note that applying for this PhD studentship funding is a separate process from applying for entry to the Manchester PhD programme. The successful candidate will therefore also be required to fulfil the normal admissions criteria for the School of Social Sciences once they have been offered the NWSSDTP studentship.

The deadline for applications is 7th April 2017.

The project is a great opportunity not only to develop academic, research, writing and presentation skills through a PhD programme, but also to work with a charity for three to four years. We will be looking for someone who can produce a sophisticated, theoretically-informed, qualitative PhD, with interests which align with those of the Morgan Centre.

If you need some more information feel free to get in touch with me at:

The Experiment with Lie Detection Should Be Ended

The polygraph machine – or ‘lie detector’ – has long been tied up with sex and sexuality, from the use of the device to out homosexuals during McCarthyist witch-hunts to the recent use of polygraphs to monitor convicted sex offenders. Reports of the success of this programme warrant scepticism and careful analysis, not least because the machine doesn’t detect lies, but also because the history of polygraphy tells us that it is a slippery slope from using it in one area to its spread into all realms of social life.

The majority of psychologists will tell you that the polygraph machine does not work – and they are right. It doesn’t detect lies. Nonetheless, over the past century the machine has been used in a variety of situations in the United States, including job applicant screening, police investigations, divorce settlements, and family and business disputes. Its proponents have sold it more on a political promise than on its scientific credentials: as a tool in the fight against crime, a challenge to police corruption or a weapon against terrorism. Sex offenders are just the most recent group to play a role in these scientific and political fictions.

In the UK we have thus far been more circumspect regarding lie detection, and thanks to this scepticism the polygraph does not have the kind of mythic quality that it seems to engender in American hearts. However, there are a few areas in which examinations have crept in, chief amongst them being their use to monitor sex offenders post-probation. After changes to the Offender Management Act in 2007, trials were run and eventually polygraphy was rolled out across the UK. It is now reported that through these routine examinations 63 of 492 convicted sex offenders have been returned to prison after they admitted to breaches of their probation conditions.

On the face of it, of course, this is a good thing, for nobody could deny that sex offenders who breach their parole conditions should be returned to prison. However, there are serious concerns that should be raised about the efficacy of this technique and its longer-term consequences.

The chief problem is the kind of evidence the polygraph produces. The thrust of the research into polygraphing sex offenders aims to assess how far the machine goes to elicit so-called ‘clinically significant disclosures’. These are effectively confessions of one kind or another, and are typically used to evaluate the offender’s riskiness, change treatment strategies or alter probation conditions.

In other words, the validity of the polygraph is seen to rest not so much on whether it can detect lies but on whether it can get offenders to make more disclosures about their behaviour. However, reports from the programme’s trial show that most disclosures are made not during the actual test but in the pre-test or post-test interviews. The polygraph’s role is less lie detector and more threat, used to induce disclosures. In this way, it works only so long as offenders believe that it works, and any disclosure made is treated as evidence that offenders are being more honest.

This is not a good basis for making crucial legal decisions: sex offenders returned to prison are likely to learn from each other that the machine does not work and develop ways to cheat the test. There are courses one can take and books one can read about beating the machine. The fact is that the use of the polygraph in this context, as in others, is based on a game of bluff – who can convince whom that they are telling the truth? The examiner, when they say that the machine is infallible, or the examinee, when they say that they have not lied?

It is distinctly unwise to rely on such a game of bluff, even if only as an additional method for helping to determine risk. Offenders will, over time, adjust their strategies for getting through probation and back into the general population. Most sex offenders are already skilled liars and will not buy into the lie of the lie detector for very long.

It is also important to take into account the situation in which this technology has taken root. Lie detection was first considered in the UK at the peak of a media-provoked cycle of one-upmanship between Labour and the Conservative Party over who could be toughest on crime, a contest which has escalated over the last two decades and has always been most vociferous when it has focused on paedophiles. As Jeremy Wright, Justice Minister in the Coalition government, said of the implementation of the programme: “this will give us one of the toughest approaches in the world to managing this group.” Toughness cannot be an end in itself. This is not the kind of politics which we need in such a sensitive and complex situation – where the prevention of crimes against children is at stake, cooler heads must prevail. Lie detection breeds confidence in probation decisions where no confidence is justified.

We have to be careful not to let the use of the polygraph in one context help it spread to another. In the USA the spread of polygraph exams began in the 1940s and today millions of exams are conducted every year by thousands upon thousands of examiners in many different situations. In some cases, it was the use of the lie detector by government departments like the CIA, FBI or Department of Energy, which helped justify its spread. Trial courts heard that since the machine was trusted in the most sensitive parts of government security, it should also be used as evidence of guilt or innocence. The machine is now used in criminal justice systems far more regularly than is realized by most observers. And, in the police interrogation room, there are more than a few cases in which the polygraph exam has been directly responsible for the production of false confessions.

We must guard against the use of lie detection in the UK, no matter the context, but especially in those situations in which the risks are so high. The crimes of sex offenders warrant surveillance during probation, but the polygraph does not warrant our credence. For it does not work, and sooner or later it will be found within a divorce proceeding, a job interview or a routine matter of airport surveillance, and your heart will be the one that is beating out a rhythm on the examiner’s chart.



A 4-point model for political trolling

… and why no one tends to win.


Role 1: The Troll turned Fish



  1. BAIT – find or happen upon a topic which is emotionally loaded and complex, then turn it into a binary issue.
  2. REEL – when someone tries to acknowledge complexity of the topic, reify the binary and raise the stakes.
  3. CATCH – push someone with repeated REELING until they fall into BAITING you.
  4. Now perform role 2.


Role 2: The Fish turned Troll


  1. BITE – find or happen upon someone presenting an emotionally loaded and complex topic that you care about as a simple binary issue.
  2. HOOKED – appeal to rationality, argument and evidence, and when that fails, resort to exaggeration, exasperation and then insult.
  3. CAUGHT – get so frustrated you want to punish the troll and fall into BAITING them.
  4. Now perform role 1.

Chance and Serendipity


In sociological research there are moments of chance and serendipity in which something happens that moves a project or one’s thinking into an unexpectedly fruitful direction. An event is witnessed, a phrase heard, a paper read, a person met. Suddenly an idea sparks into being or things are cast in a new light. Such moments of chance are in part a product of the messiness of research and of the way in which the world, so well-studied for so long, can still surprise us.

Andy - Lying survey
Fleeting encounters in the field

In the Morgan Centre we are quite fond of mess and disorderliness and we have been experimenting with different ways of knowing about the world that take more notice of its chaotic and surprising features. For the most part, however, sociological methods tend to emphasise orderliness. They pull things together, search out patterns, organize themes, categorise, classify and compare. You can see this in the design of qualitative and quantitative data analysis tools, for example, which often embed certain frameworks for coding, interrogating and representing data that presume a certain sense of structure and hierarchy. Our published findings also adhere to certain conventions, sometimes borrowed from the natural sciences, so that most journal articles are much the same, at least in terms of presentation of the argument and the data.


When Lynne Chapman, resident artist in the Morgan Centre, first began workshops with us she immediately set about trying to change our relationship to order, patterns and structure. She encouraged us to ‘let go’, take a chance and see what happened when we played with the paints, pens and pencils we had newly acquired. This was difficult for me, since I am not a natural artist and being bad at things is an uncomfortable feeling for most people. When I put pen to paper what I draw does not look like the thing I can see in front of me. Hence, my first forays with the freedom of the blank page produced rather uninspiring results.

But Lynne’s enthusiasm has been unfaltering and we have engaged in a range of different activities designed to make us comfortable with the fact that our representations do not look like the real thing. One example was the use of ‘wrong-hand portraits’ which forced us to abandon any hope of making a realistic representation of our subjects.

Wrong hand 1 min portrait

Eventually this started to have an effect on how I approached painting and sketching and I believe that I am starting to understand a bit more about how an artist like Lynne might observe the world and how they combine skill and serendipity in their engagements with it and representations of it. Sploshing paint about, drawing without looking, combining paint and pen and pencil has ‘freed up my hand’ as Lynne might put it.

Trying to embrace chance in a picture of tulips

The results are much improved. Of course, this is partly due to practice. But it is also due to letting go of certain constraints I had placed on myself as a novice. By learning how to make use of the limited skills that I am developing in combination with the chance afforded by the materials I am using, I have begun to feel unburdened by realism. I’m also trying to steal some of Lynne’s techniques of annotating sketches, using certain pens and pencils, and sketching quickly to try to capture some of the movement in everyday life.


Intellectually, this embrace of chance and serendipity is familiar and reminds me that an important feature of creative methods in sociology is that they are more adept at picking up some of the multi-layered nature of social reality than are standard survey techniques or semi-structured interviews. They too can capture some of the movement of everyday life, the way it doesn’t fit within boundaries, colours outside the lines, and yet holds shape, has some order and consistency.


Synthetic Biology’s Second World


Secrecy has long been a part of scientific and innovation practices. As an ethnographer of laboratories, one occasionally comes up against a barrier to entry: a secret lab or space within a building, protected by intellectual property agreements or by military or government contracts. Of course, military science is often conducted in secret, on nuclear, biological or chemical weapons, amongst other things. In his excellent book on ‘Secrecy and Science’, Brian Balmer describes how the Manhattan Project epitomised the way in which scientific secrecy operates at various levels of social organisation:

It was, in fact, an almost unprecedented organisation of not only scientists, but also industry and military. Moreover, a significant feature that accounts for the success of the Manhattan Project is the preoccupation with secrecy at the various sites involved in creating the atomic bomb. Compartmentalisation, telling people information on a strict need-to-know basis, meant only a few people had a complete overview of the project. […] In this manner, efficiency, security, bureaucracy and secrecy all came together at once. (Balmer, 2013: 8)

A poster from the Manhattan Project reminding scientists about the importance of secrecy.

By their nature, it is often the most controversial, risky and ethically dubious research programmes that are conducted in secret, curtained-off from society in order to protect knowledge and technology not only from public scrutiny but also espionage or corporate theft. Thus when we find out that science has been conducted in secret we are generally right to be suspicious, and it should be no surprise that a meeting convened earlier this week, behind closed doors at Harvard, on the prospect of synthesising the human genome, has caused a stir.

Human DNA base pairs

The meeting was convened to discuss the prospects of coordinating a large collaborative venture to follow up on the Human Genome Project (HGP), one that would, over the next decade, seek to construct an entire human genome in a cell line. Currently unfunded but prospectively titled ‘HGP-Write: Testing Large Synthetic Genomes in Cells’, it is backed by some of the biggest names in the field.

As the New York Times reports, the meeting was invite-only and “The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting.” In this regard, it would seem that the scientists hosting the meeting wanted the event to be part of what we could conceptualise – following the sociologist Georg Simmel’s well-known work on secrecy – as synthetic biology’s ‘second world’. As Simmel argued:

Secrecy secures, so to speak, the possibility of a second world alongside of the obvious world, and the latter is most strenuously affected by the former. Every relationship between two individuals or two groups will be characterized by the ratio of secrecy that is involved in it. Even when one of the parties does not notice the secret factor, yet the attitude of the concealer, and consequently the whole relationship, will be modified by it. (Simmel, 1906: 462)

Synbiophobia phobia poster

A second world for synthetic biology is probably quite appealing to scientists working in the field: a space in which they could run wild with their ideas without worrying about what a supposedly fearful public might think. Synthetic biologists, for the most part, expect that the public will be inappropriately scared of developments in the field. This has led to what Claire Marris (2015) calls ‘synbiophobia phobia’ – the fear that scientists have that the public will fear their work.

Synbiophobia phobia might be at the root of the decision to hold the meeting in private, as the organisers likely anticipated public fear at the potential of creating a human genome from scratch. But, as Simmel’s notion reminds us, no matter whether parties kept in the dark find out about the secrets being kept or not, the effect of secrecy is to change the attitude of the concealer and consequently the whole relationship between scientists and civil society.

Contrary to some scientists’ reactions to the media response to the closed meeting, secrecy in synthetic biology isn’t just a fiction created by newspapers and magazines to whip up a story. If the field does have at least the beginnings of a second world, divorced from public scrutiny, then it is almost certainly tied to the Defense Advanced Research Projects Agency (DARPA), which has had a keen interest in the field since its fledgling years and has invested tens of millions into synthetic biology under the remit of its Biological Technologies Office.

However, speaking to the NYT, George Church, one of the most prominent advocates of synthetic biology and co-organiser of the Harvard meeting, argued that the event had been misconstrued and that the secrecy was actually about protecting a paper currently under review which, if published, would make the ideas for the project publicly available and thus transparent. But as the invitation read, “We intentionally did not invite the media, because we want everyone to speak freely and candidly without concerns about being misquoted or misinterpreted as the discussions evolve.” Whatever the motivation for the closed-door, invite-only meeting, the effect of concealment may well be the same: it implies that something suspicious is going on.

In this regard, the scientists have shot themselves in the foot. The meeting will worry people, even those who support synthetic biology in general. In fact, one of the most well-known advocates for synthetic biology, Drew Endy, refused to attend and co-authored an open letter criticising the closed meeting. It is only a matter of time until those more critical voices and outright enemies of synthetic biology seize on the secrecy of the meeting as further evidence of untoward ambitions for the field. It would be a mistake, though, to see this as unwarranted fear and ignorance. It has much more to do with the facts of synthetic biology and how it is being developed in relation to corporate interests. As Endy and Zoloth’s (2016: 2) letter argued:

The creation of new human life is one of the last human-associated processes that has not yet been industrialized or fully commodified. It remains an act of faith, joy, and hope. Discussions to synthesize, for the first time, a human genome should not occur in closed rooms.

Two of the common tenets of the emerging frameworks for responsible research and innovation, which have been closely tied to the development of synthetic biology, are the importance of scientific transparency and of deliberative governance processes. The UK Synthetic Biology Roadmap, for example, includes a commitment that the Synthetic Biology Leadership Council “should provide an exemplar of openness and transparency with two-way stakeholder engagement as a core principle” (SBRCG, 2012: 32).

Transparency: More than a window into the lab

But transparency is easier invoked than implemented. If scientists are going to take responsible research and innovation seriously, then actually putting transparency and deliberation into practice is crucial, especially when the choices about such things are immediately within their control, as was the case this week. A secret world for synthetic biology might be appealing in principle, but in practice it risks bringing about exactly the kinds of public fears that scientists and engineers worry about.


Balmer, B. (2013) Secrecy and science: A historical sociology of biological and chemical warfare. Surrey: Ashgate.

Marris, C. (2015) The construction of imaginaries of the public as a threat to synthetic biology. Science as Culture, 24(1), 83-98.

Simmel, G. (1906) The sociology of secrecy and of secret societies. The American Journal of Sociology, 11(4), 441-498.

SBRCG (2012) A Synthetic Biology Roadmap for the UK.

Three Questions I Get Asked

Here are three questions that natural scientists and engineers often ask me, and which are commonly asked of social scientists participating in interdisciplinary collaborations. 

  1. Why do people not like X?

e.g. why do people not like synthetic food additives?

This question usually has to do with a perception that scientists, or a particular field of scientific work, are not viewed favourably by ‘the public’ or by industry, governments, NGOs, and so on. Natural scientists and engineers sometimes have the impression that social scientists will be able to explain why their technical ambitions have been thwarted by political misconception or misunderstanding.

The question is sometimes embedded in a range of other faulty assumptions about public understanding of science and science communication. As a question, it can position social scientists as experts in the irrational behaviour of individuals and groups. Social scientists can also be positioned through this style of questioning as brokers, and so be expected to help to open governance and/or public doors.

I’d rather talk about:

What do people actually say and do in relation to X? What practices are involved with regard to X, and how do you envision X changing, supplementing or supplanting those practices?

When it comes to something like food additives, for example, I’d be interested to know how people make sense of ‘synthetic’ and ‘natural’ from within cooking, eating, feeding and caring practices. We could ask in what ways these concepts are important to the organisation of such practices, or in what ways these practices organise our demarcations of those concepts. It could be important to explore how novel methods of producing food additives sit alongside or displace existing methods of production, and with what global socioeconomic implications.

In this regard we could begin to understand why the notion or production of synthetic food additives might raise socio-political, economic or ethical questions from publics, NGOs, governments and so forth, rather than assuming that people dislike it out of ignorance.


  2. What will people think of X?

e.g. what will people think of brain-based lie detection?

This type of question usually has to do with natural scientists’ worries about what ‘the public’ will think about their planned innovations or technical recommendations. It comes from a recognition that publics are interested and invested in scientific knowledge production. However, it is often motivated by a desire to ensure that people will think positively about X, and so it is sometimes accompanied by a will to understand how to encourage people to feel positively about X.

The question is sometimes embedded in a range of other faulty assumptions: that people’s negative feelings about proposed technologies will inexorably lead to market failure, and that this is necessarily a bad thing. It positions social scientists as PR gurus or pollsters, who can help to take the public temperature and design marketing strategies for innovations.

I’d rather talk about:

What kinds of imagined (necessary, likely, possible or peripheral) actions, routines, relations and social structures are being embedded in the sociotechnical work being conducted? In other words, the emphasis is put not so much on the object of technical interest as on how the envisaged object might impact upon existing practices of life, relations and social order, or open up new practices. Do we want these kinds of practices, and why or why not? What effects will these changes have, and with what implications for people’s experiences of social and technical phenomena?

In the given example of brain-based lie detection, I would want to know more about how we do lie detection now in different contexts and why we do it that way. I’d be interested to know how brain-based techniques would change these contexts if implemented and what implications such changes might have for our ways of living with each other. If we were talking about using brain scanning as evidence for use in legal proceedings, for example, we’d have to think carefully about why we currently use the investigation, interview, interrogation and jury systems. How would brain-based technologies fit in these practices and what would change? What kinds of life and forms of justice are implicated in such changes?

In this regard we could begin to unpick some of the tangle of concepts, practices, norms, politics and so on that are bound up with our current ways of doing lie detection and thus better understand what would be at stake for someone who is asked to give an opinion on a new lie detection technology.


  3. Is it okay to do X?

e.g. is it okay to engineer life?

This question usually has to do with a perceived ethical ‘implication’ of some proposed technical innovation. The questions often centre on technical objects. They involve a recognition that sociotechnical innovation generally implies changes in how things are done or understood. They might have to do with abstract concepts like life or nature. However, by emphasising the objects of technical innovation or abstract questions these kinds of concerns largely miss the everyday practices that are at the heart of how ethical decisions and dispositions are made and formed.

This type of question is sometimes embedded in a range of assumptions about scientific objectivity and how ethical implications arise only from the implementation of knowledge and new technologies in the world rather than in the practices of knowledge production itself. In addition, such questions often come with the implication that X is going to happen anyway, but it would be good to know what moral status it is going to be given when it does happen.

This style of questioning is more comfortable for some social scientists than others, since some of us are experts in ethics. However, as the question is generally posed, it positions social scientists as ethical arbiters who are themselves being asked to judge the moral status of objects, and so to assess the social value of proposed innovations in order to help scientists justify actions they already know they are going to take. This is a bit tricky and can be a far less comfortable space to inhabit.

I’d rather talk about:

What kinds of moral or ethical values are embedded in the scientific practices out of which the question has emerged? In other words, what has been decided about ethics already, what kinds of questions have been closed off and with what justification?

I’d also be looking to explore what kinds of ethics and norms are used in the contexts in which the proposed X is being invoked. Do the ethical frameworks used to think about X differ across different spaces and times, and in what ways? If there are differences of opinion about the ethical valence of X, how do we decide amongst those opinions in governance, regulation and technical innovation practices?