The Experiment with Lie Detection Should Be Ended

The polygraph machine – or ‘lie detector’ – has long been tied up with sex and sexuality, from the use of the device to out homosexuals during McCarthyist witch-hunts to the recent use of polygraphs to monitor convicted sex offenders. Reports of the success of this programme warrant scepticism and careful analysis, not least because the machine doesn’t detect lies, but also because the history of polygraphy tells us that its use in one area is a slippery slope to its spread into all realms of social life.

The majority of psychologists will tell you that the polygraph machine does not work – and they are right. It doesn’t detect lies. Nonetheless, over the past century the machine has been used in a variety of situations in the United States, including job applicant screening, police investigations, divorce settlements, and the resolution of family and business disputes. Its proponents have sold it more on a political promise than on scientific credentials: as a tool in the fight against crime, a challenge to police corruption or a weapon against terrorism. Sex offenders are just the most recent group to play a role in these scientific and political fictions.

In the UK we have thus far been more circumspect about lie detection, and thanks to this scepticism the polygraph does not have the kind of mythic quality that it seems to engender in American hearts. However, there are a few areas in which examinations have crept in, chief amongst them their use to monitor sex offenders on probation. Following the Offender Management Act 2007, trials were run and polygraph testing was eventually rolled out across the UK. It is now reported that, through these routine examinations, 63 of 492 convicted sex offenders have been returned to prison after admitting to breaches of their probation conditions.

On the face of it, of course, this is a good thing, for nobody could deny that sex offenders who breach their probation conditions should be returned to prison. However, there are serious concerns to be raised about the efficacy of this technique and its longer-term consequences.

The chief problem is the kind of evidence the polygraph produces. The thrust of the research into polygraphing sex offenders aims to assess how far the machine goes to elicit so-called ‘clinically significant disclosures’. These are effectively confessions of one kind or another, and are typically used to evaluate the offender’s riskiness, change treatment strategies or alter probation conditions.

In other words, the validity of the polygraph is seen to rest not so much on whether it can detect lies but on whether it can get offenders to make more disclosures about their behaviour. However, reports from the programme’s trial show that most disclosures are made not during the actual test but in the pre-test or post-test interviews. The polygraph’s role is less lie detector and more threat, used to induce disclosures. In this way, it works only so long as offenders believe that it works, and any disclosure made is treated as evidence that offenders are being more honest.

This is not a good basis for making crucial legal decisions: sex offenders returned to prison are likely to learn from each other that the machine does not work and develop ways to cheat the test. There are courses one can take and books one can read about beating the machine. The fact is that the use of the polygraph in this context, as in others, is based on a game of bluff – who can convince whom that they are telling the truth? The examiner, when they say that the machine is infallible, or the examinee, when they say that they have not lied?

It is distinctly unwise to rely on such a game of bluff, even if only as an additional method for helping to determine risk. Offenders will, over time, adjust their strategies for getting through probation and back into the general population. Most sex offenders are already skilled liars and will not buy into the lie of the lie detector for very long.

It is also important to take into account the situation in which this technology has taken root. Lie detection was first considered in the UK at the peak of a media-provoked cycle of one-upmanship between Labour and the Conservative Party over who could be toughest on crime, a contest which has escalated over the last two decades and has always been most vociferous when focused on paedophiles. As Jeremy Wright, former Justice Minister in the Coalition government, said of the implementation of the programme: “this will give us one of the toughest approaches in the world to managing this group.” Toughness cannot be an end in itself. This is not the kind of politics we need in such a sensitive and complex situation – where the prevention of crimes against children is at stake, cooler heads must prevail. Lie detection breeds confidence in probation decisions where no confidence is justified.

We have to be careful not to let the use of the polygraph in one context help it spread to another. In the USA the spread of polygraph exams began in the 1940s, and today millions of exams are conducted every year by thousands upon thousands of examiners in many different situations. In some cases, it was the use of the lie detector by government bodies like the CIA, FBI or Department of Energy that helped justify its spread. Trial courts heard that since the machine was trusted in the most sensitive parts of government security, it should also be admitted as evidence of guilt or innocence. The machine is now used in criminal justice systems far more regularly than most observers realise. And, in the police interrogation room, there are more than a few cases in which the polygraph exam has been directly responsible for the production of false confessions.

We must guard against the use of lie detection in the UK, no matter the context, but especially in those situations in which the risks are so high. The crimes of sex offenders warrant surveillance during probation, but the polygraph does not warrant our credence. For it does not work, and sooner or later it will be found within a divorce proceeding, a job interview or a routine matter of airport surveillance, and your heart will be the one that is beating out a rhythm on the examiner’s chart.



A 4-point model for political trolling

… and why no one tends to win.


Role 1: The Troll turned Fish



  1. BAIT – find or happen upon a topic which is emotionally loaded and complex, then turn it into a binary issue.
  2. REEL – when someone tries to acknowledge the complexity of the topic, reify the binary and raise the stakes.
  3. CATCH – push someone with repeated REELING until they fall into BAITING you.
  4. Now perform role 2.


Role 2: The Fish turned Troll


  1. BITE – find or happen upon someone presenting an emotionally loaded and complex topic that you care about as a simple binary issue.
  2. HOOKED – appeal to rationality, argument and evidence, and when that fails, resort to exaggeration, exasperation and then insult.
  3. CAUGHT – get so frustrated you want to punish the troll and fall into BAITING them.
  4. Now perform role 1.

Chance and Serendipity


In sociological research there are moments of chance and serendipity in which something happens that moves a project, or one’s thinking, in an unexpectedly fruitful direction. An event is witnessed, a phrase heard, a paper read, a person met. Suddenly an idea sparks into being or things are cast in a new light. Such moments of chance are in part a product of the messiness of research and of the way in which the world, so well studied for so long, can still surprise us.


In the Morgan Centre we are quite fond of mess and disorderliness, and we have been experimenting with different ways of knowing about the world that take more notice of its chaotic and surprising features. For the most part, however, sociological methods tend to emphasise orderliness. They pull things together, search out patterns, organise themes, categorise, classify and compare. You can see this in the design of qualitative and quantitative data analysis tools, for example, which often embed certain frameworks for coding, interrogating and representing data that presume a certain sense of structure and hierarchy. Our published findings also adhere to certain conventions, sometimes borrowed from the natural sciences, so that most journal articles are much the same, at least in terms of the presentation of the argument and the data.


When Lynne Chapman, resident artist in the Morgan Centre, first began workshops with us, she immediately set about trying to change our relationship to order, patterns and structure. She encouraged us to ‘let go’, take a chance and see what happened when we played with the paints, pens and pencils we had newly acquired. This was difficult for me, since I am not a natural artist, and being bad at things is an uncomfortable feeling for most people. When I put pen to paper, what I draw does not look like the thing I can see in front of me. Hence, my first forays into the freedom of the blank page produced rather uninspiring results.

But Lynne’s enthusiasm has been unfaltering and we have engaged in a range of different activities designed to make us comfortable with the fact that our representations do not look like the real thing. One example was the use of ‘wrong-hand portraits’, which forced us to abandon any hope of making a realistic representation of our subjects.

Wrong hand 1 min portrait

Eventually this started to have an effect on how I approached painting and sketching, and I believe I am starting to understand a bit more about how an artist like Lynne might observe the world, and how they combine skill and serendipity in their engagements with it and representations of it. Sploshing paint about, drawing without looking, and combining paint, pen and pencil has ‘freed up my hand’, as Lynne might put it.

Trying to embrace chance in a picture of tulips

The results are much improved. Of course, this is partly due to practice. But it is also due to letting go of certain constraints I had placed on myself as a novice. By learning how to make use of the limited skills that I am developing, in combination with the chance afforded by the materials I am using, I have begun to feel unburdened by realism. I’m also trying to steal some of Lynne’s techniques: annotating sketches, using certain pens and pencils, and sketching quickly to try to capture some of the movement in everyday life.


Intellectually, this embrace of chance and serendipity is familiar and reminds me that an important feature of creative methods in sociology is that they are more adept at picking up some of the multi-layered nature of social reality than are standard survey techniques or semi-structured interviews. They too can capture some of the movement of everyday life, the way it doesn’t fit within boundaries, colours outside the lines, and yet holds shape, has some order and consistency.


Synthetic Biology’s Second World


Secrecy has long been a part of scientific and innovation practices. As an ethnographer of laboratories, one occasionally comes up against a barrier to entry: a secret lab or space within a building, protected by intellectual property agreements, or by military or government contracts. Of course, military science is often conducted in secret, on nuclear, biological or chemical weapons, amongst other things. In his excellent book Secrecy and Science, Brian Balmer describes how the Manhattan Project epitomised the way in which scientific secrecy operates at various levels of social organisation:

It was, in fact, an almost unprecedented organisation of not only scientists, but also industry and military. Moreover, a significant feature that accounts for the success of the Manhattan Project is the preoccupation with secrecy at the various sites involved in creating the atomic bomb. Compartmentalisation, telling people information on a strict need-to-know basis, meant only a few people had a complete overview of the project. […] In this manner, efficiency, security, bureaucracy and secrecy all came together at once. (Balmer, 2013: 8)

A poster from the Manhattan Project reminding scientists about the importance of secrecy.

By their nature, it is often the most controversial, risky and ethically dubious research programmes that are conducted in secret, curtained off from society in order to protect knowledge and technology not only from public scrutiny but also from espionage and corporate theft. Thus, when we find out that science has been conducted in secret, we are generally right to be suspicious, and it should be no surprise that a meeting convened earlier this week, behind closed doors at Harvard, on the prospect of synthesising the human genome, has caused a stir.

Human DNA base pairs

The meeting was convened to discuss the prospects of coordinating a large collaborative venture to follow up on the Human Genome Project (HGP), one that would, over the next decade, seek to construct an entire human genome in a cell line. Currently unfunded, but prospectively titled ‘HGP-Write: Testing Large Synthetic Genomes in Cells’, the project is backed by some of the biggest names in the field.

As the New York Times reports, the meeting was invite-only and “The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting.” In this regard, it would seem that the scientists hosting the meeting wanted the event to be part of what we could conceptualise – following the sociologist Georg Simmel’s well-known work on secrecy – as synthetic biology’s ‘second world’. As Simmel argued:

Secrecy secures, so to speak, the possibility of a second world alongside of the obvious world, and the latter is most strenuously affected by the former. Every relationship between two individuals or two groups will be characterized by the ratio of secrecy that is involved in it. Even when one of the parties does not notice the secret factor, yet the attitude of the concealer, and consequently the whole relationship, will be modified by it. (Simmel, 1906: 462)

Synbiophobia phobia poster

A second world for synthetic biology is probably quite appealing to scientists working in the field: a space in which they could run wild with their ideas without worrying about what a supposedly fearful public might think. Synthetic biologists, for the most part, expect that the public will be inappropriately scared of developments in the field. This has led to what Claire Marris (2015) calls ‘synbiophobia phobia’ – the fear that scientists have that the public will fear their work.

Synbiophobia phobia might be at the root of the decision to hold the meeting in private, as the organisers likely anticipated public fear at the potential of creating a human genome from scratch. But, as Simmel’s notion reminds us, no matter whether parties kept in the dark find out about the secrets being kept or not, the effect of secrecy is to change the attitude of the concealer and consequently the whole relationship between scientists and civil society.

Contrary to some scientists’ reactions to the media response to the closed meeting, secrecy in synthetic biology isn’t just a fiction created by newspapers and magazines to whip up a story. If the field does have at least the beginnings of a second world, divorced from public scrutiny, then it is almost certainly tied to the Defense Advanced Research Projects Agency (DARPA), which has had a keen interest in the field since its fledgling years and has invested tens of millions into synthetic biology under the remit of its Biological Technologies Office.

However, speaking to the NYT, George Church, one of the most prominent advocates of synthetic biology and co-organiser of the Harvard meeting, argued that the event had been misconstrued and that the secrecy was actually about protecting a paper currently under review that, if published, would make the ideas for the project publicly available and thus transparent. But as the invite read, “We intentionally did not invite the media, because we want everyone to speak freely and candidly without concerns about being misquoted or misinterpreted as the discussions evolve.” Whatever the motivation for the closed-door, invite-only meeting, the effect of concealment might well be the same: it implies that something suspicious is going on.

In this regard, the scientists have shot themselves in the foot. The meeting will worry people, even those who support synthetic biology in general. In fact, one of the most well-known advocates for synthetic biology, Drew Endy, refused to attend and co-authored an open letter criticising the closed meeting. It is only a matter of time until those more critical voices and outright enemies of synthetic biology seize on the secrecy of the meeting as further evidence of untoward ambitions for the field. It would be a mistake, though, to see this as unwarranted fear and ignorance. It has much more to do with the facts of synthetic biology and how it is being developed in relation to corporate interests. As Endy and Zoloth’s (2016: 2) letter argued:

The creation of new human life is one of the last human-associated processes that has not yet been industrialized or fully commodified. It remains an act of faith, joy, and hope. Discussions to synthesize, for the first time, a human genome should not occur in closed rooms.

Two of the common tenets of the emerging frameworks for responsible research and innovation, which have been closely tied to the development of synthetic biology, are the importance of scientific transparency and of deliberative governance processes. The UK Synthetic Biology Roadmap, for example, includes a commitment that the Synthetic Biology Leadership Council “should provide an exemplar of openness and transparency with two-way stakeholder engagement as a core principle” (SBRCG, 2012: 32).

Transparency: more than a window into the lab

But transparency is more easily invoked than implemented. If scientists are going to take responsible research and innovation seriously, then actually implementing transparency and deliberation is going to be crucial, especially when the choices about such things are immediately within their control, as was the case this week. A second world for synthetic biology might be appealing in principle, but in practice it risks bringing about exactly the kinds of public fears that scientists and engineers worry about.


Balmer, B. (2013) Secrecy and science: A historical sociology of biological and chemical warfare. Surrey: Ashgate.

Marris, C. (2015) The construction of imaginaries of the public as a threat to synthetic biology. Science as Culture, 24(1), 83-98.

Simmel, G. (1906) The sociology of secrecy and of secret societies. The American Journal of Sociology, 11(4), 441-498.

SBRCG (2012) A Synthetic Biology Roadmap for the UK.

Three Questions I Get Asked

Here are three questions that natural scientists and engineers often ask me, and which are commonly asked of social scientists participating in interdisciplinary collaborations. 

  1. Why do people not like X?

e.g. why do people not like synthetic food additives?

This question usually has to do with a perception that scientists, or a particular field of scientific work, is not viewed favourably by ‘the public’ or by industry, governments, NGOs, and so on. Natural scientists and engineers sometimes have the impression that social scientists will be able to explain why it is that their technical ambitions have been thwarted by a political misconception or misunderstanding.

The question is sometimes embedded in a range of other faulty assumptions about public understanding of science and science communication. As a question, it can position social scientists as experts in the irrational behaviour of individuals and groups. Social scientists can also be positioned through this style of questioning as brokers, and so be expected to help to open governance and/or public doors.

I’d rather talk about:

What do people actually say and do in relation to X? What practices are involved with regard to X, and how do you envision X changing, supplementing or supplanting those practices?

When it comes to something like food additives, for example, I’d be interested to know how people make sense of ‘synthetic’ and ‘natural’ from within cooking, eating, feeding and caring practices. We could ask in what ways are these concepts important to the organisation of such practices, or in what ways do these practices organise our demarcations of those concepts? It could be important to explore how novel methods of producing food additives sit alongside or displace existing methods of production, and with what global socioeconomic implications.

In this regard we could begin to understand why the notion or production of synthetic food additives might raise socio-political, economic or ethical questions from publics, NGOs, governments and so forth, rather than assuming people don’t like it because of ignorance.


  2. What will people think of X?

e.g. what will people think of brain-based lie detection?

This type of question usually has to do with natural scientists’ worries about what ‘the public’ will think about their planned innovations or technical recommendations. It comes from a recognition that publics are interested and invested in scientific knowledge production. However, it is often motivated by a desire to ensure that people will think positively about X, and so it is sometimes accompanied by a will to understand how to encourage them to do so.

The question is sometimes embedded in a range of other faulty assumptions about how people’s negative feelings about proposed technologies will inexorably lead to market failure and that this is necessarily a bad thing. It positions social scientists as PR gurus or pollsters, who can help to take the public temperature and design marketing strategies for innovations.

I’d rather talk about:

What kinds of imagined (necessary, likely, possible or peripheral) actions, routines, relations and social structures are being embedded in the sociotechnical work being conducted? In other words, this puts the emphasis not so much on the object of technical interest as on how the envisaged object might impact upon existing practices of life, relations and social order, or open up new practices. Do we want these kinds of practices, and why or why not? What effects will these changes have, and with what implications for people’s experiences of social and technical phenomena?

In the given example of brain-based lie detection, I would want to know more about how we do lie detection now in different contexts and why we do it that way. I’d be interested to know how brain-based techniques would change these contexts if implemented and what implications such changes might have for our ways of living with each other. If we were talking about using brain scanning as evidence for use in legal proceedings, for example, we’d have to think carefully about why we currently use the investigation, interview, interrogation and jury systems. How would brain-based technologies fit in these practices and what would change? What kinds of life and forms of justice are implicated in such changes?

In this regard we could begin to unpick some of the tangle of concepts, practices, norms, politics and so on that are bound up with our current ways of doing lie detection and thus better understand what would be at stake for someone who is asked to give an opinion on a new lie detection technology.


  3. Is it okay to do X?

e.g. is it okay to engineer life?

This question usually has to do with a perceived ethical ‘implication’ of some proposed technical innovation. The questions often centre on technical objects. They involve a recognition that sociotechnical innovation generally implies changes in how things are done or understood. They might have to do with abstract concepts like life or nature. However, by emphasising the objects of technical innovation or abstract questions these kinds of concerns largely miss the everyday practices that are at the heart of how ethical decisions and dispositions are made and formed.

This type of question is sometimes embedded in a range of assumptions about scientific objectivity and how ethical implications arise only from the implementation of knowledge and new technologies in the world rather than in the practices of knowledge production itself. In addition, such questions often come with the implication that X is going to happen anyway, but it would be good to know what moral status it is going to be given when it does happen.

This style of questioning is more comfortable for some social scientists than others, since some of us are experts in ethics. However, in the way the question is generally posed it positions social scientists as ethical arbiters, who themselves are being asked to judge the moral status of objects and so assess the social value of proposed innovations in order to help scientists justify actions they know they are going to take. This is a bit tricky and can be a far less comfortable space to inhabit.

I’d rather talk about:

What kinds of moral or ethical values are embedded in the scientific practices out of which the question has emerged? In other words, what has been decided about ethics already, what kinds of questions have been closed off and with what justification?

I’d also be looking to explore what kinds of ethics and norms are used in contexts in which the proposed X is being invoked. Are there differences in the ethical frameworks used to think about X across different spaces and times and in what ways do these differ? If there are differences of opinion about the ethical valence of X how do we decide amongst such opinions in governance, regulation, and technical innovation practices?  

Dementia and Everyday Life: Creative Approaches

7th April 2016, University of Manchester

 Understanding dementia and its entanglement with everyday life presents a conceptual and methodological challenge to a range of disciplines in the humanities, health and natural sciences. In this day of academic seminars, we explore some of the work being conducted in humanities and health research to examine this topic, focusing on the creative approaches that are being developed to tackle questions of selfhood, relationality, materiality and narrative.

The event is co-hosted by the Morgan Centre for the Study of Everyday Life, the Dementia and Ageing Research Team and MICRA, the Manchester Institute for Collaborative Research on Ageing.

Photo and art by Lynne Chapman


Time and Location

7th April, 2016. 11am-5pm
Jean McFarlane Building, Oxford Road, University of Manchester

Building 92 on the Campus Map

Nearest train stations: Oxford Road and Manchester Piccadilly, both around 10-20 minutes’ walk from the Jean McFarlane Building.


Speakers

Dr Andrea Capstick and Dr Katherine Ludwin, School of Dementia Studies, University of Bradford

Dr Christina Buse, Department of Sociology, University of York

Dr Lucy Burke, Department of English, Manchester Metropolitan University

Dr Jackie Kindell, Specialist Speech and Language Therapist, Older People’s Mental Health Service, Pennine Care NHS Foundation Trust.

Early Career Speakers

We have a few opportunities for early career researchers (including PhD students) to present their work and some limited funding to cover their travel expenses. If you are working on dementia and everyday life phenomena, particularly if you are using creative methods, then please consider putting forward a proposal to speak. We are open to creative suggestions for format, although ECR speakers should bear in mind that their slots will likely be limited to 15-20 minutes. To apply, contact the organisers with a suggested title and abstract of no more than 300 words by 21st March 2016. We will inform successful applicants by 23rd March.


This is a small event and is open to academic researchers working in the field of dementia and everyday life, or related areas.

To request a place at the workshop please email with your name and a sentence or two about your area of research.


Dr Andrew Balmer, Sociology and the Morgan Centre for the Study of Everyday Lives, University of Manchester.

Sarah Campbell, School of Nursing, Midwifery and Social Work, and the Dementia and Ageing Research Team, University of Manchester.




PhD studentship on Dementia and Friendship

My Morgan Centre colleague, Vanessa May, and I have received funding from the North West Doctoral Training Centre for a scholarship for a PhD student to study dementia and friendship, in partnership with Manchester Carers Forum. The information is below; please encourage any prospective students to apply.


The University of Manchester’s Morgan Centre for the Study of Everyday Lives is offering one fully-funded ESRC CASE PhD studentship in partnership with Manchester Carers Forum. The funding includes payment of tuition fees as well as a doctoral stipend at the UK Research Council’s required level of £14,057 per annum. The studentship is available to outstanding candidates wishing to commence their doctoral studies in September 2016.

Project description: In the past decade, friendship has become a concern in sociology as well as in anthropology and related disciplines. This PhD project will examine questions of friendship in the context of dementia, focusing on the experiences of people with dementia and their carers, as well as their friends. A crucial feature of this study will be to take seriously the relational perspective, understanding changes in the lives and friendships of carers and people with dementia as being fundamentally entangled. The project will develop a variety of elicitation and sensory methods (for example photo or music elicitation), as well as more conventional narrative and biographical methods, for the study of friendship and dementia. As an ESRC CASE studentship, this project will also involve close work with Manchester Carers Forum. As part of the funding requirements, the successful candidate will volunteer at Manchester Carers Forum as a member of the peer mentor coordination team, working directly with carers and peer mentors to support people living with dementia. The PhD student will work for 3 months of the year at the Carers Forum, broken down into a certain number of hours per week.

Studentship Details: The successful candidate will be supervised by Dr Andrew Balmer and Dr Vanessa May in the Department of Sociology. It is anticipated that the studentship will be for direct entry onto the three-year (+3) PhD programme in September; however, candidates for the 1+3 route (MSc Sociological Research followed by the three-year PhD programme) will be considered. Continuation of the award is subject to satisfactory performance.

Entry Requirements: Applicants must hold a First Class or Upper Second Class Honours UK Bachelor’s degree in a relevant social science discipline, which should generally be Sociology or Anthropology, although other disciplines will be considered. Candidates must also have (or expect to gain before the start of the programme in September 2016) a UK Masters degree (or overseas equivalent) recognised as a research training masters by the ESRC. They should be qualified at minimum Merit level, with a coursework/examination average of 60% or more. Students without a Masters degree (intending to enter the 1+3 programme) will be considered, but preference will be given to those with a Masters qualification. You must satisfy ESRC UK residential criteria to qualify for this studentship (information here:

Candidates meeting the following criteria will also be given preference: above 70% in their Bachelors or Masters; some demonstrable knowledge of the sociological literature on personal life and/or friendship; demonstrable expertise in qualitative research methods, particularly creative methods.
Enquiries should be directed to

Deadline: 7th February 2016. 

How to apply

Applicants should email Dr Andrew Balmer, with a full CV (including grade transcripts) and a covering letter explaining your interest in the project. Please note that applying for this PhD studentship funding is a separate process from applying for entry to the Manchester PhD Programme. The successful candidate will therefore also be required to fulfil the normal admissions procedures for the School of Social Sciences once they have been offered the NWDTC scholarship.