Three Questions I Get Asked

Here are three questions that natural scientists and engineers often ask me, and which are commonly asked of social scientists participating in interdisciplinary collaborations. 

  1. Why do people not like X?

e.g. why do people not like synthetic food additives?

This question usually has to do with a perception that scientists, or a particular field of scientific work, is not viewed favourably by ‘the public’ or by industry, governments, NGOs, and so on. Natural scientists and engineers sometimes have the impression that social scientists will be able to explain why it is that their technical ambitions have been thwarted by a political misconception or misunderstanding.

The question is sometimes embedded in a range of other faulty assumptions about public understanding of science and science communication. As a question, it can position social scientists as experts in the irrational behaviour of individuals and groups. Social scientists can also be positioned through this style of questioning as brokers, and so be expected to help to open governance and/or public doors.

I’d rather talk about:

What do people actually say and do in relation to X? What practices are involved with regard to X, and how do you envision X changing, supplementing or supplanting those practices?

When it comes to something like food additives, for example, I’d be interested to know how people make sense of ‘synthetic’ and ‘natural’ from within cooking, eating, feeding and caring practices. We could ask in what ways these concepts are important to the organisation of such practices, or in what ways these practices organise our demarcations of those concepts. It could be important to explore how novel methods of producing food additives sit alongside or displace existing methods of production, and with what global socioeconomic implications.

In this regard we could begin to understand why the notion or production of synthetic food additives might raise socio-political, economic or ethical questions from publics, NGOs, governments and so forth, rather than assuming people don’t like it because of ignorance.

 

  2. What will people think of X?

e.g. what will people think of brain-based lie detection?

This type of question usually has to do with natural scientists’ worries about what ‘the public’ will think about their planned innovations or technical recommendations. It comes from a recognition that publics are interested and invested in scientific knowledge production. However, it is often motivated by a desire to ensure that people will think positively about X, and so is sometimes accompanied by a will to understand how to encourage people to feel that way.

The question is sometimes embedded in a range of other faulty assumptions: that people’s negative feelings about proposed technologies will inexorably lead to market failure, and that this is necessarily a bad thing. It positions social scientists as PR gurus or pollsters, who can help to take the public temperature and design marketing strategies for innovations.

I’d rather talk about:

What kinds of imagined (necessary, likely, possible or peripheral) actions, routines, relations and social structures are being embedded in the sociotechnical work being conducted? In other words, the emphasis falls not so much on the object of technical interest as on how the envisaged object might impact upon existing practices of life, relations and social order, or open up new practices. Do we want these kinds of practices, and why or why not? What effects will these changes have, and with what implications for people’s experiences of social and technical phenomena?

In the given example of brain-based lie detection, I would want to know more about how we do lie detection now in different contexts and why we do it that way. I’d be interested to know how brain-based techniques would change these contexts if implemented and what implications such changes might have for our ways of living with each other. If we were talking about using brain scanning as evidence for use in legal proceedings, for example, we’d have to think carefully about why we currently use the investigation, interview, interrogation and jury systems. How would brain-based technologies fit in these practices and what would change? What kinds of life and forms of justice are implicated in such changes?

In this regard we could begin to unpick some of the tangle of concepts, practices, norms, politics and so on that are bound up with our current ways of doing lie detection and thus better understand what would be at stake for someone who is asked to give an opinion on a new lie detection technology.

 

  3. Is it okay to do X?

e.g. is it okay to engineer life?

This question usually has to do with a perceived ethical ‘implication’ of some proposed technical innovation. The questions often centre on technical objects. They involve a recognition that sociotechnical innovation generally implies changes in how things are done or understood. They might have to do with abstract concepts like life or nature. However, by emphasising the objects of technical innovation or abstract questions these kinds of concerns largely miss the everyday practices that are at the heart of how ethical decisions and dispositions are made and formed.

This type of question is sometimes embedded in a range of assumptions about scientific objectivity and how ethical implications arise only from the implementation of knowledge and new technologies in the world rather than in the practices of knowledge production itself. In addition, such questions often come with the implication that X is going to happen anyway, but it would be good to know what moral status it is going to be given when it does happen.

This style of questioning is more comfortable for some social scientists than others, since some of us are experts in ethics. However, in the way the question is generally posed it positions social scientists as ethical arbiters, who themselves are being asked to judge the moral status of objects and so assess the social value of proposed innovations in order to help scientists justify actions they know they are going to take. This is a bit tricky and can be a far less comfortable space to inhabit.

I’d rather talk about:

What kinds of moral or ethical values are embedded in the scientific practices out of which the question has emerged? In other words, what has been decided about ethics already, what kinds of questions have been closed off and with what justification?

I’d also be looking to explore what kinds of ethics and norms are used in contexts in which the proposed X is being invoked. Are there differences in the ethical frameworks used to think about X across different spaces and times and in what ways do these differ? If there are differences of opinion about the ethical valence of X how do we decide amongst such opinions in governance, regulation, and technical innovation practices?  

Scopolamine, Truth-telling and the Other Dr House

Portrait of Daniel Defoe

The reasoning that recording physical correlates can be used to discern truth from deception can be traced at least as far back as Daniel Defoe’s 1730 essay on the prevention of street crime. He wrote: “Guilt carries fear always about with it; there is a tremor in the blood of the thief.” Defoe advocated holding the wrists and measuring the pulse to detect a person in possession of a false tongue.

As Geoffrey Bunn has recently argued, a range of important concepts emerged in the genesis of criminology, not least the notion of ‘criminal man’, whose animalistic, biological nature was the source of his criminal behaviour. So in the late 1800s and early 1900s the body was increasingly tied to the mind, and the various inscriptions produced from reading the body became vital to theorising mental and emotional events. Similarly, the theorisation of the unconscious as a quasi-spatial repository of personal truths made the mind a focus for physiological study. Moreover, biologists of the time investigating heredity imagined memory to be material, conceiving of it as vibrations of cells in parents that were then transmitted to offspring. This helped them account for ostensibly non-physical qualities that nonetheless seemed to pass from parents to their young.

These and a great many more small changes in the discourse of crime and the body helped to consolidate the idea that some technique or technology could be used to access the internal state of a criminal suspect resistant to interrogation. Practitioners of applied psychology, developing their work most fervently from the 1870s onwards, produced a central set of technologies that examined psychic states by monitoring physiological changes. The years from 1870 to 1940 thus saw the development of numerous lie detection devices, such as truth serums, sphygmomanometers, pneumographs and galvanic skin response monitors, some of which were consolidated into the ‘polygraph’ machine, patented several times from the 1930s onwards and now used throughout the USA to police a range of suspect categories.

From good historical work we now know quite a lot about the history of the polygraph machine, particularly regarding its early years of development and deployment in the USA. However, we know a lot less about the emergence and use of truth serum.

Dr Robert House, administering his “truth serum” drug to an arrested man in a Texas jail.

In the 1920s a nightshade-derived drug, scopolamine hydrobromide, was trialled by one Robert House, a Texas obstetrician, for use in the interrogation of two prisoners at the Dallas county jail. Dr House had observed the effects of scopolamine, administered alongside morphine and chloroform, on women during childbirth. This drug-induced state became known as ‘twilight sleep’, and was caused by blocking the action of the neurotransmitter acetylcholine. House felt that the drug’s effects on women might be similarly produced in people suspected of concealment. The two prisoners interviewed by Dr House retained their original story, indicating to the Dallas physician that they were innocent. The evidence was submitted and the prisoners were found not guilty at their trial. The use of scopolamine as a ‘truth serum’, a term coined by the media and eventually adopted by House, was short lived, mostly owing to its dangerous side effects, and though it found brief use in the legal field it was generally unsuccessful.

Dr Gregory House and his slogan, ‘Everybody lies’.

Its popularity resided mostly within the media and was propelled not solely, but incessantly, by House. Much like the proponents of the polygraph, House believed that the truth serum would act not only on individuals to produce justice but on institutions also. He feared that the corruption of powerful members of society, both public and private, had reached severe levels, most dangerously so within the criminal justice system. At the time, aggressive interrogation methods had become endemic in US police investigations, a practice that became known as ‘the third degree’. The doctor saw his serum as the antidote to this social ill. As the more popularly known Dr Greg House from the US TV show says, ‘Everybody lies’. Indeed, the TV character of House seems to be a modern inheritor of the historical House’s cause for lie detection techniques. In the TV show, House regularly calls his patients on their deceptions and prevarications, and in a few episodes uses the hospital’s fMRI machine to scan their brains and determine whether or not they are telling the truth.

However, Dr Robert House’s hopes for a truth serum that might act like a societal vaccine were never realised: he died in 1930 and the use of scopolamine as a truth drug mostly died with him. It was around this time that the ‘inventors’ of the polygraph were pushing their devices as cures for the corruption of police investigation practices. So although the idea of using chemical compounds in the interrogation of suspects survived, becoming known as ‘narco-analysis’, it never really competed with the rise of the polygraph machine. One important context in which it did endure, however, was as part of the programme of human behavioural modification explored by the CIA in Project MKUltra, about which I’ll talk more in a future post. For now, here are some references that might be of interest:

Robert House, The Use of Scopolamine in Criminology

Geoffrey Bunn, The Truth Machine

Alison Winter, The Making of “Truth Serum” 1920-1940

Melissa Littlefield, The Lying Brain

Gary James Smith v. State of Maryland

fMRI lie detection evidence, supplied by the company No Lie MRI, has been considered during pre-trial criminal hearings in the USA, this time in the case of Gary James Smith v. State of Maryland [1, 2]. The device has already had a couple of hearings as potential evidence, with tests conducted by rival company Cephos, first in New York [1, 2] in 2010 and then again in Tennessee [1, 2, 3, 4] later that year.

Gary Smith is accused of shooting Mike McQueen in 2006 and is about to go to trial for the second time, after the Court of Special Appeals affirmed the verdict of the trial court and the Court of Appeals then reversed and ordered a retrial. The first verdict, which had found him guilty of second-degree murder (or ‘depraved heart murder’), was overturned on the basis that the trial court had admitted prosecution evidence of the decedent’s ‘normal’ state of mind but hadn’t allowed evidence to the contrary.

The case is an interesting and complex one, particularly as regards medical and scientific evidence. For a start, both Smith and McQueen worked as Army Rangers and served in the ongoing conflicts in Afghanistan and Iraq. As such, both parties, the defence and the prosecution, claim that post-traumatic stress is, in part, to blame for the tragedy. In the first trial, the prosecution suggested that Smith’s PTSD may have left him unstable, which may have contributed to him murdering McQueen, whilst the defence argued that McQueen’s PTSD led him to commit suicide. The case thus quickly became a focal point for a still ongoing debate over the hidden psychological and medical costs of the war on terror. Furthermore, two experts appointed by the court were divided over the blood spatter evidence, with one claiming it suggested suicide and the other that it pointed towards murder.

Moreover, as part of his pre-trial hearing with Judge Eric M. Johnson, of Montgomery County Circuit Court, Smith recently sought admissibility for fMRI evidence of the veracity of his claim that he did not shoot and kill McQueen. The evidence, though the judge found it ‘fascinating’, was excluded, a decision that appears to have been founded on the current lack of evidence that the device works as a lie detector. As Judge Johnson put it: “There’s no quantitative analysis of this procedure available yet.” In contrast, Joel Huizenga, CEO of No Lie MRI, said: “There is always room to do more research in anything, the brain’s a complex place. There have been 25 original peer reviewed scientific journal articles, all of them say that the technology works, none of them say that the technology doesn’t work…that’s 100 percent agreement.”

It shouldn’t be a surprise that community opinion has again been influential in determining the admissibility of scientific evidence regarding veracity; it has long been so, particularly since the long-established Frye ‘general acceptance’ rule was itself decided on that basis when the polygraph was excluded nearly one hundred years ago. However, proponents of fMRI lie detection, such as Steven Laken, CEO of Cephos, have argued that lie detectors are held to a higher standard than other forms of scientific evidence: “But the standard is set even higher for these lie detectors because of this idea that the judge or the jury are basically the final determiners of whether someone is lying on the stand…The courts are unfairly putting a higher bar on that than they are on other scientific evidence like DNA.”

Expert testimony, like that of the blood spatter expert, gets enmeshed in legal talk in unpredictable ways. Various technologies are enrolled to understand the significance and reliability of new techniques under consideration. Indeed, the polygraph – for instance – has often been compared to fingerprinting or DNA evidence when deciding on its admissibility. The difficulty that those supporting fMRI lie detection face is in seeking to make it amenable to a complex system in which notions of responsibility, guilt, lying and truth are constantly at play between rules, precedents, expert evidence and legal talk during trials. Take, for instance, the following quote from the closing statement of the prosecution in the first trial of Gary Smith:

“It’s been 18 months since Michael McQueen was buried.  This defendant shot him.  It’s time for justice.  Healing begins when justice occurs. And the only just verdict in this case, the only proper verdict in this case is to hold that person responsible for [what] the physical evidence shows, no matter what experts you want to believe, that he was right there when he was shot.  What your common sense and understanding shows [is] that you don’t stage a scene, you don’t throw away a gun, you don’t lie and lie and lie and lie to the police, unless you’re guilty as sin.”

This decision seems, on the face of it, to be a further defeat for the corporations seeking to admit fMRI lie detection evidence. However, advocates of exclusion, such as the eminent and scientifically well-informed law professor Hank Greely of Stanford Law School, whose testimony has been influential in these early cases, might be wise not to rest on their laurels. There are likely to be more, and perhaps more significant, spaces in which the battle over fMRI lie detection will be fought. Importantly, however, these are not as easy to pin down as the criminal trial courts. Lessons from the history of the polygraph regarding its entanglement with governance teach us that exclusion from criminal trial courts is far from the end of the story where lie detection is concerned.

The rhetorical construction of the polygraph as the lie detector contributed to its being adopted in a wide range of spaces in the USA. For example, the polygraph continues to be used in the context of employee screening, espionage, police interrogations and private investigation. Or take one further example: paedophilia. In a week or so I’ll be reporting on the use of the polygraph machine in the context of sex and criminal sexual behaviour, where it is now commonly used in the USA to manage paedophiles during post-conviction probation. Such use of the polygraph was recently trialled in several UK regions and looks set to be taken up more broadly. As such, even though the device was barred from criminal courts in Frye, the polygraph has since been used in very particular, but also very important, legal spaces close to, but just outside, the trial.

fMRI appears to be going the same way. The No Lie MRI website boasts that: “The technology used by No Lie MRI represents the first and only direct measure of truth verification and lie detection in human history!” This rhetoric of direct measurement has been key to the articulation of the fMRI machine’s potency for lie detection. Constituting the brain as the location of truth and thus of lying, fMRI researchers have frequently claimed that we are looking directly into the lie. This has been used to help position the fMRI machine as the natural successor to the polygraph by positing it as a technological improvement on the polygraph’s indirect measurement. No Lie MRI claims: “lie detection has an extremely broad application base. From individuals to corporations to governments, trust is a critical component of our ability to peacefully and meaningfully coexist with other persons, businesses, and governments.”

Irrespective of whether the technique ‘works’, more attention should be paid to the complex way in which lie detection evidence is negotiated in relation to medical diagnoses, like PTSD, and other technologies such as blood spatter, DNA or the polygraph. Moreover, we have to better understand how lie detection techniques have dispersed into the US (and now into other countries, such as here in the UK), how they are used and what their consequences are, in order to better respond to fMRI’s emergence as a lie detector. Otherwise, fMRI may be excluded from criminal trials in much the same way as the polygraph but still find use in a variety of significant social, legal and political spaces that are far more difficult to control. The fMRI machine looks new and shiny but as regards lie detection, it might all be a little bit of history repeating.