Affect and Dementia

Some definitions of ‘affect’ hold it to be pre-linguistic, something fundamentally ‘non-representational’ (Massumi, 1995), like the sensation of anxiety conjured by a particular urban environment.

Affect in this guise is automatic. It is something the world conveys upon us as if by magic. The brain and wider nervous system are often crucial to such arguments, since they react to stimuli so rapidly it is easy to see how unconscious some of our responses can appear. In contrast, Margaret Wetherell (2012) understands affect to be entangled with all the rest of the mess of the world. Something that can happen in the blink of an eye, something embodied and habitual, yes, but also something that we designate to ourselves and others as part of situated everyday life. Affect is something that we reflect on, foster or discourage. It is structured and also specific.

In my own work I have been musing on affect and emotions as part of studying a very messy situation: what it is like to be caring for someone with dementia. Primarily I have been interested in how we deal with change in this context. No matter which kind of dementia a person is living with, there will be a lot of change involved, not least in their behaviour, but also in their relationships and in their capacities. One form of dementia is particularly pertinent to understanding affect and emotion. Behavioural variant frontotemporal dementia (bvFTD) involves a range of symptoms, but central to its manifestation are changes to a person’s affective disposition. People can become disinhibited and lack shame, empathy and insight. They might cry or laugh uncontrollably. Sometimes their tastes change, particularly as regards their appetite for sweet and sugary foods. They can also become obsessional, repeating routines and behaviours without relent.

For example, arranged around the living room of a carer I interviewed, Mike, there were several electronic drum kits. However, Mike told me as we walked in that he doesn’t play the drums. They were for his wife, Lucy, who was living with bvFTD. Lucy would repeatedly bash and drum on anything she could find. Mike had bought the drums because at least they made familiar sounds, had a volume control and weren’t easily demolished.

This led me to ponder whether Lucy drums on things because of a change in her brain. Certainly there is a neurological problem causing a disruption in her everyday life. But why drums? This was a question Mike regularly posed to himself and to doctors. Some answers he received were that it might stop some unpleasant sensation that Lucy feels, or that she might derive some pleasure from the physical activity, from the sound or from the effect it has on others. Mike wonders if she’s angry, and says that she doesn’t show any empathy for him as he struggles to tolerate the endless cacophony. And Mike struggles to manage his own anger as he soldiers on. But why do the drums bother him, exactly? It seems that there’s certainly an element of automation here. The drums make an unpleasant noise, which makes him feel angry even against his will. But how do we judge a pleasant noise versus an unpleasant one? There are physical factors: some noises hurt our ears. But there are cultural ones too, having to do with the way in which rhythm and melody have been structured in the West. These physical and cultural factors also inform each other.

And surely Mike is also angry because of the sense of injustice he feels. That dementia has affected Lucy in this way and that she does the things that she does. And that he is losing her. He is still angry with her even though he knows this, which makes him angry with himself, and further angry with the disease and that he can’t do anything about it.

Clearly there are multiple forces shaping the manifestation of anger in Mike and Lucy’s relationship, each time situated, specific and multiple, but also part of a broader story of changing embodiment, capacity and everyday life, one that is at least partly shared with others living with bvFTD and their carers.

Encounters like these with changes in affect lead me to believe that we need to better understand its entanglement with the body and the brain, certainly, but that such an investigation has to be conducted from within the relational world in which these changes take place.


Massumi, B. (1995) ‘The Autonomy of Affect’, Cultural Critique, 31, 83-109.

Wetherell, M. (2012) Affect and Emotion: A New Social Science Understanding (London: Sage Publications).


Gary James Smith v. State of Maryland

fMRI lie detection evidence, supplied by the company No Lie MRI, has again been considered during pre-trial criminal hearings in the USA, this time in the case of Gary James Smith v. State of Maryland [1, 2]. The technique has already had a couple of hearings as potential evidence, with tests conducted by rival company Cephos, first in New York [1, 2] in 2010 and then again in Tennessee [1, 2, 3, 4] later that year.

Gary Smith is accused of shooting Mike McQueen in 2006 and is about to go to trial for the second time, after the Court of Special Appeals affirmed the verdict of the trial court and the Court of Appeals then reversed and ordered a retrial. The first verdict, which had found him guilty of second-degree murder (or ‘depraved heart murder’), was overturned on the basis that the trial court had admitted prosecution evidence of the decedent’s ‘normal’ state of mind, but hadn’t allowed evidence to the contrary.

The case is an interesting and complex one, particularly as regards medical and scientific evidence. For a start, both Smith and McQueen worked as Army Rangers and served in the ongoing conflicts in Afghanistan and Iraq. As such, both parties, the defense and the prosecution, claim that post-traumatic stress is, in part, to blame for the tragedy. In the first trial, the prosecution suggested that Smith’s PTSD may have left him unstable, which may have contributed to him murdering McQueen, whilst the defense argued that McQueen’s PTSD led him to commit suicide. The case thus quickly became a focal point for a still ongoing debate over the hidden psychological and medical costs of the war on terror. Furthermore, two experts appointed by the court were divided over the blood spatter evidence, with one claiming it suggested suicide and another that it pointed towards murder.

Moreover, as part of his pre-trial hearing with Judge Eric M. Johnson, of Montgomery County Circuit Court, Smith recently sought admissibility for fMRI evidence of the veracity of his claim that he did not shoot and kill McQueen. Though the judge found it ‘fascinating’, the evidence was excluded. The decision appears to have been founded on the current lack of evidence that the device works as a lie detector. As Judge Johnson put it: “There’s no quantitative analysis of this procedure available yet.” In contrast, Joel Huizenga, CEO of No Lie MRI, said: “There is always room to do more research in anything, the brain’s a complex place. There have been 25 original peer reviewed scientific journal articles, all of them say that the technology works, none of them say that the technology doesn’t work…that’s 100 percent agreement.”

It shouldn’t be a surprise that community opinion has again been influential in determining the admissibility of scientific evidence regarding veracity; it has long been so, particularly since the long-established Frye ‘general acceptance’ rule was decided on the same basis in the case of the exclusion of the polygraph, nearly one hundred years ago. However, proponents of fMRI lie detection, such as Steven Laken, CEO of Cephos, have argued that lie detectors are held to a higher standard than are other forms of scientific evidence: “But the standard is set even higher for these lie detectors because of this idea that the judge or the jury are basically the final determiners of whether someone is lying on the stand…The courts are unfairly putting a higher bar on that than they are on other scientific evidence like DNA.”

Expert testimony, like that of the blood spatter expert, gets enmeshed in legal talk in unpredictable ways. Various technologies are enrolled to understand the significance and reliability of new techniques under consideration. Indeed, the polygraph – for instance – has often been compared to fingerprinting or DNA evidence when deciding on its admissibility. The difficulty that those supporting fMRI lie detection face is in seeking to make it amenable to a complex system in which notions of responsibility, guilt, lying and truth are constantly at play between rules, precedents, expert evidence and legal talk during trials. Take, for instance, the following quote from the closing statement of the prosecution in the first trial of Gary Smith:

“It’s been 18 months since Michael McQueen was buried.  This defendant shot him.  It’s time for justice.  Healing begins when justice occurs. And the only just verdict in this case, the only proper verdict in this case is to hold that person responsible for [what] the physical evidence shows, no matter what experts you want to believe, that he was right there when he was shot.  What your common sense and understanding shows [is] that you don’t stage a scene, you don’t throw away a gun, you don’t lie and lie and lie and lie to the police, unless you’re guilty as sin.”

This decision seems, on the face of it, to be a further defeat for the corporations seeking to admit fMRI lie detection evidence. However, advocates of exclusion, such as the eminent and scientifically well-informed law professor Hank Greely of Stanford Law School, whose testimony has been influential in these early cases, might be wise not to rest on their laurels. There are likely to be more, and perhaps more significant, spaces in which the battle over fMRI lie detection will be fought. Importantly, however, these are not as easy to pin down as the criminal trial courts. Lessons from the history of the polygraph regarding its entanglement with governance teach us that exclusion from criminal trial courts is far from the end of the story where lie detection is concerned.

The rhetorical construction of the polygraph as the lie detector contributed to its being adopted in a wide range of spaces in the USA. For example, the polygraph continues to be used in the context of employee screening, espionage, police interrogations and private investigation. Or take one further example: paedophilia. In a week or so I’ll be reporting on the use of the polygraph machine in the context of sex and criminal sexual behaviour, where it is now commonly used in the USA to manage paedophiles during post-conviction probation. Such use of the polygraph was recently trialled in several UK regions and looks set to be taken up more broadly. As such, even though the device was barred from criminal courts in Frye, the polygraph has since been used in very particular, but also very important, legal spaces close to, but just outside, the trial.

fMRI appears to be going the same way. The No Lie MRI website boasts that: “The technology used by No Lie MRI represents the first and only direct measure of truth verification and lie detection in human history!” This rhetoric of direct measurement has been key to the articulation of the fMRI machine’s potency for lie detection. Constituting the brain as the location of truth, and thus of lying, fMRI researchers have frequently claimed that we are looking directly at the lie. This has been used to help position the fMRI machine as the natural successor to the polygraph by positing it as a technological improvement on the polygraph’s indirect measurement. No Lie MRI claims: “lie detection has an extremely broad application base. From individuals to corporations to governments, trust is a critical component of our ability to peacefully and meaningfully coexist with other persons, businesses, and governments.”

Irrespective of whether the technique ‘works’, more attention should be paid to the complex way in which lie detection evidence is negotiated in relation to medical diagnoses, like PTSD, and other technologies such as blood spatter, DNA or the polygraph. Moreover, we have to better understand how lie detection techniques have dispersed into the US (and now into other countries, such as here in the UK), how they are used and what their consequences are, in order to better respond to fMRI’s emergence as a lie detector. Otherwise, fMRI may be excluded from criminal trials in much the same way as the polygraph but still find use in a variety of significant social, legal and political spaces that are far more difficult to control. The fMRI machine looks new and shiny but as regards lie detection, it might all be a little bit of history repeating.

Learning to Read

What I understand this recent post by Neuroskeptic (an excellent blog I thoroughly recommend) to be about is a general frustration with social scientists not writing in such a way that they can be easily understood across disciplines, particularly with reference to the natural sciences. As a sociologist, first let me say that I do believe the Campaign for Plain English is a valuable one and I believe such efforts should apply, in part, to social science writing. However, natural scientists’ complaints about social scientists’ writing are something of a pet peeve of mine. I work in interdisciplinary contexts and, on a day-to-day basis, this means I have to try to understand the language of my natural science colleagues, whether they’re collaborators or, since I study science, objects of knowledge. I have something of an advantage in this respect since my first degree was in biology and I can lean on some of that knowledge when, for example, trying to understand the paper I am about to use as an example of neuroscience language.

Taking the first research article in the most recent issue of Nature Neuroscience, I found the following abstract:

“In the postnatal and adult mouse forebrain, a mosaic of spatially separated neural stem cells along the lateral wall of the ventricles generates defined types of olfactory bulb neurons. To understand the mechanisms underlying the regionalization of the stem cell pool, we focused on the transcription factor Pax6, a determinant of the dopaminergic phenotype in this system. We found that, although Pax6 mRNA was transcribed widely along the ventricular walls, Pax6 protein was restricted to the dorsal aspect. This dorsal restriction was a result of inhibition of protein expression by miR-7a, a microRNA (miRNA) that was expressed in a gradient opposing Pax6. In vivo inhibition of miR-7a in Pax6-negative regions of the lateral wall induced Pax6 protein expression and increased dopaminergic neurons in the olfactory bulb. These findings establish miRNA-mediated fine-tuning of protein expression as a mechanism for controlling neuronal stem cell diversity and, consequently, neuronal phenotype.”

This abstract is full of technical terminology that makes it practically impenetrable to anyone without a postgraduate degree in a biological science. Using my limited training in biology I now understand much of this after a couple of reads, but I am quite certain that a colleague in the Department of Sociology without such knowledge would be unable to. Not because it is badly written. Indeed, it is quite clearly written. Rather, it is full of jargon, and as such it becomes difficult to follow the meaning of the sentences and to keep in mind the sentences’ meanings as you move through the paragraph. This would not be so for someone used to reading ‘phenotype’, ‘transcribed’ or ‘dopaminergic neurons’, etc. Conversely, the article discussed in Neuroskeptic’s blog post, from Health, is – to my jargon-ready mind – quite clearly written. In fact, I was rather surprised at how clear it was since I was expecting something worthy of Sokal’s biting criticisms. I didn’t struggle with ‘ideology’ or ‘hegemonic’ because I’m used to reading these. I know what they mean without having to take time to look them up. As such, I can read the abstract and understand what it is arguing in much the same way that a neuroscientist or scholar from a related discipline would read the Nature Neuroscience abstract.

Importantly, Neuroskeptic’s re-writing of the Health article’s abstract doesn’t only make it easier to understand for someone outside the circle, as it were; it also means that it loses some of its technical specificity. Society, for example, isn’t the same as ideology. How men construct a ‘body project’ isn’t just how they ‘think’ about such things. An ‘analysis’ does not tell me nearly as much as ‘a thematic analysis’, and the blog’s reference to ‘conventional’ masculinities is not quite the same as the article’s ‘hegemonic’ masculinities.

Furthermore, the rest of the Health article proceeds to deal with these technical terms in, frankly, a rather basic manner, providing definitions that would be relatively easily understood by the interdisciplinary readership one would expect of Health. The same article written by the same authors for a different journal would, I suspect, be far more opaque to a non-expert. This is as much to say that there are limits on this project of making work understandable outside of its immediate disciplinary context. Take the example of ‘phenotype’. We could change this to ‘physical and behavioural characteristics’ and we might not lose too much of the technical specificity. This is similar to what Neuroskeptic has done by turning ‘hegemonic’ into ‘conventional’ in the Health abstract. However, we couldn’t really find an easy alternative to ‘dopaminergic’ or ‘transcription’ without actually explaining what those mean, and thus making the piece infuriatingly simple and overly long for anyone reading the article with the requisite expertise. This is true of ‘interpellation’, which is a similarly technical term that would need explanation and not simply a plain English substitute.

Of course, natural scientists have little time to engage substantively with a discipline almost entirely alien to them because they are exceptionally busy people. I understand and sympathise. In the hinterlands of interdisciplinarity this is not quite the case, and so an article in Health would do well to try and be as clear as possible, which I firmly believe this article has done.

This gets to the crux of my irritation. The problem is that natural scientists, from my personal experience, expect social scientists to do all the work of trying to explain their findings in as simple and general a language as possible. Partly this is because they are busy, as I say. But it is also because they seem to have the underlying assumption that our technical terms are just there to make us sound clever, whereas their technical terms are essential to properly characterising phenomena and communicating efficiently. This is not the case. Social scientists have created an expert language to describe social phenomena, and whilst those of us engaged in efforts to exchange knowledge across the historical divide will do our best to explain these technical terms and to reduce our use of them, we cannot do all the work. Natural scientists must, if they genuinely wish to benefit from scholarship outside of their fields, take time to learn the language, just as they did when they took their undergraduate and postgraduate degrees in physics, biology, neuroscience, etc.

Finally, the demand that we make our social scientific work more available to our colleagues in other disciplines invokes a particular instrumental and power-laden position. In general, social scientists, if they want to be heard, have impact and contribute to solving problems, are forced to give way to the needs of natural scientists because – institutionally, nationally and locally – they have more power in defining the circumstances of interdisciplinary work and in defining the problems to which it is oriented. Perhaps we social scientists could do more to explain ourselves, but natural scientists could do a lot more to try and understand.