Synthetic Biology’s Second World

Secrecy has long been a part of scientific and innovation practices. As an ethnographer of laboratories, one occasionally comes up against a barrier to entry: a secret lab or space within a building, protected by intellectual property agreements or by military or government contracts. Of course, military science is often conducted in secret, on nuclear, biological or chemical weapons, amongst other things. In his excellent book Secrecy and Science, Brian Balmer describes how the Manhattan Project epitomised the way in which scientific secrecy operates at various levels of social organisation:

It was, in fact, an almost unprecedented organisation of not only scientists, but also industry and military. Moreover, a significant feature that accounts for the success of the Manhattan Project is the preoccupation with secrecy at the various sites involved in creating the atomic bomb. Compartmentalisation, telling people information on a strict need-to-know basis, meant only a few people had a complete overview of the project. […] In this manner, efficiency, security, bureaucracy and secrecy all came together at once. (Balmer, 2013: 8)

A poster from the Manhattan Project reminding scientists about the importance of secrecy.

By their nature, it is often the most controversial, risky and ethically dubious research programmes that are conducted in secret, curtained off from society in order to protect knowledge and technology not only from public scrutiny but also from espionage or corporate theft. Thus, when we find out that science has been conducted in secret, we are generally right to be suspicious, and it should be no surprise that a meeting convened earlier this week, behind closed doors at Harvard, on the prospect of synthesising the human genome, has caused a stir.

Human DNA base pairs

The meeting was convened to discuss the prospects of coordinating a large collaborative venture to follow up on the Human Genome Project (HGP), one that would, over the next decade, seek to construct an entire human genome in a cell line. Currently unfunded but prospectively titled ‘HGP-Write: Testing Large Synthetic Genomes in Cells’, it is backed by some of the biggest names in the field.

As the New York Times reports, the meeting was invite-only and “The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting.” In this regard, it would seem that the scientists hosting the meeting wanted the event to be part of what we could conceptualise – following the sociologist Georg Simmel’s well-known work on secrecy – as synthetic biology’s ‘second world’. As Simmel argued:

Secrecy secures, so to speak, the possibility of a second world alongside of the obvious world, and the latter is most strenuously affected by the former. Every relationship between two individuals or two groups will be characterized by the ratio of secrecy that is involved in it. Even when one of the parties does not notice the secret factor, yet the attitude of the concealer, and consequently the whole relationship, will be modified by it. (Simmel, 1906: 462)

Synbiophobia phobia poster

A second world for synthetic biology is probably quite appealing to scientists working in the field: a space in which they could run wild with their ideas without worrying about what a supposedly fearful public might think. Synthetic biologists, for the most part, expect that the public will be inappropriately scared of developments in the field. This has led to what Claire Marris (2015) calls ‘synbiophobia phobia’ – the fear that scientists have that the public will fear their work.

Synbiophobia phobia might be at the root of the decision to hold the meeting in private, as the organisers likely anticipated public fear at the potential of creating a human genome from scratch. But, as Simmel’s notion reminds us, no matter whether parties kept in the dark find out about the secrets being kept or not, the effect of secrecy is to change the attitude of the concealer and consequently the whole relationship between scientists and civil society.

Contrary to some scientists’ reactions to the media response to the closed meeting, secrecy in synthetic biology isn’t just a fiction created by newspapers and magazines to whip up a story. If the field does have at least the beginnings of a second world, divorced from public scrutiny, then it is almost certainly tied to the Defense Advanced Research Projects Agency (DARPA), which has had a keen interest in the field since its fledgling years and has invested tens of millions in synthetic biology under the remit of its Biological Technologies Office.

However, speaking to the NYT, George Church, one of the most prominent advocates of synthetic biology and co-organiser of the Harvard meeting, argued that the event had been misconstrued and that the secrecy was actually about protecting a paper currently under review that, if published, would make the ideas for the project publicly available and thus transparent. But as the invite read, “We intentionally did not invite the media, because we want everyone to speak freely and candidly without concerns about being misquoted or misinterpreted as the discussions evolve.” Whatever the motivation for the closed-door, invite-only meeting, the effect of concealment might well be the same: it implies that something suspicious is going on.

In this regard, the scientists have shot themselves in the foot. The meeting will worry people, even those who support synthetic biology in general. In fact, one of the most well-known advocates for synthetic biology, Drew Endy, refused to attend and co-authored an open letter criticising the closed meeting. It is only a matter of time until those more critical voices and outright enemies of synthetic biology seize on the secrecy of the meeting as further evidence of untoward ambitions for the field. It would be a mistake, though, to see this as unwarranted fear and ignorance. It has much more to do with the facts of synthetic biology and how it is being developed in relation to corporate interests. As Endy and Zoloth’s (2016: 2) letter argued:

The creation of new human life is one of the last human-associated processes that has not yet been industrialized or fully commodified. It remains an act of faith, joy, and hope. Discussions to synthesize, for the first time, a human genome should not occur in closed rooms.

Two of the common tenets of the emerging frameworks for responsible research and innovation, which have been closely tied to the development of synthetic biology, are the importance of scientific transparency and of deliberative governance processes. The UK Synthetic Biology Roadmap, for example, includes a commitment that the Synthetic Biology Leadership Council “should provide an exemplar of openness and transparency with two-way stakeholder engagement as a core principle” (SBRCG, 2012: 32).

Transparency: More than a window into the lab

But transparency is easier invoked than implemented. If scientists are going to take responsible research and innovation seriously, then actually implementing transparency and deliberation is going to be crucial, especially when the choices about such things are immediately within their control, as was the case this week. A second world for synthetic biology might be appealing in principle, but in practice it risks bringing about exactly the kinds of public fears that scientists and engineers worry about.


Balmer, B. (2013) Secrecy and science: A historical sociology of biological and chemical warfare. Surrey: Ashgate.

Marris, C. (2015) The construction of imaginaries of the public as a threat to synthetic biology. Science as Culture, 24(1), 83-98.

Simmel, G. (1906) The sociology of secrecy and of secret societies. The American Journal of Sociology, 11(4), 441-498.

SBRCG (2012) A Synthetic Biology Roadmap for the UK.

Three Questions I Get Asked

Here are three questions that natural scientists and engineers often ask me, and which are commonly asked of social scientists participating in interdisciplinary collaborations.

  1. Why do people not like X?

e.g. why do people not like synthetic food additives?

This question usually has to do with a perception that scientists, or a particular field of scientific work, are not viewed favourably by ‘the public’ or by industry, governments, NGOs, and so on. Natural scientists and engineers sometimes have the impression that social scientists will be able to explain why it is that their technical ambitions have been thwarted by a political misconception or misunderstanding.

The question is sometimes embedded in a range of other faulty assumptions about public understanding of science and science communication. As a question, it can position social scientists as experts in the irrational behaviour of individuals and groups. Social scientists can also be positioned through this style of questioning as brokers, and so be expected to help to open governance and/or public doors.

I’d rather talk about:

What do people actually say and do in relation to X? What practices are involved with regard to X, and how do you envision X changing, supplementing or supplanting those practices?

When it comes to something like food additives, for example, I’d be interested to know how people make sense of ‘synthetic’ and ‘natural’ from within cooking, eating, feeding and caring practices. We could ask in what ways these concepts are important to the organisation of such practices, or in what ways these practices organise our demarcations of those concepts. It could be important to explore how novel methods of producing food additives sit alongside or displace existing methods of production, and with what global socioeconomic implications.

In this regard we could begin to understand why the notion or production of synthetic food additives might raise socio-political, economic or ethical questions from publics, NGOs, governments and so forth, rather than assuming people don’t like it because of ignorance.


  2. What will people think of X?

e.g. what will people think of brain-based lie detection?

This type of question usually has to do with natural scientists’ worries about what ‘the public’ will think about their planned innovations or technical recommendations. It comes from a recognition that publics are interested and invested in scientific knowledge production. However, it is often motivated by a desire to ensure that people will think positively about X, and so is sometimes accompanied by a will to understand how to encourage people to feel positively about it.

The question is sometimes embedded in a range of other faulty assumptions about how people’s negative feelings about proposed technologies will inexorably lead to market failure and that this is necessarily a bad thing. It positions social scientists as PR gurus or pollsters, who can help to take the public temperature and design marketing strategies for innovations.

I’d rather talk about:

What kinds of imagined (necessary, likely, possible or peripheral) actions, routines, relations and social structures are being embedded in the sociotechnical work being conducted? In other words, putting the emphasis not so much on the object of technical interest but on how the envisaged object might impact upon existing practices of life, relations and social order, or open up new practices. Do we want these kinds of practices, and why or why not? What effects will these changes have, and with what implications for people’s experiences of social and technical phenomena?

In the given example of brain-based lie detection, I would want to know more about how we do lie detection now in different contexts and why we do it that way. I’d be interested to know how brain-based techniques would change these contexts if implemented and what implications such changes might have for our ways of living with each other. If we were talking about using brain scanning as evidence for use in legal proceedings, for example, we’d have to think carefully about why we currently use the investigation, interview, interrogation and jury systems. How would brain-based technologies fit in these practices and what would change? What kinds of life and forms of justice are implicated in such changes?

In this regard we could begin to unpick some of the tangle of concepts, practices, norms, politics and so on that are bound up with our current ways of doing lie detection and thus better understand what would be at stake for someone who is asked to give an opinion on a new lie detection technology.


  3. Is it okay to do X?

e.g. is it okay to engineer life?

This question usually has to do with a perceived ethical ‘implication’ of some proposed technical innovation. The questions often centre on technical objects. They involve a recognition that sociotechnical innovation generally implies changes in how things are done or understood. They might have to do with abstract concepts like life or nature. However, by emphasising the objects of technical innovation or abstract questions these kinds of concerns largely miss the everyday practices that are at the heart of how ethical decisions and dispositions are made and formed.

This type of question is sometimes embedded in a range of assumptions about scientific objectivity and how ethical implications arise only from the implementation of knowledge and new technologies in the world rather than in the practices of knowledge production itself. In addition, such questions often come with the implication that X is going to happen anyway, but it would be good to know what moral status it is going to be given when it does happen.

This style of questioning is more comfortable for some social scientists than others, since some of us are experts in ethics. However, in the way the question is generally posed it positions social scientists as ethical arbiters, who themselves are being asked to judge the moral status of objects and so assess the social value of proposed innovations in order to help scientists justify actions they know they are going to take. This is a bit tricky and can be a far less comfortable space to inhabit.

I’d rather talk about:

What kinds of moral or ethical values are embedded in the scientific practices out of which the question has emerged? In other words, what has been decided about ethics already, what kinds of questions have been closed off and with what justification?

I’d also be looking to explore what kinds of ethics and norms are used in the contexts in which the proposed X is being invoked. Are there differences in the ethical frameworks used to think about X across different spaces and times, and in what ways do they differ? If there are differences of opinion about the ethical valence of X, how do we decide amongst such opinions in governance, regulation, and technical innovation practices?

Bacterial Cultures: How engineers make sense of microorganisms

Vast pools of sewage stretch into the distance at a facility

This post relates to a paper I recently published, which you can download here

Conducting an ethnography of a sewage facility, in the midst of sludge and mud, I found my sociological nous rather tested: what was there to say about water treatment processes that could be remotely interesting? Everything was brown and wet. So it was more than my professional lenses that were steaming. Indeed, the stink permeated every fabric and seemed to coat every surface. The process engineers in the facility have a number of strategies for keeping themselves safe in this perilous space. A lot of these techniques have to do with routine practices of managing their bodies: becoming physically comfortable with the grime (“You get used to it.”), improving their balance when walking on wet mud (“My first fall nearly broke my back, so I was careful after.”), wearing two pairs of gloves and socks, and learning a variety of safety procedures (for climbing ladders and so on). Moreover, the process engineers use a range of scripts, anecdotes and narratives for understanding their relationship with the dirt and grime, which was particularly interesting to me when it involved their understandings of microorganisms.

The vast pools of brown sludge that bubble and threaten to engulf you if you trip are also full of bacteria. Intriguingly, the bacteria are similarly understood in relation to the body and to practices of maintaining the body’s boundaries. There is an ambiguity about whether engineers understand their bodies as safely protected and separated out from the bacteria (practices are largely organised around covering and cleaning the body) or as attuned to their relation with the bacteria. This is because the engineers tell stories of immunological adaptation. Their stories often involve anecdotes of visitors getting sick, or of new employees getting used to the bacteria. A new recruit to a facility I visited described how he’d had stomach aches and felt sick on-and-off for the first few weeks of working there. He told me that his body had ‘adapted’ and that he now didn’t get sick from the bacteria. Your immune system, he said, “gets used to them.” The grime is thus a permanent symbol of the omnipresence of bacteria – there’s no escaping them. Rather than coding the bacteria as simply ‘threatening’ or ‘dangerous’, the engineers ‘adapt’ their bodies and practices in ways that alter their understandings both of their bodies and of the bacteria. In this regard, the ontological status of the engineering body in the waste facility is importantly related to the ontological status of bacteria.

These observations of sewerage facilities formed part of our work on synthetic biology, which also involved observations in microbiology laboratories. In the lab, everything is suddenly white. White walls, white stools, white counters, white powders, white liquids, white equipment. I find myself wearing white lab coats. White gloves at the entrance, white gloves on the counter, white gloves stacked onto shelves three deep and ten across. I get used to the feeling of nitrile gloves. The lab is a place where things have to be clean, so there’s also a noxious smelling alcohol routinely spritzed and wiped across the counters.

I hold some bacteria in solution in a falcon tube.

Some of the labs are temperature controlled, whereas others are fine at room temperature. Some have distinctive whirring, clicking or screeching noises created by the specialist equipment. But although they differ in important ways, they nonetheless share a significant common feature: they are all organised in order to safely manipulate the microbiological realm. This organisation, however, is ultimately determined not by the deterrence of release of these unique organisms, but by the prevention of infiltration by more common species that might contaminate the lab apparatuses and thus ruin experiments. Indeed, in the lab the bacteria are at risk. They are engineered to be easy to manipulate and have thus become more susceptible to being out-competed by ‘natural’ bacteria. Practices in the lab are thus geared primarily around protecting the bacteria from the body. And whilst academic engineers are aware of the dangers, often quite minimal, of the bacteria with which they work, they are generally most concerned with the dangers that their skin (covered with enzymes), the air (filled with stronger strains) and their equipment pose for their microorganisms. Indeed, the bacteria even look puny – tiny quantities in tiny flasks that are barely visible to the naked eye. This is a wholly different world to that of the sewage plant, and the bodies of lab engineers are thus importantly different because of these different relations.

This is an important consideration when examining the governance and regulation of engineered microorganisms and when discussing ‘public’ understandings of this science. We explore these issues through other ethnographic episodes in the paper.

You can get the paper here:

Bacterial cultures: Ontologies of bacteria and engineering expertise at the nexus of synthetic biology and water services, Engineering Studies