From Films from the Future: The Technology and Morality of Sci-Fi Movies by Andrew Maynard
Inferno: Immoral Logic in the Age of Genetic Manipulation

“If a plague exists, do you know how many governments would want it and what they’d do to get it?”
—Sienna Brooks
In 1969, the celebrated environmentalist Paul Ehrlich made a stark
prediction. In a meeting held by the British Institute of Biology, he
claimed that, “By the year 2000, the United Kingdom will simply be
a small group of impoverished islands, inhabited by some seventy
million hungry people, of little concern to the other five to seven
billion inhabitants of a sick world.”[^156]
It’s tempting to quip that Ehrlich was predicting the fallout from
Brexit and the UK’s departure from Europe, and his crystal ball
was simply off by a few years. But what kept him up at night, and
motivated the steady stream of dire warnings flowing from him, was
his certainty that human overpopulation would lead to unmitigated
disaster as we shot past the Earth’s carrying capacity.
I left the UK in 2000 to move to the US, and I’m glad to say that, at
the time, the United Kingdom was still some way from becoming
that “small group of impoverished islands.” Yet despite the nation’s
refusal to bow to Ehrlich’s predictions, his writings on population
crashes and control have continued to capture the imaginations of
people over the years, including, I suspect, that of author and the
brains behind the movie Inferno, Dan Brown.
The movie Inferno is based on the book of the same name by
Dan Brown. It’s perhaps not the deepest movie here, but if
you’re willing to crack open the popcorn and suspend disbelief,
it successfully keeps you on the edge of your seat, as any good
mindless thriller should. And it does provide a rather good starting
point for examining the darker side of technological innovation—
biotechnology in particular—when good intentions lead to
seemingly logical, but not necessarily moral, actions.
Inferno revolves around the charismatic scientist and entrepreneur
Bertrand Zobrist (played by Ben Foster). Zobrist is a brilliant
biotechnologist and genetic engineer who’s devoted to saving the
world. But he has a problem. Just like Ehrlich, Zobrist has done the
math, and realized that our worst enemy is ourselves. In his genius eyes, no matter what we do to cure sickness, improve quality of
life, and enable people to live longer, all we’re doing is pushing the
Earth ever further beyond the point where it can sustain its human
population. And like Ehrlich, he sees a pending future of disease
and famine and death, with people suffering and dying in their
billions, because we cannot control our profligacy.
Zobrist genuinely wants to make the world a better place. But he
cannot shake this vision of apocalyptic disaster. And he cannot
justify using his science for short-term gains, only for it to lead
to long-term devastation. So he makes a terrible decision. To save
humanity from itself, he creates a genetically engineered virus that
will wipe out much of the world’s population—plunging humanity
back into the dark ages, but giving it the opportunity to reset and
build a more sustainable future as a result. And because it seems
that genius entrepreneurs can’t do anything simply, he arranges for
the virus to be elaborately released at a set time in a mysterious
location somewhere in Europe.
The problem is, the authorities are onto him—the authorities in this
case being an entertainingly fictitious manifestation of the World
Health Organization. As the movie starts, Zobrist is being pursued
by WHO agents who chase him to the top of a bell tower in the
Italian city of Florence where, rather than reveal his secrets, Zobrist
jumps to his death. But in his pocket, he conveniently has a device
that holds the key to where he’s hidden the virus.

I don’t know if Brown and Ehrlich have ever met. I’d like to think
that they’d get on well. Both have a knack for a turn of phrase
that transforms hyperbole into an art form. And both have an
interest in taking drastic action to curb an out-of-control global
human population.
This is where Dan Brown brings in his “symbologist” hero, Harvard-based Robert Langdon (Tom Hanks). Langdon, having proven
himself to be rather good at decoding devilishly complex puzzles in
the past, is the ideal person to follow the trail and save the world.
But he quickly finds himself unwittingly wrapped up in a complex
subterfuge where he’s led to believe the WHO are the bad actors,
and it’s up to him and a young doctor, Sienna Brooks (Felicity
Jones), to track down the virus before they get to it.
What follows is a whirlwind of gorgeous locations (Florence, Venice,
Istanbul), misdirection, plot twists, and nail-biting cliffhangers.
We learn that Sienna is, in fact, Zobrist’s lover, and has been using
Langdon to find the virus so she can release it herself. We also learn
that she’s fooled a clandestine global security organization (headed
up by Harry Sims, who’s played perfectly by Irrfan Khan) into
helping her, and they set about convincing Langdon he needs to
solve the puzzle while evading the WHO agents.
The movie ends rather dramatically with the virus being contained
just before it’s released. The bad folks meet a sticky end, Langdon
saves the world, and everyone still standing lives happily ever after.
Without doubt, Inferno is an implausible but fun romp. Yet it
does raise a number of serious issues around science, technology,
and the future. Central to these is the question that Paul Ehrlich
and Bertrand Zobrist share in common: Where does the moral
responsibility lie for the future of humanity, and if we could act
now to avoid future suffering—even though the short-term cost may
be hard to stomach—should we? The movie also touches on the
dangers of advanced genetic engineering, and it brings us back to
a continuing theme in this book: powerful entrepreneurs who not
only have the courage of their convictions, but the means to act on
what they believe.
Let’s start, though, with the question of genetically engineering
biological agents, together with the pros and cons of engineering
pathogens to be even more harmful.
In 2012, two groups of scientists published parallel papers in the
prestigious journals Science[^157] and Nature[^158] that described, in some
detail, how to genetically engineer an avian influenza virus. What
made the papers stand out was that these scientists succeeded
in making the virus more infectious, and as a result, far deadlier.
The research sparked an intense debate around the ethics of such
studies, and it led to questions about the wisdom of scientists
publishing details of how to make pathogens harmful in a way that
could enable others to replicate their work.
The teams of scientists, led by virologists Ron Fouchier and
Yoshihiro Kawaoka, were interested in the likelihood of a highly
pathogenic flu virus mutating into something that would present a
potentially catastrophic pandemic threat to humans. The unmodified
virus, referred to by the code H5N1, is known to cause sickness
and death in humans, but it isn’t that easy to transmit from person
to person. Thankfully, the virus isn’t readily transmitted by coughs
and sneezes, and this in turn limits its spread quite considerably.
But this doesn’t mean that the virus couldn’t naturally mutate to
the point where it could successfully be transmitted by air. If this
were to occur (and it’s certainly plausible), we could be facing a flu
pandemic of astronomical proportions.
To get a sense of just how serious such a pandemic could be, we
simply need to look back to 1918, when the so-called “Spanish flu”
swept the world.[^159] That outbreak is estimated to have killed
around fifty million people, or around 3
percent of the world’s population at the time. If an equally virulent
infectious disease were unleashed on the world today, this would be
equivalent to over 200 million deaths, a mind-numbing number of
people. However, the relative death toll would likely be far higher
today, as modern global transport systems and the high numbers
of people living close to each other in urban areas would likely
substantially increase infection rates.
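The arithmetic behind these numbers is worth making explicit. The sketch below (in Python, using rough population estimates that are my assumptions, not figures from the chapter) scales the 1918 mortality rate to a modern population:

```python
# Rough, illustrative figures only; historical population estimates vary.
deaths_1918 = 50e6           # estimated Spanish flu deaths
world_pop_1918 = 1.8e9       # approximate world population in 1918
world_pop_today = 7.6e9      # approximate world population (late 2010s)

mortality_fraction = deaths_1918 / world_pop_1918      # roughly 3 percent
equivalent_deaths = mortality_fraction * world_pop_today

print(f"1918 mortality rate: {mortality_fraction:.1%}")
print(f"Equivalent deaths today: {equivalent_deaths / 1e6:.0f} million")
```

With these assumptions, the scaled toll comes out at just over 210 million, consistent with the “over 200 million” figure above, and that is before accounting for the faster spread that modern travel and dense cities would allow.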
It’s this sort of scenario that keeps virologists and infectious-disease
epidemiologists awake at night, and for good reason. It’s highly
likely that, one day, we’ll be facing a pandemic of this magnitude.
Viruses mutate and adapt, and the ones that thrive are often those
that can multiply and spread fast. Here, we know that there are
combinations of properties that make viruses especially deadly,
including human pathogenicity, lack of natural resistance in people,
and airborne transmission. There are plenty of viruses that have
one, or possibly two, of these features, yet there are relatively few
that combine all three. But because of the way that evolution and
biology work, it’s only a matter of time before some lucky virus hits
the jackpot, much as we saw back in 1918.
Because of this, it makes sense to do everything we can to be
prepared for the inevitable, including working out which viruses
are likely to mutate into deadly threats (and how) so we can get
our defenses in order before this happens. And this is what drove
Fouchier, Kawaoka, and their teams to start experimenting on H5N1.
H5N1 is a virus that is deadly to humans, but it has yet to evolve
into a form that is readily transmitted by air. What interested
Fouchier and Kawaoka was how likely it was that such a mutation
would appear, and what we could do to combat the evolved virus
if and when this occurs. To begin to answer this question, they
and their teams of scientists intentionally engineered a deadly
new version of H5N1 in the lab, so they could study it. And this
is where the ethical questions began to get tricky. This type of
study is referred to as “gain-of-function” research, as it increases
the functionality and potential deadliness of the virus. Maybe not
surprisingly, quite a few people were unhappy with what was being
done. Questions were asked, for instance, about what would happen
if the new virus was accidentally released. This was not an idle
question, as it turns out, given a series of incidents where infectious
agents ended up being poorly managed in labs.[^160] But it was the
decision to publicly publish the recipe for this gain-of-function
research that really got people worried.
Both Science and Nature ended up publishing the research and the
methods, but only after an intense international debate about the
wisdom of doing so.[^161] However, the decision was, and remains,
controversial. Proponents of the research argue that we need to be
ready for highly pathogenic and transmissible strains of flu before
they inevitably arise, and this means having the ability to develop
a stockpile of vaccines. This in turn depends on having a sample of
the virus to be protected against. But this type of research makes
many scientists uneasy, especially given the challenges of preventing
inadvertent releases.

Concerns like this prompted a group of scientists to release a
Consensus Statement on the Creation of Potential Pathogens in 2014,
calling for greater responsibility in making such research decisions.[^162]
These concerns largely focused on the unintended consequences of
well-meaning research. But there was also a deeper-seated fear here:
What if someone took this research and intentionally weaponized
a pathogen?

This was one of the issues considered by the US National Science
Advisory Board for Biosecurity as it debated drafts of the H5N1
gain-of-function papers in 2011. In a statement released on
December 20, 2011, the NSABB proposed that the papers
should not be published in their current form, recommending “the
manuscripts not include the methodological and other details that
could enable replication of the experiments by those who would
seek to do harm.”[^163] However, this caused something of a furor at
the time among scientists. The NSABB is an advisory body in the
US and has no real teeth, yet its recommendations drew accusations
of “censorship”[^164] in a scientific community that deeply values
academic freedom.

The NSABB eventually capitulated, and supported the publication
of both papers as they finally appeared in 2012—including the
embedded “how-to” instructions for creating a virulent virus.[^165]
But the question of intentionally harmful use remained. And it’s
concerns like this that underpin the plot in Inferno.

Fouchier, Kawaoka, and their teams showed that it is, in principle,
possible to take a potentially dangerous virus and engineer it into
something even more deadly. To the NSABB and others, this raised
a clear national security issue: What if an enemy nation or a terrorist
group used the research to create a weaponized virus? Echoes of
this discussion stretched back to the 2001 anthrax attacks in the US,
where the idea of “weaponizing” a pathogenic organism became part
of our common language. Since then, discussions over whether and
how biological agents may be weaponized have become increasingly
common.
Intuitively, genetically engineering a virus to weaponize it feels
like it should be a serious threat. It’s easy to imagine the mayhem
a terrorist group could create by unleashing an enhanced form
of smallpox, Ebola, or even the flu. Thankfully, most biosecurity
experts believe that the risks are low here. Despite these imagined
scenarios, it takes substantial expertise and specialized facilities to
engineer a weaponized pathogen, and even then, it’s unclear that
the current state of science is good enough to create an effective
weapon of terror. More than this, though, most experts agree that
there are far easier and cheaper ways of creating terror, or taking
out enemy forces, than using advanced biology. And because of
this, it’s hard to find compelling reasons why an organization would
weaponize a pathogen, rather than using far easier and cheaper
ways of causing harm. Why spend millions of dollars and years
of research on something that may not work, when you can do
more damage with less effort using a cell phone and home-made
explosives, or even a rental truck? The economics of weaponized
viruses simply don’t work outside of science fiction thrillers and
blockbuster movies. At least, not in a conventional sense.
And this is where Inferno gets interesting, as Zobrist is not a terrorist
in the conventional sense. Zobrist’s aim is not to bring about change
through terror, but to be the agent of change. And his mechanism
of choice is a gain-of-function genetically engineered virus. Unlike
the potential use of genetically modified pathogens by terrorists,
or even nation-states, the economics of Zobrist’s decision actually
make some sense, warped as they are. In his mind, he envisions
a cataclysmic future for humanity, brought about through out-of-control overpopulation, and he sees it as a moral imperative
to use his expertise and wealth to help avoid it, albeit by rather
drastic means.
As this is movie make-believe, the technology Zobrist ends up
developing is rather implausible. But it’s not that far-fetched.
Certainly, we know from the work of Fouchier, Kawaoka, and
others that it is possible to engineer viruses to be more deadly
than their naturally-occurring counterparts. And we’re not that
far from hypothetically being able to precisely design a virus
with a specific set of characteristics, an ability that will only
accelerate as we increasingly use cyber-based technologies and
artificial-intelligence-based methods in genetic design. Because of
these converging trends in capabilities, when you strip away the
hyperbolic narrative and cliffhanger scenarios from Inferno, there’s
a kernel of plausibility buried in the movie that should probably
worry us, especially in a world where powerful individuals are able
to translate their moral certitude into decisive action.

Some years ago, my wife gave me a copy of Daniel Quinn’s book
Ishmael. The novel, which won the Turner Tomorrow Award in
1991, has something of a cult following. But I must confess I was
rather disturbed by the arguments it promoted. What concerned me
most, perhaps, was a seemingly pervasive logic through the book
that seemed to depend on “ends,” as defined by a single person,
justifying extreme “means” to get there. Echoing both Paul Ehrlich
and Dan Brown, Quinn was playing with the idea that seemingly
unethical acts in the short term are worth it for long-term prosperity
and well-being, especially when, over time, the number of people
benefitting from a decision far outnumbers those who suffered as a
consequence.

Ishmael is a Socratic dialogue between the “pupil”—the narrator—
and his “teacher,” a gorilla that has the power of speech and reason.
The book uses this narrative device to dissect human history and
the alleged rise of tendencies that have led to a global culture of
selfish greed, unsustainable waste, and out-of-control population
growth. The book is designed to get the reader to think and reflect.
In doing so, it questions our rights as humans above those of other
organisms, and our obligations to other humans above that to
the future of the Earth as a whole. Many of the underlying ideas
in the book are relatively common in environmentalist thinking.
What Ishmael begins to illuminate, though, is what happens when
some of these ideas are taken to their logical conclusions. One of
those conclusions is that, if the consequence of a growing human
population and indiscriminate abuse of the environment is a
sick and dying planet, anything we do now to curb our excesses
is justified by the future well-being of the Earth and its many
ecosystems. The analogy used by Quinn is that of a surgeon cutting
out a malignant cancer to save the patient, except that, in this case,
the patient is the planet, and humanity is both the cancer and the
surgeon.
This is a similar philosophy, of taking radical action in the present
to save the future, that Ehrlich promoted in his 1968 book, The
Population Bomb.[^166] As a scientist and environmentalist, Ehrlich
was appalled by where he saw the future of humanity and Planet
Earth heading. As the human population increased exponentially,
he believed that, left unchecked, people would soon exceed the
carrying capacity of the planet. If this happened, he believed we
would be plunged into a catastrophic cycle of famine, disease, and
death, that would be far worse than any preventative actions we
might take.
Ehrlich opens his book with a dramatic account of him personally
experiencing localized overpopulation in Delhi. This experience
impressed on him that, if this level of compressed humanity was
to spread across the globe (as he believed it would), we would
be responsible for making a living hell for future generations, an
outcome he believed it was his moral duty to do what he could to prevent.
In the book, Ehrlich goes on to explore ways in which policies
could be established to avoid what he saw as an impending disaster.
He also looked at ways in which people might be persuaded to
change their habits and beliefs in an attempt to dramatically curb
population growth. But he considered the threat too large to stop
at political action and persuasion. To him, if these failed, drastic
measures were necessary. He lamented, for instance, that India had
not implemented a controversial sterilization program for men as
a means of population control. And he talked of triaging countries
needing aid to avoid famine and disease, by helping only those
that could realistically pull themselves around while not wasting
resources on “hopeless cases.”
Ehrlich’s predictions and views were both extreme and challenging.
And in turn, they were challenged by others. Many of his predictions
have not come to pass, and since publication of The Population
Bomb, Ehrlich has pulled back from some of his more extreme
proposals. There are many, though, who believe that the sheer
horror of his predictions and his proposed remedies scared a
generation into taking action before it was too late. Even so, we are
still left with a philosophy which, much like the one espoused in
Ishmael, suggests that one person’s prediction of pending death and
destruction has greater moral weight than the lives of the people
they are willing to sacrifice to save future generations.

It is precisely this philosophy that Dan Brown explores through the
character of Zobrist in Inferno. Superficially, Zobrist’s arguments
seem to make sense. Using an exponential growth model of global
population, he predicts a near future where there is a catastrophic
failure of everything we’ve created to support our affluent twenty-first-century lifestyle. Following his arguments, it’s not hard to
imagine a future where food and water become increasingly
scarce, where power systems fail, leaving people to the mercy of
the elements, where failing access to healthcare leads to rampant
disease, and where people are dying in the streets because they are
starving, sick, and have no hope of rescue.

As well as being a starkly sobering vision, this is also a plausible
one—up to a point. We know that when animal populations get
out of balance, they often crash. And research on complex systems
indicates that the more complex, interdependent, and resource-constrained a system gets, the more vulnerable it can become to
catastrophic failure. It follows that, as we live increasingly at the
limits of the resources we need to sustain nearly eight billion people
across the planet, it’s not too much of a stretch to imagine that
we are building a society that is very vulnerable indeed to failing
catastrophically. But if this is the case, what do we do about it?

Early on in Inferno, Zobrist poses a question: “There’s a switch.
If you throw it, half the people on earth will die, but if you don’t,
in a hundred years, then the human race will be extinct.” It’s an
extreme formulation of the ideas of Quinn and Ehrlich, and not
unlike a scaled-up version of the Trolley Problem that philosophers
of artificial intelligence and self-driving cars love to grapple with.
But it gets to the essence of the issue at hand: Is it better to kill
a few people now and save many in the future, or to do nothing,
condemning billions to a horrible death, and potentially signing off
on the human race?

Ehrlich and Quinn suggest that it’s moral cowardice to take the “not
my problem” approach to this question. In Inferno, though, Brown
elevates the question from one of philosophical morality to practical
reality. He gives the character of Zobrist the ability to follow through
on his convictions, and to get out of his philosophical armchair to
quite literally throw the switch, believing he is saving humanity as
he does so.
The trouble is, this whole scenario, while easy to spin into a
web of seeming rationality, is deeply flawed. Its flaws lie in the
same conceits we see in calls for action based on technological
prediction. It assumes that the future can be predicted from the
exponential trends of the past (a misconception that was addressed
in chapter nine and Transcendence), and it amplifies, rather than
moderates, biases in human reasoning and perception. Reasoning
like this creates an artificial certainty around the highly uncertain
outcomes of what we do, and it justifies actions that are driven by
ideology rather than social responsibility. It also assumes that the
“enlightened,” whoever they are, have the moral right to act, without
consent, on behalf of the “unenlightened.”
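Why extrapolating an exponential trend misleads can be shown with a toy model. In the sketch below, the parameters are arbitrary and purely illustrative: a population actually follows a logistic curve that flattens near a carrying capacity, while a naive exponential projection fitted to its early growth rate overshoots wildly.

```python
import math

K = 10.0    # carrying capacity (arbitrary units)
r = 0.02    # intrinsic annual growth rate
p0 = 3.0    # starting population

def logistic(t):
    """Logistic growth: slows as the population approaches K."""
    return K / (1 + ((K - p0) / p0) * math.exp(-r * t))

def naive_exponential(t):
    """Extrapolates the logistic curve's initial growth rate forever."""
    return p0 * math.exp(r * (1 - p0 / K) * t)

for t in (0, 100, 200):
    print(t, round(logistic(t), 2), round(naive_exponential(t), 2))
```

After 200 time steps the logistic population has leveled off just below the carrying capacity of 10, while the exponential projection has passed 49: the same early data, wildly different futures.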
In the cold light of day, what you end up with by following such
reasoning is something that looks more like religious terrorism, or
the warped actions of the Unabomber Ted Kaczynski, than a plan
designed to create social good.
Imagine now that, in 1968, a real-life Zobrist had decided to act
on Ehrlich’s dire predictions and indiscriminately rob people of
their dignity, autonomy, and lives, believing that history would
vindicate them. It would have been a morally abhorrent tragedy
of monumental proportion. This is part of the danger of confusing
exponential predictions with reality, and mixing them up with
ideologies that adhere religiously to a narrow vision of the future, to
the point that its believers are willing to kill for the alleged long-term good of society.

This is not to say we are not facing tough issues here. Both the
Earth’s human population and our demands on its finite resources
are increasing in an unsustainable way. And this is leading to serious
challenges that should, under no circumstances, be trivialized.
Yet, as a species, we are also finding ways to adapt and survive,
and to overcome what were previously thought to be immovable
barriers to what could be achieved. In reality, we are constantly
moving the goalposts of what is possible through human ingenuity.
The scientific and social understanding of the 1960s was utterly
inadequate for predicting how global science and society would
develop over the following decades, and as a result, Ehrlich and
others badly miscalculated both the consequences of what they
saw occurring and the measures needed to address them. These
developments included advances in artificial fertilizers and plant
breeding that transformed the ability of agriculture to support a
growing population. We continue to make strides in developing
and using technology to enable a growing number of people to live
sustainably on Earth, so much so that we simply don’t know what
the upper limit of the planet’s sustainable human population might
be. In fact, perhaps the bigger challenge today is not providing
people with enough food, water, and energy, but in overcoming
social and ideological barriers to implementing technologies in ways
that benefit this growing population.

Yet while such thinking can lead to what I believe is an immoral
logic, we cannot afford to dismiss the possibility that inaction in the
present may lead to catastrophic failures in the future. If we don’t
get our various acts together, there’s still a chance that a growing
population, a changing climate, and human greed will lead to
future suffering and death. As we develop increasingly sophisticated
technologies, these only add to the uncertainty of what lies around
the corner. But if we’re going to eschew following an immoral logic,
how do we begin to grapple with these challenges?

Perhaps one of the most difficult challenges scientists (and
academics more broadly) face is knowing when to step out of the
lab (or office) and into the messy world of politics, advocacy, and
activism. The trouble is, we’re taught to question assumptions, to be
objective, and to see issues from multiple perspectives. As a result,
many scientists see themselves as seekers of truth, but skeptical
of the truth. Because of this, many of us are uneasy about using
our work to make definitive statements about what people should
or should not be doing. To be quite frank, it feels disingenuous to
set out to convince people to act as if we know the answers to a
problem, when in reality all we know is the limits of our ignorance.

There’s something else, though, that makes many scientists leery
about giving advice, and that’s the fear of losing the trust and
respect of others. Many of us have an almost pathological paranoia
of being caught out in an apparent lie if we make definitive
statements in public, and for good reason; there are few problems
in today’s society that have cut-and-dried solutions, and to claim
that there are smacks of charlatanism. More than this, though,
there’s a sense within the culture of science that making definitive
statements in public is more about personal ego than professional
responsibility.
The unwritten rule here sometimes seems to be that scientists
should stick to what they’re good at—asking interesting questions
and discovering interesting things—and leave it to others to decide
what this means for society more broadly. This is, I admit, something
of an exaggeration. But it does capture a tension that many scientists
grapple with as they try to reconcile their primary mission to
generate new knowledge with their responsibility as a human being
to help people not make a complete and utter mess of their lives.
Not surprisingly, these lines become blurred in areas where research
is driven by social concerns. As a result, there’s a strong tradition
in areas like public health of research being used to advocate for
socially beneficial behaviors and policies. And scientists focusing on
environmental sustainability and climate change are often working
in these areas precisely because they want to make a difference.
To many of them, their research isn’t worth their time if it doesn’t
translate into social impact, and that brings with it a responsibility to
advocate for change.
This is the domain that scientists like Paul Ehrlich and Dan Brown’s
Zobrist inhabit. They are engaged in their science because they
see social and environmental problems that need to be solved.
To many researchers in this position, their science is a means to a
bigger end, rather than being an end in itself. In fact, I suspect that
many researchers in these areas of study would argue that there is
a particular type of immorality associated with scientists who, with
their unique perspective, can see an impending disaster coming, and
decide to do nothing about it.
Here, the ethics of the scientist-advocate begin to make a lot of sense.
Take this thought experiment, for instance. Imagine your research
involves predicting volcanic eruptions (just to make a change from
population explosions and genetically engineered viruses), and
your models strongly indicate that the supervolcano that lies under
Yellowstone National Park could erupt sometime in the next decade.
What should you do? Do nothing, and you potentially condemn
millions of people—maybe more—to famine, poverty, disease, and
death. Instinctively, this feels like the wrong choice, and I suspect
that few scientists would just ignore the issue. But they might say
that, because of the uncertainty in their predictions, more research is
needed, including more research funding, and maybe a conference
or two to develop the science more and argue over the results. In
other words, there’d probably be lots of activity, but very little action
To some scientists, however, this would be ethically untenable, and
an abdication of responsibility. To them, the ethical option would be
to take positive action: Raise awareness, shock people into taking
the risk seriously, hit the headlines, give TED talks, make people sit
up and listen and care, and, above all, motivate policy makers to do
something. Because—so the thinking would go—even if the chances
are only one in a thousand of the eruption happening, it’s better to
raise the alarm and be wrong than stay silent and be right.
This gets to the heart of the ethics of science-activism. It’s what lies
behind the work of Paul Ehrlich and others, and it’s what motivates
movements and organizations that push for social, political, and
environmental change to protect the future of the planet and its
inhabitants. And yet, compelling as the calculus of saved future
lives is, there is a problem. Pushing for action based on available
evidence always comes with consequences. Sadly, there’s no free
pass if you make a mistake, or the odds don’t fall in your favor.
Going back to the Yellowstone example, a major eruption could
well render large swaths of the mid-US uninhabitable. Agriculture
would be hit hard, with air pollution and localized climate shifts
making living conditions precarious for tens of millions of people.
On the other hand, preparing for a potential eruption would most
likely involve displacing millions of people, possibly leading to
coastal overcrowding, loss of jobs, homelessness, and a deep
economic recession. The outcomes of the precautionary actions—
irrespective of whether the predictions came true or not—would be
devastating for some. They may be seen as worth it in the long run
if the eruption takes place. But if it doesn’t, the decision to act will
have caused far more harm than inaction would have. Now imagine
having the burden of this on your shoulders, because you had the
courage of your scientific convictions, even though you were wrong,
and it becomes clearer why it takes a very brave scientist indeed to
act on the potential consequences of their work.
This is, obviously, an extreme and somewhat contrived example.
But it gets to the core of the dilemma surrounding individuals
acting on their science, and it underlies the tremendous social
responsibility that comes with advocating for change based on
scientific convictions. To make matters worse, while we all like to
think we are rational beings—scientists especially—we are not. We
are all at the mercy of our biases and beliefs, and all too often we
interpret our science through the lens of these. And this means that
when an individual, no matter how smart they are, decides that they
have compelling evidence that demands costly and disruptive action,
there’s a reasonably good chance that they’ve missed something.
So how do we get out of this bind, where conscientious scientists
seem to be damned if they do, and damned if they don’t? The
one point of reasonable certainty here is that it’s dangerous for
an individual to push an agenda for change on their own. It’s just
too easy for someone to be blinded by what they believe is right
and true, and as a result miss ways forward that are more socially
responsible. At the same time, it’s irresponsible to suggest that
scientists should be seen and not heard, especially when they have
valuable insights into emerging risks and ways to avoid them.
One way forward lies in collective advocacy. There’s a much greater
chance of a hundred scientists having a clear view of emerging
challenges and options than one lone genius. And in reality, this is
how science gets translated into action on many large issues. But
this does mean that experts need to be prepared to work together,
and to have the humility to accept that their personal ideas may
need to be reined in or modified for the common good. This is
where most experts are at with big issues like climate change and
vaccines. But there are many other socially important issues that
either don’t rise to the level of collective efforts from scientists, or
are still uncertain enough that there is not enough evidence for a
consensus to emerge. So, what are socially responsible scientists to
do in these cases?
In 2007, the scholar Roger Pielke Jr. grappled with some of these
challenges in his book The Honest Broker: Making Sense of Science
in Policy and Politics.[^167] Pielke was especially interested in how
science and scientists inform policy and operate within the political
arena. Because of this, his book takes quite a narrow view of
advocacy, particularly when it comes to exploring how scientists can
use policy advocacy to bring about change. But much of his analysis
is relevant to any scientist trying to thread the needle of remaining
true to their profession while acting as a responsible citizen.
Pielke astutely recognizes that there is no single best way that
scientists can translate what they know and what they believe to be
true into societally relevant action. Instead, taking his own advice,
he suggests that there are a range of possible options here, with
four in particular standing out. These he refers to as four idealized
roles of science in policy and politics, but they apply equally well
to scientists trying to bring about what they consider to be positive
social change. The first of these roles is the Pure Scientist. This
is perhaps closest to the picture of the scientists I drew at the
beginning of this section, the person committed to objectivity and
evidence, who is seriously worried by the idea of making decisions
where there is only uncertainty.
Pielke characterizes the Pure Scientist as someone simply interested
in generating new knowledge and placing it into a common
reservoir of information, which they leave to others to dip into and
use. In other words, they create a wall between themselves and
the society they live in, assuming that someone else may one day
find some use for what they do. If this sounds a little unrealistic,
it probably is. Even Pielke acknowledges that such scientists are
probably found more frequently in myth than in reality. Yet this
is a relatively common stereotype of scientists, certainly within
Western culture.
Pielke’s next category is the Science Arbiter. This, I suspect,
is where many scientists are the most comfortable. In Pielke’s
framework, Science Arbiters recognize that effective and socially
relevant decisions are made on good evidence and clear information
about the pros and cons of different options. Rather than having
an opinion on what is the right or the wrong decision, Science
Arbiters help ensure people have access to the science and
evidence they need to make the best possible decisions. There is
a twist here, though. Pielke also argues that, because people who
feel comfortable in this role have a deep belief in the scientific
process, they tend to focus on issues that they believe can be
resolved through science, while staying away from those that they
believe cannot.
Then there are scientists—for instance, those working in areas
driven by real-world challenges like health and sustainability—who
feel they cannot morally justify providing what seem to them to be
scientifically sound but socially hollow options to decision makers.
These, in Pielke’s terminology, are the Issue Advocates. They are
scientists on a mission to change the world, to fix what they see as
(mainly) social problems, and to use their science to the best of their
ability to do this. These are people who use science as a means to
an end, and are driven by their own beliefs and convictions. Zobrist
would be considered by Pielke to be an Issue Advocate, as would, I
suspect, Paul Ehrlich.
And finally, there is the Honest Broker. This, in Pielke’s language, is
the person who actively engages with decision-makers to help them
see how science and evidence support (or don’t) the various options
that are open to them. This is the scientist who believes, more than
anything, in helping people make the best decision they can based
on the evidence, but who understands that, ultimately, they don’t
have the right to dictate which decision is made.
Pielke tries not to stand in judgment of the four ways he describes
scientists engaging with politics and policy. But it’s clear from his
writing that he’s a fan of the honest broker. And, to be honest, so
am I. This is the role I try to carve out for myself in my public-facing
work, trying not to judge others or advocate for a specific course
of action, but to help people make the best-informed decisions for
themselves and their communities, based on available evidence
and insights.
This is an approach that, to me, avoids mistaking personal values
for the “right” values, and respects deeply held beliefs and values
in others, even where you may disagree with them. It’s a path
toward empowering others while trying not to let your ego get in
the way. And with most of the issues I grapple with in my work, I’m
comfortable with it, because in most cases there are not bright-line
right or wrong answers.
This Honest Broker role extends to any situation where someone
with useful knowledge and insights is prepared to engage with
people who might benefit from them. Of course, sometimes people
will make decisions that lead to harm anyway. But how much more
tragic if these decisions are made simply because they were never
aware of the alternatives or the consequences. Yet, I’ll be the first
to admit that this role, while being rooted deeply in values that I
consider important, has its problems. And nowhere are they more
apparent than when issues of such moral peril arise that not to
advocate for a certain stance, or a particular way forward, ends up
becoming tacit support for not taking action.
To many, inaction on climate change and the use and proliferation
of nuclear weapons falls into this category, as does the rejection of
vaccines. These are issues where indecision or lack of advocacy has
a high chance of adversely impacting millions of people. In cases
like these, there is increasing pressure to shift from being an Honest
Broker to an Issue Advocate. And yet, because of the dangers of
values and belief-driven short-sightedness, even in these cases, it’s
hard to justify one person being the sole arbiter of truth. Rather,
as Pielke argues, this is where we need institutions and
socially-sanctioned organizations to act as the instruments of advocacy.
Pielke mentions groups like the National Academy of Sciences, and
by inference, similar organizations around the world. But I suspect
others would include advocacy groups here as well that are focused
on specific issues, yet recognize the importance of science in
advocating for action.
This is, of course, another sticky point, because as soon as an issue
becomes a focus of attention, the battles begin for whose “science”
is the most legitimate. As someone with leanings toward being an
Honest Broker, I would suggest that, where there is uncertainty
in the science (which is pretty much always—that’s the nature of
science), the weight of scientific evidence becomes critical. There are
always going to be multiple ways that science can be interpreted,
but some of these will most likely be more strongly supported by
the evidence than others. And here, nothing good ever comes from
simply selecting the science that supports your issue and rejecting
the science that doesn’t. This is a path to self-delusion, because,
at the end of the day, wishing something is true simply because it
supports what you believe doesn’t make it so.
But then, what do you do if the evidence seems to point toward
a looming catastrophe, and no one’s listening? This is where
charismatic voices like Paul Ehrlich’s arise. And it’s where, as a
society, we need to decide how to respond to what they preach.
In the case of Inferno, overpopulation is perceived as a looming
catastrophe that will result in misery and death for hundreds of
millions of people, unless radical action is taken. Zobrist sees this
and believes he has a solution. But, having been effectively outcast
by the scientific community for his radical ideas, he resorts to
drastic measures.
In the movie, Zobrist’s plan to cull half of the world’s population
through his genetically engineered virus is, of course, abhorrent.
This is what provides the dramatic tension that keeps us glued to
the screen, fueled by our moral outrage. But there’s an interesting
twist here, and it comes not from the movie, but from the book that
the film is based on.
Dan Brown’s book Inferno, like the movie, follows a crazy
countdown as Robert Langdon struggles to unravel the clues left by
Zobrist to the location of the virus. As in the movie, Zobrist believes
enough in the legitimacy of his actions that he’s willing to die rather
than give up his secrets. But then, as the location of the virus is
discovered, the book and the movie diverge quite dramatically.
In the book, Langdon and the WHO arrive too late. The virus has
been released, and has been infecting people for some time. But
surprisingly, no one is dying. It turns out that book-Zobrist didn’t
create a killer virus. Instead, he created a virus that rendered every
third person it infected sterile. What’s more, he ensured that this
“every third person” trait was heritable, meaning that, in every
subsequent generation, one in three people would also be sterile.
In the book, no one died as a result of Zobrist’s genetically modified
virus. Rather, he set in motion a chain of events that would
eventually lead to the Earth’s human population being reduced
to a manageable size. Instead of being the evil scientist intent on
murdering people, he emerges as a lone-genius savior of the future
of humankind.
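The demographic arithmetic behind book-Zobrist’s scheme is worth making explicit. A deliberately crude sketch, assuming that fertile individuals reproduce at exactly replacement level and ignoring age structure, mating patterns, and the mechanics of how such a trait would actually be inherited, shows why heritable one-in-three sterility produces a steady geometric decline:

```python
# Crude model: one in three people in every generation is sterile, and
# the fertile two-thirds reproduce at exactly replacement level, so each
# generation is roughly two-thirds the size of the one before it.
# This ignores age structure, mating patterns, and real inheritance.
population = 7_000_000_000  # rough world population around the book's release

for generation in range(1, 6):
    # only the fertile two-thirds replace themselves
    population = int(population * 2 / 3)
    print(f"Generation {generation}: ~{population:,}")
```

Five generations, something on the order of a century and a quarter, takes this toy model from seven billion people to under a billion, without a single death. That is precisely what makes book-Zobrist’s “solution” so seductively logical, and, as discussed below, so ethically fraught.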
This outcome intrigues me, as it supports the idea of the lone
visionary scientist as someone who can save the world. And it
suggests that they could probably do it better than a committee of
scientists, because they have a clarity of vision and purpose that a
large and unwieldy group would lack.
I’m pretty sure that the book version of Zobrist’s plan would have
had a profound and ultimately positive impact on the Earth’s human
population. It may also have led to an improved quality of life for
many people, although, humans being humans, there’s also the
chance of self-interest and ignorance putting paid to this possibility.
Yet despite its superficial elegance, something worries me about
the idea of imposing sterility on a third of the world’s population
in the name of social good, and this is the lack of choice that
Zobrist’s victims had. For sure, he “saved” society in the book. But
in doing so, did he end up betraying the individuals that make up
that society?
This is a particularly knotty and ultimately unresolvable moral
question, as it comes down to weighing the good of the many
against the good of the few. The book version of Zobrist violates
basic human rights by dictating the fate of people infected by his
virus. And I doubt that this would have been a bloodless violation;
while indiscriminate sterilization may seem a small price to pay for
averting world hunger, try telling that to someone desperate for
children who has been robbed of the opportunity, or someone who
depends on growing a family to sustain their livelihood.
We’re also still left with the problem that, no matter how much we
delude ourselves, we cannot predict the future. Which means that,
compelling as book-Zobrist’s case was, he had no way of knowing
whether he needlessly condemned a third of the world’s population
to sterility. This was a gamble he was willing to take. But what gave
him the right to take this gamble in the first place? Not the people
whose futures he was playing with, that’s for sure. And this is
ultimately where the challenge lies when it comes to lone
scientist-advocates and genius-activists. No matter how compelling their
vision of the future, or how persuasive their solutions to making it
better, where do they get the right to act unilaterally on issues that
ultimately impact us all?
Some, I suspect, would argue that time and necessity are on their
side. I would counter that these are not excuses for preventing
people who are likely to be affected by major decisions from having
a say in their collective future. This, though, means that we need
better ways of making collective decisions as a society (as was seen
in chapter ten and The Man in the White Suit), especially where
technological innovation is both pushing us toward potentially
catastrophic futures and yet is potentially part of the solution to
avoiding such futures. And we need to get better at making such
collective decisions fast, because if there’s one thing that these lone
scientist-advocates have right in many cases, it’s that time is short!
And nowhere is this more apparent than with an issue that’s tightly
coupled to a burgeoning human population: climate change.
[^156]: Bernard Dixon (1971) “In Praise of Prophets.” New Scientist, 16 September 1971, page 606.
[^157]: Sander Herfst and colleagues (2012) “Airborne Transmission of Influenza A/H5N1 Virus Between Ferrets” Science, 336 (6088) pp 1534-1541 http://doi.org/10.1126/science.1213362
[^158]: Masaki Imai and colleagues (2012) “Experimental adaptation of an influenza H5 HA confers respiratory droplet transmission to a reassortant H5 HA/H1N1 virus in ferrets” Nature 486, pp 420–428 http://doi.org/10.1038/nature10831
[^159]: Jeffery K. Taubenberger and David M. Morens (2006) “1918 Influenza: the Mother of All Pandemics.” Emerging Infectious Diseases volume 12, number 1, pages 15-22 https://doi.org/10.3201/eid1201.050979
[^160]: Jocelyn Kaiser (2014) “Lab incidents lead to safety crackdown at CDC.” Published in Science Magazine, July 11, 2014. http://www.sciencemag.org/news/2014/07/lab-incidents-lead-safety-crackdown-cdc
[^161]: Ed Yong (2012) “The risks and benefits of publishing mutant flu studies.” Nature News, March 2, 2012 http://doi.org/10.1038/nature.2012.10138
[^162]: Cambridge Working Group Consensus Statement on the Creation of Potential Pandemic Pathogens (PPPs). http://www.cambridgeworkinggroup.org/
[^163]: Press Statement on the NSABB Review of H5N1 Research, December 20, 2011. https://web.archive.org/web/20160407031930/https://www.nih.gov/news-events/news-releases/press-statement-nsabb-review-h5n1-research
[^164]: Heidi Ledford (2012) “Call to censor flu studies draws fire.” Published in Nature News January 3, 2012. http://doi.org/10.1038/481009a
[^165]: March 29-30, 2012 Meeting of the National Science Advisory Board for Biosecurity to Review Revised Manuscripts on Transmissibility of A/H5N1 Influenza Virus. Statement of the NSABB: https://web.archive.org/web/20190214205704/http://www.virology.ws/NSABB_statement_march_2012.pdf
[^166]: Ehrlich, P. (1968). “The Population Bomb.” Sierra Club/Ballantine Books.
[^167]: Roger A. Pielke Jr. (2007). “The Honest Broker: Making Sense of Science in Policy and Politics” Published by Cambridge University Press.