Chapter 9: Transcendence — Welcome to the Singularity

From Films from the Future: The Technology and Morality of Sci-Fi Movies by Andrew Maynard


“You know what the computer did when he first turned it on? It screamed.”

—Bree Evans

Visions of the Future

In 2005, the celebrated futurist Ray Kurzweil made a bold prediction: In 2045, machines will be so smart that they’ll be capable of reinventing ever-more-powerful versions of themselves, resulting in a runaway acceleration in machine intelligence that far outstrips what humans are capable of.[^120] Kurzweil called this the “singularity,” a profound, disruptive, and rapid technological transformation of the world we live in, marking the transition between a human-dominated civilization and one dominated by smart machines.

To Kurzweil, artificial intelligence like that explored in chapter eight and the movie Ex Machina is simply a stepping stone to the next phase of human evolution. In his 2005 book The Singularity is Near, he envisaged a future where deep convergence between different areas of innovation begins to massively accelerate our technological capabilities. His projections are based in part on an exponential growth in technological progress that appears to be happening across the board, such as in the plummeting cost and growing speed of sequencing DNA, the continuing growth in computing power, and massive increases in data storage density and the resolution of non-invasive brain scans. They’re also based on the assumption that these trends will not only continue, but accelerate. The result, he claims, will be a transformative change in not only what we can do with technology, but how increasingly advanced technologies become deeply integrated into the future of life as we know it.[^121]

This, to Kurzweil, is the singularity. It’s a bright point in the not-too-distant future, beyond which we cannot predict the outcomes of our technological inventiveness, because they are so far beyond our current understanding. And it’s the imagined events leading up to and beyond such a technological transition point that the movie Transcendence draws on.

I must confess that I’m skeptical of such a technological tipping point occurring in our near future. There’s enough hand-waving and speculation here to make me deeply suspicious of predictions of the pending singularity. What I do buy into, though, is the idea of rapidly developing, converging, and intertwining technologies leading to a technologically-driven future that is increasingly hard to predict and control. And this makes Transcendence, hyped-up Hollywood techno-fantasy aside, a worthwhile starting point for imagining what could happen as we begin to push the boundaries of the technologically possible beyond our comprehension.

Transcendence revolves around Will Caster (played by Johnny Depp), a visionary artificial-intelligence scientist at the University of California, Berkeley, and his equally smart wife, Evelyn (Rebecca Hall). The movie starts with Will presenting his work to a rapt audience. With most of the room hanging on his every word, he weaves a seductive narrative around the promise of AI solving the world’s most pressing challenges.

Will’s lecture is one of unbounded optimism in the ingenuity of humans and the power of AI. Yet, at the end of his presentation, one member of the audience aggressively accuses him of trying to create God. Will, it seems, is treading on sacred ground, and some people are getting worried that he’s going too far. We quickly learn that Will’s questioner is a member of an anti-technology activist group calling itself Revolutionary Independence From Technology, or RIFT, and his presence in the lecture is part of a coordinated attack on AI researchers. As Will leaves the lecture, he’s shot and wounded by

this techno-activist. At the same time, a bomb goes off elsewhere, in a lab where experiments are being conducted into uploading the brain-states of monkeys into computers. Will survives the attack. But the bullet that hits him is laced with radioactive polonium, leading to irreversible and fatal poisoning.

In a mad dash to transcend his pending death, Will, Evelyn, and their colleague and friend Max Waters (Paul Bettany) set up a secret research lab. Here, they attempt to upload Will’s neural pathways into a powerful AI-based supercomputer before his body gives way and dies. As Will passes away, it looks like they’ve failed, until the computer containing his mind-state begins to communicate.

It turns out that some part of Will has survived the transition, and the resulting cyber-Will quickly begins to reconfigure the code and algorithms that now define his environment. But members of RIFT, worried about the consequences of what Will is doing, track down the secret lab and plan a raid to put an end to what’s going on. Even as they descend on the lab, though, Evelyn connects cyber-Will to the web in an attempt to escape the activists, and he uploads himself to the internet.

In the days and weeks that follow, cyber-Will and Evelyn establish a powerful computing facility in the remote town of Brightwood. This is financed using funds that cyber-Will, flexing his new cyber-muscles, siphons off from the stock market. Armed with near-limitless resources and an exponentially growing intelligence, cyber-Will begins to make rapid and profound technological breakthroughs, including harnessing a Hollywood version of nanotechnology to create self-replicating “nanobots” that use the materials around them to manufacture anything they are instructed to, atom by atom.

In the meantime, members of RIFT kidnap Max and try to turn him in their efforts to stop cyber-Will. Max, it turns out, previously wrote a paper on the dangers of AI which has become something of a guiding document for the techno-activists. Max initially resists RIFT’s efforts, but he gradually begins to see that cyber-Will presents a threat that has to be stopped. At the same time, another brilliant AI scientist and former colleague of Will’s, Joseph Taggart (Morgan Freeman), has teamed up with FBI Agent Buchanan (Cillian Murphy) to track down cyber-Will and Evelyn. As cyber-Will’s powers grow, Buchanan and Taggart join forces with Max and RIFT’s leader Bree (Kate Mara) to take cyber-Will down.

This loose coalition of allies soon realizes there is an increased urgency to its mission. Using his growing intelligence, cyber-Will has cracked not only how to create nanobots, but how to use them to precisely reconstruct damaged tissues and cells, and to “upgrade” living people. In a scene with rather God-like overtones, we see a local resident who’s been blind from birth having their optic nerve cells repaired, and being given the gift of sight.[^122] Cyber-Will starts to cure and upgrade the local townspeople, but it turns out that his altruistic “fix-it” health service also allows him to take control of those he’s altered.

As cyber-Will extends his control over the local population, Max and Taggart work out that they can bypass his defenses if he can be persuaded to upgrade and assimilate someone carrying a targeted cyber-virus. But there’s a catch. Because cyber-Will is now distributed through the internet, taking him down will also take down every web-enabled system around the world. Anything that depends on the internet—finance, power, food distribution, healthcare, and many other essential systems—would be disabled. As a result, the anti-Will alliance faces a tough tradeoff: Allow cyber-Will to grow in power and potentially take over the world, or shut him down, and lose virtually every aspect of modern life that people rely on.

The team decides to go for the nuclear option and shut cyber-Will down. But they still need to work out how to deliver the virus.

Up to this point, Evelyn has been a willing partner in cyber-Will’s growing empire. She’s not sure whether this is the Will she previously knew, or some new entity masquerading as him, but she sticks with him nevertheless. Yet, as cyber-Will’s power grows, Max convinces Evelyn that this is not the Will she married. And the crux of his argument is that, unlike cyber-Will, human-Will never wanted to change the world. This was Evelyn’s vision, not his.

Evelyn becomes convinced that cyber-Will needs to be stopped, and agrees to become a carrier for the virus. To succeed, though, she needs to persuade Will to assimilate her and make her a part of the cyber world he’s creating.

Not surprisingly, cyber-Will knows what’s going on. But there’s a twist. Everything he’s done has been motivated by his love for Evelyn. She wanted to change the world, and through his newfound powers, cyber-Will found a way to do this for her. Using his nanobots, he discovered ways to reverse the ravages of humans on the environment, and take the planet back to a more pristine state.

Despite Will’s love for Evelyn, he’s not going to let himself be tricked into being infected. Yet, as Evelyn approaches him, she’s fatally wounded in an attack on the cyber facility, leaving cyber-Will with an impossible choice: save Evelyn, but in doing so become infected, or let her die, and lose the one thing he cares about the most.

Cyber-Will chooses love and self-sacrifice over power, and as the virus enters him, his systems begin to shut down. As it takes hold, internet-connected systems around the world begin to fail.

At least, this is how it looks. What cyber-Will’s adversaries don’t know is that he has transcended the rather clunky world of the internet, and he’s taken a cyber-form of Evelyn with him. As he assimilates her, he uploads them both into an invisible network of cyber-connected nanobots. Together, they step beyond their biological and evolutionary limits into a brave new future.

On one level, Transcendence takes us deep into technological fantasyland. Yet the movie’s themes of technological convergence, radical disruption, and anti-tech activism are all highly relevant to the future we’re building and how it’s impacted by the technologies we create.

Technological Convergence

According to World Economic Forum founder Klaus Schwab, we are well into a “Fourth Industrial Revolution.”[^123] The first Industrial Revolution, according to Schwab, was spurred by the use of water power and steam to mechanize production. The second took off with the widespread use of electricity. And the third was ushered in with the digital revolution of the mid- to late twentieth century. Now, argues Schwab, digital, biological, and physical technologies are beginning to fuse together, to transform how and what we manufacture and how we live our lives. And while this may sound a little Hollywood-esque, it’s worth remembering that the World Economic Forum is a highly respected global organization that works closely with many of the world’s top movers and shakers.

At the heart of this new Industrial Revolution is an increasing convergence between technological capabilities that is blurring the lines between biology, digital systems, and the physical and mechanical world. Of course, technological convergence is nothing new. Most of the technologies we rely on every day depend to some degree on a fusion between different capabilities. Yet, over the past two decades, there’s been a rapid acceleration in what is possible that’s been driven by a powerful new wave of convergence.

Early indications of this new wave emerged in the 1970s as the fields of computing and robotics began to intertwine. This was a no-brainer of a convergence, as it became increasingly easy to control mechanical systems using computer “brains.” But it was a growing trend in convergence between material science, genetics, and neuroscience, and their confluence with cyber-systems and robotics, that really began to accelerate the pace of change.

Some of this was captured in a 2003 report on converging technologies co-edited by Mike Roco and Bill Bainbridge at the US National Science Foundation.[^124] Working with leading scientists and engineers, they explored how a number of trends were leading to a “confluence of technologies that now offers the promise of improving human lives in many ways, and the realignment of traditional disciplinary boundaries that will be needed to realize this potential.” And at this confluence they saw four trends as dominating the field: nanotechnology, biotechnology, information technology, and cognitive technology.

Roco, Bainbridge, and others argued that it’s at the intersections between technologies that novel and disruptive things begin to happen, especially when convergence occurs between technologies that allow us to control the physical world (nanotechnology), biological systems (biotechnology), the mind (cognitive technologies), and cyberspace (specifically, information technologies). And they had a point. Where these four technological domains come together, really interesting things start to happen. For instance, scientists and technologists can begin to use nanotechnology to build more

powerful computers, or to read DNA sequences faster, or build better machine-brain interfaces. Information technology can be used to design new materials, or to engineer novel genetic sequences and interpret brain signals. Biotechnology can be, and is being, used to make new materials, to translate digital code into genetic code, and to precisely control neurons. And neurotechnology is inspiring a whole new generation of computer processors.

These confluences just begin to hint at the potential embedded within the current wave of technological convergence. What Roco and Bainbridge revealed is that we’re facing a step-change in how we use science and technology to alter the world around us. But their focus on nano, bio, info, and cognitive technologies only scratched the surface of the transformative changes that are now beginning to emerge.

To understand why we’re at such a transformative point in our technological history, it’s worth pausing to look at how our technological skills are growing in how we work with the most fundamental and basic building blocks of the things we make and use; starting with digital systems, and extending out to the materials and products we use and the biological systems we work with.

The advent of digital technologies and modern computers brought about a major change in what we can achieve, and it’s one whose significance we’re only just beginning to fully appreciate. Of course, it’s easy to chart the more obvious impacts of the digital revolution on our lives, including the widespread use of smartphones and social media. But there’s an underlying trend that far exceeds many of the more obvious benefits of digital devices and systems, and this, as we saw in chapter seven and Ghost in the Shell, is the creation of a completely new dimension that we are now operating in: cyberspace.

Cyberspace is a domain where, through the code we write, we have control over the most fundamental rules and instructions that govern it. We may not always be able to determine or understand the full implications of what we do, but we have the power to write and edit the code that ultimately defines everything that happens here.

The code that most cyber-systems currently rely on is made up of the basic building blocks of digital computing, the ones and zeroes of binary, and the bits and bytes that they’re a part of. Working

with these provides startling insight into what we might achieve if we could, in a similar way, write and edit the code that underlies the physical world we inhabit. And this is precisely what we are beginning to do with biological systems, although, as we’re discovering, coding biology using DNA is fiendishly complicated.

Unlike the world of cyber, we had no say in designing the underlying code of biology, and as a result we’re having to work hard to understand it. Here, rather than the ones and zeroes of digital code, the fundamental building blocks are the four bases that make up DNA: adenine, guanine, cytosine, and thymine. This language of DNA is deeply complex, and we’re still a long way from mastering it. But the more we learn, the closer we’re getting to being able to design and engineer biological systems with the same degree of finesse we can achieve in cyberspace.
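The “base code” analogy can be made concrete with a few lines of software. The sketch below, a minimal illustration using an invented sequence and two standard textbook operations rather than any real genomics library, treats DNA as a four-letter string that code can store and transform just as it handles binary data:

```python
# Illustrative only: DNA's four bases (A, T, G, C) treated as a "base code"
# that software can manipulate. The sequence below is invented for the example.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(sequence: str) -> str:
    """Return the reverse complement, i.e. the opposite DNA strand read backwards."""
    return "".join(COMPLEMENT[base] for base in reversed(sequence))

def gc_content(sequence: str) -> float:
    """Fraction of G and C bases, a simple property checked in designed sequences."""
    return (sequence.count("G") + sequence.count("C")) / len(sequence)

seq = "ATGGCGTACGTT"
print(reverse_complement(seq))      # AACGTACGCCAT
print(round(gc_content(seq), 2))    # 0.5
```

Once a sequence is represented this way, it can be searched, copied, and rewritten with the same tools used for any other digital data, which is the point of the analogy.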

Thinking about coding biology in the same way we code apps and other cyber-systems is somewhat intuitive. There is, however, a third domain where we are effectively learning to rewrite the “base code,” and this is the physical world of materials and machines. Here, the equivalent fundamental building blocks—the base code—are the atoms and molecules that everything is made of.

Just as we’ve experienced a revolution in our understanding of biology over the past century, we’ve also seen a parallel revolution in understanding how the arrangement and types of atoms and molecules in materials determine their behavior. These are the physical world’s equivalent of the “bits” of cyber code and the “bases” of biological code, and, with our emerging mastery of this base code of atoms and molecules, we’re transforming how we can design and engineer the material world around us. Naturally, as with DNA, we’re still constrained by the laws of physics as we work with atoms and molecules. We cannot create materials that defy the laws of nature, for instance, or that take on magical properties. But we can start to design and create materials, and even machines, that go far beyond what has previously occurred through natural processes alone.

Here, our growing mastery of the base code in each of these three domains is transforming how we design and mold the world around us. And it’s this that is making the current technological revolution look and feel very different from anything that’s come before it. But we’re also learning how to cross-code between these base codes, to mix and match what we do with bits, bases, and atoms to generate new technological capabilities. And it’s this convergence that is radically transforming our emerging technological capabilities.

To get a sense of just how powerful this idea of “cross-coding” is, it’s worth taking a look at what is often referred to as “synthetic biology”—a technology trend we touched on briefly in chapter two and Jurassic Park. In 2005, the scientist and engineer Drew Endy posed a seemingly simple question: Why can’t we design and engineer biological systems using DNA coding in the same way we design and engineer electronic devices?[^125] His thinking was that, complex as biology is, if we could break it down into more manageable components and modules, like electrical, computer, and mechanical engineers do with their systems, we could transform how “biological” products are designed and engineered.

Endy wasn’t the first to coin the term synthetic biology.[^126] But he was one of the first to introduce ideas to biological design like standardized parts, modularization, and “black-boxing” (essentially designing biological modules where a designer doesn’t need to know how a module works, just what it does). And in doing so, he helped establish an ongoing trend in applying non-biological thinking to biology.

This convergence between biology and engineering is already leading to a growing library of “bio bricks,” or standardized biological components that, just like Lego bricks or electronic components, can be used to build increasingly complex biological “circuits” and devices. The power of bio bricks is that engineers can systematically build biological systems that are designed to carry out specific functions without necessarily understanding the intricacies of the underlying biology. It’s a bit like being able to create the Millennium Falcon out of Legos without needing to understand the chemistry behind the individual bricks, or successfully constructing your own computer with no knowledge of the underlying solid-state physics. In the same way, scientists and engineers are using bio bricks to build organisms that are capable of producing powerful medicines, or signaling the presence of toxins, or even transforming pollutants into useful substances.

Perhaps not surprisingly given its audacity, Endy’s vision of synthetic biology isn’t universally accepted, and there are many scientists who still feel that biology is simply too complex to be treated like Legos or electronic components. Despite this, the ideas of Drew Endy and others are already transforming how biological systems and organisms are being designed. To get a flavor of this, you need look no further than the annual International Genetically Engineered Machine competition, or iGEM for short.[^127]

Every year, teams from around the world compete in iGEM, many of them made up of undergraduates and high school students with very diverse backgrounds and interests. Many of these teams produce genetically modified organisms that are designed to behave in specific ways, all using biological circuits built with bio bricks. In 2016, for instance, winning teams modified E. coli bacteria to detect toxins in Chinese medicine, engineered a bacterium to selectively kill a parasitic mite that kills bees, and altered a bacterium to indicate the freshness of fruit by changing color. These, and many of the other competition entries, provide sometimes-startling insights into what can be achieved when innovative teams of people start treating biology as just another branch of engineering. But they also reflect how cross-coding between biology and cyberspace is changing our very expectations of what’s possible when working with biology.
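The engineering logic behind bio bricks, standardized and “black-boxed” parts composed into circuits, has a familiar software analogue. Everything below, including the part names, is a hypothetical illustration of the design pattern, not a real synthetic-biology toolkit:

```python
# A loose software analogy for "black-boxed" biological parts: a designer
# composes parts by what they do, not how they work internally.
# All part names here are invented, for illustration only.

from dataclasses import dataclass

@dataclass
class Part:
    name: str       # stands in for a promoter, gate, or reporter in a parts registry
    function: str   # what the part does, which is all the circuit designer needs to know

def build_circuit(*parts: Part) -> str:
    """Assemble parts in order, like snapping standardized bricks together."""
    return " -> ".join(f"{p.name} ({p.function})" for p in parts)

sensor = Part("toxin_sensor", "detects a target toxin")
gate = Part("threshold_gate", "fires above a set concentration")
reporter = Part("color_reporter", "changes the cell's color")

print(build_circuit(sensor, gate, reporter))
```

The design choice being mimicked is interchangeability: because each part exposes only its function, a designer can swap the reporter for a different output without touching, or even understanding, the sensor.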

To better understand this, it’s necessary to go back to the idea of DNA being part of the base code of all living things. As a species, we’ve been coding in this base code for thousands of years, albeit crudely, through selective breeding. More recently, we’ve learned how to alter this code through brute force, by physically bombarding cells with edited strands of DNA, or designing viruses that can deliver a payload of modified genetic material. But, until just a few years ago, this biological coding was largely limited to working directly with physical materials. Yet, as DNA sequencing has become dramatically cheaper and easier, all of this has changed. Scientists can now quickly and (relatively) cheaply read the DNA base code of complete organisms and upload it to cyberspace. Once there, they can start to redesign and experiment with this code, manipulating it in much the same way as we’ve learned to work with digitized photos and video.
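The read-it-into-cyberspace, redesign-it, write-it-back cycle can be sketched as simple string manipulation. This is a deliberately crude picture, with an invented sequence and an invented edit:

```python
# Deliberately simplified: once a DNA sequence is digitized, redesigning it
# is find-and-replace on a string. Sequence and motif below are invented.

def edit_sequence(sequence: str, target: str, replacement: str) -> str:
    """Swap one motif for another, like find-and-replace in a text editor."""
    if target not in sequence:
        raise ValueError("target motif not found")
    return sequence.replace(target, replacement, 1)

digitized = "ATGGCGTACGTTAGC"                      # "read" into cyberspace by a sequencer
edited = edit_sequence(digitized, "TACG", "TAAG")  # redesigned in software
print(edited)  # ATGGCGTAAGTTAGC
```

In practice the edited code would then be handed to a DNA synthesis service, which is the “write it back into the real world” half of the cycle described below.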

This is a big deal, as it allows scientists and engineers to experiment with and redesign DNA-based code in ways that were impossible until quite recently. As well as tweaking or redesigning existing organisms, this is allowing them to discover how to make DNA

behave in ways that have never previously occurred in nature. It’s even opening the door to training AI-based systems how to code using DNA. But this is only half of the story. The other half comes with the increasing ability of scientists to not only read DNA sequences into cyberspace, but to write modified genetic code back into the real world.

In the past few years, it’s become increasingly easy to synthesize sequences of DNA from computer-based code. You can even mail-order vials of DNA that have been constructed to your precise specifications, and have them delivered to your home or lab in a matter of days. In other words, scientists, engineers, and, in fact, pretty much anyone who puts their mind to it can upload genetic code into cyberspace, digitally alter it, then download it back into the physical world, and into real, living organisms. This is all possible because of our growing ability to cross-code between biology and cyberspace.

It doesn’t take much imagination to see what a step-change in our technological capabilities cross-coding like this may bring about. And it’s not confined to biology and computers; cross-coding is also happening between biology and materials, between materials and cyberspace, and at the nexus of all three domains. This is powerful and transformative science and technology. Yet with this emerging mastery of the world we live in, there’s perhaps a greater likelihood than ever of us making serious and irreversible mistakes. And this is where technological convergence comes hand in hand with an urgent need to understand and navigate the potential impacts of our newfound capabilities, before it’s too late.

Enter the Neo-Luddites

On January 15, 1813, fourteen men were hanged outside York Castle in England for crimes associated with technological activism. It was the largest number of people ever hanged in a single day at the castle.

These hangings were a decisive move against an uprising protesting the impacts of increased mechanization, one that became known as the Luddite movement after its alleged leader, Ned Ludd.

It’s still unclear whether Ned Ludd was a real person, or a conveniently manufactured figurehead. Either way, the Luddite movement of early-nineteenth-century England was real, and it was bloody. England in the late 1700s and early 1800s was undergoing

a scientific and technological transformation. At the tail end of the Age of Enlightenment, entrepreneurs were beginning to combine technologies in powerful new ways to transform how energy was harnessed, how new materials were made, how products were manufactured, and how goods were transported. Much like today, it was a time of dramatic technological and social change. The ability to use new knowledge and to exploit materials in new ways was increasing at breakneck speed. And those surfing the wave found themselves on an exhilarating ride into the future.

But there were casualties, not least among those who began to see their skills superseded and their livelihoods trashed in the name of progress.

In the 1800s, one of the more prominent industries in the English Midlands was the use of knitting frames to make garments and cloth out of wool and cotton. Operating these manual machines was a sustaining business for tens of thousands of people. It didn’t make them rich, but it was a living. By some accounts, there were around 30,000 knitting frames in England at the turn of the century—25,000 of them in the Midlands—serving the cloth and clothing needs of the country.

As the first Industrial Revolution gathered steam, though, mass production began to push out these manual-labor-intensive professions, and knitting frames were increasingly displaced by steam-powered industrial mills. Faced with poverty, and in a fight for their livelihoods, a growing number of workers turned to direct action and began smashing the machines that were replacing them. Historical records suggest they weren’t opposed to the technology so much as to how it was being used to profit others at their expense.

Machine smashing was first recorded in 1811, and it escalated rapidly as the threat of industrialization loomed. In response, the British government passed the “Destruction of Stocking Frames, etc. Act 1812” (also known as the Frame Breaking Act), which meant that those found guilty of breaking stocking or lace frames could face transportation to remote colonies, or even the death penalty.

Galvanized by the Act, the Luddite movement escalated, culminating in the murder of mill owner William Horsfall in 1812, and the hanging of seventeen Luddites and transportation of seven more. It marked a turning point in the conflict between Luddites and industrialization, and by 1816 the movement had largely

dissipated. Yet the name Luddite lives on as an epithet thrown at people who seemingly stand in the way of technological progress, including those who dare to ask if we are marching blindly into technological risks that, with some forethought, could be avoided.

These, according to the narratives that emerge around technological innovation, are the new Luddites, or “neo-Luddites.” This is usually a term of derision and censorship that has a tendency to be attached to individuals and groups who appear to oppose technological progress. Yet the history of the Luddite movement suggests that the term carries with it a lot more nuance than is sometimes apparent.

Despite the actions and the violence that were associated with their movement (on both sides), the Luddites were not fighting against technology, but against its socially discriminatory and unjust use. These were people who had embraced a previous technology that not only gave them a living, but also provided their peers with an important commodity. They were understandably upset when, in the name of progress, wealthy industrialists started to take away their livelihood to line their own pockets.

Back in 2009, I asked a number of friends and colleagues working in civil-society organizations to contribute to a series of articles for the blog 2020 Science.[^128] I was very familiar with the sometimes critical stances that some of these colleagues took on advances in science and technology, and I wanted to get a better understanding of how they saw the emerging relationship between society and innovation.

One of my contributors was Jim Thomas, from the environmental action group ETC. I’d known Jim for some time, and was familiar with the highly critical position he sometimes took on emerging technologies, and I was intrigued to know more about what drove him and some of his group’s members.

Jim’s piece started out, quite cleverly, I thought, with, “I should admit right now that I’m a big fan of the Luddites.”[^129] He went on to describe a movement that was inspired, not by a distrust of technology, but by a desire to maintain fair working conditions.

Jim’s article provides a nuanced perspective on Luddism that is often lost as accusations of being a Luddite (or neo-Luddite) are thrown around. And it’s one that, I must confess, I have rather a soft spot for. So much so that, when Elon Musk, Bill Gates, and Stephen Hawking were nominated for the annual Luddite award, I countered with an article titled “If Elon Musk is a Luddite, count me in!”[^130]

The Luddites fought hard for their jobs and their way of life. More than this, though, the movement forced a public dialogue around the broader social risks of indiscriminate technological innovation and, in the process, got people thinking about what it meant to be socially responsible as new technologies were developed and used.

Ultimately, the movement failed. As society embraced technological change, the way was paved for major advances in manufacturing capabilities. Yet, as the Luddite movement foreshadowed, there were casualties on the way, often among communities who didn’t have the political or social agency to resist being used and abused. And, as was seen in chapter six and the movie Elysium, we’re still seeing these casualties, as new technologies drive a wedge between those who benefit from them and those who suffer as a consequence of them.

These wedges are often complex. For instance, the gig economy that’s emerging around companies like Uber, Lyft, and Airbnb is enabling people to make more money in new ways, but it’s also leading to discrimination and worker abuse in some cases, as well as elevating the stress of job insecurity. A whole raft of innovations, from advanced manufacturing to artificial intelligence, are threatening to completely redraw the job landscape. These and other advances present real and serious threats to people’s livelihoods. In many cases, they also threaten deeply held beliefs and worldviews, and force people to confront a future where they feel less comfortable and more vulnerable. As a result, there is, in some quarters, a palpable backlash against technological innovation, as people protect what’s important to them. Many of these people would probably not consider themselves Luddites. But I suspect plenty of them would be sympathetic to smashing the machines and the technologies that they feel threaten them.

This anti-technology sentiment seems to be gaining ground in

some areas, and it's easy to see why someone who's unaware of the roots of the Luddite movement might derisively brand people who represent it as neo-Luddites. Yet this is a misplaced branding, as the true legacy of Ned Ludd's movement is not about rejecting technology, but ensuring that new technologies are developed for the benefit of all, not just a privileged few. This is a narrative that Transcendence explores through the tension between Will's accelerating technological control and RIFT's social activism, one that echoes aspects of the Luddite movement. But there are also differences between this tale of technological resistance and the events from two hundred years ago that inspired it, differences that are reminiscent of more recent concerns around direct action, and techno-terrorism in particular.

Techno-Terrorism

Between 1978 and 1995, three people were killed and twenty-three others injured in terrorist attacks by one of the most extreme anti-technology activists of modern times. Ted Kaczynski—also known as the Unabomber[^131]—conducted a reign of terror through targeting academics and airlines with home-made bombs, until his arrest in 1996. His issue? He fervently believed that we've lost our way as a society with our increasing reliance on, and subservience to, technology.

Watch or read enough science fiction, and you'd be forgiven for thinking that techno-terrorism is a major threat in today's society, and that groups like Transcendence's RIFT are an increasingly likely phenomenon. Despite this, though, it's remarkably hard to find evidence of widespread techno-terrorism in real life. Yet, dig deep enough, and small but worrying pockets of violent resistance against technological progress do begin to surface, often closely allied to techno-terrorism's close cousin, eco-terrorism.

In 2002, James F. Jarboe, then Domestic Terrorism Section Chief of the FBI's Counterterrorism Division, testified before a House subcommittee on the emerging threat of eco-terrorism.[^132] In his testimony, he identified the Animal Liberation Front (ALF) and Earth Liberation Front (ELF) as serious terrorist threats, and claimed they were responsible at the time for "more than 600 criminal acts in the United States since 1996, resulting in damages in excess of forty-three million dollars." But no deaths.

Jarboe’s testimony traces the recent history of eco-terrorism back

to the Sea Shepherd Conservation Society, a disaffected faction

of the environmental activist group Greenpeace that formed in

the 1970s. Then, in the 1980s, a new direct-action group, Earth

First, came to prominence, spurred by Rachel Carson’s 1962 book

Silent Spring and a growing disaffection with ineffective protests

against the ravages of industrialization. Earth First were known

for their unpleasant habit of inserting metal or ceramic spikes into

trees scheduled to be cut for lumber, leaving a rather nasty, and

potentially fatal, surprise for those felling or milling them. In the

1990s, members of Earth First formed the group ELF and switched

tactics to destroying property using timed incendiary devices.[^133]

Groups such as ELF and Earth First, together with their underlying

concerns over the potentially harmful impacts of technological

innovation, clearly provide some of the inspiration for RIFT.

Yet, beyond the activities of these two groups, which have been

predominantly aimed at combatting environmental harm rather than

resisting technological change, it’s surprisingly hard to find examples

of substantial and coordinated techno-terrorism. Today’s Luddites, it

seems, are more comfortable breaking metaphorical machines from

the safety of their academic ivory towers than wreaking havoc

in the real world. Yet there are still a small number of individuals

and groups who are motivated to harm others in their fight against

emerging technologies and the risks they believe they represent.

On August 8, 2011, Armando Herrera Corral, a computer scientist

at the Monterrey Institute of Technology and Higher Education in

Mexico City, received an unusual package. Being slightly wary of it,

he asked his colleague Alejandro Aceves López to help him open it.

In opening the package, Aceves set off an enclosed pipe bomb, and

metal shards ejected by the device pierced his chest. He survived,

but had to be rushed to intensive care. Herrera got away with burns

to his legs and two burst eardrums.

The package was from a self-styled techno-terrorist group calling itself Individuals Tending Towards the Wild, or Individuals Tending toward Savagery (ITS), depending on how the Spanish is translated.[^134] ITS had set its sights on combating advances in nanotechnology through direct and violent action, and was responsible for two previous bombing attempts, both in Mexico.[^135]

ITS justified its actions through a series of communiques, the final one being released in March 2014, following an article on the group's activities published by the scholar Chris Toumey.[^136] Reading the communique they released the day after the August 8 bombing, what emerges is a distorted vision of nanotechnology that, to them, justified short-term violence to steer society away from imagined existential risks. At the heart of these concerns was their fear of nanotechnologies creating "nanomachines" that would end up destroying the Earth.

ITS' "nanomachines" are remarkably similar to the nanobots seen in Transcendence. Just to be clear, these do not present a plausible or rational risk, as we'll get to shortly. Yet it's easy to see how these activists twisted together the speculative musings of scientists, along with a fractured understanding of reality, to justify their deeply misguided actions.

In articulating their concerns, ITS drew on a highly influential essay, published in Wired magazine in 2000, by Sun Microsystems cofounder Bill Joy. Joy's article was published under the title "Why the future doesn't need us,"[^137] and in it he explores his worries that the technological capabilities being developed at the time were on the cusp of getting seriously out of hand—including his concerns over a hypothetical "gray goo" of out-of-control nanobots first suggested by futurist and engineer Eric Drexler.

Joy's concerns clearly resonated with ITS, and somehow, in the minds of the activists, these concerns translated into an imperative to carry out direct action against nanotechnologists in an attempt to save future generations. This was somewhat ironic, given Joy's clear abhorrence of violent action against technologists. Yet, despite this, Joy's speculation over the specter of "gray goo" was part of the inspiration behind ITS' actions.

Beyond gray goo, though, there exists another intriguing connection between Joy and ITS. In his essay, Joy cited a passage from Ray Kurzweil's book The Age of Spiritual Machines that troubled him, and it's worth reproducing part of that passage here:

“First let us postulate that the computer scientists succeed in

developing intelligent machines that can do all things better

than human beings can do them. In that case presumably

all work will be done by vast, highly organized systems of

machines and no human effort will be necessary. Either of two

cases might occur. The machines might be permitted to make all

of their own decisions without human oversight, or else human

control over the machines might be retained.

“If the machines are permitted to make all their own decisions,

we can’t make any conjectures as to the results, because it is

impossible to guess how such machines might behave. We

only point out that the fate of the human race would be at the

mercy of the machines. It might be argued that the human race

would never be foolish enough to hand over all the power to

the machines. But we are suggesting neither that the human

race would voluntarily turn power over to the machines nor that

the machines would willfully seize power. What we do suggest

is that the human race might easily permit itself to drift into

a position of such dependence on the machines that it would

have no practical choice but to accept all of the machines’

decisions. As society and the problems that face it become

more and more complex and machines become more and

more intelligent, people will let machines make more of their

decisions for them, simply because machine-made decisions will

bring better results than manmade ones. Eventually a stage may

be reached at which the decisions necessary to keep the system

running will be so complex that human beings will be incapable

of making them intelligently. At that stage the machines will

be in effective control. People won’t be able to just turn the

machines off, because they will be so dependent on them that

turning them off would amount to suicide.”

Kurzweil's passage shifted Joy's focus of concern onto artificial intelligence and intelligent machines. This was something that resonated deeply with him. But, to his consternation, he discovered that this passage was not, in fact, written by Kurzweil, but by the Unabomber, and was merely quoted by Kurzweil.

Joy was conflicted. As he writes, "Kaczynski's actions were murderous and, in my view, criminally insane. …But simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage."

Joy worked through his concerns with reason and humility, carving out a message that innovation can be positively transformative, but only if we handle the power of emerging technologies with great respect and responsibility. Yet ITS took his words out of context, and saw his begrudging respect for Kaczynski's arguments as validation of their own ideas.

The passage above that was cited by Kurzweil, and then by Joy, comes from Kaczynski's thirty-five-thousand-word manifesto,[^138] published in 1995 by the Washington Post and the New York Times. Since its publication, this manifesto has become an intriguing touchstone for action against perceived irresponsible (and permissionless) technology innovation. Some of its messages have resonated deeply with technologists like Kurzweil, Joy, and others, and have led to deep introspection around what socially responsible technology innovation means. Others—notably groups like ITS—have used it to justify more direct action to curb what they see as the spread of a technological blight on humanity. And a surprising number of scholars have tried to tease out socially relevant insights on technology and its place within society from the manifesto.

The result is an essay that some people find easy to read selectively, cherry-picking the passages that confirm their own beliefs and ideas, while conveniently ignoring others. Yet, taken as a whole, Kaczynski's manifesto is a poorly informed rant against what he refers to pejoratively as "leftists," and a naïve justification for reverting to a more primitive society where individuals had what he believed was more agency over how they lived, even if this meant living in poverty and disease.

Fortunately, despite Kaczynski, ITS, and fictitious groups like

RIFT, violent anti-technology activism in the real world continues

to be relatively rare. Yet the underlying concerns and ideologies

are not. Here, Bill Joy’s article in Wired provides a sobering

nexus between the futurist imaginings of Kurzweil and Drexler,

Kaczynski’s anti-technology-motivated murders, and the bombings

of ITS. Each of these is worlds apart from the others in how it responds to new

technologies. But the underlying visions, fears, and motivations are

surprisingly similar.

In today’s world, most activists working toward more measured

and responsible approaches to technology innovation operate

within social norms and through established institutions. Indeed,

there is a large and growing community of scholars, entrepreneurs,

advocates, and even policy makers, who are sufficiently concerned

about the future impacts of technological innovation that they

are actively working within appropriate channels to bring about

change. Included here are cross-cutting initiatives like the Future

of Life Institute, which, as was discussed in chapter eight, worked

with experts from around the world to formulate the 2017 set of

principles for beneficial AI development. There are many other

examples of respected groups—as well as more shadowy and

anarchic ones, like the “hacktivist” organization Anonymous—that

are asking tough questions about the line between what we can

do, and what we should be doing, to ensure new technologies are

developed safely and responsibly. Yet the divide between legitimate

action and illegitimate action is not always easy to discern, especially when the perceived future impacts of powerful technologies include hundreds of millions of people being harmed or killed. At what point do the stakes around powerful technologies become so high that the ends justify violent means?

Here, Transcendence treads an intriguing path, as it leads viewers

on a journey from reacting to RIFT with abhorrence, to begrudging

acceptance. As cyber-Will’s powers grow, we’re sucked into RIFT’s

perspective that the risk to humanity is so great that only violent

and direct action can stop it. And so, Bree and her followers pivot in

the movie from being antagonists to heroes.

This is a seductive narrative. If, by allowing a specific technology

to emerge, we would be condemning millions to die, and many

more to be subjugated, how far would you go to stop it? I suspect

that a surprising number of people would harbor ideas of carrying

out seemingly unethical acts in the short term for the good of future generations (and indeed, this is a topic we'll come back to in chapter eleven and the movie Inferno). But there's a fatal flaw in this way of thinking, and that's the assumption that we can predict with confidence what the future will bring.

Exponential Extrapolation

In 1965, Gordon Moore, one of Intel's founders, observed that the number of transistors being squeezed into integrated circuits was doubling around every two years. He went on to predict—with some accuracy—that this trend would continue for the next decade.

As it turned out, what came to be known as Moore's Law continued way past the 1970s, and is still going strong (although there are indications that it may be beginning to falter). It was an early example of exponential extrapolation being used to predict how the future of a technology would evolve, and it remains one of the most oft-cited cases of exponential growth in technology innovation.

In contrast to linear growth, where outputs and capabilities increase by a constant amount each year, exponential growth leads to them multiplying rapidly. For instance, if a company produced a constant one hundred widgets a year, after five years it would have produced five hundred widgets. But if it increased production exponentially, multiplying its output by a hundred times each year, after five years it would have produced over ten billion widgets. In this way, exponential trends can lead to massive advances over short periods of time. But because they involve such large numbers, predictions of exponential growth are dangerously sensitive to the assumptions that underlie them. Yet they are extremely beguiling when it comes to predicting future technological breakthroughs.

Moore's Law, it has to be said, has weathered the test of time remarkably well, even when data that predates Moore is taken into account. In the supporting material for his book The Singularity is Near, Ray Kurzweil plotted out the calculations per second per $1,000 of computing hardware—a convenient proxy for computing power—extrapolating back to some of the earliest (non-digital) computing engines of the early 1900s.[^139] Between 1900 and 1998, he showed a relatively consistent exponential increase in calculations per second per $1,000, representing a twenty-trillion-times increase in computing power over this period. Based on these data, Kurzweil projected that it will be only a short time before we are able to fully simulate the human brain using computers and create superintelligent computers that will far surpass humans in their capabilities. Yet these predictions are misleading, because they fall into the trap of assuming that past exponential growth predicts similar growth rates in the future.
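The arithmetic behind these comparisons is easy to check. Here is a minimal Python sketch (the hundredfold multiplier and the hundred-widget starting output are the chapter's illustrative numbers; the implied doubling time at the end is my own back-of-envelope addition):

```python
import math

# Linear growth: a constant one hundred widgets per year.
linear_total = sum(100 for year in range(5))
print(linear_total)  # 500 widgets after five years

# Exponential growth: annual output multiplies a hundredfold,
# starting from 100 widgets in year one.
exp_total = sum(100 * 100**year for year in range(5))
print(f"{exp_total:,}")  # 10,101,010,100 widgets: over ten billion

# Kurzweil's twenty-trillion-times increase in calculations per
# second per $1,000 between 1900 and 1998 implies computing power
# doubled roughly every couple of years across that whole century.
doubling_time = 98 / math.log2(20e12)
print(f"{doubling_time:.1f} years")  # 2.2 years
```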

One major issue with extrapolating exponential growth into the

future is that it massively amplifies uncertainties in the data. Because

each small step in the future extrapolation involves incredibly large

numbers, it’s easy to be off by a factor of thousands or millions in

predictions. These may just look like small variations on plots like

those produced by Kurzweil and others, but in real life, they can

mean the difference between something happening in our lifetime

or a thousand years from now.
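To see how quickly those uncertainties compound, consider a toy calculation (all numbers hypothetical): how long a technology takes to improve a trillion-fold under two estimates of its doubling time that differ by just one year.

```python
import math

TARGET = 1e12  # a hypothetical trillion-fold improvement

# Years needed to reach the target under each assumed doubling time.
for doubling_time in (2.0, 3.0):
    years = math.log2(TARGET) * doubling_time
    print(f"doubling every {doubling_time:.0f} years: {years:.0f} years to target")
# doubling every 2 years: 80 years to target
# doubling every 3 years: 120 years to target

# The same one-year disagreement, compounded over a century, leaves
# the two forecasts of capability differing by a factor of ~100,000.
ratio = 2 ** (100 / 2.0) / 2 ** (100 / 3.0)
print(f"{ratio:.0e}")
```

A one-year quibble over the doubling time shifts the predicted arrival of the breakthrough by four decades, and the predicted capability at any fixed date by orders of magnitude.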

There is another, equally important risk in extrapolating exponential

trends, and it’s the harsh reality that exponential relationships never

go on forever. As compelling as they look on a computer screen

or the page of a book, such trends always come to an end at some

point, as some combination of factors interrupts them. If these

factors lie somewhere in the future, it’s incredibly hard to work out

where they will occur, and what their effects will be.

Of course, Moore’s Law seems to defy these limitations. It’s been

going strong for decades, and even though people have been

predicting for years that we’re about to reach its limit, it’s still

holding true. But there is a problem with this perspective. Moore’s

Law isn’t really a law, so much as a guide. Many years ago, the

semiconductor industry got together and decided to develop an

industry roadmap to guide the continuing growth of computing

power. They used Moore’s Law for this roadmap, and committed

themselves to investing in research and development that would

keep progress on track with Moore’s predictions.

What is impressive is that this strategy has worked. Moore’s Law

has become a self-fulfilling prophecy. Yet for the past sixty-plus

years, this progress has relied extensively on the same underlying

transistor technology, with the biggest advances involving making

smaller components and removing heat from them more efficiently.

Unfortunately, you can only make transistors so small before you hit

fundamental physical limits.

Because of this, Moore's Law is beginning to run into difficulties. What we don't know is whether an alternative technology will emerge that keeps the current trend in increasing computing power going. But, at the moment, it looks like we may be about to take a bit of a breather from the past few decades' growth. In other words, the exponential trend of the past probably won't be great at predicting advances over the next decade or so.

Not surprisingly, perhaps, there are those who believe that new technologies will keep the exponential growth in computing power going to the point that processing power alone matches that of the human brain. But exponential growth sadly never lasts. To illustrate this, imagine a simple thought experiment involving bacteria multiplying in a laboratory petri dish. Assume that, initially, these bacteria divide and multiply every twenty minutes. If we start with one bacterium, we'd have two after twenty minutes, four after forty minutes, eight after an hour, and so on. Based on this trend, if you asked someone to estimate how many bacteria you'd have after a week, there's a chance they'd do the math and tell you you'd have five times ten to the power of 151 of them—that's five with 151 zeroes after it. This, after all, is what the exponential growth predicts.

That's a lot of bacteria. In fact, it's an impossible amount; this many bacteria would weigh many, many times more than the mass of the entire universe. The prediction may be mathematically reasonable, but it's practically nonsensical. Why? Because, in a system with limited resources and competing interests, something's got to give at some point.

In the case of the bacteria, their growth is limited by the size of the dish they're contained in, the amount of nutrients available, how a growing population changes the conditions for growth, and many other factors. The bacteria cannot outgrow their resources, and as they reach their limits, the growth rate slows or, in extreme cases, may even crash.

We find the same pattern of rapid growth followed by a tail-off (or crash) in pretty much any system that, at some point, seems to show exponential growth. The exponential bit is inevitably present for a limited period of time only. And while exponential growth may go on longer than expected, once you leave the realm of hard data, you really are living on the edge of reality.

The upshot of this is that, while Kurzweil's singularity may one day become a reality, there's a high chance that unforeseen events are going to interfere with his exponential predictions, either scuppering the chances of something transformative happening, or pushing it back hundreds or even thousands of years.
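Incidentally, the petri-dish arithmetic quoted above checks out:

```python
import math

# One bacterium dividing every twenty minutes, left unchecked for a week.
divisions = 7 * 24 * 60 // 20   # 504 doublings in a week
population = 2 ** divisions

# Express the result in scientific notation.
exponent = math.floor(math.log10(population))
mantissa = population / 10 ** exponent
print(f"{mantissa:.1f} x 10^{exponent}")  # 5.2 x 10^151 bacteria
```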

And this is the problem with the technologies we see emerging

in Transcendence. It’s not that they are necessarily impossible

(although some of them are, as they play fast and loose with what

are, as far as we know, immutable laws of physics). It’s that they

depend on exponential extrapolation that ignores the problems

of error amplification and resource constraints. This is a mere

inconvenience when it comes to science-fiction plot narratives—

why let reality get in the way of a good story? But it becomes

more serious when real-world decisions and actions are based on

similar speculation.

Make-Believe in the Age of the Singularity

In 2003, Britain’s Prince Charles made headlines by expressing his

concerns about the dangers of gray goo.[^140] Like Bill Joy, he’d become

caught up in Eric Drexler’s idea of self-replicating nanobots that

could end up destroying everything in their attempt to replicate

themselves. Prince Charles later backtracked, but not until after his

concerns had led to the UK’s Royal Society and Royal Academy of

Engineering launching a far-reaching study on the implications of

nanotechnology.[^141]

The popular image of nanobots as miniaturized, fully autonomous

robots is one of the zombies of the nanotechnology world. It’s an

image that just won’t die, despite having barely a thread of scientific

plausibility behind it. There’s something about the term “nanobot”

that journalists cannot resist, and that university press offices seem incapable of avoiding in their attempts to make nanoscale

research seem sexy and futuristic. Even as I write this, a quick

Google search returns three pages of news articles mentioning

“nanobots” in the last month alone. Yet, despite the popular image’s

appeal, there is a world of difference between the technology seen

in Transcendence and what’s happening in labs now.

This is not to discredit the research that often underlies the use of the buzzword. Scientists are making amazing strides on disease-busting particles that can be biologically "programmed" to seek out and destroy cancer cells, or can be guided through the bloodstream using magnets or ultrasonic waves. And there have been some quite incredible breakthroughs in developing complex molecules—including using DNA as a programmable molecular construction set—that operate much like minuscule molecular machines. These are all advances that have attracted the term "nanobot." And yet, there are night-and-day differences between the science they represent and imagined scenarios of minute autonomous robots swimming through our bodies, or swarming through the environment. Yet the idea of nanobots as a future reality persists.

As an early popularizer of nanobots, Eric Drexler was inspired by the biological world and the way in which organisms have evolved to efficiently manufacture everything they need from the atoms and molecules around them. To Drexler, many biological molecules are simply highly efficient molecular machines that strip materials apart atom by atom and reassemble them into ever more complex structures. In many ways, he saw these as analogous to the machines that humans had developed over the centuries—wheels, cogs, engines, and even simple robots—but at a much, much smaller scale. And he speculated that, once we have full mastery over how to precisely build materials atom by atom, we could not only match what nature has achieved, but surpass it, creating a new era of technologies based on nanoscale engineering.

Part of Drexler's speculation was that it should be possible to create microscopically small self-replicating machines that are able to disassemble the materials around them and use the constituent atoms to build new materials, including replicas of themselves. This would allow highly efficient, atomically precise manufacturing, and "nanobots" that could make almost anything on demand out of what they could scavenge from the surrounding environment.

Drexler's ideas are the inspiration behind the nanobots seen in Transcendence, where these microscopically small machines are capable of building and rebuilding solar cells, support structures, and even replacement limbs and organs, all out of the atoms, molecules, and materials in their environment. While this is a vision that sounds decidedly science fiction, it's one that, on the surface, looks like it should work. After all, it's what nature does, and does so well. We're all made of atoms and molecules, and depend on evolved biological machines that use and make DNA, proteins, cells, nerves, bones, skin, and so on. And just like nature, where there's a constant battle between "good" biological machines (the molecular machines that keep us healthy and well) and the "bad" ones (the proteins, viruses and bacteria that threaten our health), Drexler's vision of molecular machines is one that also has its potential downsides.

One scenario that Drexler explored was the possibility that a

poorly designed and programmed nanobot could end up having an

overriding goal of creating replicas of itself, potentially leading to a

runaway chain reaction. Drexler speculated that, if these nanobots

were designed to use carbon as their basic building blocks, they

would only stop replicating when every last atom of carbon in the

world had been turned into a nanobot. As we’re all made of carbon,

this would be a problem.

This is the “gray goo” scenario, and it’s what prompted both

Bill Joy and Prince Charles to raise the alarm over the risks of

nanotechnology. And yet, despite their concerns and those of others,

it is a highly improbable scenario.

In order to work, these rogue nanobots would need some source

of power. Like we find in biology, this would most likely come

from chemical reactions, the heat they could scavenge from

their surroundings, heat directly from the sun, or (most likely) a

combination of all three. But to scavenge energy, the nanobots

would need to be pretty sophisticated. And to maintain and replicate

this sophistication, they would need an equally sophisticated diet

that would depend on more than carbon alone.

In addition to this, because there would be replication errors

and nanobot malfunctions, these nanomachines would need

to be programmed with the ability to repair themselves. This

in turn would require additional energy demands and levels

of sophistication. Even with a high level of sophistication,

random errors would most likely lead to generations of bots

that either petered out because they weren’t perfect, or started

to behave differently from the previous generation (much like

biological mutation).

And this leads to a third challenge. At some point, the nanobots would find themselves hitting the limits of being able to replicate exponentially. This might be due to an accumulation of replication errors, or increasing competition with mutant nanobots. Or it could be brought about by a scarcity of physical space, or energy, or raw materials. However it happened, a point would be reached where the population of nanobots either became unsustainable and crashed, or reached equilibrium with its surroundings.

The chance of nanobots overcoming all three of these challenges and creating a gray goo scenario is infinitesimally small. This is, in part, because the chances of something else happening to scupper their plans of world domination are overwhelmingly large. And we know this because we have a wonderful example of a self-replicating system to study: life on Earth.

DNA-based life is, in many ways, the perfect example of Drexler's molecular machines. It shows us what is possible, but it also indicates rather strongly what is not, as well as demonstrating what is necessary to create a sustainable system. We know from studying the natural world that sustainability depends on diversity and adaptability, two characteristics that are notably absent in the gray goo scenario. We also know that sustainable systems based on evolved molecular machines are incredibly complex, so complex, in fact, that they are light-years away from what we are currently capable of designing and manufacturing.

In effect, for a Drexler-type form of nanotechnology to emerge, we would have to invent an alternative form of biology, one that is most likely as complex as the biology we are all familiar with. This may one day be possible. But at the moment, we are about as far from doing this as the Neanderthals were from inventing quantum computing.

Yet here's the rub. Even though self-replicating nanobots and gray goo lie for now in the realm of fantasy, this hasn't stopped the idea from having an impact on the decisions people make, including the decision of ITS to attempt to murder a number of nanotechnologists. This is where technological speculation gets serious in a bad way. It's one thing to speculate about what the future of tech might look like. But it's another thing entirely when make-believe is treated as plausible reality, and this, in turn, leads to actions that end up harming people.

Techno-terrorism is an extreme case, and thankfully a rare one—at the moment, at least. But there are many more layers of decision-making that can lead to people and the environment being harmed if science fantasy is mistaken for science fact. If policies and regulations, for instance, are based on improbable scenarios, or a lack of understanding of what a technology can and cannot

do, people are likely to suffer unnecessarily. Similarly, if advocacy

groups block technologies because of what they imagine their

impacts will be, but they are working with implausible or impossible

scenarios, people’s lives will be unnecessarily impacted. And if

investors and consumers avoid certain technologies because they’ve

bought into a narrative that belongs more in science fiction than

science reality, potentially beneficial technologies may never see the

light of day.

Of course, all new technologies come with risks and challenges,

and it’s important that, as a society, we work together on addressing

these as we think about the technological futures we want to build.

In some cases, the consensus may be that there are some routes

that we are not ready for yet. But what a tragedy it would be if we

turned away from some technological futures that could transform

lives for the better, simply because we became confused between

reality and make-believe.

Here, Transcendence definitely lives in the world of make-believe,

especially when it comes to the vision of nanotechnology that’s

woven into the movie’s narrative. And this is fine, as long as we’re

aware of it. But as soon as we start to believe our own fantasies, we

have a problem.

Thankfully, not every science fiction movie is quite as rooted in

fantasy as Transcendence. As we’ll see next with the movie The Man

in the White Suit, some provide surprisingly deep insights into the

reality of cutting-edge science and emerging technologies—including

the realities of modern-day nanotechnology.

[^120]: Ray Kurzweil (2005) “The Singularity Is Near: When Humans Transcend Biology.” Published by Penguin Books.

[^121]: To accompany the book, “The Singularity Is Near,” Kurzweil published a wonderful series of plots showing evidence for exponential growth in different areas of technology innovation. You can explore them all at http://www.singularity.com/charts/page159.html

[^122]: I’ve tried not to be too critical of the science in the movies in this book, but in this case, I can’t help wondering how cyber-Will’s nanobots also managed to retrain the person’s neurological networks to make sense of the new signals coming from his eyes. Or, for that matter, how they managed to sort out the cognitive and psychological trauma the person would face as their eyes were rewired.

[^123]: Working in emerging technologies, it sometimes seems that every new wave of innovation represents a new “industrial revolution” to someone. Yet, even though not everyone agrees with the World Economic Forum’s terminology, there is some merit to thinking that we are in a unique period in our technological growth. As a primer on the Fourth Industrial Revolution, I’d recommend Klaus Schwab’s January 2016 article on the World Economic Forum website: “The Fourth Industrial Revolution: what it means, how to respond.” https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/. And if you want more, there’s always his 2017 book, “The Fourth Industrial Revolution,” published by Crown Business.

[^124]: Mihail C. Roco and William S. Bainbridge (2003) “Converging Technologies for Improving Human Performance. Nanotechnology, biotechnology, information technology and cognitive science.” Published by the World Technology Evaluation Center (WTEC) https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/bioecon-%28%23%20023SUPP%29%20NSF-NBIC.pdf

[^125]: Drew Endy (2005). “Foundations for engineering biology.” Nature 438. http://doi.org/10.1038/nature04342

[^126]: For a comprehensive history of the emergence of synthetic biology, going back to the 1960s, it’s worth reading Ewen Cameron, Caleb Bashor, and James Collins’ account in the journal Nature Reviews: Cameron, D. E., et al. (2014). “A brief history of synthetic biology.” Nature Reviews Microbiology 12: 381. http://doi.org/10.1038/nrmicro3239

[^127]: iGEM began in 2003, with the first competition being held in 2004. That first year, there were five teams competing. By 2017, there were 310 teams, with representatives from more than forty countries. You can read more about iGEM and the projects that past teams have worked on at http://igem.org/

[^128]: The articles were published as a collection under the title “Technology innovation and life in the 21st century: Views from Civil Society,” and can be read at 2020 Science. https://2020science.org/2016/01/22/technology-innovation-and-life-in-the-21st-century-views-from-civil-society/

[^129]: Jim Thomas (2009) “21st Century Tech Governance? What would Ned Ludd do?” Published on 2020 Science, December 18, 2009. https://2020science.org/2009/12/18/thomas/

[^130]: See “If Elon Musk is a Luddite, count me in!” The Conversation, published December 23, 2015. https://theconversation.com/if-elon-musk-is-a-luddite-count-me-in-52630

[^131]: “Unabomber” derives from the FBI codename UNABOM, reflecting Kaczynski’s University and Airline BOMbing targets.

[^132]: FBI, February 12, 2002. Testimony of James F. Jarboe, Domestic Terrorism Section Chief, Counterterrorism Division, Federal Bureau of Investigation, before the House Resources Committee, Subcommittee on Forests and Forest Health, Washington, DC. https://archives.fbi.gov/archives/news/testimony/the-threat-of-eco-terrorism

[^133]: Coincidentally, there was an earlier “ELF,” in this case standing for Environmental Life Force, which was formed by John Clark Hanna in 1977 in Santa Cruz, California, as an “eco-guerrilla combat unit.” Hanna was arrested on November 22, 1977 and the original ELF disbanded in 1978.

[^134]: From The Anarchist Library: Communiques of ITS. https://theanarchistlibrary.org/library/individualists-tending-toward-the-wild-communiques

[^135]: ITS members were not the first to take an active dislike to nanotechnologists: In April 2010, three members of ELF were intercepted by Swiss police as they attempted to bomb a nanotechnology lab associated with IBM. To read more about this incident, I’d recommend Chris Toumey’s article in the journal Nature Nanotechnology: Toumey, C. (2013). “Anti-nanotech violence.” Nature Nanotechnology 8(10): 697-698. http://www.nature.com/nnano/journal/v8/n10/full/nnano.2013.201.html

[^136]: From The Anarchist Library: Communiques of ITS, Communique Eight (March 2014) https://theanarchistlibrary.org/library/individualists-tending-toward-the-wild-communiques#toc36

[^137]: Bill Joy (2000) “Why the future doesn’t need us.” Published in Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/

[^138]: “The Unabomber Trial: The Manifesto.” Published in 1995 in The Washington Post. http://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm

[^139]: Kurzweil’s plot of the exponential growth of computing power can be accessed here: http://www.singularity.com/charts/page67.html

[^140]: As The Telegraph’s Roger Highfield wrote in June 2003. “Prince asks scientists to look into ‘grey goo’” (The Telegraph, June 5, 2003). http://www.telegraph.co.uk/news/science/science-news/3309198/Prince-asks-scientists-to-look-into-grey-goo.html

[^141]: The resulting study from the Royal Society and Royal Academy of Engineering became one of the most influential reports on nanotechnology risks to be published. It did not take the risk of gray goo seriously, stating “We have concluded that there is no evidence to suggest that mechanical self-replicating nanomachines will be developed in the foreseeable future.” Royal Society and Royal Academy of Engineering (2004) “Nanoscience and nanotechnologies: opportunities and uncertainties.” https://royalsociety.org/topics-policy/publications/2004/nanoscience-nanotechnologies/