From Films from the Future: The Technology and Morality of Sci-Fi Movies by Andrew Maynard
“If there’s a flaw, it’s human—it always is.”
—Danny Witwer
There’s something quite enticing about the idea of predicting
how people will behave in a given situation. It’s what lies beneath
personality profiling and theories of preferred team roles. But it also
extends to trying to predict when people will behave badly, and
taking steps to prevent this.
In this vein, I recently received an email promoting a free online
test that claims to use “‘Minority Report-like’ tech to find out if you
are ‘predisposed’ to negative or bad behavior.” The technology I
was being encouraged to check out was an online survey being
marketed by the company Veris Benchmark under the trademark
“Veris Prime.” It claimed that “for the first time ever,” users had
an “objective way to measure a prospective employee’s level of
trustworthiness.”
Veris’ test is an online survey which, when completed, provides
you (or your employer) with a “Trust Index.” If you have a Trust
Index of eighty to one hundred, you’re relatively trustworthy, but
below twenty or so, you’re definitely in danger of showing felonious
tendencies. At the time of writing, the company’s website indicates
that the Trust Index is based on research on a wide spectrum of
people, although the initial data that led to the test came from 117
white-collar felons. In other words, when the test was conceived, it
was assumed that answering a survey in the same way as a bunch
of convicted felons is a good way of indicating if you are likely to
pursue equally felonious behavior in the future.
Naturally, I took the test. I got a Trust Index of nineteen. This
came with a warning that I’m likely to regularly surrender to the
temptation of short-term personal gain, including cutting corners,
stretching the truth, and failing to consider the consequences of
my actions.
Sad to say, I don’t think I have a great track record of any of these
traits; the test got it wrong (although you’ll have to trust me on
this). But just to be sure that I wasn’t an outlier, I asked a few of
my colleagues to also take the survey. Amazingly, it turns out that
academics are some of the most felonious people around, according
to the test. In fact, if the Veris Prime results are to be believed, real
white-collar felons have some serious competition on their hands
from within the academic community. One of my colleagues even
managed to get a Trust Index of two.
One of the many issues with the Veris Prime test is the training
set it uses. It seems that many of the traits that are apparently
associated with convicted white-collar criminals—at least according
to the test—are rather similar to those that characterize curious,
independent, and personally motivated academics. It's errors like
this that can easily lead us into dangerous territory when it comes
to attempting to use technology to predict what someone will do.
But even before this, there are tough questions around the extent to
which we should even be attempting to use science and technology
to predict and prevent criminal behavior. And this leads us neatly
into the movie Minority Report.
Minority Report is based on the Philip K. Dick short story of the
same name, published in 1956. The movie centers on a six-year
crime prevention program in Washington, DC, that predicts murders
before they occur, and leads to the arrest and incarceration of
“murderers” before they can commit their alleged future crime. The
“Precrime” program, as it’s aptly called, is so successful that it has
all but eliminated murder in the US capital. And as the movie opens,
there’s a ballot on the books to take it nationwide.
The Precrime program in the movie is astoundingly successful—at
least on the surface. The program is led by Chief John Anderton
(played by Tom Cruise). Anderton’s son was abducted six years
previously while in his care, and was never found. The abduction
destroyed Anderton’s personal life, leaving him estranged from
his partner, absorbed in self-pity, and dependent on illegal street narcotics. Yet despite his personal pain, he's a man driven to ensuring others don't have to suffer a similar fate. Because of this, he is deeply invested in the Precrime program, and since its inception has worked closely with the program director and founder Lamar Burgess (Max von Sydow) to ensure its success.
The technology behind Precrime in the movie is fanciful, but there’s
a level of internal consistency that helps it work effectively within
the narrative. The program depends on three “precogs”: genetically
modified, isolated, and heavily sedated humans who have the
ability to foresee future murders. By monitoring and visualizing
their neural activity, the Precrime team can see snatches of the
precogs’ thoughts, and use these to piece together where and when
a future murder will occur. All they then have to do is swoop in
and arrest the pre-perpetrator before they’ve committed the crime.
And, because the precogs’ predictions are trusted, those arrested are
sentenced and incarcerated without trial. This incarceration involves
being fitted with a “halo”—a neural device that plunges the wearer
helplessly into their own incapacitating inner world, although
whether this is a personal heaven or hell we don’t know.
As the movie opens, we’re led to believe that this breakthrough
in crime prevention is a major step forward for society. Murder’s a
thing of the past in the country’s capital, its citizens feel safer, and
those with murderous tendencies are locked away before they can
do any harm. That is, until Chief Anderton is tagged as a pre-perp by
the precogs.
Not surprisingly, Anderton doesn’t believe them. He knows he
isn’t a murderer, and so he sets out to discover where the flaw
in the system is. And, in doing so, he begins to uncover evidence
that there’s something rotten in the very program he’s been
championing. On his journey, he learns that the precogs are not,
as is widely claimed, infallible. Sometimes one of them sees a
different sequence of events in the future, a minority report,
that is conveniently scrubbed from the records in favor of the
majority perspective.
Believing that his minority report—the account that shows he’s
innocent of a future murder—is still buried in the mind of the most
powerful precog, Agatha (played by Samantha Morton), he breaks
into Precrime and abducts her. In order to extract the presumed
minority report she’s carrying, he takes her to a seedy pleasure joint
that uses recreational brain-computer interfaces to have her mind
“read.” And he discovers, to his horror, that there is no minority
report; all three precogs saw him committing the same murder in
the near future.
Anderton does, however, come across an anomaly: a minority report
embedded in Agatha’s memory of a murder that is connected with
an earlier inconsistency he discovered in the Precrime records.
Still convinced that he’s not a murderer, Anderton sets about
tracking down his alleged victim in order to prove his innocence,
taking Agatha with him.[^32] He traces the victim to a hotel, and on
entering his room, Anderton discovers the bed littered with photos
of the man with young children, including his son. Suddenly it
all fits into place. The trail has led Anderton to the one person
he would kill without hesitation if he got the chance. Yet, even as
Anderton draws his gun on his son’s abductor, Agatha pleads with
him to reconsider. Despite her precognition, she tries to convince
him that the future isn't set, and that he has the ability to
change it. And so Anderton overcomes his desire for revenge and
lowers his weapon.
It turns out Anderton was being set up. The victim—who wasn’t
Anderton’s son’s abductor—was promised a substantial payout for
his family if he convinced Anderton to kill him. When Anderton
refuses, the victim grabs the gun in Anderton’s hand, presses it
against himself, and pulls the trigger. As predicted, Anderton is
identified as the killer, and is arrested, fitted with a halo, and put
away.
With Anderton’s arrest, though, a darker undercurrent of events
begins to emerge around the precog program. It turns out that
Lamar Burgess, the program’s creator, has a secret that Anderton
was in danger of discovering—an inconvenient truth that, to Lamar,
stood in the way of what he believed was a greater social good. And
so, to protect himself and the program, Lamar finds a way to use the
precogs to silence Anderton.
As the hidden story behind the precog program is revealed, we
discover that Agatha was born to a junkie mother, and suffered from
being a terminally ill addict from birth. Agatha and other addict-babies became part of an ethically dubious experimental program
using advanced genetic engineering to search for a cure. In this program, it's discovered that, in Agatha's case, a side effect of the experiments is an uncanny ability to predict future murders. Given their serendipitous powers, Agatha and two other subjects were sedated, sequestered away, wired up, and plugged in to what was to become the precog program. But Agatha's mother cleaned herself up and demanded her daughter back, threatening the very core of this emerging technology.
Lamar couldn’t allow Agatha’s mother to threaten his plans, so he
arranged an intricate ruse to dispose of her. Knowing that if he
attempted to murder her, the precogs would predict it, Lamar paid
a contract killer to murder Agatha’s mother. As anticipated, this was
predicted and prevented by Precrime. But as soon as the killer-to-be had been hauled off, Lamar re-enacted the planned murder, this
time succeeding.
Because Lamar’s act was so close to the attempted murder, images
of his actions from the precogs were assumed to be part of the
thwarted killing. And because Agatha’s precognition wasn’t quite
in step with the two other precogs, it was treated as a minority
report. In this way, using the system he’d created to bring an end to
murder, Lamar pulled off the perfect murder—or so he thought. But
as Anderton got closer to realizing that Lamar had staged Agatha’s
mother’s murder, Lamar realized that, in order to protect Precrime,
Anderton, too, needed to be eliminated. And he would have succeeded,
had Anderton’s estranged partner not put two and two together, and
freed Anderton from his halo-induced purgatory.
Things come to a head in the movie as Anderton publicly broadcasts
Agatha’s minority report of Lamar killing her mother. In doing so, he
presents Lamar with a seemingly impossible choice: kill Anderton
(as the precogs are predicting) and validate the program, but be
put away for life in the process; or don’t kill him, and in doing so,
demonstrate a fatal flaw in the program that will result in it being
terminated.
In the end, Burgess opts for a third option and kills himself. In
doing so, he saves Anderton, but still reveals a flaw in the system
that had predicted Anderton’s murder at his hand. As a result,
Precrime is dismantled, and the precogs are allowed to live as full a
life as is possible.
program, it’s discovered that, in Agatha’s case, a side effect of the
experiments is an uncanny ability to predict future murders. Given
their serendipitous powers, Agatha and two other subjects were
sedated, sequestered away, wired up, and plugged into to what was
to become the precog program. But Agatha’s mother cleaned herself
up and demanded her daughter back, threatening the very core of
this emerging technology.
Minority Report is a fast-paced, crowd-pleasing, action sci-fi thriller
of the caliber you'd expect from its director, Steven Spielberg.
But it also raises tough questions around preemptive action based
on predictive criminal behavior, as well as predestination, human
dignity, and the dangers of being sucked in by seemingly beneficial
technologies. It presents us with a world where technology has
seemingly made people’s lives safer, but at a terrible cost that
isn’t immediately obvious. And it shines a searing spotlight on the
question of “should we” when faced with a seductive technology
that ultimately threatens to place society in moral jeopardy.
In March 2017, the British newspaper The Guardian ran an online
story with the headline “Brain scans can spot criminals, scientists
say.”[^33] Unlike in Minority Report, the scanning was carried out using
a hefty functional magnetic resonance imaging (fMRI) machine,
rather than genetically altered precogs. But the story seemed to
suggest that scientists were getting closer to spotting criminal intent
before a crime had been committed, using sophisticated real-time
brain imaging.
In this case, the headline vastly overstepped the mark. The
original research used fMRI to see if brain activity could be used
to distinguish knowingly criminal behavior from merely reckless
behavior.[^34] It did this by setting up a somewhat complex situation,
where volunteers were asked to take a suitcase containing
something valuable through a security checkpoint while undergoing
a brain scan. But to make things more interesting (and scientifically
useful), their actions and choices came with financial rewards
and consequences.
Each participant was first given $6,000 in “play money.” They
were then presented with one to five suitcases, just one of which
contained the thing of value. If they decided not to carry anything
through the checkpoint, they lost $1,500. If they decided to carry a
suitcase, it cost them $500. And if they dithered about it, they were
docked $2,500.
Having selected a suitcase, if they chose the one with the valuable stuff inside and they weren't searched by security, they got an additional $2,500—jackpot! But if they were searched and found to be carrying, they were fined $3,500, leaving them with a mere $2,000. On the other hand, if they weren't carrying, they suffered no penalties, whether they were searched or not.
The point of this rather elaborate setup was that there were
financial gains (at least with the fake money being used) involved
with the choices made, and the implication that carrying a suitcase
stuffed with valuable goods was dangerous (you could be fined if
discovered carrying), but financially lucrative if you got away with it.
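To make the arithmetic easier to follow, here's a minimal sketch of the payoff structure, using the "play money" figures described above (the function and its names are mine, purely for illustration):

```python
# A sketch of the suitcase task's payoffs, per the rules described above.

STARTING_BALANCE = 6_000

def payoff(carries, loaded, searched, dithered=False):
    """Final balance for one participant, following the study's rules."""
    balance = STARTING_BALANCE
    if dithered:
        return balance - 2_500   # dithering over the choice cost $2,500
    if not carries:
        return balance - 1_500   # declining to carry anything cost $1,500
    balance -= 500               # carrying a suitcase cost $500
    if loaded:
        if searched:
            balance -= 3_500     # caught with the valuables: fined $3,500
        else:
            balance += 2_500     # slipped past security: an extra $2,500
    return balance               # an empty case carried no further risk

assert payoff(carries=True, loaded=True, searched=False) == 8_000  # jackpot
assert payoff(carries=True, loaded=True, searched=True) == 2_000   # caught
```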
To mix things up further, some participants only had the choice
of carrying the loaded suitcase (thus possibly getting $8,000), or
declining to take part in such a dodgy deal and walking away
with just $2,000. The participants who took a chance here were
knowingly participating in questionable behavior. For the rest, it was
a lottery whether they picked the loaded suitcase or not, meaning
that their actions veered toward being more reckless, and less
intentional. By simultaneously studying behavior and brain activity,
the researchers were able to predict what state the participants
were in—whether they were intentionally setting out to engage in
behavior that maybe wasn’t legitimate, or whether they were just
feeling reckless.
The long and short of this was that the study suggested brain
activity could be used to indicate criminal intent, and this is what
threw headline writers into a clickbait frenzy. But the research was
far from conclusive. In fact, the authors explicitly stated that “it
would be absurd to suggest, in light of our results, that the task of
assessing the mental state of a defendant could or should, even in
principle, be reduced to the classification of brain data.” They also
pointed out that, even if these results could be used to predict the
mental state of a person while committing a crime, they’d have to be
inside an fMRI scanner at the time, which would be tricky.
Despite the impracticality of using this research to assess the
mental state of people during the act of committing a crime, media
stories around the study tapped into a deep-seated fascination with
predicting criminal tendencies or intent—much as Veris Prime’s
Trust Index does. Yet this is not a new fascination, and neither is the
use of science to justify its indulgence.
In the nineteenth century, a very different “science” of predicting
criminal tendencies was all the rage: phrenology. Phrenology was
an attempt to predict someone’s character and behavior by the
shape of their skull. As understanding around how the brain works
developed, the practice became increasingly discredited. Sadly,
though, it laid a foundation for assumptions that traits which appear
to be common to people of “poor character” are also predictive
of their behavior—a classic case of correlation erroneously being
confused with causation. And it foreshadowed research that
continues to this day to connect what someone looks like with how
they might act.
Despite its roots in pseudoscience, the ideas coming out of
phrenology were picked up by the nineteenth-century criminologist
Cesare Lombroso. Lombroso was convinced that physical traits
such as jaw size, forehead slope, and ear size were associated with
criminal tendencies. His theory was that these and other traits were
throwbacks to earlier evolutionary ancestors, and that they indicated
an innate tendency toward criminal behavior.
It’s not hard to see how attractive these ideas might have been to
some, as they suggested criminals could be identified and dealt
with before breaking the law. With hindsight, it’s easy to see
how misguided and malevolent they were, but at the time, many
people bought into them. It would be nice to think that this way
of thinking about criminal tendencies was a short and salutary
aberration in humanity’s history. Sadly, though, it paved the way to
even more divisive forms of pseudoscience-based discrimination,
including eugenics.
In the 1900s, discrimination that was purportedly based on scientific
evidence shifted toward the idea that the quality or “worth”
of a person is based on their genetic heritage. The “science” of
eugenics—and sadly this is something that many scientists at the
time supported—suggested that our genetic heritage determines
everything about us, including our moral character and our social
acceptability. It was a deeply flawed concept that, nevertheless,
came with the same seductive idea that, if we know what makes
people “bad,” we can remove them from society before they cause
a problem. What is heartbreaking is that these ideas coming from
academics and scientists gained political momentum, and ultimately
became part of the justification for the murder of six million Jews,
and many others besides, in the Holocaust.
These days, I’d like to think we’re more enlightened, and that we
don’t fall prey so easily to using scientific flights of fancy to justify
how we treat others. Unfortunately, this doesn't seem to be the case.
In 2011, three researchers published a paper suggesting that you can tell a criminal from someone who isn't (and, presumably by inference, someone who is likely to engage in criminal activities) by what they look like.[^35]
The assumption that someone’s behavioral tendencies can be
predicted from no more than what they look like, or how their brain
functions, is a slippery slope. It assumes—dangerously so—that
behavior is governed by genetic heritage and upbringing. But it also
opens the door to a better-safe-than-sorry attitude to law and order
that considers it better to restrain someone who might demonstrate
socially undesirable behavior than to presume them innocent until
proven guilty. And it’s an attitude that takes us down a path where
we assume that other people do not have agency over their destiny.
There is an implicit assumption here that how we behave can be
separated out into “good” and “bad,” and that there is consensus on
what constitutes these. But this is a deeply flawed assumption.
What the behavioral research above is actually looking at is
someone’s tendency to break or bend agreed-on rules of socially
acceptable conduct, as these are codified in law. These laws are not
an absolute indicator of good or bad behavior. Rather, they are a
result of how we operate collectively as a social species. In technical
terms, they establish normative expectations of behavior, which
simply means that most people comply with them, irrespective of
whether they have moral or ethical value. For instance, in most
cultures, it’s accepted that killing someone should be punished,
unless it’s in the context of a legally sanctioned war or execution
(although many societies would still consider this morally
reprehensible). This is a deeply embedded norm, and most people
would consider it to be a good guide of appropriate behavior. The
same cannot be said of “norms” surrounding homosexual acts,
though, which were illegal in England and Wales until 1967,
and are still illegal in some countries around the world, or others
surrounding LGBTQ rights, or even women’s rights.
When social norms are embedded within criminal law, it may
be possible to use physical features or other means to identify
“criminals” or those likely to be involved in “criminal” behavior. But
are we as a society really prepared to take preemptive action against
people who we arbitrarily label as “bad”? I sincerely hope not. And
here we get to the crux of the ethical and moral challenges around
predicting criminal intent. Even if we can predict tendencies from
images alone—and I am highly skeptical that we can gain anything
of value here that isn’t heavily influenced by researcher bias and
social norms—should we? Is it really appropriate to be asking if
we can predict, simply from how someone looks, whether they are
likely to behave in a way that we think is appropriate or not? And is it ethical to generate data that could be used to discriminate against people based on their appearance?
Using facial features to predict tendencies puts us way down the
slippery slope toward discriminating against people because they
are different from us. Thankfully, this is an idea that many would
dismiss as inappropriate these days. But, worryingly, our interest in
relating brain activity to behavioral traits—the high-tech version of
“looks like a criminal”—puts us on the same slippery slope.
Unlike photos, functional magnetic resonance imaging allows
researchers to directly monitor brain activity, and to do it in real
time. It works by monitoring blood flow to different parts of the
brain, and using this to pinpoint which parts of someone’s brain are
active at any one point in time.
One of the beauties of fMRI is that it can map out brain activity
as people are thinking about and processing the world around
them. For instance, it can show which parts of a subject’s brain are
triggered if they’re shown a photo of a donut, if they are happy, or
sad, or angry, or what their brain activity looks like if they’re given
the opportunity to take a risk.
fMRI has opened up a fascinating window into how we think about
and respond to our surroundings, and in some cases, what we think.
And it’s led to some startling revelations. We now know, for instance,
that we often unconsciously decide what we’re going to do several
seconds before we’re actually aware of making a decision.[^37] Recent
research has even indicated that high-resolution fMRI scans on
primates can be used to decode what the animals are seeing.[^38] The
researchers were, quite literally, reading these primates’ minds.
This is quite incredible science. And not surprisingly, it’s leading to
a revolution in understanding how our brains operate. This includes
developing a better understanding of how certain brain behaviors
can lead to debilitating medical conditions. It’s also leading to a
deeper understanding of how the mechanics of our brain determine
who we are, and how we behave.
That said, there’s still considerable skepticism around how effective
a tool fMRI is and how robust some of its findings are. It’s also fair
to say that some of these findings challenge deeply held beliefs
about many of the things we hold dear, including the nature of free
will, moral choice, kindness, compassion, and empathy. These are
all aspects of ourselves that help define who we are as a person.
Yet, with the advent of fMRI and other neuroscience-based tools, it
sometimes feels like we’re teetering on the precipice of realizing that
who we think we are—our sense of self, or our “soul” if you like—is
merely an illusion of our biology.
This in itself raises questions over the degree to which neuroscience
is racing ahead of our ability to cope with what it reveals. Yet the
reality is that this science is progressing at breakneck speed, and
that fMRI is allowing us to dive ever deeper behind our outward
selves—our facial features and our easily observed behaviors—and
into the very fabric of the organ that plays such a role in defining
us. And, just like phrenology and eugenics before it, it’s opening
up the temptation to interpret how our brains operate as a way to
predict what sort of person we are, and what we might do.
In 2010, researchers provided a group of subjects with advice on
the importance of using sunscreen every day. At the same time,
the subjects’ brain activity was monitored using fMRI. It’s just one
of many studies that are increasingly trying to use real-time brain
activity monitoring to predict behavior.
In the sunscreen study, the subjects were asked how likely they
were to take the advice they were given. A week later, researchers
checked in with them to see how they’d done. Using the fMRI scans,
the researchers were able to predict which subjects were going to
use sunscreen and which were not. But more importantly, using
the scans, the researchers discovered they were better at predicting
how the subjects would behave than they themselves were. In other
words, the researchers knew their subjects’ minds better than they
did.[^39]
Research like this suggests that our behavior is determined by measurable biological traits as much as by our free will, and it's pushing the boundaries of how we understand ourselves and how we behave, both as individuals and as a society. And, while science will never enable us to predict the future in the same way as Minority Report's precogs, it's not too much of a stretch to imagine that fMRI and similar techniques may one day be used to predict the likelihood of someone engaging in antisocial and morally questionable behavior.
But even if predicting behavior based on what we can measure is
potentially possible, is this a responsible direction to be heading in?
The problem is, just as with research that tries to tie facial features,
head shape, or genetic heritage to a propensity to engage in criminal
behavior, fMRI research is equally susceptible to human biases.
It’s not so much that we can collect data on brain activity that’s
problematic; it’s how we decide what data to collect, and how we
end up interpreting and using it, that’s the issue.
A large part of the challenge here is understanding what the
motivation is behind the research questions being asked, and
what subtle underlying assumptions are nudging a complex
series of scientific decisions toward results that seem to support
these assumptions.
Here, there’s a danger of being caught up in the misapprehension
that the scientific method is pure and unbiased, and that it’s solely
about the pursuit of truth. To be sure, science is indeed one of
the best tools we have to understand the reality of how the world
around us and within us works. And it is self-correcting—ultimately,
errors in scientific thinking cannot stand up to the scrutiny the
scientific method exposes them to. Yet this self-correcting nature
of science takes time, sometimes decades or centuries. And until
it self-corrects, science is deeply susceptible to human foibles,
as phrenology, eugenics, and other misguided ideas have all too
disturbingly shown.
This susceptibility to human bias is greatly amplified in areas where
the scientific evidence we have at our disposal is far from certain,
and where complex statistics are needed to tease out what we think
is useful information from the surrounding noise. And this is very
much the case with behavioral studies and fMRI research. Here,
limited studies on small numbers of people that are carried out
under constrained conditions can lead to data that seem to support
new ideas. But we’re increasingly finding that many such studies
aren’t reproducible, or that they are not as generalizable as we at
first thought. As a result, even if a study does one day suggest that
a brain scan can tell if you’re likely to steal the office paper clips,
or murder your boss, the validity of the prediction is likely to be
extremely suspect, and certainly not one that has any place in
informing legal action—or any form of discriminatory action—before
any crime has been committed.
Just as in Minority Report, the science and speculation around
behavior prediction challenges our ideas of free will and justice.
Is it just to restrict and restrain people based on what someone’s
science predicts they might do? Probably not, because embedded
in the “science” are value judgments about what sort of behavior is
unwanted, and what sort of person might engage in such behavior.
More than this, though, the notion of pre-justice challenges the very
idea that we have some degree of control over our destiny. And this
in turn raises deep questions about determinism versus free will.
Can we, in principle, know enough to fully determine someone’s
actions and behavior ahead of time, or is there sufficient uncertainty
and unpredictability in the world to make free will and choice
valid ideas?
In Chapter Two and Jurassic Park, we were introduced to the
ideas of chaos and complexity, and these, it turns out, are just
as relevant here. Even before we have the science pinned down,
it’s likely that the complexities of the human mind, together with
the incredibly broad and often unusual panoply of things we all
experience, will make predicting what we do all but impossible.
As with Mandelbrot’s fractal, we will undoubtedly be able to draw
boundaries around more or less likely behaviors. But within these
boundaries, even with the most exhaustive measurements and the
most powerful computers, I doubt we will ever be able to predict
with absolute certainty what someone will do in the future. There
will always be an element of chance and choice that determines
our actions.
Despite this, the idea that we can predict whether someone is going
to behave in a way that we consider “good” or “bad” remains a
seductive one, and one that is increasingly being fed by technologies
that go beyond fMRI.
In 2016, two scientists released the results of a study in which they used machine learning to train an algorithm to identify criminals based on headshots alone.[^40]
Their work hit a nerve for many people because it seemed to
reinforce the idea that criminal behavior is something that can be
predicted from measurable physiological traits. But more than this,
it suggested that a computer could be trained to read these traits
and classify people as criminal or non-criminal, even before they’ve
committed a crime.
The authors vehemently resisted suggestions that their work was
biased or inappropriate, and took pains to point out that others
were misinterpreting it.[^41] In fact, in their addendum, they point out,
“Nowhere in our paper advocated the use of our method as a tool of
law enforcement, nor did our discussions advance from correlation
to causality.”
Nevertheless, in the original paper, they conclude: “After controlled
for race, gender and age, the general law-biding [sic] public
have facial appearances that vary in a significantly lesser degree
than criminals.” It’s hard to interpret this as anything other than
a conclusion that machines and artificial intelligence could be
developed that distinguish between people who have criminal
tendencies and those who do not.
Part of why this is deeply disturbing is that it taps into the issue of
“algorithmic bias”—our ability to create artificial-intelligence-based
apps and machines that reflect the unconscious (and sometimes
conscious) biases of those who develop them. Because of this,
there’s a very real possibility that an artificial judge and jury that
relies only on what you look like will reflect the prejudices of its
human instructors.
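To make "algorithmic bias" concrete, here is a deliberately artificial sketch (every number in it is invented for illustration, and no real system is this simple): two groups behave identically, but the training labels come from uneven enforcement, so anything fitted to those labels "learns" that appearance predicts criminality.

```python
# Toy illustration: biased labels produce a biased "model".
import random

random.seed(0)

# Hypothetical population: an appearance feature (0 or 1) that has
# nothing to do with behavior; both groups offend at the same 5% rate.
population = [{"appearance": random.randint(0, 1),
               "actually_offends": random.random() < 0.05}
              for _ in range(10_000)]

def recorded_label(person):
    # Invented enforcement bias: the appearance == 1 group is policed more
    # heavily, so its offenses are far more likely to be *recorded*.
    catch_rate = 0.9 if person["appearance"] == 1 else 0.2
    return person["actually_offends"] and random.random() < catch_rate

train = [(p["appearance"], recorded_label(p)) for p in population]

def recorded_rate(group):
    labels = [hit for appearance, hit in train if appearance == group]
    return sum(labels) / len(labels)

print(f"recorded offense rate, appearance 0: {recorded_rate(0):.2%}")
print(f"recorded offense rate, appearance 1: {recorded_rate(1):.2%}")
# The true offense rates are identical, yet the recorded rates differ
# severalfold. A model trained on these records isn't malicious; it is
# faithfully summarizing biased data.
```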
This research is also disturbing because it takes us out of the
realm of people interpreting data that may or may not be linked
to behavioral tendencies, and into the world of big data and
autonomous machines. Here, we begin to enter a space where we
have not only trained computers to do our thinking for us, but we
no longer know how they’re thinking. In a worrying twist of irony,
we are using our increasing understanding of how the human brain
works to develop and train artificial brains that we are increasingly
ignorant of the inner workings of.
In other words, if we’re not careful, in our rush to predict and
preempt undesirable human behavior, we may end up creating
machines that exhibit equally undesirable behavior, precisely
because they are unpredictable.
Despite being set in a technologically advanced future, one of the
more intriguing aspects of Minority Report is that it falls back on
human intuition when interpreting the precog data feed. In the
opening sequences, Chief Anderton performs an impromptu “ballet”
of preemptive deduction, as he turns up the music and weaves the
disjointed images being fed through from the three precogs into a
coherent narrative. This is a world where, perhaps ironically, given
the assumption that human behavior is predictable, intuition and
creativity still have an edge over machines.
Anderton’s professional skills tap into a deep belief that there’s more
to the human mind than its simply being the biological equivalent of
a digital computer—even a super-powerful one. As the movie opens,
Anderton is responsible for fitting together a puzzle of fragmented
information. And, as he aligns the pieces and fills the gaps, he draws
connections between snippets of information that seem irrelevant
or disjointed to the untrained eye, so much so that the skill he
demonstrates lies in the sum total of his experiences as a living
human being. This is adeptly illustrated as Anderton pins down the
location of an impending murder by recognizing inconsistencies in
two images that, he deduces, could only be due to a child riding an
old-fashioned merry-go-round.
This small intuitive leap is deeply comforting to us as viewers. It
confirms to us that there's something uniquely special about people,
and it suggests that we are more than the sum of the chemicals,
cells, and organs we’re made of. It also affirms a belief that we
cannot simply be defined by what we look like, or by the electrical
and chemical processes going on inside our head.
But are we right in this belief that we are more than the sum of our parts? What if we could be reduced to massive amounts of data that not only determine who we are, but how we will act and react in any given situation?
Questions like this would have been hypothetical, bordering on the fantastical, not so long ago. Certainly, as a species, we've toyed with the idea for centuries that people are simply complex yet ultimately predictable biological machines (chaos theory notwithstanding).
But it’s only recently that we’ve had the computing power to start
capturing every minutia of ourselves and the world around us and
utilizing it in what’s increasingly called “big data.”
“Big data”—which when all’s said and done is just a fancy way of
saying massive amounts of information that we can do stuff with—
has its roots in human genome sequencing. Our genetic code has
three billion discrete pieces of information, or base pairs, that help
define us biologically. Compared to the storage capacity of early
computers, this is a stupendously large amount of information, far
more than could easily be handled by the computing systems of the
1970s and 1980s, or even the 1990s, when the initiative to decode
the complete human genome really took off. But, as we began to
understand the power of digital computing, scientists started to
speculate that, if we could decode the human genome and store it in
computer databases, we would have the key to the code of life.
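To put that "stupendously large" in context, a back-of-the-envelope calculation shows why one genome strained the computers of the day (the only assumption is the standard encoding of two bits per base):

```python
# Back-of-the-envelope: raw storage for one human genome.
BASE_PAIRS = 3_000_000_000   # roughly three billion base pairs
BITS_PER_BASE = 2            # A, C, G, or T fits in two bits

raw_megabytes = BASE_PAIRS * BITS_PER_BASE / 8 / 1_000_000
print(f"~{raw_megabytes:.0f} MB uncompressed")  # ~750 MB

# For scale: a high-end early-1990s PC hard drive held a few hundred
# megabytes, so even a single uncompressed genome was a genuine
# storage problem at the time.
```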
With hindsight, they were wrong. As it turns out, decoding the
human genome is just one small step toward understanding how
we work. But this vision of identifying and cataloguing every piece
of our genome caught hold, and in the late 1990s it led to one of
the biggest sets of data ever created. It also spawned a whole new
area of technology involving how we collect, store, analyze, and use
massive amounts of data, and this is what is now known colloquially
as Big Data.
As we’ve since discovered, the ability to store three billion base
pairs of genetic code in computer databases barely puts us in the
foothills of understanding human biology. The more we find out,
the more complex we discover life is. But the idea that the natural
world can be broken down into its constituent parts, uploaded into
cyberspace, and played around with there remains a powerful one.
And there’s still a belief held by some that, if we have a big enough
computer memory and a powerful enough processor, we could in
principle encode every aspect of the physical and biological world
and reproduce it virtually.
This is the idea behind movies like The Matrix (which sadly didn’t
make the cut for this book) where most people are unwittingly
playing out their lives inside a computer simulation. It also
underpins speculations that arise every now and again that we are
all, in fact, living inside a computer simulation, but just don’t know
it. There are even researchers working on the probability that this is
indeed the case.[^42]
This is an extreme scenario that comes out of our growing ability
to collect, process, and manipulate unimaginable amounts of data.
It’s also one that has some serious flaws, as our technology is rarely
as powerful as our imaginations would like it to be. Yet the data
revolution we’re currently living through is still poised to impact our
lives in quite profound ways, including our privacy.
Despite the Precrime program’s reliance on human skills and
intuition, Minority Report is set in a future where big data has
made privacy a thing of the past—almost. As John Anderton passes
through public spaces, he’s bombarded by personal ads as devices
identify him from his retinal scan. And, like a slick salesperson
who knows his every weakness, they tempt him to indulge in some
serious retail therapy.
These ads are a logical extension of what most of us already
experience with online advertisements. Websites are constantly
sucking up our browsing habits and trying to second-guess what
we might be tempted to purchase, or which sites we might be
persuaded to visit. These online ads are based on a sophisticated
combination of browsing history, personal data, and machine
learning. Powerful algorithms are being trained to collect our
information, watch our online habits, predict what we might be
interested in, and place ads in front of us that, they hope, will
nudge our behavior. And it’s not only purchases. Increasingly, online
behavior is being used to find ways of influencing what people think and how they act—even down to how they vote. As I write this, we're still experiencing the fallout from Cambridge Analytica's manipulations of Facebook feeds that were designed to influence users, and there's growing concern over the use of fake news and social media to influence people's ideas and behaviors.
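The targeting half of this pipeline can be caricatured in a few lines. This is a deliberately crude sketch (the browsing history, keyword lists, and scoring are all invented for illustration); real systems use learned models over vastly richer data:

```python
# A caricature of ad targeting: score ad categories against a user's
# browsing history and serve the best match.
from collections import Counter

browsing_history = ["running shoes", "marathon training", "knee pain",
                    "running shoes", "energy gels"]

ad_keywords = {
    "sports gear": {"running", "shoes", "marathon", "gels"},
    "health":      {"knee", "pain", "training"},
    "electronics": {"phone", "laptop"},
}

# Count every word the user has searched for or browsed.
words = Counter(w for page in browsing_history for w in page.split())

def affinity(keywords):
    return sum(words[w] for w in keywords)

best_ad = max(ad_keywords, key=lambda ad: affinity(ad_keywords[ad]))
print(best_ad)  # -> "sports gear"
```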
Admittedly, targeted online messaging is still clumsy, but it’s getting
smarter and subtler. Currently it’s largely driven by the massive
amounts of data that organizations are collecting on our browsing
habits. But imagine if these data extended to everything we did—
where we are, who we’re with, what we’re doing, even what we’re
saying. We’re frighteningly close to a world where some system
somewhere holds data on nearly every aspect of our lives, and the
only things preventing the widespread use of these “engines of
persuasion” are our collective scruples and privacy laws.
Minority Report is surprisingly prescient when it comes to some
aspects of big data. It paints a future where what people do
in the real world as well as online is collected, analyzed, and
ultimately used in ways that directly affect them. In the movie,
these massive repositories of personal data are not used to
determine if you’re going to commit a crime—this remains the
sacred domain of humans in John Anderton’s world—but they are
used to nudge people’s behavior toward what benefits others more
than themselves.
This is, of course, what marketing is all about. Marketers use
information to understand how they can persuade people to act in
a certain way, whether this is to purchase organic food, or to buy
a new car, or to vote for a particular political candidate. Big data
massively expands the possibilities for manipulation and persuasion.
And this is especially the case when it’s coupled to machine
learning, and the increasing ability of artificial-intelligence-based
systems to join the data dots, and even interpolate what’s missing
from the data they do have. Here, we’re no longer just talking about
how big data combined with smart algorithms can help identify
future criminals and curtail their antisocial tendencies, but about
how corporations, governments, and others can subtly influence
people’s behavior to do what they want. It’s a subtler and more
Machiavellian approach to achieving what is essentially the same
thing—controlling people.
Frighteningly, the world portrayed in Minority Report is not that
far away. We still lack the ability to identify people through simple
and ubiquitous scans, but we’re almost there. Real-time facial
recognition, for instance, is almost at the point where, if you’re
captured on camera, the chances are that someone has the capability
of identifying and tracking you. And our digital fingerprint—the
sum total of the digital breadcrumbs we scatter around us in our
daily lives—is becoming easier to follow, and harder to cover up. As
ubiquitous identity monitoring is increasingly matched to massive
data files on every single one of us, we’re going to have to make
some tough decisions over how much of our personal freedom
we are willing to concede for the benefits these new technologies
bring.[^43]
Even more worrying, perhaps, is the number of people who are
already conceding their personal freedom without even thinking
about it. How many of us use digital personal assistants like
Siri, Google Home, or Alexa, or rely on cloud-connected home
automation devices, or even internet-connected cars? And how many
of us read the small print in the user agreement before signing up
for the benefits these technologies provide? We are surrounded
by an increasing number of devices that are collecting personal
data on us and combining it in ever-growing databases. And while
we’re being wowed by the lifestyle advantages these bring, they’re
potentially setting us up to be manipulated in ways that are so
subtle, we won’t even know they’re happening. But the use of big
data doesn’t stop there.
In 2003, a group of entrepreneurs set up the company Palantir,
named after J. R. R. Tolkien’s seeing-stones in Lord of the Rings.
The company excels at using big data to detect, monitor, and
predict behavior, based on myriad connections between what is
known about people and organizations, and what can be inferred
from the information that’s available. The company largely flew
under the radar for many years, working with other companies and
intelligence agencies to extract as much information as possible
out of massive data sets. But in recent years, Palantir’s use in
“predictive policing” has been attracting increasing attention. And in
May 2018, the grassroots organization Stop LAPD Spying Coalition
released a report raising concerns over the use of Palantir and other technologies by the Los Angeles Police Department for predicting where crimes are likely to occur, and who might commit them.[^44]
Palantir is just one of an increasing number of data collection and
analytics technologies being used by law enforcement to manage
and reduce crime. In the US, much of this comes under the banner
of the “Smart Policing Initiative,” which is sponsored by the US
Bureau of Justice Assistance. Smart Policing aims to develop and
deploy “evidence-based, data-driven law enforcement tactics and
strategies that are effective, efficient, and economical.” It’s an
initiative that makes a lot of sense, as evidence-based and data-driven crime prevention is surely better than the alternatives. Yet
there’s growing concern that, without sufficient due diligence,
seemingly beneficial data and AI-based approaches to policing could
easily slip into profiling and “managing people” before they commit
a criminal act. Here, we’re replacing Minority Report’s precogs with
massive data sets and AI algorithms, but the intent is remarkably
similar: Use every ounce of technology we have to predict who
might commit a crime, and where and when, and intervene to
prevent the “bad” people from causing harm.
Naturally, despite the benefits of data-driven crime prevention (and
they are many), irresponsible use of big data in policing opens
the door to unethical actions and manipulation, just as is seen in
Minority Report. Yet here, real life is perhaps taking us down an
even more worrying path.
One of the more prominent concerns raised around predictive
policing is the dangers of human bias swaying data collection and
analysis. If the designers of predictive policing systems believe they
know who the “bad people” are, or even if they have unconscious
biases that influence their perceptions, there’s a very real danger
that crime prevention technologies end up targeting groups and
neighborhoods that are assumed to have a higher tendency toward
criminal behavior. This was at the center of the Stop LAPD Spying
Coalition report, where there were fears that “black, brown, and
poor” communities were being disproportionately targeted, not
because they had a greater proportion of likely criminals, but
because the predictive systems had been trained to believe this. Just like the Veris Prime test that opened this chapter, which is designed to predict white-collar criminal tendencies, there are real dangers
that predictive policing systems will end up targeting people who
are assumed to have bad tendencies, whether they do or not.
The hope is, of course, that we learn to wield this tremendously
powerful technology responsibly and humanely because, without
a doubt, if it’s used wisely, big data could make our lives safer and
more secure. But this hope has to be tempered by our unfailing
ability to delude ourselves in the face of evidence to the contrary,
and to justify the unethical and the immoral in the service of an
assumed greater good.
And this is a theme that also echoes through our next movie:
Limitless.
[^32]: It has to be said that, had Anderton had his head screwed on, it might have occurred to him that tracking down the person he was allegedly going to murder to make sure he didn’t, in fact, murder him, wasn’t the smartest move in the book.
[^33]: Ian Sample (2017), “Brain scans can spot criminals, scientists say.” The Guardian. Published online March 13, 2017. https://www.theguardian.com/science/2017/mar/13/brain-scans-can-spot-criminals-scientists-say
[^34]: The original research was published in the Proceedings of the National Academy of Sciences. Vilares, I., et al. (2017). “Predicting the knowledge—recklessness distinction in the human brain.” Proceedings of the National Academy of Sciences 114(12): 3222-3227. http://doi.org/10.1073/pnas.1619385114
[^35]: In the study, thirty-six students in a psychology class (thirty-three women and three men) were shown mug shots of thirty-two Caucasian males. They were told that some were criminals, and they were asked to assess—from the photos alone—whether each person had committed a crime; whether they’d committed a violent crime; if it was a violent crime, whether it was rape or assault; and if it was non-violent, whether it was arson or a drug offense. Within the limitations of the study, the participants were more likely to correctly identify criminals than incorrectly identify them from the photos. Not surprisingly, perhaps, this led to a slew of headlines along the lines of “Criminals Look Different From Non-criminals” (this one from a blog post on Psychology Today). But despite this, the results of the study are hard to interpret with any degree of certainty. It’s not clear what biases may have been introduced, for instance, by having the photos evaluated by a mainly female group of psychology students, or by only using photos of white males, or even whether there was something associated with how the photos were selected and presented, and how the questions were asked, that influenced the results. The results did seem to indicate that, overall, the students were successful in identifying photos of convicted criminals in this particular context. But the study was so small, and so narrowly defined, that it’s hard to draw any clear conclusions from it. However, there is a larger issue at stake with this and similar studies, and this is the ethical issue with carrying out and publicizing the results of such research in the first place. Here, the very appropriateness of asking if we can predict criminal behavior brings us back to the earlier study on intent versus reckless behavior, and to the underlying premise in Minority Report.
[^36]: Satoshi Kanazawa (2011) “Criminals Look Different From Non-criminals,” Psychology Today. Posted March 13, 2011. https://www.psychologytoday.com/blog/the-scientific-fundamentalist/201103/criminals-look-different-noncriminals
[^37]: In a 2008 study, researchers showed that fMRI scans of subjects’ brains indicated what decision they were going to make in a specific situation, some ten seconds before they actually made it. Eerily, this meant that the scientists knew what the subjects were going to do before they themselves realized. The research was published in the journal Nature Neuroscience. Soon, C. S., et al. (2008). “Unconscious determinants of free decisions in the human brain.” Nature Neuroscience 11: 543. http://doi.org/10.1038/nn.2112
[^38]: In this case the research—published in 2017 in the journal Cell—showed that facial images seen by macaque monkeys could be reconstructed by monitoring specific brain cells. Chang, L. and D. Y. Tsao (2017). “The Code for Facial Identity in the Primate Brain.” Cell 169(6): 1013-1028.e14. http://doi.org/10.1016/j.cell.2017.05.011
[^39]: This study by Emily Falk and colleagues was published in the Journal of Neuroscience. Falk, E. B., et al. (2010). “Predicting Persuasion-Induced Behavior Change from the Brain.” The Journal of Neuroscience 30(25): 8421. http://doi.org/10.1523/JNEUROSCI.0063-10.2010
[^40]: The study was highly contentious and resulted in a significant public and academic backlash, leading the paper’s authors to state in an addendum to the paper, “Our work is only intended for pure academic discussions; how it has become a media consumption is a total surprise to us.”
[^41]: Xiaolin Wu and Xi Zhang’s response to critics of their work can be read at https://arxiv.org/abs/1611.04135
[^42]: Beyond the cadre of science fiction writers who have dabbled with this idea over the years, the philosopher Nick Bostrom argued in a 2003 paper in Philosophical Quarterly that we are already living in a computer simulation (available at https://www.simulation-argument.com/simulation.pdf). This idea appeared to be debunked in 2017 by two researchers from Oxford University whose theoretical research suggested there is not enough matter in the universe to create a classical computer system capable of simulating it. What is even more interesting is that, despite their paper being near-impenetrable to the vast majority of people on Earth, it still got a sizable amount of press coverage. You can read it—or attempt to—in the journal Science Advances. Ringel, Z. and D. L. Kovrizhin (2017). “Quantized gravitational responses, the sign problem, and quantum complexity.” Science Advances 3(9). http://doi.org/10.1126/sciadv.1701758
[^43]: In Europe, the recently introduced General Data Protection Regulation, or GDPR, addresses some of these concerns as it sets out to protect the privacy of individuals in a data-rich society. But experts are skeptical as to the extent to which it can truly prevent massive amounts of data being collected and used against individuals.
[^44]: The report “Dismantling Predictive Policing in Los Angeles” was released on May 8, 2018, and garnered considerable press attention for its echoes of a Minority-Report-like approach to pre-crime. It’s accessible at https://stoplapdspying.org/wp-content/uploads/2018/05/Before-the-Bullet-Hits-the-Body-May-8-2018.pdf