How much of your freedom would you trade for safety? It is one of the oldest questions in political philosophy, but Films from the Future shows how emerging technologies are giving it new and unsettling dimensions. Through two films in particular, the book examines what happens when the infrastructure of watching, predicting, and controlling is built into the fabric of everyday life.
Minority Report imagines a world where murders can be predicted and prevented before they occur. The Precrime program in the film has virtually eliminated homicide in Washington, DC, and is on the verge of going nationwide. On the surface, it looks like one of the greatest advances in public safety ever achieved.
But the book digs beneath that surface. The Precrime system depends on three genetically modified humans, the precogs, who are sedated, sequestered, and wired into a monitoring apparatus that treats their consciousness as a tool. Those identified as future criminals are arrested and incarcerated without trial, sentenced on the basis of something they have not yet done and, the film eventually reveals, might never have done at all.
The book connects this to real-world developments in predictive policing and algorithmic risk assessment. It notes that companies are already marketing tools that claim to predict criminal behavior, and that the data sets and assumptions behind these tools carry all the biases of the societies that produced them. The author's own experience taking one such assessment, a "Trust Index" that classified him and his academic colleagues as potential felons, illustrates how easily these systems generate false positives when their training data is flawed.
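The false-positive problem has a simple statistical core: when the behavior being predicted is rare, even a highly accurate classifier will flag mostly innocent people. The sketch below is a hypothetical illustration of that base-rate arithmetic; the numbers are assumptions chosen for the example, not figures from the book or from any real assessment tool.

```python
# A minimal sketch (not from the book) of why risk-prediction tools
# generate so many false positives when the behavior they target is
# rare. All numbers are illustrative assumptions.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """Probability that a person flagged as 'high risk' is a true positive."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose a tool is 99% accurate in both directions, but only
# 1 in 10,000 people will actually commit the predicted act.
ppv = positive_predictive_value(base_rate=0.0001,
                                sensitivity=0.99,
                                specificity=0.99)
print(f"Share of flagged people who are true positives: {ppv:.2%}")
# Roughly 1% -- about 99 of every 100 people flagged are false positives,
# before any bias in the training data makes matters worse.
```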
More fundamentally, Minority Report raises the question of whether it is ever legitimate to punish someone for something they have not done. The film's Precrime system operates on the assumption that its predictions are infallible, but the existence of "minority reports," alternative futures seen by a dissenting precog, reveals that the system is built on a convenient lie. The book uses this to challenge the broader assumption that algorithmic prediction can ever be free of error or bias.
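The minority-report problem has a direct analogue in machine learning: when independent predictors disagree about a case, that disagreement is itself a signal of uncertainty, and suppressing it misrepresents a contested prediction as a settled fact. The sketch below is a hypothetical illustration of that point, not anything drawn from the book or the film's fictional system.

```python
# Hypothetical sketch: treating disagreement among predictors as an
# uncertainty signal rather than discarding the dissenting "report".

def ensemble_verdict(predictions):
    """Majority vote over independent predictions (True = 'will offend'),
    plus a flag recording whether any predictor dissented."""
    votes_for = sum(predictions)
    verdict = votes_for > len(predictions) / 2
    dissent = 0 < votes_for < len(predictions)
    return verdict, dissent

# Two "precogs" foresee a murder; the third sees a different future.
verdict, dissent = ensemble_verdict([True, True, False])
print(f"Majority verdict: {verdict}; dissenting report exists: {dissent}")
# Acting on the verdict while concealing the dissent is the film's
# "convenient lie": a contested prediction presented as certainty.
```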
Ghost in the Shell approaches surveillance and control from another angle. In its future world, where cybernetic augmentation is widespread, being connected means being vulnerable. The film's characters inhabit bodies that can be hacked, memories that can be manipulated, and identities that can be stolen or overwritten.
The book draws this out into a discussion of what privacy means when the boundary between self and network dissolves. If your augmented body is connected to the internet, who has access to the data it generates? If your memories are stored digitally, who can alter them? Ghost in the Shell presents a world where the most intimate aspects of personhood (thought, perception, memory) become potential targets for those with the technical capability to exploit them.
This is not purely speculative. The book notes real-world developments in brain-computer interfaces and biometric data collection that are beginning to raise precisely these questions. As our devices and eventually our bodies become more deeply networked, the attack surface for surveillance and manipulation expands in ways that previous generations never had to contemplate.
Both films reveal that surveillance is never a neutral activity. It is always embedded in power relationships. In Minority Report, the system that watches for crime is controlled by people with their own interests and vulnerabilities, and when the program's founder uses it to cover up his own crime, the corruption at its heart is exposed. In Ghost in the Shell, the ability to hack augmented bodies is wielded by those with resources and technical sophistication against those who are vulnerable.
The book argues that any discussion of surveillance technology must grapple with this asymmetry. The question is not just whether algorithms can be accurate, but who controls them, who they are aimed at, and whose interests they serve. Historical precedent suggests that surveillance tools, no matter how well-intentioned, tend to be deployed most aggressively against marginalized communities.
The book does not argue that surveillance technologies are inherently wrong. It recognizes that there are legitimate uses for predictive analytics and data-driven decision-making. But it insists that the safeguards, the transparency, and the accountability mechanisms must be at least as sophisticated as the technologies themselves. Without them, we risk building a world where the infrastructure of control is so deeply embedded that opting out is no longer possible.
For the technologies behind these concerns, see Predictive Algorithms, Ubiquitous Surveillance, and Brain-Computer Interfaces. For how these issues connect to individual rights, see Informed Consent and Autonomy.