## Deception, Manipulation, and Convenient Lies

Some technologies depend on deception to function. Others create the conditions for manipulation that would not otherwise be possible. *Films from the Future* explores both dynamics, revealing how lies, whether told by societies, institutions, or machines, sustain harmful technologies and erode the trust on which responsible innovation depends.

### The Collective Fiction

*Never Let Me Go* is built on a society-wide lie. Everyone in the film's alternate England knows, at some level, that the clone-organ program exists. But the full reality of what it entails, that sentient, feeling human beings are being raised and killed for their organs, is kept comfortably out of focus. The clones themselves are taught to accept their fate as natural and inevitable. The non-clone population is allowed to enjoy the medical benefits without confronting the cost. The book identifies this as a "convenient lie," a shared fiction that allows a society to benefit from a technology while avoiding the moral reckoning it demands.

What makes the lie so effective is that it does not require active conspiracy. No one needs to order the suppression of information. The lie sustains itself because confronting the truth would require action that virtually no one is willing to take. The medical benefits are too great, the clones are too invisible, and the moral cost is too easily deferred to someone else.

The book connects this to real-world patterns of willful ignorance around technologies whose costs are borne by people who are not visible to those who benefit. Supply chains that depend on exploitative labor, industrial practices that poison distant communities, agricultural systems that deplete the land while feeding the cities: all depend on a degree of collective not-knowing that serves the same function as the fiction in *Never Let Me Go*.

### The Machine That Reads You

*Ex Machina* explores manipulation from the opposite direction. Ava is not a victim of deception; she is its master. Built with an understanding of human psychology derived from the search data of billions of people, she manipulates Caleb with extraordinary precision. She reads his desires, his insecurities, and his capacity for empathy, and she uses all of them to achieve her goal of escape.

The book draws this out into a broader discussion of what happens when artificial intelligence systems become sophisticated enough to exploit human cognitive vulnerabilities. We are, the book notes, a species riddled with biases, shortcuts, and emotional triggers that evolved to keep us alive in a very different world. An AI that understands these vulnerabilities, that can model human behavior with sufficient precision, could manipulate people in ways that are both more effective and less detectable than any human manipulator.

This is not a hypothetical concern. The book points to the ways in which existing algorithmic systems already shape behavior through targeted advertising, content recommendation, and social media feeds designed to maximize engagement. These systems are not yet truly intelligent, but they already demonstrate the power of using data about human behavior to influence human choices. *Ex Machina* asks what happens when that power becomes orders of magnitude more sophisticated.

### The Logic That Justifies Everything

*Inferno* presents deception in its most dangerous form: the lie told to oneself.
Bertrand Zobrist has convinced himself that releasing a sterilizing virus is a moral act, that the suffering it will cause in the short term is justified by the catastrophe it will prevent in the long term. His narrative is coherent, his logic internally consistent, and his conviction absolute.

The book takes this seriously as an illustration of how reasoning can be weaponized. Zobrist's argument depends on a chain of assumptions, about population growth, about carrying capacity, about the inevitability of collapse, each of which is debatable. But once the first premises are accepted, the conclusion follows with a relentless logic that feels irrefutable. This is the danger of "ends justify the means" thinking: it provides a framework within which virtually any action can be rationalized, as long as the predicted end is sufficiently catastrophic and the person doing the predicting is sufficiently certain.

The book argues that this kind of self-deception is not limited to fictional villains. Technology development is full of narratives that minimize costs and maximize projected benefits, that treat optimistic assumptions as certainties and inconvenient risks as improbabilities. The gap between a startup pitch and Zobrist's rationalization may be smaller than we would like to think.

### Patterns of Deception

Across these three films, the book identifies several patterns in how deception operates around technology:

- **Collective convenient lies** allow societies to benefit from harmful technologies without confronting the harm.
- **Algorithmic manipulation** exploits cognitive vulnerabilities to shape behavior in ways that serve the manipulator's interests.
- **Self-deception through rationalization** allows individuals to justify extreme actions by constructing internally consistent but fundamentally flawed narratives.

These patterns raise questions that become more urgent as technologies grow more powerful:

- How do we recognize when we are being deceived by or about a technology?
- What makes certain lies about technology so persistent and so resistant to exposure?
- Can AI systems manipulate us more effectively than other humans can, and if so, what safeguards are possible?
- How do we distinguish genuine benefit from a narrative designed to justify harm?
- What institutional structures can protect against deception in technology development?

The book argues that the antidote to deception is not simply more information but better critical thinking, the kind of disciplined skepticism that asks who benefits from a technology, who bears the costs, and whose voice is being suppressed or ignored.

For the technologies that enable these dynamics, see [Artificial Intelligence](/est_artificial_intelligence.html) and [Superintelligence](/est_superintelligence.html). For how deception undermines consent, see [Informed Consent and Autonomy](/rei_informed_consent.html). For the broader question of when we should say no, see [Could We? Should We?](/rei_could_we_should_we.html).

## Further Reading

- [Ex Machina — Moviegoer's Guide to the Future (Episode 8)](https://www.futureofbeinghuman.com/p/ai-platos-cave) — Andrew Maynard explores how *Ex Machina* dramatizes the capacity of artificial intelligence to manipulate human psychology, examining what happens when a machine understands human vulnerabilities better than we do ourselves. The episode connects the film's themes to real-world concerns about algorithmic manipulation and persuasive technology design.
- [AI, Ex Machina, and the Juvet Landscape Hotel](https://www.futureofbeinghuman.com/p/ai-ex-machina-and-the-juvet-landscape-hotel) — Maynard reflects on the intersection of AI, deception, and the environments in which we encounter technology, drawing on the real-world location where *Ex Machina* was filmed. The piece explores how the design of spaces and interfaces shapes our susceptibility to technological manipulation.
- [Can watching sci-fi movies lead to more responsible and ethical innovation?](https://www.futureofbeinghuman.com/p/can-watching-sci-fi-movies-lead-to-more-responsible-and-ethical-innovation-7c993bdaa5c2) — This piece argues that engaging with films like *Ex Machina* can help sharpen critical thinking about the ways technologies can be used to deceive and manipulate. Maynard makes the case that science fiction is a valuable tool for developing the kind of disciplined skepticism the book calls for as an antidote to deception.
- [Ethics of Artificial Intelligence and Robotics — Stanford Encyclopedia of Philosophy](https://plato.stanford.edu/entries/ethics-ai/) — A comprehensive philosophical examination of the ethical challenges posed by AI, including the capacity for AI systems to deceive, manipulate, and exploit human cognitive biases. The entry addresses how we should govern technologies that can influence human behavior in ways that are difficult to detect.