In 2018, deepfakes were a curiosity — crude face-swaps that could fool the inattentive but not anyone looking closely. By 2025, AI-generated video, audio, and images have become so convincing that even experts struggle to tell them apart from reality. This is not just a technical achievement. It is a social earthquake.
The book explores deception and manipulation through films like Ex Machina and its broader theme of Deception, Manipulation, and Convenient Lies. What it could not have anticipated is the speed at which the tools of deception would become universally accessible.
Generative adversarial networks (GANs) were the initial engine, but diffusion models and transformer-based architectures have now made it possible to generate photorealistic images, video, and audio from text descriptions alone. A person's voice can be cloned from a few seconds of audio. A person's likeness can be placed in any scenario. Entire videos of events that never happened can be produced on a laptop in minutes.
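To make "from text descriptions alone" concrete, here is a minimal sketch using the open-source diffusers library. The checkpoint name and prompt are illustrative, and the code assumes a machine with a GPU and the library installed; any compatible text-to-image checkpoint would behave similarly.

```python
# A minimal sketch of text-to-image generation with a diffusion model.
# The checkpoint and prompt are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a GPU is available

# A single sentence is the entire input; no camera or photograph is involved.
image = pipe("a press conference that never happened, photorealistic").images[0]
image.save("synthetic.png")
```

The point of the sketch is not the specific model but the interface: a sentence in, a photorealistic image out, on consumer hardware.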
The detection side is losing the arms race. Several approaches exist — the Coalition for Content Provenance and Authenticity (C2PA) embeds metadata in files to verify their origin, digital watermarking attempts to tag AI-generated content, and forensic analysis tools look for statistical signatures. But each approach has fundamental limitations. C2PA depends on voluntary adoption. Watermarks can be stripped. Forensic signatures become less reliable as generation models improve. The asymmetry is structural: generating convincing fakes is getting cheaper and easier while detecting them is getting harder and more expensive.
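As a toy illustration of the forensic approach, the sketch below computes one simple "statistical signature": the share of an image's Fourier-spectrum energy in high frequencies, a band where early GAN outputs often showed anomalies. The file name and band boundaries are arbitrary assumptions for illustration, and no reliable decision threshold exists; modern generators largely suppress such artifacts, which is exactly the asymmetry described above.

```python
# Toy forensic check: measure how much of an image's spectral energy
# lies outside the central low-frequency band. Illustrative only.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Fraction of Fourier-spectrum energy outside the central low band."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4  # arbitrary choice of "low-frequency" band
    low = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return 1.0 - low / spectrum.sum()

ratio = high_freq_energy_ratio("suspect.png")  # hypothetical file
print(f"high-frequency energy ratio: {ratio:.3f}")
```

A signature this crude stops working the moment generators learn to match natural image statistics, which is why forensic detection keeps getting harder while generation keeps getting cheaper.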
What makes this topic genuinely complex is that synthetic media is not inherently harmful. De-aging actors in films, voice synthesis for people who have lost the ability to speak, creative visual effects that once required Hollywood budgets — these are legitimate and often beneficial applications. Posthumous performances raise their own ethical questions, but they are not in the same category as election disinformation or non-consensual intimate imagery.
The challenge is that the same underlying technology serves all of these purposes, and there is no technical mechanism that reliably distinguishes creative use from weaponized use. This is a dual-use problem of a kind the book explores extensively through gain-of-function research and biosecurity, but applied to information rather than biology.
The deepest consequence is epistemological. "Seeing is believing" has been the default human heuristic for millennia. When photographic and video evidence can be fabricated at will, the foundation of shared reality erodes. This affects journalism, courts of law, elections, personal relationships, and the basic social trust that allows institutions to function.
Perhaps most insidiously, the existence of deepfakes creates what researchers call the "liar's dividend" — the ability of anyone caught on genuine video doing something wrong to claim the video is fake. The technology does not have to deceive everyone to be damaging. It just has to create enough doubt that certainty becomes impossible.
The book's framework for Science, Belief, and Ways of Knowing — explored through Contact and the tension between evidence and faith — becomes urgently practical when the nature of evidence itself is undermined. See also "How do I know what's real anymore?"