A video shows a politician saying something inflammatory. A photo shows a celebrity in a compromising situation. An audio recording captures a CEO admitting fraud. Any of these could be real. Any of them could be fabricated in minutes using freely available AI tools. And increasingly, there is no reliable way to tell the difference.
For as long as we have been human, we have relied on our senses as the ultimate arbiter of reality. Photography extended that reliance: a photograph was understood as a record of something that happened. Video reinforced it further. The phrase "the camera doesn't lie" captured a deep assumption: visual evidence is trustworthy.
Deepfakes and synthetic media are destroying that assumption. AI-generated video, audio, and images are now so convincing that forensic experts struggle to distinguish them from reality. Detection tools exist but are in an arms race with generation tools — and generation is winning. The C2PA provenance standard, watermarking, and forensic analysis all have fundamental limitations. The asymmetry is structural: it will always be easier to generate convincing fakes than to verify every piece of content.
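The provenance idea can be made concrete. The sketch below is a simplified illustration, not the C2PA standard itself: real C2PA manifests are embedded in the media file and signed with X.509 public-key certificates, whereas this toy uses a shared-key HMAC from the Python standard library purely to stay self-contained. The names and metadata fields are invented for illustration. What it does show is the core mechanism and its limit: a signature binds metadata to a hash of the content, so any alteration breaks verification, but it proves nothing about content that was never signed.

```python
import hashlib
import hmac
import json

def sign_manifest(content: bytes, metadata: dict, key: bytes) -> dict:
    """Bind metadata (device, edit history, etc.) to the content's hash.
    Real C2PA uses public-key signatures; HMAC stands in here to keep
    the sketch dependency-free."""
    digest = hashlib.sha256(content).hexdigest()
    manifest = {"content_hash": digest, "metadata": metadata}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Fails if either the content or its claimed history was altered."""
    if hashlib.sha256(content).hexdigest() != manifest["content_hash"]:
        return False  # pixels no longer match the signed hash
    payload = json.dumps(
        {"content_hash": manifest["content_hash"],
         "metadata": manifest["metadata"]},
        sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

key = b"demo-signing-key"          # hypothetical key for illustration
photo = b"raw bytes of an image"
m = sign_manifest(photo, {"device": "camera-x", "edits": []}, key)
assert verify_manifest(photo, m, key)             # untouched content verifies
assert not verify_manifest(photo + b"!", m, key)  # any alteration breaks it
```

Note the structural asymmetry the sketch exposes: verification can only vouch for content whose creator opted in to signing. A fabricated video simply ships without a manifest, and absence of provenance is not evidence of fakery.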
The consequences extend beyond obvious deception. The "liar's dividend" — the ability of anyone caught on genuine evidence to claim it was fabricated — is perhaps more damaging than the fakes themselves. When nothing can be definitively proven real, everything can be plausibly denied. This corrodes the evidentiary foundation that journalism, courts, elections, and personal trust depend on.
The information environment compounds the problem. AI-generated text can produce convincing articles, social media posts, and comments at scale. Recommendation algorithms surface content based on engagement, not accuracy. The result is an information ecosystem optimized for attention rather than truth, in which synthetic content competes with real content on equal footing — or better, since synthetic content can be optimized to be more engaging.
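The structural point about engagement-driven ranking can be sketched in a few lines. This is a toy model with invented posts and scores, not any real platform's algorithm: the only thing it demonstrates is that when the ranking objective contains no accuracy term, a synthetic post tuned for engagement outranks genuine reporting by construction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # what the ranker optimizes (hypothetical scores)
    accuracy: float              # ground truth the ranker never sees

feed = [
    Post("sober correction of a viral claim", 0.2, 0.95),
    Post("outrage-bait synthetic clip", 0.9, 0.05),
    Post("original on-the-ground reporting", 0.4, 0.90),
]

# The sort key references engagement only; accuracy is invisible to it.
ranked = sorted(feed, key=lambda p: p.predicted_engagement, reverse=True)
assert ranked[0].accuracy < ranked[-1].accuracy  # least accurate post wins
```

The fix is not obvious from the code, which is the point: adding an accuracy term to the objective presupposes a reliable accuracy signal, and that signal is exactly what synthetic media erodes.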
The book's Deception, Manipulation, and Convenient Lies framework — developed partly through Ex Machina's exploration of how AI can manipulate human trust — anticipated this terrain. Ava's manipulation of Caleb in the film is precisely the kind of deception that AI-generated media enables at scale: convincing performance that is designed to produce a specific response in the observer.
Science, Belief, and Ways of Knowing — explored through Contact — becomes urgently practical. The tension between faith and evidence that Contact dramatizes takes on new meaning when the nature of evidence itself is compromised. If we can no longer trust our eyes and ears, what can we trust? The book's answer — that rigorous, assumption-counting thinking (Occam's Razor from Hype vs. Reality) is our best tool — becomes not a philosophical principle but a survival skill.
This is perhaps the question where the book's frameworks are most needed, because the instinctive human responses — trust nothing, trust everything, or check out entirely — are all inadequate. The book's voice would be: slow down, count the assumptions, ask who benefits from this being believed, and maintain the discipline of evidence even when evidence itself is under assault.