Self-driving cars have been "five years away" for over a decade. That running joke conceals a genuinely complex story about what happens when a transformative technology meets the messiness of the real world — a story the book's frameworks are uniquely suited to illuminate.
In 2018, autonomous vehicles were widely expected to be commonplace by the mid-2020s. That has not happened, but neither has the field failed. Instead, it has split into distinct approaches, each resting on a different philosophy of what autonomy requires.
Waymo, a subsidiary of Alphabet, has taken a sensor-heavy approach (lidar, radar, and cameras) and operates commercial robotaxi services in several US cities. Its vehicles have driven millions of miles without a human behind the wheel, compiling a safety record that, statistically, appears better than that of the average human driver in the areas where they operate. Zoox, owned by Amazon, is building purpose-designed vehicles with no steering wheel, intended for dense urban environments. Tesla has taken a radically different path, relying primarily on cameras and AI-based computer vision and betting that sufficiently advanced software can do what other companies achieve with more expensive hardware. Chinese companies, including Baidu's Apollo and several startups, are pursuing their own variants.
The reality in 2025-2026 is that autonomous vehicles exist and work, but only in constrained environments. The gap between "works in mapped cities with good weather" and "works everywhere humans drive" remains wide. Each company's approach carries a different risk profile, and the field's hype cycle has been one of the most dramatic examples of the pattern the book describes in Hype vs. Reality.
Autonomous vehicles are a remarkably clean case study for several of the book's frameworks simultaneously.
The trolley problem, a staple of self-driving-car ethics discussions, is a genuine dilemma but largely the wrong framing, and the book's emphasis on Complexity and Unintended Consequences explains why. The real ethics of autonomous vehicles is less about "who should the car hit?" and more about systemic questions: how safe is safe enough? If autonomous vehicles are statistically safer than human drivers but occasionally fail in ways humans would not, is that an acceptable trade? Who sets the threshold?
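To make that trade concrete, consider a back-of-the-envelope calculation. The sketch below uses purely hypothetical numbers: the crash rates, the fleet mileage, and the share of "novel" failures are illustrative assumptions, not measured data. It shows how a fleet can reduce crashes in aggregate while still producing a nonzero count of failures a human driver would likely have avoided.

```python
# A minimal back-of-the-envelope sketch of "how safe is safe enough?"
# All numbers are hypothetical and chosen only for illustration.

MILES_PER_YEAR = 100_000_000          # hypothetical annual fleet mileage

human_crash_rate = 4.0 / 1_000_000    # hypothetical: crashes per mile, human drivers
av_crash_rate    = 2.5 / 1_000_000    # hypothetical: crashes per mile, AV fleet
novel_failure_share = 0.3             # hypothetical: fraction of AV crashes that a
                                      # human driver would likely have avoided

human_crashes = human_crash_rate * MILES_PER_YEAR
av_crashes = av_crash_rate * MILES_PER_YEAR
novel_crashes = av_crashes * novel_failure_share

print(f"Expected crashes, human baseline: {human_crashes:,.0f}")
print(f"Expected crashes, AV fleet:       {av_crashes:,.0f}")
print(f"  of which 'novel' failures:      {novel_crashes:,.0f}")
print(f"Net crashes avoided:              {human_crashes - av_crashes:,.0f}")
```

With these invented numbers, the fleet avoids 150 crashes a year on net yet still causes 75 that a human would plausibly have prevented. Whether that counts as "safe enough" is a policy judgment the statistics alone cannot settle, which is exactly why the threshold question matters more than the trolley problem.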
The liability question is genuinely novel. When a human driver causes an accident, liability is relatively clear. When an autonomous system causes one, the chain of responsibility splinters across the software developer, the car manufacturer, the mapping company, and the regulatory body that certified the system. Existing legal frameworks are poorly equipped for this.
Labor displacement is not hypothetical. Long-haul trucking, taxi driving, and ride-hailing employ millions of people globally. The book's discussion of automation and the "disposable workforce" in Elysium anticipated exactly this kind of transition — and the social question of what happens to people whose livelihoods depend on driving is as important as the technology itself.
And the permissionless innovation dimension is stark. Companies like Tesla have effectively used their customers as beta testers for autonomous driving technology on public roads, a decision that invites close scrutiny under the book's Permissionless Innovation framework.