Everyone seems to agree that AI needs some form of governance. The EU has passed the AI Act. The US has issued executive orders. China has its own regulatory framework. But the question that haunts every effort is whether regulation will strangle the genuine benefits — medical breakthroughs, scientific discovery, productivity gains, accessibility tools — while failing to prevent the genuine harms.
The difficulty is structural, and the book names it precisely: the Collingridge dilemma. Early in a technology's development, regulation is easy because the technology is malleable — but we do not yet understand it well enough to know what to regulate. Later, when the consequences become clear, regulation is hard because the technology is entrenched in systems, markets, and habits.
AI sits in an awkward middle stage. We know enough to identify serious risks — algorithmic bias, labor displacement, deepfakes, concentration of power, biosecurity threats — but the technology is evolving so rapidly that regulations written today may be irrelevant or counterproductive by the time they are implemented. The EU AI Act, the most comprehensive AI regulation to date, was negotiated over years during which the technology changed fundamentally. It was largely designed for a pre-ChatGPT world.
The interest landscape is also complex. Technology companies argue that heavy regulation will push innovation to less regulated jurisdictions and consolidate power in established players who can afford compliance costs. Civil society organizations argue that self-regulation by technology companies has consistently failed. Governments are torn between competitiveness (wanting their domestic AI industry to lead) and protection (wanting to shield their populations from harm).
The book's Responsible Innovation in Practice framework offers something more nuanced than "regulate" or "don't regulate." It argues for embedding ethical considerations into the innovation process itself — not as an afterthought or an external constraint, but as a core part of how technologies are developed. This is distinct from regulation, which operates after the fact, and from self-regulation, which operates at the discretion of the innovator.
The Risk and Innovation framework also helps. The book recognizes that innovation inherently involves risk, and that attempting to eliminate all risk also eliminates innovation. The question is not whether AI should be risk-free — it should not and cannot be — but how to manage risk proportionally, transparently, and with accountability.
The book's Could We? Should We? question reframes the regulation debate. The question is not just "how do we regulate AI?" but "what kind of AI development do we want, and what are we willing to accept to get it?" That is a democratic question, not a technical one. See "Why does it feel like nobody asked me about any of this?"