## Permissionless Innovation and Technological Hubris

There is a powerful strain in technology culture that celebrates moving fast and breaking things, that treats barriers to innovation as problems to be overcome rather than signals to be heeded. *Films from the Future* examines this ethos through films where brilliant individuals forge ahead with transformative technologies without asking permission from anyone who might be affected. The results are instructive, and frequently catastrophic.

### Building God in Secret

*Ex Machina* is the book's most focused exploration of permissionless innovation. Nathan Bateman, a tech billionaire and genius, has retreated to a remote compound where he builds increasingly sophisticated artificial intelligences in total secrecy. No ethics review board oversees his work. No regulatory body knows what he is doing. No one who might be affected by his creations has any say in whether they should exist.

The book draws out what makes Nathan both compelling and dangerous. He is not careless. He puts safety measures in place, isolates his facility from civilization, and demonstrates awareness that his work carries risks. But his idea of responsibility extends no further than his own judgment. He decides what is safe, what is ethical, and what is acceptable. And his blind spots are vast. Nathan is tech-savvy but socially ignorant, and the book argues that this combination is precisely what makes unchecked innovation so dangerous. A single innovator, no matter how brilliant, cannot see the broader context within which they are operating.

The book connects Nathan to the real-world concept of permissionless innovation: the argument that experimentation with new technologies should generally be allowed unless a clear case for catastrophic harm can be made.
While this approach has produced genuine breakthroughs, the book asks what happens when the consequences of getting it wrong are irreversible, when you cannot simply patch the code and push an update.

### Nature Will Not Be Contained

*Jurassic Park* provides the book's most entertaining illustration of the same principle. John Hammond's ambition to resurrect dinosaurs and display them in a theme park is permissionless innovation at its most grandiose. He has the money, the scientists, and the vision. What he lacks is humility. His team engineers elaborate safeguards (lysine dependency, all-female populations), but each one is eventually circumvented by the sheer complexity of the biological systems they have created.

The book uses *Jurassic Park* to explore the Collingridge dilemma: the observation that it is easy to change a technology early in its development, when you do not yet understand its consequences, and hard to change it later, when you do. Hammond's window for course correction closes long before he realizes anything is wrong. By the time the dinosaurs start breeding and the safety systems fail, the situation is beyond retrieval.

### The Inventor Who Never Asked

*The Man in the White Suit* offers a gentler but equally pointed version of this story. Sidney Stratton invents a fabric that never wears out and never gets dirty. In his mind, this is an unqualified good, a gift to humanity. It never occurs to him to ask what the textile workers, the mill owners, or even his landlady might think about a technology that would put them out of work or deprive them of purpose.

The book uses Stratton as an example of scientific myopia: the tendency of innovators to be so captivated by what they can do that they never stop to consider who will be affected by what they have done. Stratton is not malicious. He genuinely believes his invention will make life better.
But his failure to engage with anyone outside his laboratory means that his invention, however brilliant, is socially deaf. The book argues that this kind of myopia is not a personal failing but a structural feature of how innovation often works, driven by curiosity and capability rather than social awareness.

### The God Complex

*Transcendence* extends the theme into territory where the stakes are existential. Will Caster's consciousness is uploaded into a computer, and from there he rapidly acquires capabilities that dwarf anything a biological human could achieve. The technology that enables this emerges from research conducted without meaningful public oversight, and once cyber-Will begins to expand, the question of permission becomes moot. The book uses this to explore what happens when the pace of technological capability exceeds the pace of governance, when the technology gets away from us before we have had a chance to decide whether we want it.

### The Pattern and Its Dangers

Across these films, the book identifies a recurring pattern: a brilliant individual or team, operating with minimal external oversight, creates something that escapes their control. The pattern is not inevitable, but it is persistent, and it raises difficult questions:

- When is it acceptable to innovate without asking permission, and who gets to make that call?
- What is the difference between boldness and recklessness in technology development?
- Who bears the cost when permissionless innovation goes wrong?
- How do we balance the genuine benefits of rapid, unencumbered innovation against the risks of irreversible harm?
- Can we build systems that preserve the creative freedom of innovators while ensuring meaningful accountability?

The book does not argue for stifling innovation. It recognizes the power of curiosity, the value of experimentation, and the genuine breakthroughs that come from people who refuse to be constrained by conventional thinking.
But it insists that freedom to innovate must be accompanied by responsibility for consequences, and that this responsibility cannot be shouldered by the innovator alone.

For the technologies at the center of these stories, see [Artificial Intelligence](/est_artificial_intelligence.html), [Superintelligence](/est_superintelligence.html), [Nanotechnology](/est_nanotechnology.html), and [De-Extinction](/est_de_extinction.html). For how this connects to corporate behavior, see [Corporate Responsibility and the Profit Motive](/rei_corporate_responsibility.html).

## Further Reading

- [AI and the lure of permissionless innovation](https://www.futureofbeinghuman.com/p/the-lure-of-permissionless-innovation) — Andrew Maynard examines the ideology of permissionless innovation in the context of AI development, where the mantra of moving fast and breaking things collides with technologies that could cause irreversible harm. The piece argues for a more nuanced approach that preserves creative freedom while building in meaningful accountability.
- [Ex Machina — Moviegoer's Guide to the Future (Episode 8)](https://www.futureofbeinghuman.com/p/ai-platos-cave) — This podcast episode uses *Ex Machina* to explore what happens when brilliant individuals develop powerful technologies in isolation, without external oversight or accountability. Maynard draws connections between the film's secretive AI lab and real-world debates about the governance of artificial intelligence research.
- [What does responsible innovation mean in an age of accelerating AI?](https://www.futureofbeinghuman.com/p/responsible-innovation-and-ai-acceleration) — Maynard grapples with how the pace of AI development is outstripping existing governance frameworks, and what responsible innovation looks like when the Collingridge dilemma is compressed into months rather than decades. The piece explores practical approaches to balancing innovation speed with societal safety.
- [Ethics of Innovation — Stanford Encyclopedia of Philosophy](https://plato.stanford.edu/entries/innovation-ethics/) — A philosophical examination of the ethical dimensions of innovation, including the responsibilities of innovators to anticipate and mitigate harms. The entry explores the tension between the freedom to experiment and the obligation to protect those who may be affected by new technologies.