One of the most persistent arguments in Films from the Future runs against the grain of how technology decisions are usually made. It says, in effect: the questions raised by emerging technologies are too important to leave solely to scientists, innovators, and politicians. We all have a role to play. In fact, the people most easily overlooked -- those furthest from the laboratories and boardrooms where technologies are developed -- may be the ones whose perspectives matter most.
This is not a throwaway line in the book. It appears in the opening chapter, threads through the analysis of The Man in the White Suit, and returns with force in the final chapter, where Maynard calls the tendency to leave these decisions to experts an "abdication of responsibility."
The Man in the White Suit is perhaps the book's most pointed illustration of what happens when innovators do not talk to the people their work affects. Sidney Stratton, the film's brilliant but socially oblivious scientist, invents a fabric that never wears out and never needs washing. In his mind, this is a gift to humanity. He never thinks to ask anyone else what they think.
The result is instructive. The textile industry realizes the invention would destroy its business. Workers realize they would lose their jobs. Even Stratton's landlady asks plaintively why scientists cannot leave things alone -- who needs a scientist when there is no washing to do? Stratton's invention is not defeated by bad science. It is defeated by bad assumptions about what people want and need.
This pattern appears throughout the book. In Jurassic Park, John Hammond builds his park without meaningful input from the people it will affect. In Ex Machina, Nathan Bateman conducts his AI experiments in isolation, accountable to no one. In Transcendence, Will Caster's growing power is shaped entirely by his own vision and his wife's ambitions. In each case, the innovator's confidence in their own judgment proves insufficient.
There is a strong temptation in technology governance to defer to expertise. And expertise matters -- you cannot expect a random person to safely engineer organisms or design aircraft. The book is clear about this. But there is a crucial distinction between the technical skill needed to build something and the collective wisdom needed to decide whether, how, and for whom it should be built.
One thing we are all qualified to do, Maynard argues, is think about what the consequences of technological innovation might mean for us and the people we care for. And here, pretty much everyone has something to contribute. A factory worker facing automation has insights about the social impact of AI that no computer scientist possesses. A parent in a low-income neighborhood has perspectives on genetic testing that bioethicists in well-funded universities may never develop. A community elder whose way of life is threatened by industrial agriculture understands something about what is at stake that no policy paper can capture.
This is not sentimentality. It is a practical argument rooted in the book's treatment of risk innovation. If risk is about threats to what people value -- not just to their physical safety but to their dignity, identity, and way of life -- then the people best positioned to identify those risks are the ones whose values are on the line.
This is where the book's argument about why sci-fi movies matter becomes more than an aesthetic claim. Science fiction movies remove barriers to participation. Every film in the book can be appreciated by someone who never finished school as much as by a Nobel Prize winner. Because of this, they are tremendously powerful for getting people from very different backgrounds thinking and talking together about questions that otherwise remain locked behind walls of jargon and credentials.
At its best, science fiction creates a shared reference point -- a common starting place for conversations that might otherwise never happen. When a community watches Jurassic Park and starts talking about what happens when technology escapes control, or watches Contact and debates the relationship between evidence and belief, something valuable is happening. People who are normally excluded from technology governance are finding their way into the conversation.
The flip side of everyone having a role is that everyone has a responsibility. The book is direct about this: we collectively need to give a damn about the future we are creating. It is not enough to hope that scientists and technologists will act responsibly. Responsibility means that we engage, that we ask questions, that we refuse to be passive consumers of whatever future someone else decides to build for us.
This connects to the book's call for responsible innovation that goes beyond frameworks and policies to become a genuine social practice. It connects to the argument for resilience, because diverse perspectives make communities better prepared for surprise. And it connects to the book's final, hopeful insistence that we have the collective ability to develop technologies in ways that work for us, not against us -- if we are willing to show up and participate.
The future is not something that happens to us. It is something we make. And making it well requires every perspective we can get.