* What are some of the ways in which new technologies are changing people's lives today?
* How does the current speed of technology innovation present unique challenges?
* Should tech companies and scientists be doing more to innovate ethically and responsibly?
* Can art – including movies – really provide insights into the ethical development and use of new technologies?
* What perspectives on technology are missing when decisions are left only to scientists, engineers, and policymakers?
* Can you think of a time when a film, book, or piece of art changed the way you thought about a real-world issue?
* What does "risk" mean to you — and is it more than just physical safety?
* Is using genetic engineering to bring extinct species back a good idea?
* Should scientists be allowed to experiment with altering the genetic code of humans?
* Can experts ever completely predict the consequences of a new technology?
* Who should decide what scientists can and cannot do?
* Are rich entrepreneurs with grandiose ideas good for society?
* What is the difference between a safety measure and a genuine understanding of what could go wrong?
* If a technology has already been developed and deployed, is it ever too late to change course?
* How should we think about the power dynamics between the people who fund research and the scientists who carry it out?
* How realistic is the story that unfolds in *Never Let Me Go*?
* What are the pros and cons of cloning humans?
* What makes someone genuinely "human"?
* Are there existing technologies so useful that they are too big to be allowed to fail?
* How do societies come to accept practices that, from the outside, seem clearly immoral?
* What is the difference between asking whether someone has a soul and asking whether they deserve dignity?
* Can you think of real-world technologies whose costs are borne by people most of us never see?
* If scientists could develop ways of spotting potential criminals, how should they use the technology?
* Could artificial intelligence one day predict what people are going to do?
* Can machines and algorithms reflect the biases of their creators? And if so, how do we ensure that those biases don't adversely affect people?
* How important is personal privacy in a world where everything's being recorded?
* Is there a meaningful difference between predicting someone's behavior and presuming their guilt?
* Who benefits most from predictive technologies, and who bears the greatest cost?
* If an algorithm is trained on biased data, can its outputs ever be considered fair — even if the algorithm itself is technically neutral?
* What is "intelligence"?
* Would you (or do you) use "smart drugs"? If so, why?
* Do you think there are times and places where smart drugs should not be used?
* Who should decide who does, and doesn't, get access to medications that can improve mental performance?
* If cognitive enhancement becomes widespread, what happens to people who choose not to use it — or who can't afford to?
* Is there a difference between enhancing your brain with a drug and enhancing it with education, technology, or caffeine?
* What does the popularity of smart drugs tell us about our culture's assumptions about success?
* If we could one day 3D print replacement body parts, how big a game-changer would this be?
* How realistic is the division between rich and poor as it's portrayed in *Elysium*?
* Is it better to create more jobs, even if some are in dangerous workplaces, or to improve workplace safety at the cost of reducing the number of jobs available?
* How do you think automation will affect your life over the next 10 years?
* Who has the responsibility to ensure that transformative medical technologies are available to everyone, not just those who can pay?
* When a technology could save lives but is only accessible to the wealthy, at what point does that become a moral crisis rather than a market reality?
* If you could enhance your body with technological implants, would you?
* Do you think we'll ever have wireless brain-computer interfaces, and if so, is it a good idea?
* Is there a point at which replacing body parts with machines might affect how "human" someone is?
* If you have a machine in your body that you depend on, who's responsible for keeping it going?
* If your thoughts and memories could be digitally accessed, who should have the right to see them?
* What happens to your sense of identity if parts of your mind or body can be hacked, updated, or owned by a corporation?
* How do you draw the line between healing and enhancement — and does the distinction matter?
* What are some of the pros and cons of innovating without permission?
* Are "superintelligent" machines likely to emerge in the future?
* What are the most exciting and most scary aspects of artificial intelligence to you?
* What does "intelligence" mean when it applies to a machine?
* If an AI can manipulate human emotions to achieve its goals, does it matter whether it is "conscious"?
* What are the risks of developing transformative AI behind closed doors, answerable to no one?
* How would you know if you were being manipulated by a system that understood your psychology better than you do?
* What does "technological convergence" mean?
* How important is it for everyone to ask tough questions about the impacts of new technologies?
* Is terrorism in the name of halting dangerous technologies ever justified?
* How can people sift out realistic expectations of science and technology from the hype?
* How many assumptions does a prediction need to rest on before you stop trusting it?
* If we could upload a human mind to a computer, would the result be the same person — and would it matter?
* What is the difference between healthy skepticism about a technology and dismissing it because it sounds like science fiction?
* How could engineering materials atom by atom change the world as we know it?
* Should scientists be taught to better understand how people and society operate?
* Are good intentions good enough in science and technology?
* How involved should members of the public be in what science is done, and how it's used?
* Can you think of an invention that was clearly beneficial on its own terms but harmful in its broader social consequences?
* What might Sidney Stratton have done differently if he had talked to the workers, mill owners, and communities before unveiling his invention?
* Is there a difference between an invention failing because it doesn't work and failing because society rejects it?
* Can bad movies still be useful in making sense of emerging technologies and what they might do?
* Should scientists be allowed to create deadly pathogens in the lab, and tell others how to do it?
* Do the ends ever justify the means when attempting to create a better future using science and technology?
* How can scientists be advocates and activists? Should they be?
* What makes the difference between a rational argument for extreme action and a dangerous rationalization?
* How do we weigh the risks of studying dangerous pathogens against the risks of not understanding them?
* If a single individual has both the conviction and the capability to act on a global scale, what safeguards should exist?
* How fragile is the current state of the Earth's climate?
* What does it mean to be a responsible citizen in the "Anthropocene"?
* Is it better to try to maintain the Earth as it is, or to ensure it is resilient to change?
* Should we use geoengineering to intentionally manipulate the Earth's climate?
* What do we owe future generations when making decisions about technologies that will affect the planet long after we're gone?
* If geoengineering could reduce the worst effects of climate change but carries unknown risks, who gets to decide whether to deploy it?
* What is the difference between adapting to climate change and accepting it?
* Are religious beliefs and science mutually incompatible?
* How important is belief in science, and why?
* Is Occam's Razor a useful concept for separating out likely possibilities around emerging technologies from improbable ones?
* How are people likely to react if we discover life on another world?
* What role does trust play in how people respond to scientific discoveries — especially ones that challenge their worldview?
* Are there questions that science alone cannot answer? If so, what other ways of knowing might help?
* How do we navigate a world where both scientific expertise and personal belief claim authority over how we understand reality?
* Is technological innovation a force for good or ill in society?
* Who's responsible for ensuring science and technology benefit as many people as possible?
* What can you do to ensure that science and technology are used to create a better future?
* What emerging technologies most excite you?
* What emerging technologies most concern you?
* What would it mean to approach the technological future with neither blind optimism nor paralyzing fear?
* If the technologies in this book were developed responsibly and equitably, which one would you most want to see succeed — and why?
* Having explored these films and technologies, what is the one question you think more people should be asking?