Educator's Guide: Films from the Future in the Classroom and Beyond

How to Use This Guide

This guide expands on the discussion questions in Films from the Future by Andrew Maynard, adapting them for use across a range of educational contexts, from high school classrooms to executive boardrooms.

The guide is organized around four audience tiers, described under "Understanding the Audience Tiers" below.

Materials are organized both by film chapter (Part One) and by cross-cutting theme (Part Two), so educators can use the book sequentially or thematically. Part Three provides workshop and course formats for different time constraints.

Each section includes the book's original seed questions, tiered discussion questions for all four audiences, and at least one activity that can be adapted across levels. Connections to the thematic index are noted throughout so students and participants can go deeper on any topic.

A note on pedagogical philosophy: the book's greatest strength is its willingness to hold complexity rather than resolve it. The scaffolding in this guide is designed to lower entry barriers without flattening the questions. The goal is not to arrive at correct answers but to develop the capacity to sit with difficult questions and think them through from multiple perspectives.

Andrew Maynard has taught the ASU undergraduate course "The Moviegoer's Guide to the Future" (FIS 338) using these films since 2017, refining the discussion questions over seven years of classroom use. This guide builds on that experience.


Quick Start: Choosing Your Entry Point


Understanding the Audience Tiers

Secondary (Grades 8-12)

Undergraduate

Graduate/Professional

Executive/Leadership


PART ONE: BY FILM CHAPTER


Chapter 1: In the Beginning (2001: A Space Odyssey)

Core themes: Why sci-fi matters for technology ethics, risk as threat to what we value, responsible innovation

Seed questions:

* What are some of the ways in which new technologies are changing people's lives today?

* How does the current speed of technology innovation present unique challenges?

* Should tech companies and scientists be doing more to innovate ethically and responsibly?

* Can art – including movies – really provide insights into the ethical development and use of new technologies?

* What perspectives on technology are missing when decisions are left only to scientists, engineers, and policymakers?

* Can you think of a time when a film, book, or piece of art changed the way you thought about a real-world issue?

* What does "risk" mean to you — and is it more than just physical safety?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Risk Landscape Exercise

All levels, adapted by complexity

Participants map a technology they know onto a "risk landscape" -- identifying not just physical risks but threats to dignity, autonomy, identity, belonging, and trust. Secondary students do this for social media; undergraduates for a technology from the book; graduate students compare two technologies; executives map their own organization's product or service.

Format: Draw a circle in the center of a page with the technology name. Around it, place six domains: Physical Safety, Dignity, Autonomy, Identity, Belonging, and Trust. For each domain, identify specific risks the technology poses. Then rank them: Which risks are most severe? Which are most neglected? Which does the developer probably not even see?

Debrief questions: Which risks were easiest to identify? Which required the most thought? What does the pattern tell you about how we typically think about risk versus how we should think about risk?


Chapter 2: Jurassic Park (1993) -- Genetic Engineering

Core themes: "Could we? Should we?", complexity and chaos, power dynamics, the limits of prediction

Seed questions:

* Is using genetic engineering to bring extinct species back a good idea?

* Should scientists be allowed to experiment with altering the genetic code of humans?

* Can experts ever completely predict the consequences of a new technology?

* Who should decide what scientists can and cannot do?

* Are rich entrepreneurs with grandiose ideas good for society?

* What is the difference between a safety measure and a genuine understanding of what could go wrong?

* If a technology has already been developed and deployed, is it ever too late to change course?

* How should we think about the power dynamics between the people who fund research and the scientists who carry it out?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Assumption Chain

All levels

Take any technology prediction (the singularity, full de-extinction, flying cars, AGI by 2030). List every assumption that must hold for the prediction to come true. Rate each assumption's plausibility. Multiply the probabilities. This teaches Occam's Razor as a practical tool.

Format: Secondary students work in pairs with 3-4 assumptions and simple high/medium/low ratings. Undergraduates list 6-8 assumptions with percentage estimates and brief justifications. Graduate students build a full chain with literature support and identify which assumptions are empirical claims versus value judgments.

Debrief: The point is not to dismiss predictions but to build the habit of asking "What would have to be true?" before accepting any forecast at face value.
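For facilitators who want the arithmetic made concrete before running the exercise, the sketch below shows the core calculation. The assumption list and the probability estimates are invented for illustration; only the multiplication itself matters.

```python
# A minimal sketch of the Assumption Chain arithmetic.
# The assumptions and probabilities below are invented for illustration.
assumptions = {
    "Viable ancient DNA can be recovered": 0.6,
    "A complete genome can be reconstructed from it": 0.5,
    "Embryos can be brought to term in a surrogate species": 0.4,
    "The resulting animals are healthy enough to survive": 0.5,
}

combined = 1.0
for claim, probability in assumptions.items():
    combined *= probability  # every link in the chain must hold
    print(f"{claim}: {probability:.0%}")

print(f"Probability that all assumptions hold: {combined:.0%}")
```

Even with individually generous estimates, the combined figure here is 6 percent -- and multiplying this way quietly assumes the assumptions are independent, which is itself an assumption worth surfacing in the debrief.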


Chapter 3: Never Let Me Go (2010) -- Human Cloning

Core themes: Human dignity, "too valuable to fail," the "convenient lie," who counts as human

Seed questions:

* How realistic is the story that evolves in Never Let Me Go?

* What are the pros and cons of cloning humans?

* What makes someone genuinely "human"?

* Are there technologies in use today that are so useful they are too big to be allowed to fail?

* How do societies come to accept practices that, from the outside, seem clearly immoral?

* What is the difference between asking whether someone has a soul and asking whether they deserve dignity?

* Can you think of real-world technologies whose costs are borne by people most of us never see?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Convenient Lie Audit

Undergraduate and above

Students identify a technology or system they depend on and investigate its hidden costs. Who benefits? Who bears the burden? What story does society tell itself to justify the arrangement?

Format: Present findings in a structured format with five components: (1) the technology or system, (2) the benefit it provides, (3) the hidden cost, (4) the convenient lie that sustains the arrangement, and (5) who is harmed. This can be a research paper, a presentation, or a structured debate.

Debrief: The exercise is not about guilt but about visibility. The question is not "should we stop using everything?" but "what would it take to see clearly, and what would we do differently if we did?"


Chapter 4: Minority Report (2002) -- Predictive Technology

Core themes: Surveillance, algorithmic bias, privacy, the limits of prediction, pre-crime

Seed questions:

* If scientists could develop ways of spotting potential criminals, how should they use the technology?

* Could artificial intelligence one day predict what people are going to do?

* Can machines and algorithms reflect the biases of their creators? And if so, how do we ensure that these don't adversely affect people?

* How important is personal privacy in a world where everything's being recorded?

* Is there a meaningful difference between predicting someone's behavior and presuming their guilt?

* Who benefits most from predictive technologies, and who bears the greatest cost?

* If an algorithm is trained on biased data, can its outputs ever be considered fair — even if the algorithm itself is technically neutral?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Bias Audit

All levels, adapted

Students are given a simplified dataset (real or fictional) and asked to build a simple prediction rule, then test it against different demographic groups to discover disparate impact.

Format: Secondary students use a classroom-appropriate scenario (predicting which students will enjoy a field trip based on past attendance, grades, and after-school activities -- then discover the rule penalizes students who work after school). Undergraduates use a hiring or admissions dataset. Graduate students use actual recidivism or predictive policing data with published disparate impact findings.

Debrief: The exercise teaches that "neutral" algorithms applied to biased data produce biased outcomes -- and that the bias is often invisible until you deliberately look for it across groups.
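For instructors who want a ready-made version of the secondary-level scenario, here is a minimal sketch in code. The dataset, names, and threshold are all invented; the point is that a rule which never mentions group membership can still select at sharply different rates across groups.

```python
# A minimal sketch of the Bias Audit using an invented classroom dataset.
# Each record: (student, past attendance %, works after school).
students = [
    ("A", 95, False), ("B", 90, False), ("C", 88, False), ("D", 92, False),
    ("E", 70, True),  ("F", 75, True),  ("G", 68, True),  ("H", 72, True),
]

# The "neutral" prediction rule: likely to enjoy the field trip if
# attendance is at least 85%. Attendance acts as a hidden proxy here,
# because students with after-school jobs attend less often.
predicted = {name: attendance >= 85 for name, attendance, _ in students}

def selection_rate(works_after_school):
    group = [name for name, _, works in students if works == works_after_school]
    return sum(predicted[name] for name in group) / len(group)

print(f"Selected among non-working students: {selection_rate(False):.0%}")
print(f"Selected among working students:     {selection_rate(True):.0%}")
# The rule never references jobs, yet the disparity is total: 100% vs 0%.
# It only becomes visible when rates are compared group by group.
```

Undergraduate and graduate versions can swap in a richer dataset; the audit logic stays the same.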


Chapter 5: Limitless (2011) -- Cognitive Enhancement

Core themes: Intelligence and its definition, enhancement vs. therapy, normalization pressure, equity

Seed questions:

* What is "intelligence?"

* Would you (or do you) use "smart drugs"? And if so, why?

* Do you think there are times and places where smart drugs should not be used?

* Who should decide who gets access to medications that can improve mental performance, and who doesn't?

* If cognitive enhancement becomes widespread, what happens to people who choose not to use it — or who can't afford to?

* Is there a difference between enhancing your brain with a drug and enhancing it with education, technology, or caffeine?

* What does the popularity of smart drugs tell us about our culture's assumptions about success?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Enhancement Spectrum

All levels

Draw a spectrum from "clearly therapy" to "clearly enhancement" with a gray zone in between. Place technologies along it: hearing aids, LASIK, Adderall for ADHD, Adderall for studying, caffeine, tutoring, brain-computer interfaces, genetic selection of embryos.

Format: Secondary students work in pairs with physical cards they can arrange and rearrange, discussing placement as they go. Undergraduates write brief justifications for each placement. Graduate students reference bioethics literature and identify where the boundary has shifted historically. Executives relate each placement to their industry context.

Debrief: Discuss where the line falls and why it keeps moving. The exercise reveals that the therapy/enhancement distinction is not a bright line but a culturally negotiated boundary -- and that where you draw it depends on what you value.


Chapter 6: Elysium (2013) -- Social Inequity and Technology

Core themes: Technology amplifying inequality, disposable workforce, access to healthcare, automation

Seed questions:

* If we could one day 3D print replacement body parts, how big of a game-changer would this be?

* How realistic is the division between rich and poor as it's portrayed in Elysium?

* Is it better to create more jobs, even if some are in dangerous workplaces, or to improve workplace safety even if doing so reduces the number of jobs available?

* How do you think automation will affect your life over the next 10 years?

* Who has the responsibility to ensure that transformative medical technologies are available to everyone, not just those who can pay?

* When a technology could save lives but is only accessible to the wealthy, at what point does that become a moral crisis rather than a market reality?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Two-Tier Scenario

Undergraduate and above

Students design a fictional technology with transformative potential (life extension, perfect memory, disease immunity). Then they model two deployment scenarios: market-driven (highest bidder first) and equity-driven (universal access). They map the social consequences of each over 10, 25, and 50 years.

Format: Undergraduate students work in groups of four, with two groups modeling each scenario and then comparing results. Graduate students add governance mechanisms to the equity-driven scenario and stress-test them. Executives adapt the exercise for their own industry, modeling what happens when their product follows each pathway.

Debrief: The exercise makes visible how small initial differences in access compound over time. The question is not whether markets or equity should win, but what mechanisms exist to prevent compounding inequality from becoming irreversible.
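To make the compounding dynamic tangible, facilitators can run a toy model like the sketch below. The growth rate, access delay, and time horizons are all invented; real technologies will differ, and the model's simplifications are themselves good debrief material.

```python
# A toy model of compounding access gaps. All parameters are invented.
# Group 1 gets the technology at year 0; Group 2 gets it 15 years later.
# Assume access compounds some benefit (income, health, capability) at 3%/year.
GROWTH = 1.03
ACCESS_DELAY = 15

def benefit(years_of_access):
    return GROWTH ** max(0, years_of_access)

for horizon in (10, 25, 50):
    g1 = benefit(horizon)
    g2 = benefit(horizon - ACCESS_DELAY)
    print(f"Year {horizon:2}: Group 1 = {g1:.2f}x, Group 2 = {g2:.2f}x, "
          f"gap = {g1 - g2:.2f}")
```

In this simple version the relative gap stops growing once both groups have access, but the absolute gap keeps widening -- a useful prompt for discussing whether late access ever closes the distance.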


Chapter 7: Ghost in the Shell (1995) -- Human Augmentation

Core themes: Identity when body becomes machine, corporate ownership, cybersecurity, diversity

Seed questions:

* If you could enhance your body with technological implants, would you?

* Do you think we'll ever have wireless brain-computer interfaces, and if so, is it a good idea?

* Is there a point at which replacing body parts with machines might affect how "human" someone is?

* If you have a machine in your body that you depend on, who's responsible for keeping it going?

* If your thoughts and memories could be digitally accessed, who should have the right to see them?

* What happens to your sense of identity if parts of your mind or body can be hacked, updated, or owned by a corporation?

* How do you draw the line between healing and enhancement — and does the distinction matter?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Ownership Dilemma

All levels

Present a scenario: A person has a brain-computer interface made by Company X. Company X is acquired by Company Y, which has different data policies. The interface needs regular software updates to function.

Students work through: Who owns the data? Who controls the updates? What happens if the user wants to switch providers? What rights does the user have?

Format: Secondary students discuss in pairs and write a position statement. Undergraduates draft a policy proposal. Graduate students produce a legal and ethical analysis drawing on existing frameworks. Executives develop a risk assessment and governance framework for their own organization.

Debrief: The exercise reveals that current ownership frameworks (designed for external products) break down when the product is inside a person's body. What new frameworks are needed?


Chapter 8: Ex Machina (2014) -- Artificial Intelligence

Core themes: Permissionless innovation, manipulation, Plato's Cave, the imaginable vs. the plausible

Seed questions:

* What are some of the pros and cons of innovating without permission?

* Are "superintelligent" machines likely to emerge in the future?

* What are the most exciting and most scary aspects of artificial intelligence to you?

* What does "intelligence" mean when it applies to a machine?

* If an AI can manipulate human emotions to achieve its goals, does it matter whether it is "conscious"?

* What are the risks of developing transformative AI behind closed doors, answerable to no one?

* How would you know if you were being manipulated by a system that understood your psychology better than you do?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Manipulation Detection Exercise

Undergraduate and above

Students interact with a series of AI-generated texts, chatbot conversations, or recommendation feeds. They try to identify: What is the system optimizing for? How is it leveraging my cognitive biases? What information is it withholding? What would I need to know to make a truly informed choice?

Format: Provide 4-5 examples ranging from obvious (a clickbait headline) to subtle (a chatbot that gradually shifts the user's preferences through conversational framing). Students analyze each example individually, then discuss in groups.

Debrief: Connect to Plato's Cave and the chapter's argument about epistemic vulnerability. The question is not whether AI can manipulate us -- it already does. The question is what structures of awareness and accountability we need.


Chapter 9: Transcendence (2014) -- The Singularity

Core themes: Technological convergence, exponential extrapolation, hype vs. reality, anti-technology extremism

Seed questions:

* What does "technological convergence" mean?

* How important is it for everyone to ask tough questions about the impacts of new technologies?

* Is terrorism in the name of halting dangerous technologies ever justified?

* How can people sift out realistic expectations of science and technology from the hype?

* How many assumptions does a prediction need to rest on before you stop trusting it?

* If we could upload a human mind to a computer, would the result be the same person — and would it matter?

* What is the difference between healthy skepticism about a technology and dismissing it because it sounds like science fiction?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Assumption Stack

All levels -- the signature exercise for this chapter

Take a bold technology prediction (AGI by 2030, human-level mind uploading, full de-extinction of mammoths). Stack every assumption it depends on. Assign a probability to each. Multiply them.

Format: Secondary students work with 4-5 assumptions and simple high/medium/low probability ratings. Undergraduates list 8-10 assumptions with percentage estimates and evidence for each. Graduate students build a full chain with literature review and must distinguish between empirical assumptions and value assumptions.

Debrief: The exercise viscerally demonstrates why predictions requiring many simultaneous breakthroughs are less reliable than they appear. The point is not to dismiss predictions but to calibrate confidence -- and to understand the difference between what is imaginable, what is plausible, and what is probable.
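A single worked line of arithmetic anchors the point: if a forecast rests on ten assumptions and each is generously rated 90% likely, the chance that all ten hold is 0.9^10 ≈ 0.35, roughly one in three; at 70% per assumption it falls to 0.7^10 ≈ 0.03. The numbers are illustrative; the collapse is the lesson.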


Chapter 10: The Man in the White Suit (1951) -- Nanotechnology

Core themes: The socially oblivious scientist, stakeholder engagement, innovation and social disruption

Seed questions:

* How could engineering materials atom by atom change the world as we know it?

* Should scientists be taught to better understand how people and society operate?

* Are good intentions good enough in science and technology?

* How involved should members of the public be in what science is done, and how it's used?

* Can you think of an invention that was clearly beneficial on its own terms but harmful in its broader social consequences?

* What might Sidney Stratton have done differently if he had talked to the workers, mill owners, and communities before unveiling his invention?

* Is there a difference between an invention failing because it doesn't work and failing because society rejects it?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Stratton Exercise

Undergraduate and above

Students are assigned the role of Sidney Stratton -- they have a genuinely beneficial invention. But before unveiling it, they must identify every stakeholder group that would be affected, predict each group's response, and design an engagement process.

Format: The twist -- other students play the stakeholder groups and respond in character. Each stakeholder group receives a brief that includes their economic interests, their values, and their concerns. The "inventor" must present to them all and negotiate a path forward.

Debrief: Focuses on what the inventor learned that they would have missed working alone. The exercise makes viscerally clear that technical brilliance without social awareness produces avoidable failures -- and that engagement is not an obstacle to innovation but a condition for its success.


Chapter 11: Inferno (2016) -- Biosecurity and Dual-Use Research

Core themes: "Immoral logic," the honest broker, dual-use dilemma, ends vs. means

Seed questions:

* Can bad movies still be useful in making sense of emerging technologies and what they might do?

* Should scientists be allowed to create deadly pathogens in the lab, and tell others how to do it?

* Do the ends ever justify the means when attempting to create a better future using science and technology?

* How can scientists be advocates and activists? Should they be?

* What makes the difference between a rational argument for extreme action and a dangerous rationalization?

* How do we weigh the risks of studying dangerous pathogens against the risks of not understanding them?

* If a single individual has both the conviction and the capability to act on a global scale, what safeguards should exist?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Honest Broker Role Play

Undergraduate and above

Students are assigned one of Pielke's four roles (Pure Scientist, Science Arbiter, Issue Advocate, Honest Broker) and must respond to a policy question from their assigned position.

Format: The policy question -- "Should gain-of-function research on H5N1 continue?" -- is presented with a brief dossier of relevant facts. Each role receives additional guidance: the Pure Scientist focuses only on what is known; the Science Arbiter answers only questions asked; the Issue Advocate argues for their preferred outcome; the Honest Broker presents the full range of options with trade-offs.

Graduate version: Before participating, students must identify and disclose their own biases and explain how those biases might shape their performance in the assigned role.

Debrief: Explores which role best serves democratic decision-making and why the honest broker role is the hardest to sustain under pressure. What institutional structures would support honest brokering?


Chapter 12: The Day After Tomorrow (2004) -- Climate and Resilience

Core themes: Complex Earth systems, geoengineering, intergenerational responsibility, resilience

Seed questions:

* How fragile is the current state of the Earth's climate?

* What does it mean to be a responsible citizen in the "Anthropocene"?

* Is it better to try to maintain the Earth as it is, or to ensure it is resilient to change?

* Should we use geoengineering to intentionally manipulate the Earth's climate?

* What do we owe future generations when making decisions about technologies that will affect the planet long after we're gone?

* If geoengineering could reduce the worst effects of climate change but carries unknown risks, who gets to decide whether to deploy it?

* What is the difference between adapting to climate change and accepting it?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Resilience Audit

All levels

Students select a system they depend on (a city's power grid, a food supply chain, a social media platform, their university's IT infrastructure). They evaluate it against the four resilience types from the chapter: rebound (can it bounce back?), robustness (can it absorb shocks?), graceful extensibility (can it stretch beyond its designed capacity?), and sustained adaptability (can it transform in response to changing conditions?).

Format: Secondary students evaluate their school's systems (power, internet, food service). Undergraduates evaluate a municipal or organizational system. Graduate students evaluate a national or global system. Executives evaluate their own organization.

Debrief: Where is the system strong? Where is it brittle? Most systems are designed for rebound and robustness but not for graceful extensibility or sustained adaptability. What would it take to build the higher-order resilience types into the system?


Chapter 13: Contact (1997) -- Science, Belief, and Knowledge

Core themes: Science and faith, Occam's Razor, ways of knowing, the limits of empiricism

Seed questions:

* Are religious beliefs and science mutually incompatible?

* How important is belief in science, and why?

* Is Occam's Razor a useful concept for separating out likely possibilities around emerging technologies from improbable ones?

* How are people likely to react if we discover life on another world?

* What role does trust play in how people respond to scientific discoveries — especially ones that challenge their worldview?

* Are there questions that science alone cannot answer? If so, what other ways of knowing might help?

* How do we navigate a world where both scientific expertise and personal belief claim authority over how we understand reality?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Ways of Knowing Exercise

Undergraduate and above

Present a complex technology question (Should we pursue human germline editing? Should geoengineering be researched?). Have students approach it from three different frameworks: scientific evidence, ethical/philosophical reasoning, and personal/community values.

Format: Divide the room into three groups, each assigned one framework. Each group deliberates and presents their analysis. Then reconvene and attempt to integrate the three perspectives into a single recommendation.

Debrief: Each framework leads to a different answer or emphasis. These aren't competing answers but complementary perspectives, and a complete governance process needs all of them. The exercise makes visible what each way of knowing contributes and what it misses. Science provides evidence but not values. Ethics provides principles but not data. Community values provide legitimacy but not technical accuracy. All three are necessary; none is sufficient.


Chapter 14: Looking to the Future

Core themes: Don't Panic, agency, the privilege of renouncing technology, responsibility to continue innovating

Seed questions:

* Is technology innovation a force for good or bad in society?

* Who's responsible for ensuring science and technology benefit as many people as possible?

* What can you do to ensure that science and technology are used to create a better future?

* What emerging technologies most excite you?

* What emerging technologies most concern you?

* What would it mean to approach the technological future with neither blind optimism nor paralyzing fear?

* If the technologies in this book were developed responsibly and equitably, which one would you most want to see succeed — and why?

* Having explored these films and technologies, what is the one question you think more people should be asking?

Tiered questions:

Secondary:

Undergraduate:

Graduate/Professional:

Executive:

Activity: The Personal Technology Manifesto

All levels

Students write a brief personal statement (1 page) articulating four things: What do I value that technology could threaten? What do I value that technology could protect? What is my role in shaping the technological future? What is one commitment I'm making coming out of this experience?

Format: Secondary students write 3-4 sentences per question. Undergraduates write developed paragraphs with references to the book's frameworks. Graduate students connect to their professional context and identify specific actions. Executives frame theirs as an action plan with timelines and accountability mechanisms.

Debrief: This is not a test -- there are no wrong answers. The exercise asks participants to move from analysis to commitment, from understanding frameworks to deciding how they will act. Sharing should be voluntary. The most powerful versions are specific and honest rather than aspirational and vague.


PART TWO: BY CROSS-CUTTING THEME

This section reorganizes questions and activities by theme rather than film, for courses and workshops organized around topics rather than the book's chapter sequence. Each theme draws from multiple chapters above.


Theme A: The Ethics of Enhancement

Draws from: Limitless (Ch. 5), Ghost in the Shell (Ch. 7), Never Let Me Go (Ch. 3)

Central tension: When does helping become unfair advantage, and who decides?

Key questions across levels:

Recommended activity: The Enhancement Spectrum (from Ch. 5) combined with the Two-Tier Scenario (from Ch. 6). First map where the therapy/enhancement line falls, then model what happens when enhancement follows market-driven versus equity-driven deployment.


Theme B: Power, Access, and Who Benefits

Draws from: Elysium (Ch. 6), Limitless (Ch. 5), Jurassic Park (Ch. 2), Ghost in the Shell (Ch. 7)

Central tension: Technology amplifies existing power structures unless deliberately designed not to.

Key questions across levels:

Recommended activity: The Convenient Lie Audit (from Ch. 3) applied to a power/access case study. Identify the technology, the benefit, the hidden cost, the convenient lie, and who is harmed.


Theme C: Surveillance, Privacy, and Algorithmic Decision-Making

Draws from: Minority Report (Ch. 4), Ghost in the Shell (Ch. 7), Ex Machina (Ch. 8)

Central tension: Prediction and monitoring capabilities are growing faster than governance.

Key questions across levels:

Recommended activity: The Bias Audit (from Ch. 4) combined with the Manipulation Detection Exercise (from Ch. 8). First discover how neutral algorithms produce biased outcomes, then examine how systems designed to predict behavior shade into systems designed to control it.


Theme D: The Scientist's Responsibility

Draws from: Man in the White Suit (Ch. 10), Inferno (Ch. 11), Contact (Ch. 13), Jurassic Park (Ch. 2)

Central tension: Scientific brilliance without social awareness causes harm.

Key questions across levels:

Recommended activity: The Honest Broker Role Play (from Ch. 11) combined with the Stratton Exercise (from Ch. 10). First experience the tension between advocacy and brokering, then practice stakeholder engagement as a scientist with a genuinely beneficial invention.


Theme E: Complexity, Prediction, and Unintended Consequences

Draws from: Jurassic Park (Ch. 2), Day After Tomorrow (Ch. 12), Transcendence (Ch. 9)

Central tension: We build things more complex than we can predict.

Key questions across levels:

Recommended activity: The Assumption Stack (from Ch. 9) combined with the Resilience Audit (from Ch. 12). First calibrate confidence in predictions, then evaluate how well existing systems are built to handle the unpredictable.


Theme F: What Makes Us Human

Draws from: Never Let Me Go (Ch. 3), Ghost in the Shell (Ch. 7), Ex Machina (Ch. 8), Transcendence (Ch. 9)

Central tension: Technology is blurring the boundaries of personhood.

Key questions across levels:

Recommended activity: The Ways of Knowing Exercise (from Ch. 13) applied to personhood. Approach the question "What makes someone human?" from scientific evidence (biology, neuroscience), ethical/philosophical reasoning (rights theory, moral status), and personal/community values (lived experience, cultural tradition). Each framework yields different answers. All are needed.


Theme G: Governing the Ungovernable

Draws from: all chapters, with emphasis on Jurassic Park (Ch. 2), Ex Machina (Ch. 8), Day After Tomorrow (Ch. 12), Inferno (Ch. 11)

Central tension: Governance moves slower than technology.

Key questions across levels:

Recommended activity: Design a governance framework for a technology currently in early development. Draw on mechanisms from multiple chapters: stakeholder engagement (Ch. 10), the honest broker role (Ch. 11), resilience thinking (Ch. 12), the Collingridge dilemma (Ch. 2), and the oversight questions raised by Ex Machina (Ch. 8). The exercise forces students to confront the gap between governance principles and governance practice.


PART THREE: WORKSHOP AND COURSE FORMATS


Format 1: Single-Session Workshop (90 minutes)

Best for: Executive development, professional training, conference workshops

Structure:

1. Opening hook (10 min): Show a 3-minute film clip. Ask one question. Let the room discuss in pairs for 2 minutes.

2. Framework introduction (15 min): Present one of the book's core frameworks (risk innovation, "could we / should we," the convenient lie). Connect it to the clip.

3. Case application (25 min): Small groups apply the framework to a real-world case relevant to the audience's industry. Each group gets a different case.

4. Gallery walk / report-out (15 min): Groups share key insights. Facilitator highlights patterns across groups.

5. Personal application (15 min): Individual reflection: Where does this framework apply to your work? What will you do differently?

6. Close (10 min): One takeaway per person, shared aloud.

Recommended film/framework pairings for workshops:


Format 2: Multi-Week Course Module (4-6 weeks)

Best for: Undergraduate courses in ethics, STS, technology policy, or science communication

Week 1: Why sci-fi matters + Chapter 1 frameworks (risk innovation, responsible innovation). Film: 2001: A Space Odyssey (clips). Activity: The Risk Landscape Exercise.

Week 2: Biotechnology cluster -- Jurassic Park + Never Let Me Go (could we/should we, too valuable to fail). Activity: The Assumption Chain + The Convenient Lie Audit.

Week 3: AI and surveillance cluster -- Minority Report + Ex Machina (algorithmic bias, manipulation, permissionless innovation). Activity: The Bias Audit + The Manipulation Detection Exercise.

Week 4: Enhancement and identity cluster -- Limitless + Ghost in the Shell (therapy vs. enhancement, identity, corporate ownership). Activity: The Enhancement Spectrum + The Ownership Dilemma.

Week 5: Global systems cluster -- Day After Tomorrow + Inferno (complexity, dual-use, intergenerational responsibility). Activity: The Resilience Audit + The Honest Broker Role Play.

Week 6: Synthesis -- Contact + Chapter 14 (ways of knowing, don't panic, personal manifesto). Activity: The Ways of Knowing Exercise + The Personal Technology Manifesto.

Assessment options:


Format 3: Semester-Length Course (12-14 weeks)

Best for: Full undergraduate or graduate courses

Follows the book's chapter structure with one film per week. Each week includes the film, the chapter reading, and discussion using the tiered questions.

Additional components:

Suggested weekly rhythm:


Format 4: Professional Development Series (4 sessions, 2 hours each)

Best for: Corporate teams, government agencies, non-profit leadership

Session 1: What is responsible innovation?

Films: Ch. 1 + Man in the White Suit + Jurassic Park

Frameworks: Risk innovation, could we/should we, stakeholder engagement

Activity: The Risk Landscape Exercise applied to the organization's own products/services

Takeaway: Participants identify one area where their organization's risk framework may be too narrow.

Session 2: AI, data, and algorithmic accountability

Films: Minority Report + Ex Machina

Frameworks: Algorithmic bias, permissionless innovation, Plato's Cave

Activity: The Bias Audit applied to the organization's data practices

Takeaway: Participants identify one algorithmic or data-driven process that needs review.

Session 3: Who benefits? Power, access, and corporate responsibility

Films: Elysium + Limitless + Ghost in the Shell

Frameworks: Too valuable to fail, normalization pressure, corporate ownership

Activity: The Convenient Lie Audit applied to the organization's supply chain or impact footprint

Takeaway: Participants identify one "convenient lie" in their organization and propose how to address it.

Session 4: Building resilience and navigating uncertainty

Films: Day After Tomorrow + Contact + Ch. 14

Frameworks: Four resilience types, ways of knowing, don't panic

Activity: The Resilience Audit applied to the organization + The Personal Technology Manifesto reframed as an organizational commitment

Takeaway: Participants commit to one concrete action and share it with the group.

Each session uses the executive-tier questions from the relevant chapters. Clips (3-5 minutes) substitute for full film viewings.


Connections to the Thematic Index

This guide is designed to work alongside the Thematic Index, which maps concepts, keywords, and question patterns to the most relevant files on the spoileralert.wtf website. When students or participants want to go deeper on any topic, the thematic index provides routing to the full treatment across chapters, technology pages, ethics pages, and framework pages.

Key connections:


About the Source Material

Films from the Future: The Technology and Morality of Sci-Fi Movies by Andrew Maynard was published in November 2018 by Mango Publishing. The book draws on Maynard's experience as a physicist, risk scientist, and professor at Arizona State University's School for the Future of Innovation in Society. He has taught the undergraduate course "The Moviegoer's Guide to the Future" using these films since 2017. His ongoing commentary on emerging technologies and society is available at The Future of Being Human and through the ASU Future of Being Human Initiative.

The companion website spoileralert.wtf provides AI-readable and human-accessible pages covering all the book's technologies, ethical themes, and navigational frameworks, plus an expanded film watchlist of 80+ films tagged with theme and technology connections.