Artificial Intelligence

Few technologies generate as much excitement, anxiety, and confusion as artificial intelligence. The term conjures images of sentient robots and all-knowing computers, but the reality of AI, both its capabilities and its limitations, is far more nuanced than popular culture suggests. Understanding what AI actually is, and what it is not, is essential to thinking clearly about the future it is helping to create.

What Is Artificial Intelligence?

Artificial intelligence is a broad field of computer science focused on creating systems that can perform tasks that typically require human intelligence. These tasks include recognizing patterns in data, understanding language, making decisions, and learning from experience.

Most of the AI systems in use today fall under the category of "narrow AI," meaning they are designed to do one specific thing very well. A system that can beat the world champion at Go, or identify tumors in medical images, or translate text between languages, is impressive within its domain but has no understanding of the world outside it. It does not "know" what it is doing in any meaningful sense. It is a sophisticated pattern-matching tool, trained on vast quantities of data.

This is a far cry from "general AI," a hypothetical system that could match or exceed human intelligence across all domains. General AI remains a distant and uncertain prospect, despite frequent claims to the contrary. The gap between a system that can generate plausible text and one that genuinely understands what it is saying is enormous, and it is not clear that current approaches to AI will ever bridge it.

How the Book Explores It

Films from the Future explores AI across several chapters, but the most focused treatment comes through Ex Machina (Chapter 8). The film tells the story of Ava, an AI housed in a humanoid body, and the two men who interact with her: Nathan, the tech genius who built her, and Caleb, the young programmer brought in to assess whether she is truly conscious.

The book uses Ex Machina to peel back the layers of what AI actually involves, examining what the film gets right and where it departs from reality. It draws on Plato's allegory of the cave to explore how our own cognitive limitations shape our understanding of machine intelligence. The film's power lies not in its depiction of AI technology but in how it reveals the ways human psychology, including our tendencies toward hubris, wishful thinking, and projection, distorts our relationship with the machines we create.

The book also discusses AI in the context of Minority Report (Chapter 4), where predictive algorithms are used to forecast criminal behavior, and Transcendence (Chapter 9), which imagines AI taken to its theoretical extreme.

Where Things Stand Today

AI has advanced enormously in recent years. Large language models can generate remarkably fluent text. Image-generation systems can produce photorealistic pictures from text descriptions. AI systems are being deployed in healthcare, finance, transportation, law enforcement, and nearly every other sector. The pace of development has been startling, even to many researchers in the field.

Yet the fundamental limitations of current AI remain. These systems do not understand context in the way humans do. They can produce confident-sounding nonsense. They inherit and amplify the biases present in their training data. And they are only as good as the data and objectives they are given, which means the humans who design and deploy them bear enormous responsibility for the outcomes.

Why It Matters

AI matters because it is already reshaping how decisions are made, who benefits from technology, and what possibilities are open to us. The decisions embedded in AI systems, about what to optimize for, whose data to use, and how to handle uncertainty, are fundamentally human decisions with social consequences. An AI that recommends prison sentences, screens job applicants, or determines what news you see is not a neutral tool. It reflects the priorities and blind spots of the people who built it.

The book emphasizes that our greatest risk with AI may not be the emergence of superintelligence, but the much more mundane danger of deploying powerful systems without adequate thought about their impacts. AI does not need to be conscious to cause harm. It just needs to be poorly designed, carelessly deployed, or used by people who do not fully understand its limitations.

Getting AI right requires more than technical expertise. It requires input from ethicists, social scientists, affected communities, and the broader public. The technology is too consequential to be left to engineers alone.
