Large Language Models, Frontier AI, and Agentic Systems

If you have used ChatGPT, Claude, or Gemini to draft an email, summarize a document, or help debug code, you have already interacted with the most consequential technology development since Films from the Future was published. Large language models — and the agentic systems being built on top of them — represent a step change in what artificial intelligence can do, and they have arrived faster and with more disruptive force than almost anyone predicted.

What Has Changed Since 2018

When the book was written, AI was already a central theme. The chapters on Ex Machina and Transcendence explored machine intelligence, and the Artificial Intelligence topic page covered the state of deep learning and neural networks. But in 2018, AI was still primarily a tool for pattern recognition — impressive at image classification and game playing, limited at anything resembling open-ended reasoning or language.

That changed rapidly. OpenAI's GPT-2 in 2019 demonstrated that scaling up language models produced emergent capabilities nobody had explicitly programmed. GPT-3 in 2020 made those capabilities commercially accessible. ChatGPT, launched in late 2022, brought them to an estimated hundred million users within two months. Google, Anthropic, Meta, and others followed with their own frontier models — the term used for the most capable systems at any given moment, trained at enormous cost and exhibiting capabilities that are not fully understood even by their creators.

The shift from chatbots to agentic AI represents the current frontier. These are systems that do not just respond to prompts but can reason through multi-step tasks, use tools, write and execute code, browse the web, and coordinate with other AI agents. Claude Code, Devin, and similar tools can take a loosely defined task and work through it with a degree of autonomy that would have been science fiction in 2018. Multi-agent systems — where specialized AI agents collaborate, delegate, and check each other's work — are moving from research papers to production use.
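The pattern underneath these systems can be sketched in a few lines. This is a toy illustration only — not the architecture of Claude Code, Devin, or any real product — assuming a hypothetical `stub_model` that stands in for the language model and a small hand-built tool registry. The essential loop is the same, though: the model proposes an action, the runtime executes a tool, and the result is fed back until the model decides it is done.

```python
def stub_model(history):
    """Stand-in for an LLM: picks the next action from what it has seen so far.
    A real agent would call a model API here; this stub is purely illustrative."""
    if not any(step[0] == "search" for step in history):
        return ("search", "population of France")
    if not any(step[0] == "calculate" for step in history):
        return ("calculate", "68 * 2")
    return ("finish", history[-1][1])  # hand back the last tool result

# Toy tool registry: canned answers, not real search or a safe calculator.
TOOLS = {
    "search": lambda query: "68 million (approx.)",
    "calculate": lambda expr: str(eval(expr)),  # fine for a sketch, unsafe in practice
}

def run_agent(model, max_steps=5):
    """The agentic loop: propose an action, run the tool, feed the result back."""
    history = []
    for _ in range(max_steps):
        action, arg = model(history)
        if action == "finish":
            return arg
        result = TOOLS[action](arg)
        history.append((action, result))
    return None  # step budget exhausted

answer = run_agent(stub_model)
```

What makes real agentic systems both powerful and hard to predict is everything this sketch leaves out: the "model" is a frontier LLM whose choice of next action is not hand-coded, and the tools include code execution and web access rather than canned strings.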

Why It Matters

Three dimensions of this development are particularly significant.

Education is being disrupted in real time. Students use LLMs for homework, research, and essay writing. Teachers face a fundamental question: if AI can produce competent work on demand, what is education actually for? This is not a question about cheating — it is a question about what skills matter when knowledge production is increasingly automated. For more on this, see AI is changing how my kids learn and how I teach. Is that OK?

Copyright and intellectual property are in upheaval. These models are trained on vast amounts of human-created text, images, and code. The legal question of whether that training constitutes fair use is working its way through courts globally. But the deeper question is philosophical: what does intellectual property mean when a machine can produce in seconds what took a human months? Existing IP frameworks assume a human author. That assumption is breaking. See If an AI creates something beautiful, who does it belong to?

Concentration of power is accelerating. Training frontier models costs hundreds of millions of dollars and requires computing infrastructure that only a handful of organizations can afford. This creates a concentration of capability — and of influence over what AI can and cannot do — that the book's thinking on Permissionless Innovation and Corporate Responsibility anticipated, though not at this scale. See A few companies control the most powerful AI on Earth. Should I be worried?

How the Book's Frameworks Apply

The book's treatment of Hype vs. Reality is essential here. LLMs are genuinely transformative, but they are also surrounded by breathtaking hype. The discipline of counting assumptions — how many untested leaps are required to get from "impressive language model" to "artificial general intelligence"? — is exactly the tool the book provides. The AGI debate makes this tension explicit.

The book's emphasis on who benefits and who is left behind applies with particular force. LLMs are amplifiers: they amplify the productivity of people who know how to use them and widen the gap for those who do not. The book's Elysium chapter — about technology creating a two-tier society — has become more relevant, not less.

And the could we, should we question, the book's central thread, has never been more urgent. These systems were developed and deployed largely without public deliberation. The question of whether that was wise is explored in Why does it feel like nobody asked me about any of this?

Explore Further