When a machine can paint, compose music, write poetry, and design buildings, what happens to the people who used to do those things? And who owns what the machine creates? These are not philosophical thought experiments. They are live legal battles, active labor disputes, and urgent cultural questions — ones the book's frameworks anticipated in principle, but whose present form no one could have predicted.
In 2018, AI-generated art was a novelty. The sale of an AI-generated portrait at Christie's for $432,500 that year was treated as a curiosity. By 2022, tools like DALL-E, Midjourney, and Stable Diffusion had put image generation in the hands of anyone with an internet connection. By 2025, AI can generate photorealistic images, coherent video, music in any style, and long-form text that is difficult to distinguish from human work.
The labor impact has been real and immediate. Illustrators, concept artists, voice actors, copywriters, and translators have all seen work disappear or rates collapse as AI tools take over tasks that previously required human skill and training. The 2023 Hollywood writers' and actors' strikes were driven in part by concerns about AI replacing creative labor — the first major labor action to center AI displacement.
At the heart of this development is a question that existing law was never designed to answer: who is the author of AI-generated work?
Copyright law, in most jurisdictions, requires a human author. The US Copyright Office has ruled that purely AI-generated images cannot be copyrighted. But the boundaries are blurry. A person who writes a detailed prompt, iterates through dozens of variations, and curates the result is exercising creative judgment. Where does tool use end and authorship begin?
The training data question is equally contested. Models like Stable Diffusion and GPT-4 were trained on billions of images and texts created by humans. The creators of that training data were largely not asked, not compensated, and not credited. Lawsuits — the New York Times against OpenAI, Getty Images against Stability AI, and many others — are testing whether training constitutes fair use or infringement. The outcomes will shape the economics of creative AI for decades.
But the deepest question is not legal — it is philosophical. If art is how human beings process experience and make meaning, what happens when the artifacts of art can be produced without the experience? The book's argument in The Role of Art and Culture — that science fiction films matter precisely because they are how we work through our anxieties and hopes about technology — takes on a recursive quality when the art itself is produced by the technology it is supposed to help us understand.
This is where The Man in the White Suit comes in. That film's lesson — that a brilliant invention can threaten the livelihoods and power structures of an entire industry — applies with startling precision. The tension between democratizing creative tools (anyone can now produce professional-looking imagery) and devaluing creative labor (professional illustrators are losing their incomes) is exactly the kind of could-we-should-we dilemma the book was written to illuminate.
The concentration question from Power, Privilege, and Access is also central. The companies that control the most powerful generative models control, in effect, a new means of cultural production. The question of who benefits from AI-generated creativity and who is displaced by it will be one of the defining equity questions of the coming decade. See "If an AI creates something beautiful, who does it belong to?"