## "A few companies control the most powerful AI on Earth. Should I be worried?" OpenAI, Anthropic, Google DeepMind, Meta, and a small number of other organizations control the frontier of artificial intelligence. They decide what models are built, what safety measures are implemented, what data is used, and who gets access. This is an extraordinary concentration of a transformative capability — and it has happened largely without public deliberation. ### Why This Question Is Hard The concentration is partly a result of economics. Training frontier AI models requires billions of dollars in computing infrastructure, vast datasets, and deep technical expertise. These barriers to entry naturally consolidate the field. It is not a conspiracy — it is what happens when a technology requires enormous resources to develop. But the consequences of that concentration are significant. The companies building frontier AI are making decisions that will shape economies, labor markets, information ecosystems, and potentially the nature of intelligence itself. These decisions are made by small leadership teams, informed by their own values, incentives, and competitive pressures. The public — the people who will live with the consequences — has essentially no voice. The comparison to other concentrated technologies is instructive but imperfect. Nuclear technology was concentrated by governments through deliberate policy choices, driven by the technology's destructive potential. The early internet was concentrated in a few institutions but rapidly decentralized as costs dropped. AI is following neither path cleanly. Costs are high enough to keep frontier development concentrated, but the models themselves, once trained, can be distributed widely (as Meta has done with its LLaMA models). The picture is one of concentrated development and potentially distributed deployment — a combination that creates its own governance challenges. 
### What the Book Brings to This

[The Man in the White Suit](/md-files/movies_man_in_the_white_suit.md) is the book's most direct treatment of what happens when innovation threatens existing power structures — and who gets to decide whether an innovation sees the light of day. In that film, both factory owners and workers conspire to suppress a brilliant invention because it threatens their interests. The parallel to AI is not exact — nobody is suppressing frontier AI — but the underlying question is the same: who controls transformative technology, and in whose interest?

The book's [Permissionless Innovation](/md-files/rei_permissionless_innovation.md) framework applies in a specific way here. The AI labs have, to a significant degree, innovated without permission — developing and deploying systems with profound societal implications without waiting for regulatory frameworks or public consensus. The [Corporate Responsibility](/md-files/rei_corporate_responsibility.md) theme asks what obligations come with that power.

[Power, Privilege, and Access](/md-files/rei_power_privilege_access.md) cuts in multiple directions. Concentration of AI capability in a few companies also means concentration in a few countries — primarily the US and China. What does this mean for the rest of the world? The book's concern with who benefits and who is left behind extends from individuals to nations.

Whether you should be worried depends on what you think concentrated power requires: trust in the institutions that hold it, accountability mechanisms that constrain it, or both. Right now, the accountability mechanisms are thin.
### Explore Further

- [LLMs, Frontier AI, and Agentic Systems](/md-files/p18_llms_frontier_ai.md) — what these companies are building
- [The AGI Debate](/md-files/p18_agi_debate.md) — the stakes of the race
- [Permissionless Innovation and Technological Hubris](/md-files/rei_permissionless_innovation.md) — innovating without asking
- [Corporate Responsibility and the Profit Motive](/md-files/rei_corporate_responsibility.md) — the obligations of power
- [Power, Privilege, and Access](/md-files/rei_power_privilege_access.md) — who benefits from concentrated AI capability
- [Can we regulate AI without killing the good parts?](/md-files/ceq_regulating_ai.md) — the governance question