Autonomous Weapons and Lethal Autonomous Systems

The debate over autonomous weapons has moved from academic conferences to active battlefields. AI-enabled drones, autonomous targeting systems, and algorithmic decision-making in military operations are no longer hypothetical — they are operational, and the governance frameworks to manage them do not yet exist.

What Has Changed Since 2018

The book touched on military applications of AI and automation in its discussions of Elysium (robotic law enforcement) and the broader themes of "Could We? Should We?" and "Informed Consent." But the acceleration since 2018 has been dramatic.

Small autonomous drones — cheap, expendable, and capable of identifying and engaging targets without real-time human input — have been deployed in multiple conflict zones. AI systems assist in target identification, pattern-of-life analysis, and strike recommendations. The United States, China, Russia, Israel, Turkey, and others are investing heavily in autonomous military capabilities. The technology is largely software-based, which means it proliferates differently from traditional weapons: you cannot control its spread through material supply chains the way you can control nuclear fissile material.

The central question in this space is "meaningful human control." At what point in the kill chain — the sequence from target identification to engagement — must a human being be present and making decisions? Different actors draw this line in very different places. Some argue that humans must authorize every individual strike. Others accept systems that operate autonomously within defined parameters, with humans setting the rules of engagement but not approving each action.
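The policy literature often labels these positions "human in the loop" (a person authorizes each engagement), "human on the loop" (the system acts within preset parameters while a person supervises and can intervene), and "human out of the loop" (no real-time human involvement). As a purely conceptual sketch, the distinction can be expressed as a small taxonomy; the type and function names below are hypothetical illustrations of the vocabulary, not drawn from any real system.

```python
from enum import Enum, auto

class ControlMode(Enum):
    """Where a human sits relative to the decision loop (illustrative only)."""
    HUMAN_IN_THE_LOOP = auto()      # a human must authorize each individual action
    HUMAN_ON_THE_LOOP = auto()      # system acts within preset parameters; a human supervises and can abort
    HUMAN_OUT_OF_THE_LOOP = auto()  # system operates without real-time human input

def requires_per_action_authorization(mode: ControlMode) -> bool:
    """Only the in-the-loop arrangement gates every action on a fresh human decision."""
    return mode is ControlMode.HUMAN_IN_THE_LOOP
```

The disagreement described above is, in effect, a disagreement over which of these modes still counts as "meaningful human control."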

Why It Matters

This is one of the clearest cases where the book's "could we, should we" framework applies. The capability exists. The strategic incentive to deploy it is strong. And the ethical questions are profound: Can a machine make a morally acceptable decision about who lives and who dies? Does removing human judgment from lethal decisions cross a line that should not be crossed regardless of military advantage? What happens to accountability when an algorithm makes a mistake?

The pace gap — between technological deployment and governance — is stark. The UN Convention on Certain Conventional Weapons has hosted discussions on lethal autonomous weapons since 2014, with limited progress toward binding regulation. Meanwhile, the technology continues to be developed, tested, and deployed. This is the Collingridge dilemma in its most consequential form: the technology is easiest to regulate when we know least about it, and hardest to control once it is embedded in military doctrine and procurement.

The question of how to govern technologies that cross borders, explored in "These technologies don't stop at borders. How do we govern them?", is perhaps nowhere more urgent than here.

Explore Further