The Moral Hazard of the AI Soldier

April 4, 2026

Robots may help protect troops. They may also make war easier to wage — and harder to answer for.

By Carlos G. Sháněl, Center for Cybersecurity Studies, Casla Institute

The argument for AI-powered soldiers is, on its face, deeply humane. Why send a young infantryman into a breach, a bunker or a contaminated zone if a machine can go instead? Why risk trauma, panic or human error when a robot does not tire, does not fear and does not flinch?

It is a compelling case, and one that should not be dismissed with easy dystopian rhetoric. Robotics already has a legitimate place in war. Militaries use unmanned systems for surveillance, explosive disposal, logistics and force protection. In those roles, machines can reduce exposure, extend reach and save lives. The question is not whether robotics belongs on the battlefield. It already does.

The harder question is what happens when support tools begin to blur into substitutes for human judgment.

That question has felt less abstract since I began my studies at the University of Colorado Boulder, where the use of robotics in warfare is one of the issues I have been grappling with. My coursework has pushed me to think more precisely about autonomy, responsibility and human control in war. I am still early in the program, early enough to know how much I do not yet know. But some tensions are already unmistakable. The most important is this: a robot that carries supplies or disarms explosives is not the same thing as a machine that identifies, selects and kills targets with minimal human oversight. Yet on the battlefield, that line is under pressure.

War does not wait for ethical clarity. It rewards whatever works.

That is why the emergence of AI-enabled warfare in Ukraine matters far beyond Ukraine. The conflict has become a proving ground for systems that automate parts of combat once reserved for humans. Drones already perform reconnaissance, strike targets and operate in environments where electronic warfare can sever communication with their operators. Under those conditions, the tactical incentive is obvious: let the machine do more on its own. The pressure is not theoretical. It is operational.

Advocates of AI soldiers see this as the natural next step. If drones can already navigate, identify and attack with growing autonomy, why not build robotic systems that move through buildings, traverse hazardous terrain, carry standard weapons and work alongside troops? Their case is not irrational. Robots do not suffer exhaustion or terror. They can enter spaces too dangerous for people. They can operate in chemical or biological conditions that would incapacitate human beings. In a military profession organized around reducing friendly casualties, those advantages will be hard for any government to ignore.

This is where the debate becomes more difficult than slogans allow. There are real military and humanitarian arguments for some forms of battlefield robotics. A serious critique has to acknowledge that. It cannot rest on the pretense that a human soldier is always more restrained, more accurate or more ethical than a machine. Human beings panic. Human beings commit atrocities. Human beings misidentify civilians. Anyone writing honestly about war has to start there.

But conceding those truths does not resolve the core problem. It sharpens it.

The deepest danger posed by AI-powered soldiers is not simply that they may malfunction, though they will. It is that they may alter the political and moral meaning of war. If a state can project force while exposing fewer of its own citizens to danger, the domestic cost of military action falls. And when the cost falls, the threshold for using force can fall with it.

For centuries, one of war’s few restraints has been its ability to impose visible sacrifice on the societies that wage it. Coffins returning home do not guarantee wisdom, but they can impose accountability. They remind the public that war is not a concept, or a briefing, or a clean line on a map. It is loss. A government that can promise fewer body bags through robotic force may find it easier to sustain operations, expand missions or accept risks that would once have seemed politically intolerable.

That is not a futuristic concern. It is one of the oldest temptations in statecraft: to make coercion feel cheap.

There is another problem, equally serious and perhaps even more elusive. Responsibility in war is already diffuse enough. When an AI-enabled system kills the wrong person, strikes a civilian or escalates in ways its operators did not anticipate, who answers for it? The commander who deployed it? The contractor who built it? The engineer who trained the model? The official who approved the procurement? The operator who watched but could not intervene in time?

Modern warfare already distributes action across long chains of command and technology. AI threatens to make those chains so opaque that accountability survives mostly as a talking point. Everyone is involved in theory; no one is responsible in practice.

That is why the ethical center of this debate remains autonomy. Not because every autonomous function is equally dangerous, but because the delegation of lethal judgment is qualitatively different from the automation of support tasks. A machine can detect motion, map terrain, carry ammunition and even improve defensive response times. But once it is permitted to make or effectively determine life-and-death decisions, we move from assistance to abdication.

This distinction is easy to blur because the language surrounding military AI is often designed to blur it. Terms like “human-on-the-loop” and “meaningful human control” can sound reassuring while masking how quickly meaningful oversight disappears in real combat conditions. Jamming, speed, distance and confusion all push decision-making toward the machine. The system may be described as supervised, yet the human role may amount to little more than authorizing a process whose inner logic no one can fully audit in real time.

A robot can carry a rifle. It cannot bear moral responsibility.

That is why I am skeptical of the growing confidence with which some technologists and defense entrepreneurs describe AI soldiers as an ethical advance. They may, in some contexts, reduce risk to friendly forces. They may be useful in rescue, logistics, breaching, reconnaissance and operations in contaminated or inaccessible environments. Democracies should not reject those uses out of reflex. They should invest in technologies that protect troops, improve defensive capabilities and reduce unnecessary exposure.

But they should also resist a more seductive claim: that because machines can do some parts of war better than humans, they should be entrusted with the gravest parts of war as well.

That is not a technical upgrade. It is a moral shift.

The challenge for democratic societies is to draw firmer lines while they still can. Robotics should serve clearly bounded human purposes, not dissolve them. Systems must be traceable, interruptible, governable and subject to real accountability. The burden should not be on critics to prove that near-autonomous killing is dangerous. The burden should be on its advocates to explain why any democracy should delegate lethal judgment to systems whose speed and complexity are already outpacing meaningful public oversight.

None of this means other powers will stop. Russia and China will not abandon military AI because Western democracies discover moral restraint. Nor will the commercial race to secure defense contracts slow down on its own. But competition is not an argument for thoughtlessness. It is a test of whether democratic societies can innovate without surrendering the principles they claim to defend.

I do not doubt that robotics will play an expanding role in warfare. Some of that expansion is inevitable. Some of it may even be desirable. But the arrival of AI soldiers should not be mistaken for progress simply because it is technologically impressive. A battlefield populated by machines may protect some troops while exposing everyone else — civilians, institutions, democratic accountability itself — to new forms of risk.

War is not only a contest of capability. It is a test of judgment, restraint and responsibility. The more eagerly we hand those burdens to machines, the more likely we are to preserve the efficiency of war while eroding what remains of its limits.

That would not be a triumph of innovation.

It would be a failure of nerve.