
Writing in the Wall Street Journal last week, Robert H. Latiff, a retired Major General in the United States Air Force now teaching at the University of Notre Dame, and Patrick J. McCloskey, who teaches at Loyola University Chicago, take up the troubling question of military drones that, in the near future, will be able to deploy lethal force without direct human control. While acknowledging certain benefits of “emerging robotic armies” (e.g., fewer human casualties for the side deploying the drones), Latiff and McCloskey think that the issues involved are of tremendous moral importance:

The problem is that robotic weapons eventually will make kill decisions with no more than a veneer of human control. Full lethal autonomy is no mere step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any other object.

When robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster – which at least can be used for spare parts . . . .

Lethal autonomy also has grave implications for democratic society. The rule of law and human rights depend on an institutional and cultural cherishing of every individual regardless of utilitarian benefit. The 20th century became a graveyard for nihilistic ideologies that treated citizens as human fuel and fodder . . . . Surely death by algorithm is the ultimate indignity.

This makes it sound as if using fully autonomous military drones would be necessarily immoral and, indeed, the moral equivalent of Nazism or Stalinism. While I have great respect for both authors and especially for Gen. Latiff’s service to our country, I think this is confused.

Let’s start at the beginning. A military drone that “make[s] kill decisions” in fact makes no decisions at all, properly speaking. It is a machine, not a moral agent, and no one will hold the drone responsible morally or legally for what it does. On the contrary, if certain human beings build and use military drones that “make their own kill decisions,” these human beings will be morally and legally responsible for the results. This is no surprising conclusion but merely the application of the familiar principle that people are responsible for the reasonably foreseeable consequences of their actions.

Further, it is not true that being killed by a fully autonomous military drone necessarily violates the dignity of the human person. What violates the dignity of the human person is being treated unjustly. Regardless of the physical manner of death, a person who is killed unjustly has his dignity violated; a person who is killed justly does not.

Thus, if a person killed by an autonomous military drone could have been justly killed directly by the hand of a human being, the person killed has not been treated unjustly and his dignity has not been violated merely because the proximate cause of his death was a computer program running on a silicon chip in a drone. In my view, there is no special moral issue involved in the use of fully autonomous military drones.

All that said, I think I see what Latiff and McCloskey are getting at. They are worried that, by setting fully autonomous military drones loose upon the earth, we are valuing too cheaply the human lives that may be taken, primarily those taken by mistake. They worry, for example, that “it is far from clear whether robots can be programmed to distinguish between large children and small adults, and in general between combatants and civilians, especially in urban conflicts.”

This is a serious concern. Although many machines pose dangers to human life (think of automobiles), military drones are designed to kill human beings and so are especially dangerous. Fully autonomous military drones could be even more dangerous because one check on mistaken killing – the human controller – will have been removed. Such machines would thus be extraordinarily dangerous, and Latiff and McCloskey are right to worry about their use.

They are wrong, however, in thinking that there is some foundational moral issue in play. Although the machines involved are extraordinarily dangerous, the moral principle governing their use is perfectly ordinary: It is the familiar one that human beings should engage in an activity that poses dangers to others only if, in the totality of the circumstances, doing so is reasonable – i.e., if the good to be achieved, taking account of the probability of success, is proportionate to the possible ill effects. Whether using fully autonomous military drones is morally permissible will thus turn on the facts of the particular case – e.g., how effective such drones are in killing the enemy, how accurately they distinguish between combatants and non-combatants, and so on.

Consider this parallel. When human beings drive automobiles, they create dangers for innocent third parties, although we think that these dangers are usually justified. But suppose that a company – say, Google – develops cars that are driven not by human beings but by computers. (Google is actually working on this, with significant success.) Suppose further that it turns out that computers can drive cars more safely than human beings can (which is also likely to be the case someday). So if we all switch to computer-driven cars, the total accident rate will go down, and fewer people will be killed in car crashes.

Of course, some accidents will still happen, and some people will still be killed. Would anyone really say that those killed by computer-driven cars have been harmed more than those killed by the mistakes of human drivers? I don’t see any plausible basis for such a position.

And something analogous may turn out to be true about fully autonomous military drones. As is well known, human soldiers make mistakes distinguishing between combatants and non-combatants and thus sometimes kill the innocent. Fully autonomous military drones will make such mistakes too. The question is which set of mistakes is worse – that is, who will have the higher error rate, the human beings or the drones.

The answer to that could well vary with the context. In any event, it is an empirical question, and its resolution will figure in the determination of whether our use of such extraordinarily dangerous machines is morally justified in particular circumstances. That will be a very difficult question, but it will be difficult because it turns on facts and circumstances that we can know only imperfectly, not because it involves foundational issues in moral philosophy.

Robert T. Miller is professor of law and F. Arnold Daum Fellow in corporate law at the University of Iowa.



