Pentagon: A Human Will Always Decide When a Robot Kills You

The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage (.pdf, thanks to Cryptome.org). Translated from the bureaucratese: the Pentagon wants to make sure that there isn’t a circumstance in which one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automates the decision to harm a human being.

The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.

It’s reasonable to worry that advancements in robot autonomy are going to slowly push flesh-and-blood troops out of the role of deciding whom to kill. To be sure, military autonomous systems aren’t nearly there yet. No Predator, for instance, can fire its Hellfire missile without a human directing it. But the military is dipping its toe into murkier ethical and operational waters: The Navy’s experimental X-47B prototype will soon be able to land on an aircraft carrier with the barest of human direction. That’s still a long way from deciding on its own to release its weapons. But this is how a very deadly slippery slope begins.

It’s that sort of thing that worries Human Rights Watch, for instance. Last week, the organization, among the most influential non-governmental institutions in the world, issued a report warning that new developments in drone autonomy represented the demise of established “legal and non-legal checks on the killing of civilians.” Its solution: “prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.”

Laudable impulse, wrong solution, writes Matthew Waxman. Waxman, a former Defense Department official for detainee policy, and his co-author Kenneth Anderson observe that technological advancements in robotic weapons autonomy are far from predictable, and that the definition of “autonomy” is murky enough to make it unwise to tell the world it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”