Keeping Killer Robots on a Tight Leash
As militaries contemplate autonomous weapons technology, they must anticipate and plan for its consequences.
This week, delegates to the United Nations Convention on Certain Conventional Weapons will discuss autonomous weapon systems, or what activists call “killer robots.” Colorful language aside, the incorporation of increasing autonomy into weapons raises important legal, policy, and ethical issues. These include potential motivations for developing autonomous weapons, how they might proliferate, implications for crisis stability, and what their possible development means for the military profession.
No government has publicly stated that it is building autonomous weapons, but there are several reasons why they might start. The need for speed has already led at least 30 countries to deploy defensive systems with human-supervised autonomous modes, such as Aegis and Patriot, to protect ships, bases, and civilian populations from swift swarms of aircraft and missiles. Such systems are only likely to become more important as precision-guided missiles proliferate. Autonomous weapons could also be useful in situations where radio links work badly or not at all. In a conflict, militaries will seek to jam or disrupt each other’s communications. Moreover, some environments, such as undersea, are intrinsically challenging for communications. Finally, some governments could desire autonomous weapons, in part, simply because they believe potential adversaries might obtain them.
Given the military and political attractiveness of autonomous weapons, it behooves us to explore some of the potential problems they present. Even if they performed better than humans most of the time, they would still fail in some circumstances, and in different and perhaps unexpected ways. Autonomous systems do well when the environment is predictable and there is an objectively correct action, like landing a plane safely on an aircraft carrier. In other situations, they can be “brittle.” What if self-driving cars could reduce auto fatalities by 90%, but the remaining 10% of deaths occurred in crashes a human driver could easily have prevented? How should we think about those trade-offs?
Failures can stem from simple programming errors, human operators using autonomous systems incorrectly, or the system’s interaction with uncertain and unpredictable environments. This last problem is particularly acute when multiple autonomous systems interact at high speed. On May 6, 2010, a large automated sell order interacted with high-frequency trading algorithms to produce a “flash crash” in which the Dow Jones lost nearly 10% of its value in a matter of minutes.
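To see how such interactions can run away, consider a toy feedback loop in code: an automated seller pegged to trading volume and momentum-chasing algorithms feed each other until the price collapses. The rules and numbers below are invented purely for illustration and are not a model of the actual 2010 event.

```python
# Toy feedback loop (illustrative only): an automated, volume-pegged seller
# interacts with momentum-chasing algorithms. The rules and numbers here are
# invented for illustration; this is not a model of the actual 2010 event.

def simulate(steps=60):
    price = 100.0      # hypothetical index level
    volume = 1_000.0   # recent trading volume, in arbitrary units

    for t in range(steps):
        # Automated seller: sells a fixed fraction of recent volume,
        # regardless of price -- so more churn means faster selling.
        seller_orders = 0.09 * volume

        # Momentum algorithms: the further the price has fallen, the more
        # they sell, and their trading adds to measured volume.
        momentum_orders = 400.0 * max(0.0, 100.0 - price)

        total_sell = seller_orders + momentum_orders
        price -= 0.0005 * total_sell          # selling pressure pushes price down
        volume = 0.9 * volume + total_sell    # churn feeds back into volume

        if t % 5 == 0:
            print(f"t={t:2d}  price={price:7.2f}  volume={volume:10.0f}")
        if price <= 90.0:
            print(f"t={t:2d}  price down 10% -- the loop has run away")
            break

if __name__ == "__main__":
    simulate()
```

Run on its own, the sketch shows the essential dynamic: each system behaves exactly as programmed, yet their interaction produces an accelerating spiral that no single designer intended.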
Similarly, autonomous weapons could perform perfectly 99.99% of the time but, in the few instances where they did fail, fail quite badly. In 2003, the U.S. Patriot air defense system shot down two friendly aircraft, killing the pilots. Because U.S. operators had physical access to the system, they could disable it and prevent additional fratricides; had the malfunctioning weapon been beyond their physical reach, the outcome could have been far worse. Without a human “in the loop,” an autonomous system that was malfunctioning, or had been hacked by an enemy, could keep engaging targets until it ran out of ammunition or was destroyed. Software “kill switches” could help maintain human control over such systems, but only if communications links remained functional and the system still responded to software commands.
In such a scenario, a system failure, whether caused by a malfunction, an unanticipated interaction with the environment, or a cyber attack, raises the frightening prospect of mass fratricide on the battlefield. As in the 2010 flash crash, failures cascading across many interacting autonomous systems could, in theory, rapidly spiral out of control. In the worst case, the result could be fratricide, civilian casualties, or unintended escalation in a crisis, potentially even a “flash war.”
The risks of such an outcome make cyber security even more important than it is today. While virtually any modern weapon system is theoretically vulnerable to cyber attacks, the consequences of hacking into an autonomous system could be far greater, since an enemy could actually take control of the system. While a cyber vulnerability could ground a modern fighter aircraft, an adversary would be hard-pressed to take control of the aircraft with a pilot in the cockpit. In contrast, an adversary could theoretically take control of an unmanned vehicle. With today’s remotely piloted systems, an adversary would have to replicate the controls in order to operate one. As a system’s autonomy increases, however, an adversary would only need to alter higher-level command guidance and let the vehicle – or autonomous weapon – carry out the actions on its own.
The possibility of failures from spoofing, hacking, malfunctions, and unintended interaction with the environment can be reduced with better cyber security and pre-deployment testing. But testers can only harden a system against known risks. There will always be unanticipated situations that an autonomous system will encounter, particularly when an enemy is trying to hack, spoof, jam, or otherwise deceive a system. Some failures will always occur. The greater challenge is ensuring that when systems fail, they fail safe.
The financial markets’ response to the 2010 “flash crash” points toward a potential solution. After the incident, regulators imposed “circuit breakers” that halt trading when stocks plunge too rapidly. Similarly, “human circuit breakers” that halt an autonomous system’s actions if it begins to fail, and “human firewalls” that guard against hacking and spoofing attacks, could help ensure that when systems fail, they fail safely.
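To make the idea concrete, here is a minimal sketch in Python of what a “human circuit breaker” might look like in software. The class name, timeouts, and anomaly check are hypothetical, not a description of any fielded system, and, as noted above, any such safeguard only helps if the system is still executing its own code as intended.

```python
# Minimal sketch of a "human circuit breaker," assuming a hypothetical control
# loop. The class name, timeouts, and anomaly check are illustrative, not a
# description of any fielded system. The key property is that the default is
# to halt: a lost link or a silent supervisor stops engagements.
import time

AUTHORIZATION_TIMEOUT_S = 5.0    # how long a human "keep going" token stays valid
MAX_ENGAGEMENTS_PER_MIN = 3      # crude rate limit used as an anomaly tripwire


class HumanCircuitBreaker:
    def __init__(self):
        self.last_authorization = float("-inf")
        self.recent_engagements = []

    def renew_authorization(self):
        """Called only when an authenticated human supervisor checks in."""
        self.last_authorization = time.monotonic()

    def may_engage(self):
        now = time.monotonic()
        # Fail safe: no fresh human token (e.g., a jammed link) means no engagement.
        if now - self.last_authorization > AUTHORIZATION_TIMEOUT_S:
            return False
        # Trip the breaker if the recent engagement rate looks anomalous.
        self.recent_engagements = [t for t in self.recent_engagements if now - t < 60]
        return len(self.recent_engagements) < MAX_ENGAGEMENTS_PER_MIN

    def record_engagement(self):
        self.recent_engagements.append(time.monotonic())
```

The design choice that matters is the direction of the default: rather than requiring a stop command to get through, the system requires a fresh, authenticated go-ahead in order to continue, so a jammed link or an unresponsive supervisor halts engagements instead of letting them run on.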
Even if these problems could be adequately addressed, autonomous weapon systems raise challenging issues for the military profession. In an autonomous weapon, the decision about which specific targets to engage is no longer made by the military professional, but by the system itself, albeit in accordance with programming written by people. No longer can a warfighter be said to be responsible for each target engaged. Rather, the warfighter is responsible for placing the autonomous weapon into operation, but it is the engineers and programmers who designed the system who are responsible for target selection.
If an autonomous weapon did something unexpected, human operators could justifiably claim, in some cases, that it wasn’t consistent with their intentions and it wasn’t their fault. Advocates for a ban on autonomous weapons worry about an “accountability gap,” but the problem is greater than simply holding someone accountable after an incident. It is at least theoretically possible to design regimes to assign accountability after the fact. The challenge is that accountability might lie with the engineer or programmer, not the warfighter, which cuts to the core of what the military profession is – expertise in decisions about the use of force.
Many innovations have changed how combatants fight on the battlefield, from the horse to the crossbow to firearms and missiles, but none of them changed the essential fact that it was still combatants deciding when and how to use force. A warfighter would still decide whether to deploy an autonomous weapon, but that is a qualitatively different decision than authorizing specific targets. Rather than resembling the driver of a high-end automobile today, who benefits from autonomous driving aids such as intelligent cruise control and automatic lane keeping but remains in control of the vehicle, warfighters operating autonomous weapons would be more like passengers in Google’s steering-wheel-less self-driving car: in charge of deciding whether to get in, but, once aboard, along for the ride.
Fortunately, autonomy is not an either/or proposition. Militaries don’t face the choice of either building fully autonomous weapons or keeping humans fully in control (and modern sensing technologies already mean warfighters rely heavily on machines for many tasks). Instead, militaries should take a page from the field of advanced chess, or “centaur chess,” where human-machine hybrid teams that harness the best advantages of each are more likely to win than humans or machines alone. In decisions about the use of force, some mix of human and machine decision-making is likely optimal. Humans are far from perfect, and autonomous systems can help increase effectiveness and accuracy, reduce accidents, and even prevent some deliberate war crimes. At the same time, human judgment provides resilience against unanticipated situations that fall outside the bounds of an autonomous system’s programming. Given current technological developments, a blended approach that uses autonomy for some tasks and keeps humans “in the loop” for others is likely the best path on the battlefield.