Should We Fear an AI Arms Race?
Five reasons the benefits of defense-related artificial intelligence research outweigh the risks—for now.
This past summer, many titans of science and technology, including Stephen Hawking, Elon Musk, and Steve Wozniak, signed an open letter calling for a ban on the application of artificial intelligence (AI) to advanced weapons systems.
The call for a ban is directly at odds with the Pentagon’s plans for future warfare, which include an increased emphasis on AI and unmanned systems, especially in cyberspace and in environments where communications are slow or unreliable, such as undersea. Deputy Defense Secretary Robert Work has said, “We believe strongly that humans should be the only ones to decide when to use lethal force. But when you’re under attack, especially at machine speeds, we want to have a machine that can protect us.”
Unlike previous autonomous weapons, such as landmines, which were indiscriminate in their targeting, smart AI weapons might limit the potential for deaths of soldiers and civilians alike. The letter conveys an appreciation of the benefits and risks. “Replacing human soldiers by machines is good by reducing casualties for the owner,” the authors write, “but bad by thereby lowering the threshold for going to battle.” But is a ban really the best option?
A ban is the most extreme form of regulation, and one that should be resorted to only if the answer to both of the following questions is yes:

Question 1: Are the risks greater than the benefits for every other regulatory option?

Question 2: Can a ban significantly reduce the risks?

At the other extreme, a technology should be left unregulated only if the answer to at least one of the following questions is yes:

Question 3: Are the risks negligible?

Question 4: Would every regulatory option reduce the benefits by more than it reduces the risks?
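To make the logical structure of these two tests explicit, here is a minimal sketch in Python. The function names and yes/no inputs are illustrative assumptions, not part of any formal regulatory framework; the point is simply that a ban requires both answers to be yes, while leaving the technology unregulated requires only one.

```python
# Minimal sketch of the decision logic above; inputs are hypothetical yes/no judgments.

def ban_justified(risks_exceed_benefits_of_all_other_options: bool,
                  ban_significantly_reduces_risks: bool) -> bool:
    # A ban, the most extreme option, requires "yes" to BOTH questions.
    return risks_exceed_benefits_of_all_other_options and ban_significantly_reduces_risks


def no_regulation_justified(risks_negligible: bool,
                            every_option_costs_more_benefit_than_risk_reduced: bool) -> bool:
    # Leaving the technology unregulated requires "yes" to AT LEAST ONE question.
    return risks_negligible or every_option_costs_more_benefit_than_risk_reduced


# Example judgments along the lines the article argues for:
print(ban_justified(False, True))             # False: a ban is not justified
print(no_regulation_justified(False, False))  # False: neither is zero regulation
```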
On the rewards side of AI weapons, increased precision is expected to reduce collateral damage, increased speed may stop some attacks before they happen, and autonomy should remove soldiers from the battlefield. On the risks side, many specific scenarios have been envisioned, but most of them fall into five categories: control, hacking, targeting, mistakes, and liability.
Control: Can AI be controlled? Or will it eventually control us? This is the familiar story from many Hollywood blockbusters, in which machines with superior intellect develop goals and desires that conflict with those of their creators. Think HAL or Ultron. The odds of this happening in the near term are low enough to argue that the risks are negligible.
Hacking: Are AI systems more vulnerable to hacking? Not necessarily. Modern electronic systems, including weapons, are already rife with vulnerabilities. AI may be exploitable as well, but it is harder to write malware for a machine that has configured its own algorithms. The benefits outweigh the risks, but those risks can and should be reduced.
Targeting: Should a human always make the final decision? In most cases there is little benefit to removing the human. An exception is when an AI can minimize harm by acting quickly, such as by preventing an attack or by buying time to determine whether an assailant is reaching for a gun or a cell phone. Standards could be established that specify the required level of certainty and the specific scenarios in which an AI would be allowed to proceed without human intervention. It may also be that an AI equipped only with non-lethal weapons could achieve nearly all of the benefits with sufficiently reduced risk.
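One way to picture such a standard is as an engagement gate: the system may act on its own only in pre-approved scenarios and only above a specified confidence threshold; everything else is referred to a human. The sketch below is a hypothetical illustration, not an existing standard; the scenario names, threshold value, and non-lethal fallback are all assumptions.

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# Scenario names, threshold, and responses are illustrative assumptions only.

AUTONOMY_PERMITTED = {"incoming_missile_defense", "cyber_intrusion_response"}
REQUIRED_CERTAINTY = 0.995  # the level of certainty a standard might require


def decide(scenario: str, threat_confidence: float, non_lethal_available: bool) -> str:
    """Return the action the system is permitted to take without a human."""
    if scenario in AUTONOMY_PERMITTED and threat_confidence >= REQUIRED_CERTAINTY:
        # Pre-approved, high-confidence case: act at machine speed,
        # preferring a non-lethal response when one is available.
        return "engage_non_lethal" if non_lethal_available else "engage"
    # Everything else defers to a human decision-maker.
    return "refer_to_human_operator"


print(decide("incoming_missile_defense", 0.999, non_lethal_available=False))
print(decide("crowd_monitoring", 0.999, non_lethal_available=True))
```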
Mistakes: Would AI weapons make mistakes? Probably, but humans certainly will. Well-designed and well-tested machines are almost always more reliable than humans. AI weapons systems can be held to strict standards for design and testing; indeed, this should be a priority in their development.
AI systems may make mistakes differently, though. Human mistakes are usually isolated incidents, but many separate AI weapons running the same software could make the same mistake at the same time. For instance, if an AI mistakenly identifies a brand of fireworks as an explosive device, then during certain holidays many innocent people could be put at risk before the mistake is identified and corrected.
Humans make mistakes on a daily basis, and regulation, such as standards for testing and requirements for variability in AI initialization and training to avoid simultaneous errors, can help reduce the risks.
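To illustrate what variability in initialization and training buys, the sketch below trains two small classifiers on the same task with different random seeds and bootstrapped data, then measures how often they are wrong on the same input; those simultaneous errors are the failure mode described above. The dataset, models, and library (scikit-learn) are assumptions chosen for illustration.

```python
# Sketch: diverse initialization and training data reduce the chance that
# independent systems fail on the same input at the same time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a target-recognition task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)


def train_variant(seed: int) -> MLPClassifier:
    """Train one 'copy' with its own random initialization and bootstrap sample."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_train), size=len(X_train))  # resample training data
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    return model.fit(X_train[idx], y_train[idx])


a, b = train_variant(1), train_variant(2)
err_a = a.predict(X_test) != y_test
err_b = b.predict(X_test) != y_test

# The worrying case is both systems being wrong on the same input at once.
print("error rate A:", err_a.mean())
print("error rate B:", err_b.mean())
print("simultaneous errors:", (err_a & err_b).mean())
```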
Liability: Assuming there will be mistakes, the AI itself cannot be held liable. So who is? If the autonomous-vehicle industry is any indication, companies designing AI may be willing to accept liability, but their motivations may not align perfectly with those of society as a whole. It may be safer to assign responsibility to the warfighter in command of the weapons, or perhaps to assign financial responsibility to the company and personal responsibility to the warfighter.
This list does not cover every possible concern or benefit, and the categories should not be considered equal in their potential impact on the debate. Reasonable people can disagree over the sizes and likelihoods of both the benefits and the risks, but it is difficult to justify a ban, the most extreme regulatory option, at this stage. It is also hard to justify the opposite extreme: forgoing protective regulation altogether.
The goal of regulation is to maximize benefits while minimizing risks, and there are strong arguments that, for now, AI weapons can be made a net positive. That’s not to say AI weapons are without risks, just that the benefits are too substantial to forgo and that the risks can be mitigated with regulation more moderate than a ban.
The titans of the tech world deserve thanks for warning about the potential risks of military applications of artificial intelligence. Even if a ban is not in the cards, their help will be needed now more than ever, on both the technological and regulatory fronts, to tilt the balance of AI weapons as far as possible in humanity’s favor.