The Pursuit of AI Is More Than an Arms Race
Dealing wisely with the challenges of artificial intelligence requires reframing the current debates.
Are the U.S., China, and Russia recklessly undertaking an “AI arms race”? Clearly, there is military competition among these great powers to advance a range of applications of robotics, artificial intelligence, and autonomous systems.
So far, the U.S. has been leading the way. AI and autonomy are crucial to the Pentagon’s Third Offset strategy. Its Algorithmic Warfare Cross-Functional Team, Project Maven, has become a “pathfinder” for this endeavor and has started to deploy algorithms in the fight against ISIS. The Department of Defense also plans to create a “Joint Artificial Intelligence Center,” which could consolidate DoD AI initiatives.
At the same time, the Chinese People’s Liberation Army is prioritizing military innovation at the highest levels, pursuing a range of defense applications of AI, including swarm intelligence and decision support systems for submarines. The Russian military, meanwhile, is redoubling its efforts, seeking a range of capabilities from smarter missiles to enhanced electronic warfare.
These great powers are hardly alone; Israel, India, Japan, South Korea, France, Australia, the United Kingdom, and others are also exploring the potential of such new capabilities.
The prospect of AI-infused warfare is also drawing warnings and protest. There are calls to stop “killer robots” and fears of “weaponized AI.” In March, leading AI researchers called for a boycott of a Korean university and its defense industry partner, criticizing their efforts to “accelerate the arms race” for autonomous weapons. This month, Google employees urged their leaders to cease the company’s work on Project Maven and commit never to “build warfare technology.” Repeatedly, some of the world’s best-known technologists have warned of the consequences of an “AI arms race.”
However, the concept of an “arms race” is too simplistic a way to think of the coming AI revolution. To confront its challenges wisely requires reframing the current debates.
First and foremost, AI is not a weapon, nor is “artificial intelligence” a single technology; rather, it is a catch-all for a range of techniques whose varied applications enable new capabilities. In the near term alone, the utility of AI in defense may include the introduction of machine learning to cybersecurity and operations, new techniques for cognitive electronic warfare, and the application of computer vision to analyze video and imagery (as in Project Maven), as well as enhanced logistics, predictive maintenance, and more.
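To make the last of these concrete, here is a minimal sketch of what computer-vision analysis of imagery can look like in practice: running an off-the-shelf object detector over a single frame. The model (a pretrained Faster R-CNN from torchvision), the random stand-in frame, and the 0.5 confidence threshold are all illustrative assumptions, not a description of Project Maven’s actual pipeline.

```python
# Illustrative sketch: object detection on one frame of imagery using an
# off-the-shelf, pretrained model. Not any military system's actual pipeline.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

# Stand-in for one video frame: a (3, H, W) float tensor scaled to [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    (detections,) = model([frame])  # the detector accepts a list of images

# Report anything the model is reasonably confident about.
for label, score, box in zip(
    detections["labels"], detections["scores"], detections["boxes"]
):
    if score > 0.5:  # illustrative threshold
        print(f"{categories[label]}: {score:.2f} at {box.tolist()}")
```

In a real analytic workload, the same loop would simply run over sampled frames of full-motion video, which is what makes such tooling attractive for triaging imagery at scale.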
Despite the active research and development underway, these technologies remain nascent and brittle enough that “fully autonomous” weapons (or even cars) are hardly imminent. Moreover, militaries – even those that care less about laws and ethics – may be unwilling to relinquish human control due to the risks.
The concept of an “arms race” also doesn’t capture the multifaceted implications of the AI revolution. In the aggregate, AI is often characterized as the new electricity or as analogous to the steam engine. Indeed, AI could catalyze transformation in just about every aspect of our societies, economies, and militaries, and its implications for national and global security may prove far broader than the “weaponization” of AI alone. For instance, automation’s ongoing disruption of employment could ultimately prove highly destabilizing, perhaps even provoking a neo-Luddite backlash. In this and other ways, unevenness in the benefits of AI could exacerbate economic inequality within and among nations.
At the same time, AI should be recognized as a strategic technology with implications for national competitiveness that extend well beyond the military domain. The U.S. and Chinese economies are on track to benefit the most from AI; China, in particular, may be poised to leverage it to accelerate growth. As China pursues plans and policies to “lead” in AI, including new educational programming, the U.S. has yet to advance a strategy of its own. For any nation seeking competitive advantage, it will be critical, at a minimum, to prioritize the cultivation and recruitment of AI talent and to fund long-term basic research in the field.
Beyond this reality of competition, it is important to recognize the robust, extensive cooperation in AI among researchers and enterprises. In today’s complex, globalized world, free flows of ideas, talent, and knowledge are vital to scientific progress and lasting innovation. Such engagement, including between the U.S. and Chinese innovation ecosystems, can be mutually beneficial, but it may in some cases merit scrutiny because of the “dual-use dilemma” inherent in a technology that China has prioritized within its national strategy of “military-civil fusion.”
This open, collaborative character of AI development, in which the private sector has acted as the primary engine of innovation, also renders infeasible most attempts to ban or constrain the technology’s diffusion. For that reason, traditional paradigms of arms control are unlikely to be effective if applied to this so-called arms race.
As of April 13, a total of 26 states have called for a “ban” on “fully autonomous” weapons while reaffirming the importance of “human control.” However, there are reasons for skepticism, even outright pessimism, about the prospects for such a ban, even if states could agree upon a consensus definition of what, precisely, is to be banned, or upon the meaning of human control.
In many respects, this particular Pandora’s box is already open, so calls for absolute bans may prove too little, too late. Increasingly, states and even non-state actors are using commercial, off-the-shelf technologies to enable new military capabilities; ISIS, for example, has used cheap commercial drones to gather intelligence and provide close air support. As rapid advances in AI continue, new products and openly available algorithms could make such capabilities autonomous, and thus scalable.
It seems unlikely that any major military would be willing to tie its hands or constrain its pursuit of technologies and capabilities that are so strategic and evolving so rapidly. Beyond the lack of trust among great powers, verifying compliance with any future agreement would be challenging at best.
Pragmatic Ways to Reduce Risk
So we must also look for pragmatic approaches to mitigate the risks that may arise as major militaries seek to employ AI. Despite the realities of great-power rivalry, there may still be opportunities to advance engagement and cooperation on these issues.
The United Nations Group of Governmental Experts (UN GGE) on lethal autonomous weapons systems (LAWS) convened again this past week. This initiative is enabling vital discussions of core concepts and questions, particularly ethical issues and human control, and may lay a critical foundation for future engagement.
However, there are lessons to be learned from cyberspace, where legal and normative frameworks have yet to take hold (and indeed are violated routinely and with utter impunity), given the lack of shared interests and mechanisms for accountability. It is worth recalling that a similar line of effort undertaken by a UN GGE on information security – which achieved initial consensus and substantive progress, including the principle that international law applies to cyberspace – has since failed, due in part to major divergences in the perspectives and preferences of the great powers involved in the process. This latest UN GGE will likely struggle with similar challenges.
What are other, realistic options for a way forward? Today, while the U.S., China, and Russia tend to perceive each other as rivals, and even potential adversaries, these great powers still share a basic commitment to strategic stability. Since AI technologies remain brittle and often vulnerable to hacking or spoofing, their operationalization for military purposes may enable new capabilities but also create new vulnerabilities that could result in accidents or miscalculations, or, at worst, contribute to inadvertent escalation.
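The “spoofing” vulnerability is not hypothetical: small, carefully chosen perturbations can flip a modern image classifier’s output. The following is a minimal sketch of one well-known technique, the fast gradient sign method (FGSM); the pretrained model, the random stand-in input, and the epsilon value are illustrative assumptions, not any fielded military system.

```python
# Illustrative sketch of adversarial "spoofing": FGSM nudges every pixel of
# an input just enough to push a classifier toward a wrong prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03):
    """Return a copy of `image` perturbed to increase the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Stand-in input; in practice this would be a real, correctly classified image.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)          # the model's own prediction for x
x_adv = fgsm_perturb(x, y)
print("before:", y.item(), "after:", model(x_adv).argmax(dim=1).item())
```

The perturbation is bounded by epsilon per pixel, so the altered image can be nearly indistinguishable to a human observer even as the model’s output changes, which is precisely why brittleness of this kind raises concern in military settings.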
At a time when trust and shared interests remain lacking among great powers, there may still be opportunities to engage cooperatively on challenges of mutual concern, such as questions of AI safety and operational risk. By way of analogy, during the Cold War, the U.S. developed permissive action links to secure and enhance control over nuclear weapons in order to prevent their unauthorized arming or launch, later openly sharing that technology to improve overall levels of safety. (There have been more recent arguments that the U.S. should consider that precedent as a basis for technical cooperation with China on cyber issues, perhaps even to include sharing attribution techniques.)
Such precedents seem salient at a time when concerns are growing about the impact of AI on nuclear and strategic stability.
Even at the height of the Cold War, the two sides talked about shared concerns and aversions. Today, despite their respective enthusiasm about the AI revolution, the U.S., China, and Russia might all recognize the benefits of pragmatic measures aimed at risk reduction.
Could great powers pursue comparable engagements on measures to enhance the surety, safety, and security of AI systems in military use? Going forward, major militaries might commit, as a matter of best practice, to certain standards of testing and to the introduction of fail-safes or “circuit breakers” to assure control of AI-enabled and robotic systems. Perhaps there might also be shared interests in security cooperation to reduce the threat of these capabilities diffusing to non-state threat actors.
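What might such a “circuit breaker” look like in software? One minimal sketch, under wholly illustrative assumptions (the class names, thresholds, and latching behavior here are hypothetical, not any military standard), is a gate that permits autonomous action only while model confidence stays high and a budget of unsupervised actions remains, and that otherwise latches open until a human operator resets it:

```python
# Hypothetical sketch of a latching "circuit breaker" for an AI-enabled system.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float

class CircuitBreaker:
    def __init__(self, min_confidence: float = 0.9, max_autonomous_actions: int = 100):
        self.min_confidence = min_confidence
        self.max_actions = max_autonomous_actions
        self.remaining = max_autonomous_actions
        self.tripped = False

    def authorize(self, decision: Decision) -> bool:
        """Allow an autonomous action only while the breaker is closed."""
        if self.tripped or decision.confidence < self.min_confidence or self.remaining <= 0:
            self.tripped = True  # latch open: require a human to reset
            return False
        self.remaining -= 1
        return True

    def human_reset(self):
        """Only a human operator may close the breaker again."""
        self.tripped = False
        self.remaining = self.max_actions

breaker = CircuitBreaker()
decision = Decision(action="track_object", confidence=0.95)
if breaker.authorize(decision):
    print("autonomous action permitted:", decision.action)
else:
    print("breaker tripped; deferring to human operator")
```

The key design choice is that the breaker defaults to human control: once tripped, no autonomous action proceeds until a person intervenes, which is the kind of testable, auditable property that militaries could plausibly agree to as a shared best practice.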
While the current competition in AI transcends the narrow notion of an “arms race,” the risks of rapid advances and military competition in this strategic technology are real. We must start now to evaluate and reduce those risks.