Russia to the United Nations: Don’t Try to Stop Us From Building Killer Robots
UN efforts to limit or regulate military AI may be failing before they even begin.
Arms control advocates had reason for hope when scores of countries met at the United Nations in Geneva last week to discuss the future of lethal autonomous weapons systems, or LAWS. Unlike previous meetings, this one involved a Group of Governmental Experts, a big bump in diplomatic formality and consequence, and those experts had a mandate to better define lethal autonomy in weapons. But hopes for even a small first step toward restricting “killer robots” were dashed as the meeting unfolded. Russia announced that it would adhere to no international ban, moratorium, or regulation on such weapons. Complicating the issue, the meeting was run in a way that made any meaningful progress toward defining (and thus eventually regulating) LAWS nearly impossible. Multiple attendees pointed out that this played directly into Russia’s interests.
Russia’s Nov. 10 statement amounts to a lawyerly attempt to undermine any progress toward a ban. It argues that defining “lethal autonomous robots” is too hard, not yet necessary, and a threat to legitimate technology development.
“According to the Russian Federation, the lack of working samples of such weapons systems remains the main problem in the discussion on LAWS…this can hardly be considered as an argument for taking preventive prohibitive or restrictive measures against LAWS being a by far more complex and wide class of weapons of which the current understanding of humankind is rather approximate,” the statement says, and it goes on to warn that too much effort to ban lethal robots could have an unintended chilling effect on AI generally. “The difficulty of making a clear distinction between civilian and military developments of autonomous systems based on the same technologies is still an essential obstacle in the discussion on LAWS. It is hardly acceptable for the work on LAWS to restrict the freedom to enjoy the benefits of autonomous technologies being the future of humankind.”
One attendee, who did not feel comfortable being named on the record given the highly sensitive nature of the talks, said that “the Russians are not interested in making progress on this.” Asked whether the lack of progress during the meeting, a result of the unusual way it was run, seemed to serve Russia’s interests, the participant responded: “Yes, of course.”
Multiple attendees put much of the blame for that on Indian Ambassador Amandeep Singh Gill, the chairperson of the Group of Governmental Experts and essentially the UN official sanctioned to run the meeting. In both his comments and a position paper he put forward, Gill echoed aspects of the Russian position.
More importantly, Gill approached the entire five-day meeting in a way that made any progress toward defining, and thus perhaps one day regulating, killer robots very difficult, they said. Rather than examine the serious proposals and position papers put forward by governmental delegations, Gill presided over a chaotic and ultimately inconsequential discussion of AI generally, barely touching on the stated purpose of the meeting during the five days.
At one point, he even shut down ambassadors and delegates who tried to turn the meeting back to the work of defining lethal robots. “A lot of states came prepared to talk about definitions. That’s what the mandate was,” said one participant. For a governmental delegation “to put out a position paper like that, it has to get vetted through a lot of parts of your government… it was discouraging. It’s important that States feel like they’re vested in the process.” That didn’t happen, said the participant.
Defense One has reached out to Gill’s office for comment and will update this report when we hear back.
Other attendees noted that Russian defense contractors, notably Kalashnikov, are already marketing weapons with artificial-intelligence features such as autonomous targeting and firing. Defining killer robots doesn’t seem to be an obstacle when the objective is selling them.
“One of the things that's a bit incongruous about Russia's position is that their own defense companies have made claims about developing autonomous weapons: So while you have Russia saying ‘we shouldn't talk about these weapons because they don't exist,’ it sure looks like Russian companies are racing to develop them,” said Paul Scharre, a senior fellow and director of the Technology and National Security Program at the Center for a New American Security. (Scharre is also the author of the forthcoming book, Army of None: Autonomous Weapons and the Future of War.) He pointed to numerous instances in which Russian commanders had essentially announced both the intent and the willingness to develop the very sorts of weapons that Russia now says cannot be defined. “I would like to hear Russia clarify its position and intentions. The United States has a detailed policy in place on how it intends to approach the issue of autonomous weapons,” he said.
But Sam Bendett, an associate research analyst with the Center for Naval Analyses’ Russia Studies Program and a fellow in Russia Studies at the American Foreign Policy Council, argued that the Russian position is more nuanced than the strongest language in its statement suggests. “Russians are also unsure how exactly AI-driven military robotics systems would function given that artificial intelligence in a battlefield capacity is still an evolving concept,” he said.
But Bendett’s work also documents growing Russian interest in developing and fielding weapons that use increasingly sophisticated AI. In 2014, the Russian Ministry of Defense launched a comprehensive plan for the development of prospective military robotics through 2025. In 2016, the Russians inaugurated an annual conference, “Roboticization of the Armed Forces of the Russian Federation.” Bendett believes that Russian defense spending on AI will grow, since the Ministry of Defense has at least 10 research centers looking at applications for autonomy in warfare. And, of course, Russian President Vladimir Putin has said that the nation that leads in AI will rule the world.
“Russia taking a defensive stance against an international body seeking to regulate weapons other than destructive nuclear bombs should not have been such a surprise. After all, in many international forums, Russia stresses the ‘sovereignty of nations free to pursue their own political/military/economic course’ as a cornerstone of an international order they envision as a better alternative to the unipolar world with the United States in the lead,” said Bendett.