Libya’s UAV Strike Should Galvanize Efforts to Do Something About Autonomous Weapons
Thorny definitional questions aren’t going to get easier, but the time to settle them has come.
The age of autonomous warfare has arrived. Or so it would seem.
According to a recent UN report, last spring members of Libya’s Government of National Accord used Turkish-made STM Kargu-2 drones to strike a column of Libyan National Army forces retreating from Tripoli. But this was not like previous GNA drone strikes. In the report’s description of the incident, which was first reported by Zak Kallenborn and David Hambling, the drones in this case “were programmed to attack targets without requiring data connectivity between the operator and the munition.”
In other words, they were, as the report put it, “lethal autonomous weapons systems.”
This sounds like a major milestone. For more than a decade, the international community has debated whether LAWS—as they’re known for short—should be regulated or banned. For the most part, this debate has worked on the assumption that these technologies do not yet exist. With this new strike, have we finally crossed the threshold?
It’s impossible to say. And that’s exactly the point.
If the Kargu systems had the capacity to execute all the steps of the strike cycle and to distinguish among a wide range of targets based on subtle indicators such as facial features (as the manufacturer claims), then they would certainly fall within most definitions of LAWS.
On the other hand, if they simply used algorithms to lock onto and track targets through their video cameras, they would be more akin to existing weapons, such as air defense systems and heat-seeking missiles, that do not rise to the level of true “autonomy” even when there’s no human in or on the loop.
Unfortunately, the UN report offers scant technical detail. But even if we did have more information, the story wouldn’t necessarily be much clearer.
In all likelihood, the drones had some capacity to identify moving objects in video—potentially including the ability to distinguish people from other objects like cars and buildings—but lacked some of the other features generally associated with true “lethal autonomy,” such as the ability to prioritize objectives, execute complex tactics dynamically, or make decisions according to the legal principles of armed conflict. Even if the human operators didn’t directly control the weapons, they still “programmed” them to conduct the mission—a form of human control. So they were probably not quite killer robots, but not quite dumb weapons, either.
We shouldn’t waste time debating whether they were one or the other. The years ahead are likely to see many more weapon systems like the Kargu that fall in this fuzzy gray zone between automation and autonomy. Developing a definition of LAWS that is broad enough to encompass this gray zone yet specific enough to be meaningful could prove an elusive goal, especially when accounting for the many variations of human control that can be exercised in or on the loop.
And even if we were able to settle on a universally accepted definition, the Kargu incident highlights just how difficult it would be to actually verify that any given system meets it. For those observing the drones from the ground during the attack, it would have been hard to say whether, as the report claims, the weapons had no human “connectivity.” An autonomous drone looks just like a non-autonomous one.
Nor do the system’s positively sci-fi-esque technical specifications on paper provide much solid evidence to work with. An eager defense contractor’s grand claims of “artificial intelligence” in its marketing copy are often about as credible as a malign actor’s assurances that its autonomous weapons always keep a human in the loop (or that it didn’t violate any arms embargoes, which the Kargus most certainly did).
This could frustrate the enforcement of any rules that hinge on a broad definition of LAWS centered on the notion of “autonomy.” Even analyzing the physical weapon’s innards won’t always yield a clear picture of what it can, and cannot, do.
These nitty-gritty questions of verification often don’t get much airtime in the debate on LAWS; the Kargu incident shows that it’s time for that to change.
It also shows that not-quite autonomous weapons that fall short of most definitions of “autonomy” can still pose novel challenges. Whatever their capabilities, these drones probably exhibit the inevitable, unpredictable failures that are characteristic of all systems with advanced autonomous features.
In the kind of complex environment where the drones “hunted down” their targets, as the report put it, they would exhibit such failures not just rarely, but constantly.
The fact that such a system saw real-world use in spite of its likely reliability issues is proof that the technology will be used whether it’s ready or not. The earliest adopters of lethal autonomous weapons aren’t going to be the rich and generally risk-averse states currently leading in the development of the technology, but rather those actors who are willing to have a go at using highly imperfect weapons, with little concern for the collateral harms that could arise from their poor performance. That’s a troubling prospect.
The international community should therefore study this incident closely. Even if this wasn’t a true LAWS strike, whatever that means exactly, it provides the clearest case study yet for some of the pressing questions that need to be answered in time for the UN’s important review of the Convention on Certain Conventional Weapons in December.
How could future norms be fashioned to take into account weapons that fall in the definitional gray zone between “autonomous” and “automatic”? How do you certify the accuracy, reliability, and predictability of a weapon with autonomous features, and verify that it meets basic standards of safety and legality? And most urgently, what are the potential malign uses of a system like this, and how can these risks be prevented right away?
We have six months to find some answers. Let’s make them count.