How Blitzkrieg (Sort Of) Explains Killer Robots
No one has yet solved the problems of multiple flows of control, but it’s worth studying how past generations have approached them.
Here in Defense One, Paul Scharre and Michael Horowitz recently noted some novel challenges of controlling swarms of multiple robots on the battlefield. Predicting and controlling the collective behavior of multiple machines interacting with each other, humans, opponents, and civilians is certainly an enormous problem, and the skills necessary to be successful at it may lie in unlikely places (such as attention- and control-intensive e-sports computer games). Much strategic thought will be needed about how to manage and control such systems, should they be fielded on the battlefield.
Yet the reason we have such a problem may have nothing to do with artificial intelligence at all. Remember the “blitzkrieg” of the 1930s? Why do we need distributed control and swarms in the first place, as opposed to one commander robot or a human? The similarities between the problems that motivated the blitzkrieg’s myriad tactical and operational ideas and modern issues with the command and control of multi-robot systems are a lesson in how wars past may help explain the problems of the future.
At first blush, the challenges of controlling a set of highly mobile human maneuver forces and the problems of controlling a team of robots have nothing to do with each other. What could, say, German tanks rolling over the Ardennes in 1940 have to do with robots or artificial intelligence? But while today’s solutions, and their consequences and tradeoffs, may be novel, at heart both cases deal with the same problem: the inability of a single entity to efficiently process large amounts of information.
Writing in 1991, the philosopher Manuel De Landa took the perspective of a “robot historian” examining how political, economic, and military developments coalesced into the automation of government, war, and intelligence by increasingly intelligent machines. A software engineer by training, De Landa used computational analogies to talk about military history, noting parallels between challenges of information processing in computer engineering and shifts in military organization and command and control. More difficult processing demands resulted in mechanisms to distribute computation, first over the internal subsystems of one machine and later over a collection of machines. By sharing the burden, computer engineers could handle problems that would overwhelm any single machine.
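To make the analogy concrete, here is a toy sketch, in Python and purely illustrative (the workload and worker count are arbitrary assumptions), of that divide-the-burden pattern: one oversized computation split across a pool of worker processes and recombined.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker independently processes one slice of the problem.
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, workers=4):
    # Split the input into roughly equal chunks, one per worker,
    # then combine the partial results: a burden shared that could
    # also have been carried, more slowly, by one machine.
    size = len(data) // workers + 1
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(distributed_sum_of_squares(list(range(1_000_000))))
```

The same logic, scaled up and hardened against failure, is what lets a collection of ordinary machines stand in for an impossibly powerful single one.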
Similarly, De Landa notes that the armies of Europe from the late 19th century to World War II might be viewed as systems that distributed information processing and decisionmaking to cope with increasingly large and dispersed land campaigns requiring speedier decisions in the face of uncertainty. In an uncertain environment with heavy information-processing demands, the best solution was not one godlike commander but a commander presiding more indirectly over a distributed system.
However, coordinating any system with more than one flow of control is difficult. This is especially true when it comes to multi-AI systems that must cooperate to achieve a common goal. You won’t be shocked to learn that many of the same problems of trust, synchronization, and coordination observed in social life also arise in computer engineering. Indeed, the canonical problems of distributed systems are framed as military parables, from the Two Generals Problem to the Byzantine generals problem. Consider two generals who must cooperate to attack a common opponent, but who can only communicate via messengers who may be captured by the enemy. What kind of communication protocol can they use to (safely) reach a consensus about the plan of attack?
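A small simulation sketches why the question is so vexing. The decision rule and the 30 percent capture rate below are illustrative assumptions, not part of the classical formulation; the point is that the sender of the final message can never know it arrived.

```python
import random

LOSS = 0.3  # assumed probability that any single messenger is captured

def simulate(messages, trials=100_000):
    """Estimate how often the generals end up disagreeing after
    exchanging `messages` alternating dispatches (plan, ack,
    ack-of-ack, ...). Decision rule, assumed for illustration:
    each general attacks only if the last message it expected
    actually arrived."""
    disagreements = 0
    for _ in range(trials):
        chain_alive = True       # one captured messenger breaks the chain,
                                 # since you cannot acknowledge a message
                                 # you never received
        attacks = [True, False]  # [general A, general B]; A wrote the plan
        for i in range(messages):
            chain_alive = chain_alive and random.random() > LOSS
            receiver = (i + 1) % 2   # message 0 travels from A to B
            attacks[receiver] = chain_alive
        if attacks[0] != attacks[1]:
            disagreements += 1
    return disagreements / trials

for n in (1, 2, 4, 8):
    print(f"{n} message(s): disagreement rate ~{simulate(n):.3f}")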
Programs that automate human activities often replicate problems and contexts first observed elsewhere. The notion of a “swarm,” after all, has some very particular roots in the study of insect colonies. While such borrowing is often fruitful, it is also important to remember that many of these foundational problems of coordination and control have never been completely solved. Rather, we often just find “good enough” solutions.
There are, for example, similarities between the problems of decentralizing authority in machine systems and the problems of nuclear command and control. A more decentralized and distributed system with faster decision cycles promises greater survivability in the face of an enemy attack. However, it may also heighten the risk of nuclear accidents if more people can push the button, and quicker decision cycles may just mean a faster path to a catastrophic nuclear exchange. It may be tempting to say that the system worked because we’re all alive today, but that could also just be survivorship bias talking. It’s still open to debate.
One may quibble with De Landa’s chronology and history – the tradeoff he observes between convenience and control is not exclusive to modern mobile warfare – but the lesson is instructive. The particulars of how technology and security influence each other are difficult to predict, and we should always be aware when disruptive innovations require jettisoning old assumptions. However, given the manner in which the same old problems seem to recur (like controlling a large, distributed, and mobile army or navy), the problem with the future of war may just be that we don’t pay enough attention to how people dealt with the problems of its past.