Inside the Navy’s Secret Swarm Robot Experiment
Swarming robot boats could be heading to a contested strait near you. By Patrick Tucker
It’s August on Virginia’s James River, and a secret military exercise is about to make history. A large ship that the Navy sometimes calls a high-value unit, or HVU, is making its way down the river’s thalweg, escorted by 13 small guard boats. Between them, the escorts carry a variety of payloads: loudspeakers and flashing lights, a .50-caliber machine gun, and a microwave directed-energy weapon, or heat ray.
A helicopter crew overhead spies a suspicious “enemy” boat that seems to be moving too close to the HVU. Messages are relayed and the small escort boats begin moving. Detecting the enemy vessel with radar and infrared sensors, they perform a series of maneuvers to encircle the craft, coming close enough to engage it and near enough to one another to seal off any escape route or access to the ship they are guarding. They blast warnings over their loudspeakers and flash their lights. The HVU is now free to move safely away.
What made this particular exercise remarkable was that the 13 boats were not only unmanned but displayed an unprecedented degree of autonomy. In a recent briefing with reporters, Rear Adm. Matthew Klunder, chief of the Office of Naval Research (ONR), pointed out that a maneuver that once required 40 people had just been performed by one.
Much of the discussion and fear of armed unmanned vehicles ignores a central fact: aerial drones like the Predator or Reaper are operated by two-person teams, a pilot to steer the drone and a sensor operator to control its various mechanical eyes and ears. The boats that participated in the event on the James River were able to sense one another as well as other vessels, and to execute complicated “swarm” maneuvers, with a bare minimum of human guidance. These boats are not your average drones.
“Think about it as replicating the functions that a human boat pilot would do. We’ve taken that capability and extended it to multiple [unmanned surface vehicles] operating together… within that, we’ve designed team behaviors,” Robert Brizzolara, the manager of the SWARM program for ONR, told reporters.
At one point in his briefing, Klunder held up a paperweight-sized cube of circuits stacked on top of one another. The unit is called the Control Architecture for Robotic Agent Command and Sensing, or CARACaS. It allows the boats to “operate autonomously, without a sailor physically needing to be at the controls—including operating in sync with other unmanned vessels; choosing their own routes; swarming to interdict enemy vessels; and escorting/protecting naval assets,” according to an ONR description. “Any boat can be fitted with a kit that allows it to act autonomously and swarm on a potential threat.”
Though 13 boats were enough for that particular exercise, Klunder envisions future maneuvers with 20 or even 30 boats. He said the system will be fully operational next year.
Where Do Robotic Swarms Come From?
NASA originally designed the system for its Mars rovers. ONR adapted it to the Navy’s needs, but the philosophical history of swarm robotics can be traced to a 1995 paper in which artificial intelligence researchers James Kennedy and Russell Eberhart argued that the collective behaviors that birds, fish, insects and humans display in response to rewards or threats could be captured mathematically and used to improve artificially intelligent agents in simulation.
The “social sharing of information among conspeciates [sic] offers an evolutionary advantage,” they observe, borrowing a bit of wisdom from biologist E.O. Wilson. Kennedy and Eberhart lay out some of the major tenets for writing algorithms that mimic natural flocking or schooling behavior. It’s a matter of quickly rating different known variables: threat, reward, and environment. (A minimal sketch of the resulting update rules appears below.) The growing availability of small drones has transformed robot swarms from an obscure academic concept into a YouTube sensation. Consider a 2012 demo in which University of Pennsylvania researchers turned a series of small robotic quadcopters into musicians, a video that has drawn three million views.
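Their update rules, now known as particle swarm optimization, are compact enough to sketch in full. The Python below is a minimal, generic version of the 1995 algorithm, not anything drawn from the Navy’s software; the coefficients and the toy objective function are illustrative assumptions.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Minimal particle swarm optimization (Kennedy & Eberhart, 1995).

    Each particle is pulled toward its own best-known position (the
    cognitive term) and the swarm's best-known position (the social
    term) -- the "social sharing of information" the paper describes.
    """
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # the swarm's best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "reward landscape": the swarm converges near the minimum at (3, -2).
best, val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2)
print(best, val)
```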
YouTube swarm stunts seem to grow in size and complexity faster than companies can make smartphones. Last month, Harvard researcher Radhika Nagpal demonstrated the largest robotic swarm to date: 1,024 small bots collaborating wordlessly to form a variety of shapes.
It’s an ongoing area of military investment as well, most notably the U.S. Army Research Laboratory’s Micro-Autonomous Systems Technology, or MAST, program, which has awarded millions in grants to develop swarms of tiny flying bug robots for surveillance and intelligence-gathering missions.
The 13-boat swarm fleet that the Navy demonstrated last month may not seem momentous in comparison to flying bots with musical ability, but it actually represents a big breakthrough.
The units demonstrate a number of behaviors that we associate with the human prefrontal cortex. They can plan different actions in response to rapidly changing circumstances, weighing the costs and benefits of one route against another, and do so in close collaboration in a chaotic environment. While it’s true that they share situational information with one another, they also operate independently. The video of the demo looks like choreography, but that’s a description the Navy pushes back against. The planning takes place rapidly, just as it would in a human brain presented with reward or danger in a natural setting.
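ONR hasn’t said how CARACaS actually scores its options, but the kind of cost-versus-benefit weighing described above can be illustrated generically: each candidate maneuver gets a single utility from weighted threat, reward and environment terms, and the boat takes the best one. The weights, route names and scoring terms below are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    threat_exposure: float   # 0..1, time spent inside the threat's reach
    intercept_gain: float    # 0..1, how well the route cuts off the threat
    sea_state_risk: float    # 0..1, environmental difficulty along the route

# Illustrative weights -- the real system's scoring is not public.
W_THREAT, W_GAIN, W_ENV = 2.0, 1.5, 1.0

def utility(r: Route) -> float:
    """Benefit minus weighted costs; higher is better."""
    return W_GAIN * r.intercept_gain - W_THREAT * r.threat_exposure - W_ENV * r.sea_state_risk

candidates = [
    Route("direct",      threat_exposure=0.8, intercept_gain=0.9, sea_state_risk=0.2),
    Route("flank_left",  threat_exposure=0.3, intercept_gain=0.7, sea_state_risk=0.4),
    Route("hold_screen", threat_exposure=0.1, intercept_gain=0.4, sea_state_risk=0.1),
]

# Re-run the ranking whenever sensors update; planning stays fast because
# it is just a comparison over a handful of candidate maneuvers.
best = max(candidates, key=utility)
print(best.name)
```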
The software that moves the Navy’s swarm bot boats is “far more developed than just bees,” says Klunder.
The Navy is eager to keep its secret sauce under wraps, but the scope of the problem, the modeling challenges and the mathematical solutions, can be gleaned from a recent paper titled “Model-Predictive Asset Guarding by Team of Autonomous Surface Vehicles in Environment with Civilian Boats.” The research isn’t directly related to the Navy experiment, but there’s a lot of overlap. “The outlined problem can be decomposed into multiple components, e.g., accelerated simulation, trajectory planning for collision-free guidance, learning of interception behaviors, and multi-agent task allocation and planning,” the researchers write.
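One of those components, multi-agent task allocation, is straightforward to illustrate. The sketch below assigns guard boats to evenly spaced encirclement stations around a threat by minimizing total travel distance with SciPy’s assignment solver; it’s a toy under assumed geometry, not code from the paper or the Navy’s program.

```python
import math
import numpy as np
from scipy.optimize import linear_sum_assignment

def encircle_stations(threat_xy, radius, n):
    """Evenly spaced stations on a circle around the threat."""
    return [(threat_xy[0] + radius * math.cos(2 * math.pi * k / n),
             threat_xy[1] + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

def assign_boats(boats_xy, stations_xy):
    """Minimize total boat-to-station distance, one boat per station."""
    cost = np.array([[math.dist(b, s) for s in stations_xy] for b in boats_xy])
    rows, cols = linear_sum_assignment(cost)  # Hungarian-style optimal matching
    return {int(r): stations_xy[c] for r, c in zip(rows, cols)}

# Toy scenario: five guard boats converge on a threat at (100, 40).
boats = [(0, 0), (20, 60), (90, 0), (150, 80), (60, 30)]
stations = encircle_stations((100, 40), radius=25, n=len(boats))
for boat, station in assign_boats(boats, stations).items():
    print(f"boat {boat} -> station {station}")
```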
The Navy's breakthrough marks the clearest indication yet that more missions are falling to increasingly automated—and weaponized—systems, with human presence retreating ever deeper into the background. It's a trend that continues to alarm both AI experts and human rights watchers.
‘Don’t Make Them… Autonomous’
Last May, British artificial intelligence researcher Noel Sharkey of the University of Sheffield told Defense One that, in his view, armed UAVs were proliferating far too quickly and that the last red line to be crossed was autonomy. “Don’t go to the next step. Don’t make them fully autonomous. That will proliferate just as quickly and then you are really going to be sunk,” he said.
Sharkey is not alone in that concern. Political scientist Matthew Bolton of Pace University’s Dyson College in New York City offered a similar opinion. “Growing autonomy in weapons poses a grave threat to humanitarian and human rights law, as well as international peace and security… In modern combat it is often heartbreakingly difficult to tell the difference between a fighter and a non-combatant. Such a task relies on a soldier’s wisdom, discretion and judgment; it cannot and should not be outsourced to a machine. Death by algorithm represents a violation of a person’s inherent right to life, dignity and due process.”
Bolton points to the international ban on landmines as an indicator of where the debate over autonomous weapons systems is headed. “When the vast majority of countries outlawed anti-personnel landmines -- a goal now endorsed by President Obama -- they established that weapons which maim or kill absent of direct human control are morally reprehensible.”
Other AI experts take a more nuanced view. Building more autonomy into weaponized robotics can be dangerous, according to computer scientist and entrepreneur Steven Omohundro. But the dangers can be mitigated through proper design.
“There is a competition to develop systems which are faster, smarter and more unpredictable than an adversary's. As this puts pressure toward more autonomous decision-making, it will be critical to ensure that these systems behave in alignment with our ethical principles. The security of these systems is also of critical importance because hackers, criminals, or enemies who take control of autonomous attack systems could wreak enormous havoc,” said Omohundro.
Klunder said they’ve built three fail-safes into the system. In the event that one of the boats loses contact, it goes dead in the water.
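ONR hasn’t described the three fail-safes in detail, but the dead-in-the-water behavior suggests something like a link watchdog. The sketch below is one guess at how such a cutoff might work; the timeout value and interface are assumptions.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0  # illustrative; the real threshold is not public

class LinkWatchdog:
    """Zero the throttle if the command link goes quiet -- one plausible
    form of the "dead in the water" fail-safe Klunder described."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        """Call whenever a message arrives from the operator or the swarm."""
        self.last_heartbeat = time.monotonic()

    def link_alive(self) -> bool:
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT_S

    def gate_throttle(self, commanded_throttle: float) -> float:
        """Pass the commanded throttle through only while the link is live."""
        return commanded_throttle if self.link_alive() else 0.0
```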
The aspect of the program that Klunder seems proudest of is how much money it could save. Not only is the CARACaS unit made of cheap, off-the-shelf parts, it can be fitted to a variety of the Navy’s rigid inflatable boats, or RIBs, so no pricey new hulls are necessary. The brains receive input from ordinary 360-degree radar and conventional electro-optical/infrared, or EO/IR, sensors, which are hardly exotic.
The biggest cost of the program was developing the algorithms. Much larger savings are possible by reducing multi-person missions to single-operator tasks. Providing safe passage through places like the Strait of Hormuz just got a lot cheaper.
In a recent report preview from the Center for a New American Security, Paul Scharre and James Marshall described the transition to low-cost, more autonomous robotic systems as the force multiplier of the future. “Low-cost uninhabited systems offer a way to bring mass back to the fight. With no human onboard, they can take greater risk. Survivability can be balanced against cost, with swarm resiliency taking the place of platform survivability. Swarms of low-cost uninhabited systems can be used to saturate and overwhelm enemy defenses. The robotics revolution will enable new ways of bringing mass back on the battlefield.”
But recent Defense Department budget decisions actually reflect a waning enthusiasm for unmanned systems, as Alex Velez-Green notes in a provocative piece for the Harvard Political Review, in which he casts funding for AI development as hampered by sunk-cost projects such as the F-35. “Unfortunately, the Department of Defense’s current investment outlook does not show an appreciation for the role that swarm robotics will play in the future of warfare. Today, we are investing more than $35 billion in the Littoral Combat Ship program, and expect to spend more than $25 billion in the next several years to make the new, manned Long-Range Strike Bomber deployable by the mid-2020s. Such manned systems will be necessary complements to unmanned systems for the foreseeable future. However, their development cannot come at the expense of the robotic technology that will actually disrupt combat, which is exactly what is happening today,” he writes.
The debate about the ethics of increasingly smart, and ever more heavily armed, military robots will continue as the technology advances and the systems proliferate around the world. The Beltway battle over whether the Defense Department is underinvesting in AI while funding boondoggles like the F-35 Joint Strike Fighter will also grow more heated as dollars grow scarcer. For Klunder, the issue is more personal.
The timing of the ONR briefing happened to coincide with the 14th anniversary of the bombing of the USS Cole off the coast of Yemen, which killed 17 sailors. It’s an anniversary that Klunder observes with a unique sense of responsibility. “If we had this capability there on that day, we could have saved that ship,” he said. “I never want to see the USS Cole happen again.”