'Swarm pilots' will need new tactics—and entirely new training methods: Air Force special-ops chief
AFSOC will expand on groundbreaking experiments this summer, Lt. Gen. Bauernfeind says in an interview.
TAMPA—Experiments with ever-larger drone swarms are revealing a need for new concepts of operations and new ways of training human operators, Air Force Special Operations Command says.
In the next few months, AFSOC will expand upon a groundbreaking December experiment that saw a single drone crew guide not one but three MQ-9 Reapers and even air-launch a smaller Group 2 drone as part of the command’s Adaptive Airborne Enterprise effort. Now, the command aims to repeat the experiment with even more drones—and add the ability to hand off control to troops in the field.
“The hope with the summer now is: how can we start to bring those aspects together and then work with our joint force teammates? And now how do we manage multiple MQ-9s air-launching a small number of” smaller drones, “and then hand that swarm off to a joint force teammate, whether in a terrestrial or maritime situation?” Lt. Gen. Tony Bauernfeind said in an interview at the SOF Week conference here.
But the Air Force still has some pioneering work to do in designing concepts of operations for piloting drone swarms, which means deciding which aspects of flight and drone operation to automate and which to leave to humans.
“We're gonna have to break some old paradigms,” Bauernfeind said. “We really have to reinforce to ourselves that it's going to be a human on the loop, not in the loop”—that is, the operator will monitor a drone's execution of its assigned mission rather than steering the thing. “Cognitively, it will require us to train our air crews in a new way.”
“I think [it] is going to be a new opportunity,” he said. “You got to figure out how to handle an epic level of multitasking.”
The task could get even more complicated, depending on how many of the drones are expected to return home, he said.
Bauernfeind is also interested in how Ukrainian forces are using 3-D printers to make small drones near the front line. “3-D printing is really bringing in a new generation of [innovation], how quickly we can mass produce some of the smaller UAVs. And so I see an opportunity there. How can we quickly ensure we have the right levels of stock, the right levels of sensors? And so it's pretty impressive to see where some of our industrial teammates are going with 3-D printing.”
But some innovations in the Ukrainian battlespace are more controversial, such as the use of autonomy to find and hit targets on the battlefield. The Pentagon has ethical principles to govern its development and use of AI in conflict. But concern is mounting that the United States might abandon those principles if it found itself losing a conflict.
Those questions aren’t likely to go away anytime soon. Said Bauernfeind: “I think this is an area that is ripe for deep intellectual thought. And what I mean by that is, I think, technically we're going down a pathway where automation is a real aspect. But I think we have to have some very deep strategic intellectual thought on where should that balance be? So while we're learning lessons from Ukraine, there's also an aspect of Ukraine is a nation in the fight of survival. So there are certain, probably, policy constraints that they have taken away because they see it as an existential threat to their actual survival as a nation.”
It’s an urgent question not just for military commanders, he said, but also U.S. policymakers and academia. “Are we ready for the second-, third-order effects when…a machine ultimately fails and hits something that has catastrophic political and strategic effects?”
So far, the answer seems to be: not yet.