Retired Marine Four-Star Patents Jet-Killing Drone Boat
John Allen also has a patent on a quadcopter mothership.
Two patents were recently awarded for concepts that just might change naval warfare: one is for a seagoing mothership for quadcopters designed to fake out enemy fighter pilots, while the other envisions a drone boat equipped to spot and shoot down enemy aircraft.
The patents were awarded earlier this month to John Allen, the retired Marine Corps four-star who once commanded U.S. forces in Afghanistan, and Amir Husain, who leads the Austin, Texas-based artificial-intelligence firm SparkCognition.
“A plurality of submersible vessels can cooperatively engage threats,” reads the second patent. “For example, the plurality of submersible vessels can coordinate with each other to observe, confirm, track, and engage threats by efficiently allocating resources, such as ordnance, among themselves. As one example, the plurality of submersible vessels can create a ‘dome’ of protection around assets, such as naval vessels or civilian vessels.”
This suggests that the drones could function with little human guidance, if commanders wanted them to.
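The patent names no particular algorithm for that coordination, but the idea of vessels "efficiently allocating resources, such as ordnance, among themselves" can be made concrete with a toy sketch. Below, a greedy nearest-shooter assignment distributes incoming threats across a small "dome" of boats; every name, class, and number here is hypothetical, not drawn from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Vessel:
    name: str
    x: float
    y: float
    ordnance: int  # interceptors remaining

@dataclass
class Threat:
    name: str
    x: float
    y: float

def allocate(vessels, threats):
    """Greedy sketch: assign each threat to the nearest vessel that
    still has ordnance, so no single boat exhausts its magazine while
    others sit idle. (Hypothetical; the patent names no algorithm.)"""
    assignments = {}
    for threat in threats:
        armed = [v for v in vessels if v.ordnance > 0]
        if not armed:
            break  # dome saturated: no ordnance left anywhere
        shooter = min(armed, key=lambda v: math.hypot(v.x - threat.x,
                                                      v.y - threat.y))
        shooter.ordnance -= 1
        assignments[threat.name] = shooter.name
    return assignments

# Example: a three-boat "dome" around a protected asset at the origin.
dome = [Vessel("usv-1", -2, 0, 2), Vessel("usv-2", 2, 0, 2),
        Vessel("usv-3", 0, 2, 1)]
raid = [Threat("bogey-1", -5, 1), Threat("bogey-2", 6, -1)]
print(allocate(dome, raid))  # {'bogey-1': 'usv-1', 'bogey-2': 'usv-2'}
```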
Since 2012, Pentagon doctrine has required human control over weapons that can take a human life. But if you’ve followed the discussion of lethal autonomous weapons over the last several years, you know that the definition of “control” is a subject of intense debate.
“The design of the system certainly makes it capable” of firing autonomously, Husain said. “But the way in which a proposed system such as this is operationalized in the battlefield depends on many factors, and ultimately, international law and the policies of the United States Government as manifested in the oversight of the DoD and decision-making of the relevant commanders.”
Historically, the U.S. military has been more willing to employ autonomous weapons in the maritime environment, where they’re less likely to cause unintended civilian casualties than on land. Consider the Phalanx close-in weapon system, first developed in the 1970s, which can autonomously track and fire on incoming missiles and aircraft.
Weapons like the Phalanx blur the line, somewhat, between autonomous and non-autonomous weapons. The system makes its own targeting and firing decisions when it perceives a threat that matches a certain set of criteria, such as an object of a certain size moving toward the ship at a certain speed. But the human operator can override a decision to fire. This is commonly referred to as “on-the-loop” control, as opposed to the more intimate “in-the-loop” control. It’s like the difference between ordering a new artisanal jam every month and joining a “jam of the month” club that sends you a new jam without your having to ask for it each time. The human is still in control, but the weapon has more agency to perform functions on its own.
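The on-the-loop pattern is simple enough to express in code. The sketch below is a deliberately simplified illustration of the concept (fire by default on a track that matches preset criteria unless the operator vetoes), not the Phalanx’s actual logic; all thresholds and names are invented.

```python
MIN_SIZE_M = 0.5           # ignore tracks smaller than this (meters)
MIN_CLOSING_SPEED = 100.0  # must be inbound faster than this (m/s)

def matches_threat_criteria(size_m, closing_speed_ms):
    """The machine, not a person, applies the preset criteria."""
    return size_m >= MIN_SIZE_M and closing_speed_ms >= MIN_CLOSING_SPEED

def engage(track, human_veto):
    """On-the-loop control: fire by default on a matching track unless
    the operator vetoes. In-the-loop control would instead require an
    affirmative human decision before every shot."""
    if not matches_threat_criteria(track["size_m"], track["closing_speed_ms"]):
        return "holding: track does not match criteria"
    if human_veto():
        return "holding: operator override"
    return "engaging"

# Silence means consent: no veto arrives, so the system fires.
track = {"size_m": 3.0, "closing_speed_ms": 680.0}
print(engage(track, human_veto=lambda: False))  # -> "engaging"
```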
Husain said that while the distinction between autonomous and non-autonomous weapons is important, it can serve as a way of avoiding a tougher debate: can robotic weapons be safer than their manned alternatives? “We see autonomous systems as platforms that can potentially deliver smaller kinetic effects with far greater precision, thus reducing the unintended damage. Autonomous systems should present a superior and more humane option than pulling the lanyard on a loaded artillery piece,” he said. “You can tie a string to a machine-gun trigger, jam the accelerator of an explosive-laden jeep, turn a Phalanx cannon in autonomous mode and walk away, and use basic guidance capabilities to launch motorboats with incendiary payloads into fleets of ships. The issue isn’t whether autonomous systems can be constructed. The issue is rather how well they will work in achieving their aim, which is not to maximize damage, but actually to reduce it.”
Despite the recent coverage of the military’s use of AI to evaluate aerial imagery, the field is actually moving faster in the naval realm. In October 2014, the Navy announced the results of a historic experiment in which 13 boats demonstrated sophisticated autonomous maneuvering, encircling an “enemy” vessel and moving to engage it with (notional) machine guns and lasers, all with virtually no human guidance. Two months later, the vessels showed off more advanced targeting decisions.
Last spring, the Navy took possession of the world’s first crewless warship, the Sea Hunter, after some six years of development and testing with the Defense Advanced Research Projects Agency, or DARPA, and the Office of Naval Research, or ONR. Not long after the transfer, nearly every aspect of the ship became classified. Though the ship was originally conceived for submarine hunting, military leaders say its real value comes from its ability to perform a wide variety of missions for months at a time with virtually no human guidance.
In April, former Deputy Secretary of Defense Bob Work speculated on how much more useful such a ship would be once armed with missiles. “We might be able to put a six-pack or a four-pack of missiles on them. Now imagine 50 of these distributed and operating together under the hands of a flotilla commander," Work said.
Allen played a number of roles in his 37-year military career, including commanding NATO’s International Security Assistance Force in Afghanistan. After leaving military service and publishing an op-ed in Defense One, he became the Obama administration’s special presidential envoy for the Global Coalition to Counter ISIL. He has since devoted more time to the effects of artificial intelligence on warfare and national security.
At this year’s Globsec forum in Bratislava, Slovakia, Allen stressed the importance of keeping meaningful human control over weapons, for ethical, legal, and societal reasons. He acknowledged that such a preference could come at an operational cost as the pace of warfare accelerates, a nearly certain consequence of greater automation.
“When your competitors have taken the human out of the loop,” he said, “by definition they’ll be able to move faster. We have to think in very serious ways, analytical ways, how that human-like dimension is in the kill chain. It might be we are driven at some point to a level of capacity and specificity in the algorithm itself, that we have trained that algorithm, and we have geofenced that capability of that system to deliver ordnance to give us, as best we can, the presence of the human in the loop. Otherwise, we will just be slow. And in a hyperwar scenario, being slow means you will be defeated.”
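Allen’s geofencing point can also be made concrete. Assuming a commander defines an engagement zone as a polygon, a sketch like the one below would encode human intent as a hard constraint on weapons release; the zone and the point-in-polygon test are illustrative assumptions, not anything specified in the patents.

```python
def inside_geofence(point, polygon):
    """Standard ray-casting point-in-polygon test."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def weapons_release_authorized(target_pos, engagement_zone):
    # The algorithm may pick the target, but it may only deliver
    # ordnance inside the zone the commander drew in advance.
    return inside_geofence(target_pos, engagement_zone)

zone = [(0, 0), (10, 0), (10, 10), (0, 10)]   # commander-defined box
print(weapons_release_authorized((5, 5), zone))   # True  -> engage
print(weapons_release_authorized((15, 5), zone))  # False -> hold fire
```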