Ep. 74: The next big thing(s) in unmanned systems

'Where does the human stand here?'


This episode, we’ll explore emerging trends in unmanned systems. We’ll start in the air, before turning to the land and sea in a review of Russian-made systems and military thinking. And we’ll end with a discussion about trust and artificial intelligence. (Music by Bob Bradley, Paul Clarvis, Thomas Balmforth; Guy Farley, Andrew Carroll; Richard Lacy; Paul Mottram; Jeff Meegan, David Tobin, Rob Kelly; Theo Travis, Paul Ressel; Sue Verran, Paul Ressel; and David Kelly, via Audionetwork.com)

Part One: The aerial events in Arizona, Colorado and Mexico (at the 3:41 mark);
Part Two: From Russia, by land and sea (24:10);
Part Three: Beyond AlphaDogfight (33:48).

Guests include:

  • Arthur Holland Michel, associate researcher at the UN Institute for Disarmament Research in Geneva, where he researches autonomous systems and artificial intelligence.
  • Brett Velicovich, U.S. Army veteran and author of the book “Drone Warrior.”
  • Samuel Bendett, analyst with the Center for Naval Analyses' International Affairs Group.

This episode is underwritten by AeroVironment.

Subscribe on Google Play, iTunes, Overcast, or wherever you listen to podcasts. Thanks for listening!


[A transcription of this episode follows.]

Something very weird happened last September in the skies over Arizona, just about a half-hour drive from Phoenix.

And it was almost nine o’clock at night when about half a dozen little buzzing drones, each about two feet across, showed up together over Arizona’s Palo Verde Nuclear Generating Station. This is a station that helps provide electricity to Tucson, San Diego and even Los Angeles. The Drive, which first reported on this incident in late July, called it “America’s most powerful nuclear plant.”

No small facility. And no facility of small importance. And yet its security guards stood defenseless on the ground as five or six small drones flew in circles above a seemingly specific building.

Somewhat curiously, these drones all approached with an attached spotlight illuminated. Then, when they entered the secure area, they flipped the spotlights off.

They stayed just about 300 feet off the ground and got as low as 200 feet. The FAA says you can’t fly less than 400 feet above national-security-sensitive or critical infrastructure. Arizona state law puts the floor at 500 feet.

And still, all the guards could do was watch. And for about 90 solid minutes, that’s what they did, as these drones zipped about, transmitting information to some human or humans somewhere not far away.

The next night, the drones came back, at pretty much the same time and doing pretty much the same thing. There were seemingly four this time instead of six. Then, an hour later, they were gone. Guards seemed to think they might have launched from a nearby mountain range. But local police reportedly never found anything when they looked.

And so it’s all very weird. Who would risk that kind of flying, with that many drones, at that kind of location? And maybe most crucially: what do you do about this threat, even if it is just a monitoring drone, or, in this case, a series of drones?

We’re going to explore some of those questions and a lot more in this episode, which is all about what’s new in unmanned systems. We’ll start from the air, before turning to the land and sea with a review of Russian-made systems. And we’ll end with a discussion about artificial intelligence β€” since an Air Force pilot just lost to an AI system in a recent virtual dogfight.


The aerial events in Arizona, Colorado and Mexico

Here’s America’s top military commander in the Middle East, Marine Gen. Frank McKenzie, speaking at an event with the Middle East Institute in July. He called the threat from small drones the biggest known-but-hard-to-fight security problem. Sometimes people call these white swan events.

McKenzie: “This is more of a white swan, because I think we see the contours of it now, but I'll begin with it. It is the proliferation of small unmanned aerial platforms in the theater. I argue all the time with my Air Force friends that the future of flight is vertical and it's unmanned. And I believe we are seeing it now. And I'm not talking about the large unmanned platforms, which are the size of a conventional fighter jet, that we can see and deal with as we would any other platform. I'm talking about one that you can go out and buy at Costco right now in the United States for a thousand dollars, you know, a quadcopter or something like that, that can be launched and flown, and with very simple modifications it can be made into something that can drop a weapon, a hand grenade or something else. Right now, the fact of the matter is we're on the wrong side of that equation.”

And that’s coming from a guy who has walked the ISIS battlefield in Iraq and Syria, where drones, off-the-shelf stuff from Amazon and Costco, like McKenzie said, had been modified to drop grenades and to film suicide attacks for propaganda clips on social media.

McKenzie: “[T]he fact of the matter is we're on the wrong side of that equation.”

The U.S. military isn’t the only organization on the wrong side of that equation, as the episode in Arizona revealed.

I called up a researcher named Arthur Holland Michel.

He works at the UN Institute for Disarmament Research in Geneva, where he researches autonomous systems and artificial intelligence.

Watson: So I want to start with a bit of the sensational, because there are still lots of unknowns from this alleged drone-swarm episode in Arizona. I believe it happened almost a year ago. A couple of things stand out to me. Number one, the sheer number of drones, plural, was remarkable. As a reader of the story, the loiter time: they kind of stayed over their target for a while doing who knows what; I can imagine things they were doing that you would need to loiter a bit of time for. And the last thing that stood out to me was that they came back a second day. When you saw that kind of news report (it wasn't that long ago that it came out), what stood out to you?

Michel: Well, I guess the first thing I should say is that it didn't come as a surprise. You know, this isn't the first time that drones, plural, have been spotted in US airspace in places where they perhaps shouldn't be. And it's not the first time that the crime was never solved, or the incident was never cracked, so to speak. And so really, all the points that you just picked up on are crucial, because they are all in a way evidence of the fact that this incident sits alongside another incident in Colorado that actually happened a couple of months later: the great Colorado drone mystery. Similarly, a bunch of drones was seen flying over a very wide area in Colorado and a few other states over several days.

CBS clip: The FBI is helping investigate the growing mystery of unidentified drones flying over two states. The drones have been spotted buzzing hundreds of feet in the air at night over Colorado and Nebraska…

Michel: What they show is that the drone security problem, if you will, is very far from being sorted out. So, as you say, the way you describe it, we're talking about one of the most sensitive facilities in the US, and now it's almost an improbable stock illustration of just how far we are from sort of cracking the code on unmanned security. As you said, there were several of them. It wasn't like this was a sort of secretive drone that was hard to spot; I mean, everyone could see them. They were able to stay over the facility for about an hour in both cases. And so, you know, it's not like the security staff only noticed them after they had left. They were well aware of them, and there wasn't much they could do about it. And then the fact that they returned the next day shows that the operators of these drones were able to work with total impunity, right? I mean, they had a high degree of confidence that they could return the very next day and that they still wouldn't be caught. I mean, that's kind of amazing. And you might be wondering why; I mean, you know, how is this possible? Well, as it turns out, the sky is a very, very large place and drones [are], believe it or not, very, very small, relatively speaking; and, crucially, they're invisible.

Invisible.

Velicovich: Right, so a lot of other countries are really focused on what is called remote ID, or remote identification.

That’s Brett Velicovich, an Army veteran, national security analyst, and author of the book “Drone Warrior.” Last time I spoke to him, back in episode 21, he was in Kenya. He’s talking about remote identification, which would have been super helpful to the guards at Palo Verde.

Velicovich: (cont.) How do you identify? How does law enforcement identify what is in the airspace? Because it's easy to know what a commercial airliner is, or whether there's a rotor-wing aircraft or helicopter flying. But drones that are off the shelf? You can't really see them. And so everyone's focused on this remote identification: can you put a system in the control tower of an airport to say, “Hey, don't take off at this moment. We've got three drones that are, you know, moving 50 miles an hour from the northwest, and they're entering the airspace.” That is a big focus.

Michel: In the U.S., as with pretty much every other country, there are systems to track regular air traffic, the airplanes that crisscross the skies. But drones don't show up on those systems.

Which is why Arthur told us earlier they’re invisible.

Michel: (cont.) They don't have transponders, and they fly too low to be picked up on radar systems. And for those asking, you know, why don't they just shoot them down? Well, believe it or not, and this is totally true: in either of these two cases, either at Palo Verde in Arizona with the nuclear plant [or] the Colorado drone sightings, if security officials had shot these drones down, that act of shooting the drones down may have in fact been more illegal than the very act of flying a drone over these areas. Because, to be clear, [under] federal aviation regulations, you're not allowed to shoot a drone out of the sky unless you have a specific authority to do so. And that's only granted to a small number of federal-level agencies, most relevant here the FBI and the Department of Homeland Security. In the case of the Colorado drone flights: again, swarms of drones, some flying near sensitive facilities at night, and nobody knows what they're doing. Neither the FBI [n]or the Department of Homeland Security, according to documents that were released through a Freedom of Information request, actually deemed that these incidents rose to the level of a security threat that would merit activating their anti-drone teams or their counter-UAS units. And so these drones were able to fly with, as I said earlier, total impunity.
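That absence is what remote identification, the fix Velicovich mentioned, is meant to address: drones broadcasting who and where they are, roughly the way transponders do for airplanes. As a purely illustrative sketch (the field names and values below are hypothetical, loosely modeled on the broadcast data in the ASTM F3411 Remote ID standard rather than its actual wire format), here is the kind of information a compliant drone would be announcing to anyone listening:

```python
# Hypothetical sketch: the kind of data a drone Remote ID broadcast carries,
# loosely modeled on ASTM F3411. Field names and values are illustrative,
# not the standard's actual wire format.
from dataclasses import dataclass

@dataclass
class RemoteIDBroadcast:
    uas_id: str          # serial number or registration ID
    timestamp: float     # Unix time of this report
    latitude: float      # current position, decimal degrees
    longitude: float
    altitude_m: float    # altitude in meters
    speed_mps: float     # ground speed, meters per second
    heading_deg: float   # course over ground, degrees
    operator_lat: float  # operator/takeoff location: the detail
    operator_lon: float  # the Palo Verde guards never had

# One broadcast from a hypothetical drone loitering near a facility.
msg = RemoteIDBroadcast(
    uas_id="HYPOTHETICAL-1234",
    timestamp=1569895200.0,
    latitude=33.39, longitude=-112.86,
    altitude_m=90.0,            # roughly 300 feet
    speed_mps=10.0, heading_deg=270.0,
    operator_lat=33.41, operator_lon=-112.84,
)
print(f"{msg.uas_id}: operator at ({msg.operator_lat}, {msg.operator_lon})")
```

With broadcasts like that on the air, guards would see an identity and an operator location instead of anonymous dots in the dark.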

By the way: the night-shift guards at the Palo Verde nuclear plant? They have better things to do than worry about these non-violent drone episodes. That, anyway, is what we pretty much take directly from some emails that came out of Douglas Johnson’s FOIA request, the one that was the impetus for The Drive’s reporting in late July.

Here’s one of those emails that cuts to the heart of the matter, at least as it stood last October. It comes from an official named Laura Pearson, who’s the chief of the NRC's Intelligence Liaison and Threat Assessment Branch. She’s writing to an official named Silas Kennedy from the NRC's Office of Nuclear Security. And she’s writing the very next morning after those back-to-back episodes.

“Silas, can we temporarily halt calls to the ILTAB duty officer about drones flying over this NPP in the wee hours, before we are able to make a permanent change to the procedures? There is nothing ILTAB can do about it at night, and if my staff has to be woken up about it each night, it will start to cause other problems for us. We have a small staff and having people out or late because they are not getting adequate sleep will impact our ability to get work done.”

That was the email. So, whoever’s flying these drones over here, we’ll get to it in the morning. Stop bothering us at night. Thank you very much.

Watson: According to Mexico News Daily, a citizens' militia group (this is Mexico, after all) found two drones inside an armored car that cartel hitmen had abandoned after an attempted raid on a city. These drones were packed with ball bearings, apparently meant to be shrapnel, inside what sounded like Tupperware containers, and there was a remote detonation system duct-taped to these things. Now, there's another element to the story that is, I think, temporarily comforting, but it's also pretty disturbing in the long term. And that's basically that they haven't killed anybody in this specific way yet. And locals think maybe they actually don't really know what they're doing, like the desire and the intent is a bit too far ahead of the available skill and talent. I don't see that lasting much longer at all, because it didn't take me long to get a handle on the controls of my own personal drone.

Velicovich: Right. When you're flying these drones, especially these off-the-shelf ones, they're meant literally for children to be able to open up out of a box and fly up in the air. And we're not talking about some drone you get for Christmas that barely stabilizes itself and slams into the wall and crashes down and you need to buy a new one. We're talking about drones that know how to operate autonomously. They can carry payloads. They're being used for a wide variety of different functions. And when I think about it from a drug cartel's perspective, why wouldn't they be using something like this? Why wouldn't they be using these tools not only to smuggle drugs across the border, but also for potential assassination operations? I mean, the fact is, they are learning from U.S. operations. The U.S. government has the best unmanned aircraft on the planet. But it's shifting, in terms of being able to do some of the same things, to the consumer world, where you can strap a grenade onto an off-the-shelf drone that might have a one-pound payload capability. I remember even writing an op-ed for Defense One a while back talking about how easy it was to do that. And you think the drug cartels aren't learning from groups like ISIS, who were dropping bombs on U.S. soldiers as they were clearing through Mosul? They're absolutely using it. And look at even the recent assassination attempt in Venezuela on Maduro: there was an assassination attempt on a state figure utilizing a drone, and they barely missed him! So I personally think we're going to start seeing more and more of this as the drones become more and more capable, and people can get their hands on these things.

Michel: In the last few years, there have been steady and consistent improvements, in some cases very notable improvements. And what has improved is, well, the sensors are better. And as a result, in a very small number of years, the market of options in the counter-drone space exploded, from just a handful of different companies offering a handful of products in the sort of mid-2010s to now. Most recently, Dan Gettinger did a second edition of that report on counter-drone systems, and over 500 products [are] on sale, not all of which work; some of which, frankly, in all likelihood, are snake oil. But there has definitely been a trend toward separating the wheat from the chaff, if you will, and certain products and techniques rising to the top. One example of that is the [U.S.] Army a couple of years ago noticing that there was this sort of wild-west environment with regard to counter-drone systems, with different units just buying whatever seemed to have some likelihood of working, and creating an office that would consolidate and centralize these efforts. And they started off looking at something like 40 different products, I mean, all across the board. And just this summer they announced that they had narrowed that down to, I think, seven or eight options. So that really shows that, again, the groups that are interested in using these technologies are gaining some experience and some sense of what really works and what doesn't.

Because the technology that they're up against is not standing still: drones are getting much more capable. They're getting smaller, which means they're harder to detect; they're getting faster, which means that you have much less time to respond to a drone that's coming at you. I mean, it seems pretty soon that, you know, commercial, sort of hobby-type drones will break the 300-kilometers-per-hour barrier. And if you're talking about a counter-drone system that has a detection range of, say, two kilometers, so you can spot a drone that's two kilometers away (that's about 1.2 miles), you'll have about 20 seconds after detecting the drone to know what to do about it. These things will come at you fast. They're able to fly along waypoints; they're able to fly in sort of pre-programmed flight patterns. And their communication systems are getting more robust. And incidentally, it's advances like that that are making life much harder for those wishing to shoot drones down. Because, you know, if you have a drone with better autonomy or better links, you can't just jam it with a radio-frequency jammer or with a GPS jammer. And those same advances are actually intended to make drones safer; for most people who use drones, they make drones a much safer technology. But of course, it's gonna just create new headaches for the counter-drone space. And one last thing I'll say on the counter-drone element is swarms. Right? Just having multiple drones creates a security challenge that is an order of magnitude more complex to deal with from a counter-drone perspective, because it's gonna be much harder to detect all the systems; they may come at you from all different angles. And depending on the technique you have for intercepting the drones, you may be outgunned, as they say; all of which is going to get easier to do from the drone operator's perspective. And the counter-drone systems are going to have to catch up. It's not a rosy picture, but I think it's a realistic picture of where we are in the counter-drone space.
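Michel’s arithmetic checks out. As a quick back-of-the-envelope check (the numbers are his: a two-kilometer detection range against a 300-kilometer-per-hour drone; the script itself is just an illustration), the reaction-time math looks like this:

```python
# Back-of-the-envelope check of the reaction-time math Michel describes:
# how long do defenders have between detection and arrival?

DETECTION_RANGE_KM = 2.0   # "a detection range of say two kilometers"
DRONE_SPEED_KMH = 300.0    # hobby drones nearing the 300 km/h barrier

def seconds_to_react(range_km: float, speed_kmh: float) -> float:
    """Time for a drone to close the detection range at constant speed."""
    return range_km / speed_kmh * 3600.0  # convert hours to seconds

print(f"{seconds_to_react(DETECTION_RANGE_KM, DRONE_SPEED_KMH):.0f} seconds")
# Prints 24 seconds, or "about 20 seconds," as Michel rounds it.
```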

Velicovich: And the fact is, the tech is getting lighter in weight, and the range is extending. You know, you've now got these counter-UAV systems that are man-portable, that can fit in a soldier's backpack, and get the same detection distance for a drone that used to be possible only for a hardened installation, a long-term installation with massive towers and massive satellites. Now, again, it's man-portable. And you're seeing a lot more of that. But especially in the U.S., sort of this hard-kill ability is restricted in a lot of ways. So it's come down primarily, recently, to just being able to monitor and listen.

And that leaves basically what those guards did at Palo Verde: stand there and watch. And maybe trace where the drones came from. Hopefully. Because otherwise, some of the same fairly dangerous problems apply that we brought up in episode 21: if you try to knock a drone out of the sky, many, many things could happen that are dangerous, maybe even more dangerous than the drone just flying around in the air above us.

Velicovich: (cont.) If some counter-UAS device is knocking down the communications of a drone, what does that disrupt around us? Does that disrupt your cell phones? Does that disrupt Wi-Fi? And so there's this concern that the technology is ahead of its time and regulators aren't there yet to really allow its use. And so you see these companies want to know, 'Is there a threat out there in the first place?' And so they're setting up these devices around banks and airports and other facilities, where literally they're just analyzing the traffic. And I think they're finding that it's a lot more dangerous than they thought when they get a ping from a drone that, at 5 p.m. every day, is hovering over this one particular area. Or five drones are coming in at this speed every weekend over this airport and entering Class B airspace. This is the analysis that I think people need, to understand that the threat is really out there. And I'm seeing a lot more of that.

Watson: Okay, so we're speaking on September 1, and that's the day after Amazon received U.S. government approval for drone deliveries. Of course, Amazon shares spiked to record highs on the news. You're in the consumer drone business. What does this mean for your world?

Velicovich: I love this because only organizations like Amazon and Google and the UPSs of the world have the budgets to push the envelope. And I love to hear that they're getting approval to do these drone deliveries, because in the end, that's the future. But at the same time, there are limitations to being able to fully realize the true potential of drone technology for drone deliveries, because we have so many regulations in place right now. And I keep going back to this: we're seeing, in other countries, drones already doing deliveries. And Amazon had to go to the UK for a while to even test their delivery system, because they weren't able to do it in the US. And I go back to talking about the counter-drone technologies that exist. When you hear the word counter-drone, you just think, 'Oh, is this against drones in the airspace now?' Well, it's the contrary. Counter-drone technology actually plays into Amazon's and Google's drone-delivery systems, because it allows us to identify which drones are good and which ones are bad in the airspace. So for an Amazon delivery drone that's flying over your neighborhood in a particular corridor, there needs to be a system that says: yes, that drone is allowed to be there; it's delivering a package; it's not spying in your backyard; it is going from point A to point B. Counter-UAS technology allows drone-delivery operations to work, because any other drone that may be in that airspace that's not supposed to be there can then be stopped, can be geofenced off. And so as we really, truly realize the potential of what these drone deliveries can do, we also have to increase our ability to allow these other technologies to exist in parallel. And that's really where the drone industry is going to take off.
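To make that last idea concrete, here is a minimal, hypothetical sketch of the authorization check Velicovich describes: a registry of approved delivery drones and their corridors, plus a classifier that flags everything else as a candidate for geofencing. None of the names, IDs, or coordinates come from a real system:

```python
# Hypothetical sketch of the "good drone vs. bad drone" check Velicovich
# describes. All identifiers and coordinates here are made up.
from dataclasses import dataclass

@dataclass
class Corridor:
    """An approved delivery route, modeled as a lat/lon bounding box."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

# Registry of authorized delivery drones and their approved corridors,
# e.g., populated from Remote ID registrations.
AUTHORIZED = {
    "DELIVERY-0042": Corridor(33.38, 33.40, -112.87, -112.85),
}

def classify(drone_id: str, lat: float, lon: float) -> str:
    """Label a detection: 'authorized', 'off-corridor', or 'unknown'."""
    corridor = AUTHORIZED.get(drone_id)
    if corridor is None:
        return "unknown"        # unregistered: candidate for geofencing
    if corridor.contains(lat, lon):
        return "authorized"     # a delivery drone where it should be
    return "off-corridor"       # registered, but outside its route

print(classify("DELIVERY-0042", 33.39, -112.86))  # authorized
print(classify("MYSTERY-DRONE", 33.39, -112.86))  # unknown
```

The point is not the toy logic; it is that delivery at scale presupposes exactly the detect-and-identify layer the counter-drone industry is building.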


From Russia, by land and sea

Watson: Samuel Bendett is an analyst with the Center for Naval Analyses' International Affairs Group, where he's a member of the Russia Studies Program. He's also an adjunct senior fellow at the Center for a New American Security. His work focuses on, among many things, Russian defense and security technology. And that includes unmanned systems. Sam, welcome to Defense One Radio.

Bendett: Great to be here. Thanks for having me on.

Watson: Very good. So you've been tracking developments in unmanned systems for quite a while. Your Twitter feed is full of Russian robots. So I want to first ask: what is the most unusual, or, I guess if you prefer, the most significant unmanned system that you've seen from basically any military over the past several years?

Bendett: Well, this past week, Russia hosted a massive defense expo called Army 2020. It's an annual event. It draws a lot of international participants, a lot of exhibitors; thousands of different military items and weapons are usually displayed. It's a big deal for the Russians. And of course, they didn't spare any expense. One of the most important developments there was a combat UAV made by the Kronshtadt Group, and it's called Grom (Thunder). It is basically a very close relative to [U.S.] aerial combat vehicles that will accompany manned aircraft into battle; it looks the same way. So this shows us that Russians are thinking along similar lines [to the U.S.] when it comes to manned-unmanned teaming and the role of combat UAVs for today and the near future. But there are also many other developments, because Russians specifically are thinking of unmanned military systems as a technology that safeguards human lives (such as soldiers') and makes missions more effective; again, along similar lines with the U.S. approach as well. And so they're creating a very large lineup of all kinds of unmanned autonomous military systems: aerial, maritime, as well as ground systems.

Watson: I was also additionally curious if there's a particular unmanned system whose complexity sort of reveals the ambition, or almost the technical limitations, of the whole unmanned-systems arms race up to this point.

Bendett: So there's one in the maritime domain called Vityaz-D that the Russians have used for exploration of the Mariana Trench, and they carried out that exploration this spring. So they've developed this deep-diving unmanned underwater vehicle that can descend to depths of up to 11 kilometers. So it's a deep-diving vessel. It has potential military applications, because the Ministry of Defense is now interested in acquiring this for the Russian Navy. And we have to wonder about the mission requirements and the mission complement for this particular UUV. A lot of unmanned maritime systems are designed to operate close to the surface. This one can go very deep; it can go into very dark places. And we have to sort of wonder what the criteria are for its use at this point.

Watson: Sure, yeah. Sticking with this maritime domain, which I really didn't even expect this episode to get into. But as we got into the broad topic, I realized how robust it really is. What is this thing that I believe is called the Cyber Boat-330?

Bendett: Yes, well, that's a very interesting project. So this unmanned surface vehicle was displayed at the same Army 2020 show that I just mentioned. And a lot of these projects, a lot of these unmanned systems, are actually self-initiated by Russian companies, organizations, and universities seeking to develop and sell their products to the Ministry of Defense. And so this was apparently another self-initiated project, a non-military project, and the developer admitted that this unmanned surface vehicle was built to the specifications delivered to them by the Iranians. So the Iranians basically placed an order for a concept demonstrator, and the Russians have delivered. And as far as I know, this is the first such project between Russia and Iran on developing an unmanned system with military applications. Of course, we know that Iran is interested in unmanned systems writ large. They're building a lot of unmanned aerial vehicles, including combat UAVs that even the Russians don't have at this point. But they're also interested in applying unmanned systems in the maritime domain. We know that their allies the Houthis in Yemen have used unmanned boats to target their adversaries' maritime assets. So this boat displayed at the Russian show was built specifically for the much shallower Caspian Sea and for Caspian Sea operations. Again, it's a self-initiated project, and the developer hinted that they would like the MOD, the Russian military, to be first in line to acquire it. So we know it wasn't ordered by the military. But again, it was displayed as a self-initiated project, with the hope of the Russian military getting interested in its further application and possible use.

Watson: Fascinating. I had, of course, known about the Houthis and their so-called drone boats: often, from what I understand, bomb-laden, remote-control boats. I was reading up on these particular pieces of hardware before you and I talked, and I didn't realize until doing that that the Saudi-led coalition in Yemen blew another one up on Sunday. So these things are still happening; it's not like this has stopped. It would seem to me, however, that in the aquatic sphere, no one seems to quite hold a candle to the Russians' nuclear-armed drone submarine. I'm wondering what you can tell us about how tested and reliable this very deliberately frightening system is actually known to be.

Bendett: Well, we don't have a lot of information about the actual testing yet. We know that there are statements about the development of the drone carrier, a very large nuclear submarine that is supposed to be tested this year; the Poseidon, which is the name of the nuclear drone that submarine will carry, will be tested next year. It is a very interesting development, because the Russians claim that it will be able to quietly traverse large distances underwater, and then suddenly strike enemy aircraft [carrier] groups or blow up near the shore, causing a tsunami. But the thinking about traveling underwater right now is: you can go fast and make a lot of noise, or you can be quiet and move very slowly. You cannot do both. That's the conventional thinking right now. And the real question, as the Russians claim that this drone will be able to travel across large distances undetected, is whether they have actually solved that problem, or if it's just something they're kind of using to talk up the system, to further raise concern in the West and other countries.

Watson: It couldn't possibly be just a scare tactic.

Bendett: No way at all.

Watson: Of course. So, you're in this whole unmanned-systems policy world; you're steeped in this stuff. Is there room for improvement, either here in the US, or internationally?

Bendett: A lot of countries are developing unmanned systems of all kinds: well-off countries, developed countries, countries that don't have big military budgets. That technology has spread far and wide. But as more and more countries, as more and more actors, are starting to use such systems, the question of the ethics of their use comes into play. And so we're starting to see ethics questions raised in Russia, for example, as the military and the developers and the end users debate the principle of eventually allowing autonomous military systems to make their own decisions in striking a target. Right now, for example, the Russians are talking about a “human in the loop” as an ironclad rule in developing and using such systems. But they're also discussing greater and greater autonomy for their unmanned military systems writ large, and that means a diminished human role. A diminished human role, by definition, means that such autonomous systems will have to make a lot of decisions on their own. And so, in developing combat military systems, Russians are finally starting to talk extensively about the ethics of using such systems in striking targets: can these military robots in fact make their own decisions? Where does the human stand here? And this discussion isn't new, because it's been going on for years at various international forums, in various organizations, as they try to figure out how these autonomous weapons systems can be used and whether they should be used in the first place. So what we have is a tendency to develop and use more and more of these systems. Of course, right now they're all remote-controlled, with a human firmly in control. But in the future, as the development of such military robotics becomes more widespread and cheaper, and as the systems become more and more advanced, where does the human stand here? So I think that's a question worth exploring right away, as every major military is starting to debate this topic.


Beyond AlphaDogfight

“...where does the human stand here?”

Last month, an American fighter pilot lost in a dogfight with an algorithm. And the pilot lost not once, but five times in a row.

The contest was the finale of the U.S. military's AlphaDogfight challenge, a DARPA-sponsored event that was intended to sort of see how close we are to, quote, “developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight.”

Well, it turns out we're pretty freaking close.

To be clear, an AI system defeated a human fighter pilot in another DARPA event four years ago. But, as my colleague Patrick Tucker reported in August, “The DARPA simulation was arguably more significant as it pitted a variety of AI agents against one another and then against a human in a highly structured framework.”

So AI to AI; then AI vs. human.

I’m gonna turn now back to my discussion with Arthur Holland Michel in Geneva.

Watson: Now, this happened in August. And it's kind of a big deal for the future of national security; that's what we're all about here at Defense One. This AI won five times out of five. So I'm curious, you know, how expected was that result from your vantage point?

Michel: This wasn't really a fair fight from the get-go. This was an environment where AI systems have proven again and again and again to thrive, and that is the simulated environment. I mean, essentially what is, for all intents and purposes, a video game environment. You know, this is what AI really likes, because it has what's called perfect information. And so AI has proven itself over and over again to do truly astonishing things in video games. But autonomy in motion, that is, when you put this AI system out in the big bad real world? It's a completely different beast. There is a major gap between the mathematics of a simulation and the physics of the real world. You know, physics is hard and inconvenient; sensors don't always give you perfect information.

And that’s an interesting point in the case of the AlphaDogfight.

“...physics is hard and inconvenient...”

Here’s Sam Bendett again.

Bendett: One of the main principles of this AlphaDogfight was that the unmanned system didn't use safe measures in trying to attack a human pilot. In other words, it undertook a set of measures that a human pilot would never undertake, because a human pilot would try to safeguard his or her own life and the aircraft itself; an unmanned aerial system in such a fight, as presented here, is a different sort of equation. Here we have a system that is willing to be used in a kamikaze style, meaning it doesn't care if it is destroyed as long as the target is struck. And this is kind of the prevailing thinking right now, the emergent prevailing thinking, in the Russian military with the use of unmanned ground systems: after testing out one combat vehicle in Syria called the Uran-9, and after experiencing a set of setbacks and failures, the mood right now, for the next decade or so, is that such military systems, if they are in a combat role, should be expendable, and should be used in this sort of one-off kamikaze role, similar to this AI dogfight.

Watson: Sure, sure.

Bendett: In other words, an AI-enabled unmanned military system isn't supposed to care about its own safety. It's only supposed to strike a target. So this is a very interesting development. And I'm actually trying to track what the Russians are thinking about this AI dogfight as well, because I think the world watched very closely this next step in military combat evolution: whether it is something that is going to set the trend, or whether that experiment was just kind of a one-off, a single experiment, and other experiments where an unmanned, autonomous, AI-enabled system goes against a human operator will turn out differently.

Watson: Fascinating slash disturbing stuff.

Bendett: Indeed.

Watson: One more thing.

Michel: Adversaries can be unpredictable, especially if they know that you have an AI system, and they are looking to sort of get around your AI system.

That’s, of course, Arthur Holland Michel again.

Michel: I mean, to give a sort of illustration of this: the AI that powers targeted advertising online seems truly, like, otherworldly in terms of its ability to pinpoint our exact interests. But, you know, people's Roombas still get stuck in situations that wouldn't confound a toddler. Right? I mean, that I think is fairly good evidence of how translating simulation into reality can be a challenge. But, um, well, in that regard it feels like just one more step… But there's also a reason that DARPA, an institution that tasks itself with addressing the most complex technological challenges, chose trust, this idea of trust, to be its big, sort of its next big leap. And that is because it can be very, very hard to trust AI systems, especially AI systems like these that provide very little insight into what they will do next and why they do what they are doing. Now the program managers and the engineers are doing a very detailed post-mortem of the whole ACE competition, but in just a little snippet of commentary right after the trials ended, the program manager said some of the AI systems (there were 11 different AI systems that competed over the course of the trials) behaved in kind of suspicious ways. Like maybe they were, in a sense, cheating. You know, they were given this goal and they were sort of finding shortcuts that wouldn't really be viable in a, you know, in a true physical environment. And as it were, you know, this question of, sort of, what we call the black box system is what I'm working on next here at the United Nations.

Watson: The black box system: That sounds even more mysterious.

Michel: Yeah. So the black box: in computing, if you have a system that turns input, so, you know, data, into output (conclusions, or maneuvers in the case of an AI dogfighting system) without giving you any insight into how it makes that conversion, we call [this] a 'black box' system. And this is problematic in critical functions like warfighting, potentially, because you sort of want to know that a system will do what you expect it to do. And that it will do so for intelligible reasons; that it won't just act in ways that don't make sense; or that it won't behave in what appears to be a very successful way in testing, but only because it's actually picking up on some quirk in the training data that will not apply in the real world. It's a massive subject that feels really fundamental to the growing discussion around military artificial intelligence, because, fundamentally, AI systems can be inherently unpredictable. You know, they may have really good performance; they get the other guy nine times out of 10. But on that 10th time, they don't just fail; they fail, to quote one government study, 'spectacularly.' And you want to know why they fail the way they do. And so that study will come out, hopefully, by the end of this month, and will hopefully give some insight into some of the considerations that go into these questions of bridging that gap between the sort of digital, simulated world and the real world, and hopefully some solutions to address it.

Watson: Well, you remind me how I hated proofs in geometry, and how I was like, 'It's obviously true; why do I have to write out how?' And of course, as I got older, I realized the utility of proofs: as you get to a point where you're teaching other people, you need to be able to replicate things and understand them.

Michel: That's, that's a perfect example. You want systems to show their work, like a good student, because that way you know that they're not just getting it right by a fluke.

Watson: And that, of course, is reassuring, and it increases our own trust, right? Because we can replicate it, this product of the Enlightenment, where we get to do a thing and then do it again and know exactly why it happened.

Michel: Exactly. It all comes down to trust.