Vulnerabilities May Slow Air Force’s Adoption of Artificial Intelligence
More data on the battlefield means a wider attack surface, something the Defense Department has yet to prepare for, experts say.
The Air Force needs to better prepare to defend AI programs and algorithms from adversaries that may seek to corrupt training data, the service’s deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects said Wednesday.
“There’s an assumption that once we develop the AI, we have the algorithm, we have the training data, it’s giving us whatever it is we want it to do, that there’s no risk. There’s no threat,” said Lt. Gen. Mary F. O’Brien. That assumption could prove costly to future operations.
Speaking at the Air Force Association’s Air, Space and Cyber conference, O’Brien said that while deployed AI is still in its infancy, the Air Force should prepare for the possibility of adversaries using the service’s own tools against the United States.
Contemplating and strategizing around adversarial use of one’s own AI tools falls under an emerging subfield of artificial intelligence called AI safety: ensuring that deployed AI programs not only work as expected, but are also protected against attacks on their design, their underlying data streams, and the computing architecture they run on. Current Defense Department efforts in this area are small at best.
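The training-data threat O’Brien describes does not require exotic tradecraft. As a purely illustrative sketch, built on synthetic data and the open-source scikit-learn library rather than anything drawn from a fielded system, the snippet below shows how an adversary who can quietly flip a fraction of training labels, a basic form of data poisoning, can degrade a model that otherwise appears to train normally.

```python
# Toy illustration of training-data poisoning via label flipping.
# Hypothetical example: synthetic data and scikit-learn only, not any fielded system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "sensor" data: 2,000 samples, 20 features, two classes.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Baseline: model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean test accuracy:    ", round(clean.score(X_te, y_te), 3))

# An adversary with write access to the training pipeline flips 30% of one
# class's labels. The data still looks plausible, but the decision boundary shifts.
rng = np.random.default_rng(0)
y_bad = y_tr.copy()
targets = np.flatnonzero(y_tr == 1)
flipped = rng.choice(targets, size=int(0.3 * len(targets)), replace=False)
y_bad[flipped] = 0

# Same pipeline, same code, corrupted labels.
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
print("poisoned test accuracy: ", round(poisoned.score(X_te, y_te), 3))
print("poisoned recall, class 1:",
      round(poisoned.score(X_te[y_te == 1], y_te[y_te == 1]), 3))
```

Both models train without complaint; the damage only shows up when the poisoned model’s predictions are checked against held-out ground truth, which is exactly the check the “no risk, no threat” assumption skips.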
O’Brien recounted how one Air Force intelligence officer, Rena DeHenre, completed a fellowship at MIT and returned to the Air Force eager to join whatever office was in charge of AI safety. “She said, what organization can I go to help defend our algorithms?” O’Brien recalled. “I said ‘Rena, that organization doesn’t exist in the Air Force. You’re it!’”
Earlier this month, DeHenre penned a short op-ed on the subject for the online publication Over the Horizon. In it, she argues that the Air Force should begin to take the same approach to AI safety as it does to other operations: have seasoned operators attack programs and tools to find vulnerabilities, just as an adversary would. The practice is known as red-teaming.
“Addressing the vulnerabilities that DOD AI and [machine learning] algorithms have would be the main task of an AI Red Team,” she writes.
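The op-ed does not prescribe specific tooling, but the flavor of work such a team would automate can be sketched briefly. The hypothetical probe below, which reuses the same open-source scikit-learn setup as the earlier example and stands in for far more capable real-world attack techniques, asks one simple red-team question: how small a change to an input is needed to flip a trained model’s prediction?

```python
# Hypothetical red-team probe against a trained model: how far does an input
# have to move before the prediction flips? Sketch only; a real AI red team
# would use far more capable attack tooling and realistic threat models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def evasion_probe(model, x, true_label, step=0.05, max_steps=400):
    """White-box probe: nudge the input along the model's own weight vector
    until the predicted class changes; return the L2 distance that took."""
    w = model.coef_[0]
    direction = -w if true_label == 1 else w      # push the class-1 score down or up
    direction = direction / np.linalg.norm(direction)
    x_adv = x.copy()
    for i in range(1, max_steps + 1):
        x_adv = x_adv + step * direction
        if model.predict(x_adv.reshape(1, -1))[0] != true_label:
            return round(i * step, 2)
    return None                                   # model held within the tested budget

# Start from an input the model classifies correctly.
idx = np.flatnonzero(model.predict(X) == y)[0]
print("perturbation needed to flip the prediction:", evasion_probe(model, X[idx], y[idx]))
```

Running probes like this across many inputs, and against the data pipeline as well as the model, is the kind of systematic vulnerability hunting the op-ed assigns to an AI red team.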
The Defense Department is fond of contracting out work, but DeHenre argues that it’s essential for any AI red team to come from military ranks. Such a team may face technical and expertise hurdles at first, she said, but it would eventually overcome them, and the ability to red-team AI programs and projects would pay large dividends in future operations.
“It is not hard to see a future where the selection for the DOD AI Red Team becomes similar to applying to the Air Force Weapons School or Junior Officer Cryptologic Career,” she said.
The very nature of the Pentagon’s ambitious plans for cross-domain command and control networking will raise the likelihood of data hacking, Edward Vasco, director of Boise State University’s Institute for Pervasive Cybersecurity, said on a separate panel.
“Every time that you take the data elements and expand them out and find even more and more telemetry data to make use of, the challenge that we end up with is that we create more and more data environments for our adversaries to potentially attack,” he said.
Those attacks, said Vasco, are “going to become more and more pervasive, more and more prevalent, especially as [advanced battle management system] and [Joint All-Domain Command and Control] get implemented out into a wider context. The amount of data is going to explode beyond anybody’s expectations. I’m not talking storage levels; I’m not talking access [or] API platform connectivity; I’m talking about the sheer collection of that data and what that enables our adversaries to do and to think about.”