US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2
AI safety is often overlooked in the private sector, but Deputy Secretary Kathleen Hicks wants the Defense Department to lead a cultural change.
As the Pentagon rapidly builds and adopts artificial intelligence tools, Deputy Defense Secretary Kathleen Hicks said military leaders are increasingly worried about a second-order problem: AI safety.
AI safety broadly refers to making sure that artificial intelligence programs don’t wind up causing harm, whether because they were built on corrupted or incomplete data, poorly designed, or hacked by attackers.
AI safety is often treated as an afterthought as companies rush to build, sell, and adopt machine learning tools. But the Defense Department is obligated to pay closer attention to the issue, Hicks said Monday at the Defense One Tech Summit.
“As you look at testing, evaluation, and validation and verification approaches, these are areas where we know—whether you're in the commercial sector, the government sector, and certainly if you look abroad—there is not a lot happening in terms of safety,” she said. “Here I think the department can be a leader. We've been a leader on the [adoption of AI ethical] principles, and I think we can continue to lead on AI by demonstrating that we have an approach that's worked for us.”
While multiple private companies have adopted AI ethics principles, the principles the Defense Department adopted in 2020 were considerably stricter and more detailed.
While AI safety has yet to make big headlines, the widespread implementation of new machine learning programs and processes presents a rich attack surface for adversaries, according to Neil Serebryany, founder and CEO of AI safety company CalypsoAI. His company scans academic research papers, the dark web, and other sources to find threats to deployed AI programs. Its clients include the Air Force and the Department of Homeland Security.
“Over the last five years, we’ve seen a more-than-5,000-percent rise in the number of new attacks discovered and new ways to break systems,” said Serebryany. Many of those attacks focus on the big data sources that feed AI algorithms. It’s “very hard for a data practitioner to know if they have been breached or have not been breached.”
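One reason detection is so hard: even basic data-integrity hygiene only catches tampering that happens after a trusted baseline was recorded. The sketch below, a minimal and hypothetical illustration (the file layout and function names are invented, not drawn from CalypsoAI's tooling), records checksums for a dataset at ingest and re-verifies them before training. Anything poisoned before the baseline was taken would pass this check, which is exactly the gap Serebryany describes.

```python
# Minimal sketch of one common mitigation: recording and re-checking
# dataset checksums so silent, post-ingest tampering can be noticed.
# File names and directory layout here are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a trusted baseline: one hash per data file at ingest time."""
    hashes = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return the names of files whose contents changed since the baseline."""
    recorded = json.loads(manifest.read_text())
    return [name for name, h in recorded.items()
            if sha256_of(data_dir / name) != h]

# Usage: record once at ingest, verify before every training run.
# record_manifest(Path("training_data"), Path("manifest.json"))
# tampered = verify_manifest(Path("training_data"), Path("manifest.json"))
```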
A report out this month from Georgetown's Center for Security and Emerging Technology notes, "Right now, it is hard to verify that the well of machine learning is free from malicious interference. In fact, there are good reasons to be worried. Attackers can poison the well’s three main resources—machine learning tools, pretrained machine learning models, and datasets for training—in ways that are extremely difficult to detect."
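The report's warning about poisoned training data can be made concrete with a toy example. The sketch below is a hypothetical illustration (not taken from the report); it flips a small fraction of labels in a synthetic training set and shows that aggregate test accuracy often barely moves, which is why this kind of tampering is so difficult to detect with routine evaluation.

```python
# Toy illustration of training-data poisoning via label flipping.
# All names are invented for illustration; this is a sketch, not an attack tool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Build a synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, rng):
    """Return a copy of `labels` with a random `fraction` of entries flipped."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, fraction=0.03, rng=rng)
)

# Aggregate accuracy often changes only slightly, so the tampering is
# easy to miss even though the model's decision boundary has shifted.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```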
The Defense Department is grappling with AI safety as it rushes to adopt tools in new ways. Within the next three months, the military will dispatch several teams across its combatant commands to determine how to integrate their data with the rest of the department, speed up AI deployment, and examine “how to bring AI and data to the tactical edge” for U.S. troops, said Hicks.
“I think we have to have a cultural change where we're thinking about safety across all of our components. We're putting in place [verification and validation and testing and experimentation] approaches that can really ensure that we're getting the safest capabilities forward,” she said.
The Defense Department, she said, would look beyond just educating the technical workforce on safety issues and would also reach out to “everyone throughout the department.”