More than 7,000 NSA analysts are using generative AI tools, director says
The signals-intelligence agency has about 170 AI-related projects in the works, including 10 of the highest priority.
More than 7,000 National Security Agency analysts have begun using generative AI tools in the past year, the agency’s director said Tuesday.
These “piloted capabilities” are being used for “intelligence, cybersecurity, and business workflows,” Gen. Timothy Haugh said at an Intelligence and National Security Alliance event.
“The feedback we've received from the workforce has been overwhelmingly positive, helping our analysts work smarter and better,” Haugh said.
Intelligence agencies' use of artificial intelligence isn't new, but the community has become more open about testing the technology in everyday work. The Department of Homeland Security's intelligence shop has been experimenting with cloud-based AI tools for analysts, and the Central Intelligence Agency has been building its own ChatGPT-like tool while grappling with troves of data.
NSA aims to home in on a few promising projects, Haugh said.
The agency has “over 170 different things that are going on from an AI project perspective. But there's really about 10 of those that we have to ensure, from a national-security perspective, are awesome. Those other 160, we want to create opportunities for people to experiment, leverage, [and] compliantly use. And some of those will become things that could be just as critical. But those 10, we gotta get right. Because those are the things that are gonna allow us to bring advantage to our nation,” he said.
The NSA has broadened its advocacy for AI, including warning startups about their vulnerability to intellectual-property theft, pushing for wider government adoption, and helping companies understand developing threats.
“It's important, however, to look at AI as more than just a means to enhance our capabilities,” the general said. “One of the ways we're going to great lengths at NSA to minimize the risk of generative AI is by establishing a robust AI governance process, guided by a chief responsible AI officer, which will ensure that privacy rights are fully preserved, and any adoption of advanced AI occurs in a way that is legal, compliant, and secure.”
The agency’s nearly year-old AI security center is starting to pay dividends, Haugh said, such as finding flaws in large language models, which analysts use for tasks like language translation.
“They're producing products that identify vulnerabilities in large language models, how large language models can be secured, [and] how large language models can be targeted by nation states. So we're putting out unclassified products that start that discussion with industry,” particularly smaller businesses, he said.
One of the center’s goals is to bring in smaller companies that have intellectual property “but maybe don't have the infrastructure that larger AI-focused companies have,” and also to advise the U.S. government on AI security, Haugh said. “Because we want to ensure that we are protecting any of the models we're using, the technology we're using, and ensuring we're doing that in a way that certainly is compliant and responsible.”
Later this year, the agency is planning a conference with the National Cryptologic Foundation on managing high-impact AI applications in national security.
“AI will have a direct impact on every aspect of near-future warfare. We must be sure that we can protect the warfighter, our most essential national security systems, and our daily infrastructure against adversaries who would leverage AI for their advantage,” Haugh said. “With tensions in the Pacific, ongoing conflict in Europe, and our elections just around the corner, these are not theoretical discussions.”