White House issues AI guidelines for national-security agencies
The new memo requires agencies to monitor, assess, and mitigate AI risks related to invasions of privacy, bias, and other human rights abuses.
President Joe Biden will issue a new national security memorandum today on artificial intelligence, aimed at helping the U.S. government deploy AI and retain its advantage over China.
The United States must “ensure that our national security agencies are adopting this technology in ways that align with our values,” a senior White House official told reporters ahead of the unveiling, adding that a failure to do so “could put us at risk of a strategic surprise by our rivals, such as China.”
The memorandum provides a framework for employing AI in national security missions. “These requirements require agencies to monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses,” according to a fact sheet from the White House.
The guidance will allow the government to take better advantage of the new AI tools coming out of Silicon Valley, the White House said.
“The innovation that's happened, particularly in this current wave of frontier artificial intelligence, has really been driven by the private sector, and it's critical that we continue to foster that leadership,” the official said. The memo is rooted in the premise that capabilities generated by the transformer and large language model revolution in AI, often called frontier AI, are poised to shape geopolitical, military, and intelligence competition.
There are still major concerns about applying those AI tools, like OpenAI’s ChatGPT, to high-stakes areas like national security, given the propensity of such models to hallucinate and produce false positives. And the publicly available data sets those models are largely trained on can contain personal (but legally obtainable) information on U.S. civilians, as Defense One has reported.
The national security community is “well aware” of the concerns, the official said. “We have to go through a process of accrediting systems. And that's not just for AI systems, but, you know, national security systems generally,” he said, pointing to Biden’s previous executive order on AI and the establishment of the AI Safety Institute as two steps the White House has taken to mitigate AI risks, particularly in government use.
The new memo designates the Commerce Department’s AI Safety Institute as “U.S. industry’s primary port of contact in the U.S. government,” according to the fact sheet. The new guidance joins existing Defense Department and Intelligence Community guidelines on AI development and deployment.
The official agreed that the availability of U.S. citizens’ data is a growing problem. “We have been very concerned about the ways in which Americans’ sensitive data can be sold, really through the front door, first collected in bulk, then sold through data brokers, and then end up in the hands of our adversaries. And so that's something that the President issued an executive order on [in February] to try to restrict adversary access to some of that data.”
But while the risks are real, the government still must establish a way for the national security community to “experiment” with AI through “pilots,” the official said.
“There are going to be challenges associated with adopting any new technology,” he said. “The framework…is one that's going to be continuously updated.”
And those updates may be subject to political disruption: Republican presidential candidate Donald Trump has vowed to repeal Biden’s executive order on AI safety.