DHS details how AI could amplify biological, chemical threats
While access to laboratory facilities is still a hurdle, a new report notes that cloud labs could allow the stealthy development of weapons components.
Artificial intelligence may help malicious actors develop chemical, biological, radiological and nuclear weapons—but also help defenders head them off, a new Department of Homeland Security report says.
“[K]nown limitations in existing U.S. biological and chemical security regulations and enforcement, when combined with increased use of AI tools, could increase the likelihood of both intentional and unintentional dangerous research outcomes that pose a risk to public health, economic security, or national security,” the agency's Countering Weapons of Mass Destruction Office says in the report.
Required by President Joe Biden’s October 2023 executive order on AI, the full report was released last week, two months after a related fact sheet.
The proliferation of publicly available AI tools could help malicious actors learn to make and deliver chemical and biological weapons. While access to laboratory facilities is still a hurdle, the report notes that cloud labs could allow the stealthy development of weapons components.
The CWMD office recommended that the U.S. develop guidance covering the "tactical exclusion and/or protection of sensitive chemical and biological data" from public training materials for large language models, as well as more oversight governing access to remote-controlled lab facilities.
The report also said that specific federal guidance is needed to govern how biological design tools and biological- and chemical-specific foundation models are used. This guidance would ideally include “granular release practices” for source code and for the model weights used to build a relevant language model.
More generally, the report calls for consensus among U.S. government regulatory agencies on how to manage AI and machine-learning technologies, particularly where they intersect with chemical and biological research.
Other recommendations include incorporating “safe harbor” vulnerability reporting practices into organizational processes, conducting internal evaluation and red-teaming efforts, cultivating a broader culture of responsibility among expert life science communities, and responsibly investigating the benefits AI and machine learning could offer in biological, chemical and nuclear contexts.
The report also envisions a role for AI in mitigating existing CBRN risks through threat detection and response, including via disease surveillance, diagnostics, “and many other applications the national security and public health communities have not identified.”
While the report's findings are not enforceable mandates, DHS said they will help the CWMD office shape policy and objectives.
“CWMD will explore how to operationalize the report’s recommendations through existing federal government coordination groups and associated efforts led by the White House,” a DHS spokesperson told Nextgov/FCW. “The Office will integrate AI analysis into established threat and risk assessments as well as into the planning and acquisition that it performs on behalf of federal, state, local, tribal and territorial partners.”