US-UK safety pact could shape the future of AI
Two research institutes will collaborate on AI safety tests, among other things.
A new UK-U.S. agreement on artificial-intelligence safety could buttress efforts to bring partner and allied nations into a broad agreement on AI risks. It follows Defense Department-led collaboration between the two countries on military AI ethics.
Signed on Monday, the memorandum of understanding commits the two countries to work together to design safety tests for popular AI tools, a development that could pave the way for regulation that shapes the future of the AI industry worldwide.
“As the countries strengthen their partnership on AI safety, they have also committed to develop similar partnerships with other countries to promote AI safety across the globe,” the UK said in its announcement on Tuesday.
Industry leaders at the signing ceremony included Google DeepMind’s Demis Hassabis, Elon Musk, and OpenAI’s Sam Altman, who has been particularly outspoken in his belief that governments should regulate the nascent AI industry more firmly to boost consumer trust and ward off catastrophe.
The announcement comes as the Defense Department is also working to better understand the potential risks and rewards of generative AI models, both those that are publicly available and ones that the military may create.
In recent months, the United States and the United Kingdom have both stood up research institutes to study AI safety issues. The Biden administration established the U.S. AI Safety Institute under the National Institute of Standards and Technology in February, about a month after the UK established its own.
Under the agreement, the institutes will work together to perform at least one joint test on a publicly available AI model and will “build a common approach to AI safety testing and to share their capabilities to ensure these risks can be tackled effectively,” the Commerce Department said in a statement on Monday.
The announcement follows November’s AI safety summit in the UK, where 28 countries and the European Union agreed to “support an internationally inclusive network of scientific research on frontier AI safety.” Such agreements are easy to sign: the signatories included China, Israel, and Saudi Arabia, all of which have been criticized for their data-collection and use practices. The EU, by contrast, has been a leader in data protection. In March, the bloc approved the AI Act, part of a wider policy package to “support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI,” the EU writes on its website.