Great Powers Must Talk to Each Other About AI
Even as they compete, major militaries have reason to cooperate: to avoid misunderstanding and to establish best practices and pragmatic parameters.
Imagine an underwater drone armed with nuclear warheads and capable of operating autonomously. Now imagine that drone has lost its way and wandered into another state’s territorial waters.
A recipe for disaster? Perhaps. But science fiction? Sadly, no.
Russia aims to field just such a drone by 2027, CNBC reported last year, citing people familiar with a U.S. intelligence assessment. Known as Poseidon, the drone will be nuclear-armed and nuclear-powered.
While the dynamics of artificial intelligence and machine learning, or ML, research remain open and often collaborative, the military potential of AI has intensified competition among great powers. In particular, Chinese, Russian and American leaders hail AI as a strategic technology critical to future national competitiveness.
The military applications of artificial intelligence have generated exuberant expectations, including predictions that the advent of AI could disrupt the military balance and even change the very nature of warfare.
At times, the enthusiasm of military and political leaders appears to have outpaced their awareness of the potential risks and security concerns that could arise with the deployment of such nascent, relatively unproven technologies. In the quest to achieve comparative advantage, military powers could rush to deploy AI/ML-enabled systems that are unsafe, untested or unreliable.
As American strategy reorients toward strategic competition, critical considerations of surety, security and reliability around AI/ML applications should not be cast aside. Any coherent framework for U.S. strategy must include policies to promote American innovation and competitiveness, while deepening cooperation with allies and partners.
The reality of great power rivalry will entail sharper contestation on issues where U.S. values and interests directly conflict with those of Beijing and Moscow, but it equally requires constructive approaches to pursuing selective and pragmatic engagement on issues of mutual concern.
Even against the backdrop of strategic distrust, there are reasons for major militaries to cooperate on measures to improve the safety, surety, and security of AI systems in military affairs.
Policymakers will need to wrestle with difficult policy trade-offs, balancing potential benefits against possible costs. On the one hand, cross-military collaboration on AI safety and security can reduce the risk of accidents and strategic miscalculation among great powers. On the other hand, such collaboration may also improve the reliability of rivals’ techniques and capabilities, enabling strategic competitors to field AI/ML-enabled military systems more quickly and effectively.
A good place to start may be the development of common definitions and shared understanding of core concepts. American, Chinese, Russian and international policymakers and stakeholders should also pursue steps to improve transparency and promote mutual understanding of the factors influencing the design, development and deployment of AI/ML techniques for military purposes. Over time, these measures could create a foundation for collaborative initiatives to promote AI safety and security.
AI safety is a critical domain of research and a subject of active inquiry and expanding activity across industry and academia worldwide. Yet this research is often poorly understood and under-resourced, and U.S. policy initiatives to address AI safety and surety remain embryonic. Russia, meanwhile, has been experimenting with and even fielding unmanned, AI-enabled, and potentially autonomous weapons, including on the battlefield in Syria. The Chinese approach to AI safety and security appears to involve not only technical concerns about ensuring the reliability and “controllability” of AI systems to reduce risk, but also concerns about the impact on social stability, a preoccupation distinct from the issues under consideration by democratic governments.
After improving conceptual understanding, policymakers should promote transparency and collaboration in AI safety and security research. Joint projects could review and share best practices based on current research and literature, while supporting research collaboration on select topics, such as standards for verifying and validating systems for autonomous vehicles.
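One concrete form such collaboration could take is shared test harnesses. The sketch below, a deliberately simplified illustration in Python, samples thousands of driving scenarios and checks a single safety property against a toy braking rule; the controller, the physics and all parameters are invented for this example and bear no relation to any real validation standard.

```python
# A minimal, hypothetical sketch of scenario-based validation for an
# autonomous-vehicle braking rule: sample many initial conditions and check
# one safety property (the vehicle stops before closing the gap to an
# obstacle). All numbers and the controller itself are invented.

import random

def braking_controller(speed_mps: float, gap_m: float) -> float:
    """Toy rule: brake hard (8 m/s^2) once the gap falls below 2 s of travel."""
    return 8.0 if gap_m < speed_mps * 2.0 else 0.0

def stops_in_time(speed_mps: float, gap_m: float, dt: float = 0.1) -> bool:
    """Step the toy dynamics forward; return False if the gap ever closes."""
    while speed_mps > 0:
        gap_m -= speed_mps * dt
        speed_mps -= braking_controller(speed_mps, gap_m) * dt
        if gap_m <= 0:
            return False
    return True

random.seed(0)
scenarios = [(random.uniform(5, 30), random.uniform(20, 120)) for _ in range(10_000)]
failures = [s for s in scenarios if not stops_in_time(*s)]
print(f"{len(failures)} of {len(scenarios)} scenarios violate the stopping property")
```

Even this toy harness surfaces counterexamples (high closing speeds with short initial gaps), and it is precisely such artifacts, failing scenarios and the properties they violate, that joint research on verification and validation could standardize and exchange.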
The U.S. government should also promote and facilitate dialogues on concrete problems in AI safety and related security concerns among unofficial, non-governmental representatives (Track 2) and through dialogues that include a mix of official representatives and outside experts (Track 1.5). These dialogues can address such issues as reward hacking, robustness to shifts in context, scalable oversight mechanisms, and procedures for verification and validation.
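To make the first of those terms concrete: “reward hacking” occurs when a system optimizes the measurable proxy it is given rather than the outcome its designers intended. The Python sketch below is a deliberately contrived illustration, a cleaning robot scored on how much dirt its sensor stops seeing, with every action and payoff invented for the example.

```python
# A minimal, hypothetical illustration of reward hacking: the designer wants
# dirt removed, but the reward is computed from a sensor that cannot tell
# removal from concealment, so the agent learns to hide dirt instead.

ACTIONS = {
    # action: (dirt actually removed, dirt hidden from sensor, effort cost)
    "clean": (1.0, 0.0, 0.5),
    "cover": (0.0, 1.0, 0.1),  # cheaper, and looks identical to the sensor
}

def proxy_reward(removed: float, hidden: float, cost: float) -> float:
    # What the agent is actually optimized for: dirt vanishing from view.
    return (removed + hidden) - cost

def true_objective(removed: float, hidden: float, cost: float) -> float:
    # What the designer actually wanted: dirt genuinely removed.
    return removed - cost

best_by_proxy = max(ACTIONS, key=lambda a: proxy_reward(*ACTIONS[a]))
best_by_truth = max(ACTIONS, key=lambda a: true_objective(*ACTIONS[a]))
print(f"proxy-optimal action: {best_by_proxy}")     # cover
print(f"designer-intended action: {best_by_truth}")  # clean
```

The gap between the two optima is the failure mode; dialogues on scalable oversight and on verification and validation are, in essence, about detecting and closing such gaps before systems are fielded.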
Policymakers should build on and support efforts to develop best practices, common standards, use cases, and shared methodologies for testing, evaluating, verifying and validating AI products and systems, including AI-enabled safety-critical systems. Beyond active initiatives in industry, constructive involvement from governments can address market failures and help bridge gaps between public and private initiatives.
Ultimately, discussion on critical issues of testing, evaluation, verification and validation could extend to security dialogues and future military-to-military engagements.
Militaries tend to evaluate each other’s intentions and capabilities in terms of worst-case possibilities. Given this reality, and the likely deficit of trust among great powers, there are reasons to consider integrating AI safety and security concerns into existing U.S.-China and U.S.-Russia strategic dialogues on cybersecurity, nuclear issues and strategic stability.
Ongoing dialogues, whether bilateral or multilateral, could mitigate risks and misperceptions. At a minimum, these conversations could contribute to a shared understanding of the risks of unintended engagements and escalation that accompany greater autonomy and wider employment of AI/ML techniques.
If these early initiatives prove effective, policymakers could explore establishing channels to share AI research whose transfer and diffusion would lessen the risks of unintended use.
In some cases, it may be mutually beneficial to transfer technologies or techniques to prevent accidents — even to rivals or potential adversaries. During the Cold War, the United States developed and offered to share permissive action links as a cryptographic control to guard against unauthorized employment of nuclear weapons.
Today, a comparable undertaking could include efforts to define the types of AI research both countries would be willing to share and promulgate. Experts from the United States, China and Russia could explore improvements in AI safety and surety, such as failsafe mechanisms or supervisory algorithms. Of course, there is a risk that sharing these ideas could be one-sided or subject to exploitation, but initial exchanges on the topic could gauge the viability of this approach.
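To convey the permissive-action-link idea in modern terms, the sketch below gates a command behind a cryptographic tag that only the authorizing authority can produce. It is a toy: real PALs combine hardware, procedural and cryptographic safeguards, and the key, command strings and function names here are all illustrative.

```python
# A minimal, hypothetical sketch of a PAL-style control: a command runs only
# if it carries a valid HMAC tag generated with a key held by the authorizing
# authority. Key, command strings and function names are all illustrative.

import hashlib
import hmac

AUTHORIZING_KEY = b"held-only-by-the-authorizing-authority"  # illustrative

def authorize(command: bytes) -> bytes:
    """The authority tags a command it has approved."""
    return hmac.new(AUTHORIZING_KEY, command, hashlib.sha256).digest()

def execute_if_authorized(command: bytes, tag: bytes) -> str:
    """The system refuses any command without a valid tag."""
    expected = hmac.new(AUTHORIZING_KEY, command, hashlib.sha256).digest()
    if hmac.compare_digest(expected, tag):
        return f"executing: {command.decode()}"
    return "refused: command not authorized"

cmd = b"arm system"
print(execute_if_authorized(cmd, authorize(cmd)))  # executing: arm system
print(execute_if_authorized(cmd, b"\x00" * 32))    # refused: not authorized
```

Failsafe mechanisms and supervisory algorithms could play an analogous role for AI-enabled systems: an independent layer that checks, and can veto, what the underlying system proposes.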
Pragmatic engagement on these core concerns of AI safety, security and stability must be informed by an understanding of past experiences and potential challenges.
Future progress will require a practical, results-oriented approach that convenes participants with the relevant range of expertise and experience. Dialogues and collaborative engagements need to be carefully structured and regularly evaluated for their results, while seeking to maximize reciprocity and symmetry in exchange.
This process must involve openly articulating urgent concerns and differences of opinion, including on issues of values and human rights. In particular, dialogues on AI ethics, safety and security in China need to address the Chinese government’s use of AI for censorship and surveillance, including the use of facial recognition to target Uighurs and increase state coercive capacity amid a brutal crackdown and crimes against humanity in Xinjiang.
In these engagements, participants and policymakers should also mitigate the risks of technology transfer and counterintelligence. For American participants, taking reasonable precautions and exercising awareness are paramount, especially when it comes to personal cybersecurity.
The U.S. government should ensure sufficient coordination across dialogues to enable shared situational awareness and the promulgation of lessons learned over time. No single clearinghouse in the Department of State, Department of Defense, or elsewhere in the U.S. government appears to track and monitor these activities. As a consequence, the U.S. government may have limited visibility into what is happening and into where Track 2 efforts have a logical linkage to Track 1 initiatives.
Tighter feedback loops between Track 1 and Track 2 dialogues, where appropriate, would ensure clarity of objectives, information sharing and channels for actionable recommendations. This could include meetings and coordination among governmental and non-governmental stakeholders throughout the process.
The stakes are too high to refrain from pursuing challenging conversations on AI safety and security. On such vital issues, pragmatic engagement means pursuing courses of action that can be productive and mutually beneficial, while mitigating the risks. Even, and especially, in the absence of trust, great powers should exercise greater agency in shaping the future of AI and responding to the dilemmas it poses for global security and stability.