Chinese Sub Commanders May Get AI Help for Decision-Making
But can a recent news report be taken at face value? A CNAS fellow unpacks the intersection of Chinese tech, messaging, and naval power.
What can we learn from a recent news report that China is seeking to develop a nuclear submarine with “AI-augmented brainpower” to give the PLA Navy an “upper hand in battle”?
A February 4 piece in the South China Morning Post quotes a “senior scientist involved with the programme” as saying there is a project underway to update the computer systems on PLAN nuclear submarines with an AI decision-support system that has “its own thoughts,” reducing commanding officers’ workload and mental burden. The article describes plans for AI to take on “thinking” functions aboard nuclear subs, which could include, at a basic level, using convolutional neural networks to interpret and classify the signals picked up by sonar.
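The article offers no technical detail about how such a system would work, but the basic approach it gestures at – a convolutional neural network that classifies sonar contacts – can be sketched simply. The following is a minimal, purely illustrative sketch in Python with PyTorch, assuming log-mel spectrogram inputs and a handful of hypothetical contact classes; it reflects a generic, openly published technique, not any actual PLAN system.

```python
# Minimal sketch of a convolutional network for passive-sonar
# classification. Assumes spectrogram input of shape (1, 128, 128)
# and four hypothetical contact classes. Illustrative only.
import torch
import torch.nn as nn

class SonarCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # low-level spectral edges
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # tonal/harmonic patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, num_classes),        # scores per contact class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify one spectrogram frame into hypothetical classes
# such as {surface ship, submarine, biologic, ambient noise}.
model = SonarCNN()
spectrogram = torch.randn(1, 1, 128, 128)  # batch of one, single channel
probs = model(spectrogram).softmax(dim=-1)
print(probs)
```

In practice, a classifier of this kind would be only one component of any decision-support system; the harder problems lie in training data, noisy ocean environments, and integration with a submarine’s combat systems.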
Given the sensitivity of such a project, it is notable that a researcher working on the program is apparently discussing these issues with an English-language, Hong Kong-based newspaper owned by Chinese tech giant Alibaba. That alone suggests that the powers that be in Beijing intend such a story to receive attention. The release of this information should be considered critically; it might be characterized as a deliberate, perhaps ‘deterrent,’ signal of China’s advances, as ‘technological propaganda’ that hypes and overstates current research and development, or as both. Any analysis based on such sourcing is necessarily difficult to confirm and must thus be heavily caveated.
Nonetheless, there is at least a basic consistency between the article as reported and the apparent direction of China’s pursuit of military applications of AI, which has emerged as a top priority in PLA defense innovation. Certain known lines of Chinese effort also make the piece seem plausible, including advances in submarine development by the China Shipbuilding Industry Corporation (CSIC). At a basic level, the application of machine learning to acoustic signal processing has been an active area of research in China for a number of years. As such, it seems feasible, even unsurprising, that the PLA would look to use machine learning to help sub crews and their commanders interpret the scarce, complex information available in the undersea domain. “In the past, the technology was too distant from application, but recently a lot of progress has been achieved,” one researcher at the Institute of Acoustics of the Chinese Academy of Sciences told the SCMP. “There seems to be hope around the corner.”
As China continues to develop more advanced nuclear-powered and nuclear-armed submarines, the PLAN will likely remain focused on new concepts and capabilities for this force. For instance, according to Wu Chongjian (吴崇建), a chief submarine designer at CSIC, China’s next-generation conventional submarines could leverage quantum communications, quantum navigation, and intelligent unmanned vehicle technologies. Concurrently, the PLAN is also pursuing the development and deployment of unmanned underwater vehicles, such as the Sea Wing (海翼), which could support submarines engaged in military missions. In the future, the PLAN might seek to use UUVs in conjunction with submarines in an attempt to advance its anti-submarine warfare capabilities and shift the undersea balance. In this context, as the deep-sea battlespace becomes even more complex and contested, the use of AI to support commanders – initially for acoustic signal processing and underwater target recognition, and perhaps for more direct decision support as the technology matures – seems a plausible, and potentially quite impactful, application.
However, the potential existence of such a PLA program also raises critical questions. The SCMP article does not clarify whether these future AI systems would be used only on nuclear-powered attack submarines (SSNs) or also on nuclear-armed ballistic-missile submarines (SSBNs), such as the Type 096 that is under development. Rather sensationally, the Chinese Academy of Sciences researcher quoted in the piece goes on to say, “If the [AI] system started to have its own way of thinking, we may have a runaway submarine with enough nuclear arsenals to destroy a continent.” Certainly, it is too soon to be alarmed that the PLA might intend to put “superintelligence” on nuclear subs or unleash ‘killer AI with nukes’ upon the world. However, this ambiguity raises the question of whether, and under what conditions, the PLA might decide to use AI in ISR or decision-support systems that directly support its nuclear arsenal, whether under the control of the PLA Rocket Force or its future SSBN fleet. This lack of transparency, and the resulting uncertainty, is concerning given the potential impact of AI on cyber, nuclear, and strategic stability.
Although there have also been concerns that the PLA – and other authoritarian militaries disinclined to trust human personnel – may choose to take humans entirely “out of the loop,” that does not seem especially likely in this scenario. It is true that PLA writings and statements on these issues do not display the visceral negative reaction that U.S. commanders seem to have to the notion of doing so. Certain PLA strategists have also speculated about the potential for a “singularity” on the future battlefield, a point at which the human mind simply cannot keep pace with the speed and complexity of combat, necessitating that AI agents take on greater responsibility in command. In this case, however, the unnamed researcher reportedly emphasized, “There must be a human hand on every critical post. This is for safety redundancy.” For the time being, keeping at least a basic level of human involvement seems to be the most practical and effective option. However, that alone is no guarantee of safety.
As the PLA seeks to use AI to improve its C4ISR capabilities, there is a risk that it may rely too heavily upon, or overestimate, the supposed superiority of machine intelligence and judgment over that of humans. Although highly automated systems might seem, at a superficial level, to promise to lessen the burdens upon commanders, past experience – including with the Patriot air and missile defense system – has demonstrated that such complex systems can in fact create greater challenges for their operators, necessitating a nuanced understanding of their advantages and limitations, often through specialized training. In addition, “automation bias” can compromise decision-making when humans start to rely too heavily on automated systems at the expense of their own judgment.
Inherently, the employment of AI on the future battlefield will create new and unexpected operational risks, likely including malfunction, adversarial interference, and unexpected emergent behaviors. The unnamed scientist quoted in the piece reportedly emphasized, “What the military cares most about is not fancy features. What they care most is the thing does not screw up amid the heat of a battle.” However, that may be much easier said than done, which raises the question of how, and to what extent, such systems would be tested for safety and assurance. At this stage in its development, AI remains brittle and highly vulnerable to spoofing or manipulation. If the PLA chooses to introduce AI systems into its conventional submarines, let alone its nuclear submarines, there will inevitably be not only new capabilities but also new risks. As major militaries start to rely more upon AI systems, this will also place a premium upon the development of “counter-AI” capabilities to disrupt them.
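To make “brittle and vulnerable to spoofing” concrete, consider the fast gradient sign method (FGSM), a standard, openly published demonstration of adversarial manipulation: a tiny, carefully chosen perturbation to an input can flip a classifier’s output. The sketch below, again in Python with PyTorch, uses a hypothetical stand-in classifier and assumes white-box access to the model’s gradients; it illustrates the generic vulnerability, not any specific military system.

```python
# Minimal sketch of the fast gradient sign method (FGSM), the classic
# illustration of neural networks' vulnerability to adversarial
# perturbations. Hypothetical stand-in classifier and random input;
# white-box access to gradients is assumed.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 4))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

spectrogram = torch.randn(1, 1, 128, 128, requires_grad=True)
true_label = torch.tensor([2])

# Compute the gradient of the loss with respect to the input itself.
loss = loss_fn(model(spectrogram), true_label)
loss.backward()

# Nudge every input value a small step (epsilon) in the direction that
# increases the loss; the perturbation can be nearly imperceptible yet
# is chosen precisely to push the model toward misclassification.
epsilon = 0.05
adversarial = spectrogram + epsilon * spectrogram.grad.sign()

print(model(spectrogram).argmax().item(), model(adversarial).argmax().item())
```

An adversary able to shape what a sensor “hears” could, in principle, exploit exactly this kind of weakness, which is why testing for safety and assurance is so consequential.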
And even though the veracity of the SCMP’s account cannot be verified at this point, it is clear that the PLA is prioritizing the pursuit of decision superiority through AI technologies. PLA strategists have recognized – particularly since AlphaGo’s defeat of Lee Sedol in the spring of 2016 – that AI could confer a critical advantage through its ability to devise tactics and stratagems that even the most talented humans cannot equal. In particular, AlphaGo triumphed over Lee Sedol – and later over Chinese champion Ke Jie – through its capability to evaluate vast numbers of potential options and trajectories in the game of Go, which PLA thinkers see as at least roughly analogous to warfare, and to formulate moves that can be novel, even superior to those humans have invented in thousands of years of playing the game. Since then, AlphaGo Zero has demonstrated even more astonishing capabilities, defeating the original AlphaGo by 100 games to none. Although the battlefield is considerably more complex than the game, the PLA seems to aspire to create an ‘AlphaGo for warfare’ that might support commanders – or, some have speculated, perhaps someday even replace them.
While the PLA’s pursuit of decision-support systems is not new, its capability to develop “intelligentized” (智能化) command decision-making may advance considerably with today’s rapid progress in AI technologies. Indeed, this pursuit appears to be a high-level priority, highlighted even in an authoritative article authored by the Central Military Commission Joint Staff Department. Beyond submarines, the PLA also appears to be developing systems to augment command decision-making and, at the tactical level, to support the pilots of fighter jets. The use of AI to enhance ISR – whether for video and imagery intelligence, as in Project Maven, or for acoustic signal processing aboard submarines, as the PLA seems to intend – is likely to be among the earliest and most impactful applications. In the future, with the advent of “intelligentized” warfare, the capability to leverage AI-enabled support to command decision-making could become critical to achieving decision superiority and dominance, in the deep seas and beyond.