Fight Digital Authoritarianism by Giving People the Tools to Counter It
Defense leaders should not wait for the rest of the government to act.
The Defense Department has an opportunity to create fresh challenges for adversarial regimes such as Russia, China, and Iran, not by engaging or even preparing for armed conflict, but by investing in critical new technologies to enable global digital freedom.
Digital authoritarianism is the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations. Such tools track and censor internet activities, but they can also be used to restrict physical interactions—think of facial recognition and other technologies used to crack down on protests. Most broadly, technologies such as China’s social credit system can serve as a population-scale coercive mechanism.
Such tools clearly affect human rights, but they have also harmed U.S. national interests—and DOD interests in particular. Accordingly, while there is a broader ideological conflict at play, and norms should be shaped through diplomatic engagement, digital authoritarianism must also be addressed as a technical challenge to be countered through innovation and technology.
The harms
The growing effectiveness of technologies that enable digital authoritarianism makes it more challenging for U.S. forces to operate in environments and nations where those capabilities are present. Some tools enable internet and physical-world surveillance that could degrade DOD operational security. Others enable domestic influence and control operations that can shore up public support for a revisionist regime and embolden it to conduct similar operations against American audiences.
In many cases, the underlying technologies are produced by the commercial market that serves free as well as authoritarian countries. So long as the technical needs of U.S. adversaries’ surveillance states and the commercial market align, the former will continue to benefit from billions of dollars in research and development by the latter.
Authoritarian countries also create powerful economic incentives to improve surveillance, censorship, and control technologies—call it the surveillance-industrial complex. China, for example, is taking a decentralized approach to the development of surveillance and control technologies, even creating nationwide competitions to advance further modernization. Variants of technologies initially created to monitor minority populations in the Xinjiang region are now deployed in other parts of the country. In addition, China is a major exporter of repressive technology to many countries around the world. All of these activities reflect how substantially the Chinese have prioritized population-scale control.
Such tools can be countered by fostering technologies to provide privacy, guarantee digital/internet freedom, and promote publicly available tools to counter influence campaigns. This notion has been present in the cyber or broader national security policies of the Obama, Trump, and Biden administrations, and the United States has been a member of the Dutch-initiated Freedom Online Coalition since 2011. However, despite more than a decade of such stated intentions, there is currently no U.S. department or agency responsible for leading whole-of-government efforts to directly counter digital authoritarianism, nor is there a comprehensive government research strategy to counter digital authoritarianism.
Defense leaders should not wait for the rest of the government to act. In response to this gap, they should prioritize countering digital authoritarianism by refocusing research and development and posturing to engage in the fight against this threat. They should aim to create new levers to persistently and directly engage U.S. adversaries and to strengthen the department's advantages on the digital battlefield. To this end, they should also begin to establish working relationships with other government and non-government organizations that are working to counter digital authoritarianism. These include the U.S. State Department, U.S. Agency for Global Media (specifically the Open Technology Fund), USAID, National Endowment for Democracy, and digital rights non-governmental organizations. None of these organizations currently performs long-term, much less high-risk, research, so the DOD could help foster the kinds of revolutionary technical capabilities that they are not able to develop or obtain on their own. At the same time, building these ties must proceed with care so that these organizations preserve their independence from the DOD. Finally, defense leaders should also build a new strategy to address this critical shortfall in DOD capability, including bringing efforts that are currently siloed into a larger strategic framework.
New tools
Information-related technologies that achieve U.S. national security goals by affecting digital authoritarian technologies can be binned along a spectrum of effects. These capabilities would not necessarily be used by the DOD alone, but by a mix of DOD, other-government, and non-governmental partners. The potential effects include:
- Awareness effects to create knowledge of what threats may exist in a given environment and the effects and implications of those threats (e.g., specific risk to individuals or missions).
- Degradation effects to reduce the value of collected data, or the capability or capacity to process and exploit it.
- Denial effects to prevent the unwanted collection or exploitation of relevant data.
- Deception effects to enable the delivery or manipulation of data for misleading purposes.
- Defeat effects to neutralize a collection or processing capability or capacity.
Proposals for research and development efforts should specify what effects they aim to achieve and what ecosystem they would target. Is it a closely held adversary technology or part of the open ecosystem?
Also needed are new analytic tools to help understand what an adversary could achieve with various capabilities and data. For example, such tools could be used to assess the implications of information gleaned from the OPM hack, of adversary SIGINT or censorship, or of data centers established for population repression (e.g., the Integrated Joint Operations Platform in Xinjiang). In a sense, this is a complexity- and information-theoretic problem, but empirical benchmarks might also be derived to aid the effort. Complex adaptive systems theory could play a role as well. The bottom line: we need a formal, rigorous framework for reasoning about large-scale surveillance and censorship.
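One information-theoretic building block for such a framework is Shannon entropy over an anonymity set: how many bits of uncertainty does an observer still have about an individual's identity after fusing data sources? The sketch below is purely illustrative—the distributions are made up, and real assessments would model specific collection capabilities—but it shows the kind of quantity such a framework might compute.

```python
import math
from collections import Counter

def anonymity_bits(observations):
    """Shannon entropy (in bits) of the distribution an observer assigns
    over candidate identities -- one crude measure of how far a
    surveillance capability has narrowed an anonymity set."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Before data fusion: 8 equally likely candidate identities -> 3 bits.
uniform = list(range(8))

# After the adversary links, say, location and biometric records, the
# distribution collapses toward one identity -> far less anonymity.
skewed = [0] * 13 + [1, 2, 3]

print(anonymity_bits(uniform))  # 3.0
print(anonymity_bits(skewed))   # well under 1 bit remaining
```

A mature framework would go far beyond this, but even a simple entropy budget makes "how much does this data center actually buy the adversary?" a quantitative question.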
Algorithms and software are needed to create information-processing capabilities to enable repressed populations to use information technology (particularly internet-based systems) even if the adversary controls various components of that technology (e.g., mobile devices, internet architecture components). Such tools must be easily usable if they are to be widely adopted. It would also be useful to better understand the precise relationship between distributed or decentralized computing technologies and concrete resilience against an active cyber attacker.
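The canonical design pattern here is onion-style layered wrapping, the idea behind tools such as Tor: each relay can remove exactly one layer, so no single adversary-controlled component sees both the sender and the final message. The toy below illustrates only the layering structure—the XOR "cipher" built from hash-derived keystreams is not secure encryption, and the relay keys are invented for the sketch.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from a key. Toy construction --
    NOT a secure cipher; real systems use authenticated encryption."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    """Apply (or strip) one layer; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

message = b"meet at the square at noon"
relay_keys = [b"relay-1", b"relay-2", b"relay-3"]  # hypothetical relays

# The sender wraps the message once per relay in the path.
wrapped = message
for key in reversed(relay_keys):
    wrapped = xor_layer(key, wrapped)

# Each relay strips only its own layer; only the exit sees plaintext.
for key in relay_keys:
    wrapped = xor_layer(key, wrapped)

assert wrapped == message
```

In a real onion-routing design the layers are ordered authenticated ciphertexts, so a relay learns nothing but its immediate neighbors—that property, not the toy arithmetic above, is what resists an adversary who controls individual network components.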
Other research should aim to discover and counter AI-enabled surveillance. This would likely start with current research areas such as adversarial examples and AI classifier poisoning attacks. It will need to focus on one or more information ecosystems, such as the internet and its associated protocols and applications; media content (speech, text, etc.) on chat applications and websites; and biometric collection capabilities (e.g., facial recognition, voice recognition, and gait analysis).
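The core idea of an adversarial example can be shown on a toy model: perturb an input in the direction that most reduces the classifier's confidence. The sketch below uses a hypothetical linear "match" classifier with invented weights and a fast-gradient-sign-style step; real research targets deep networks for face or gait recognition, where the same principle applies.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    """Probability the toy model assigns to 'this is a match'."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

weights = [1.5, -2.0, 0.5]   # hypothetical trained model
bias = 0.1
x = [1.0, -0.5, 0.8]         # input the model confidently flags

# FGSM-style step: move each feature against the gradient of the match
# score. For a linear model, sign(dP/dx_i) is just sign(w_i).
eps = 0.9
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

p_clean = predict(weights, bias, x)
p_adv = predict(weights, bias, x_adv)
print(p_clean, p_adv)  # the perturbed input's match score drops sharply
```

Counter-surveillance research runs this logic in both directions: crafting such perturbations to protect individuals from hostile recognition systems, and hardening friendly systems against the same attacks.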
Research should also be dedicated to creating rapid counter-censorship messaging algorithms and software that discover adversary-censored topics and modify desired messages so they remain uncensored, disrupting adversary censorship efforts.
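A minimal sketch of that loop: test which terms a (simulated) keyword filter blocks, then rewrite only the flagged words using homoglyph substitution, a common evasion seen in practice. The blocklist and substitution table here are invented for illustration; real systems must infer the censor's behavior from live network measurements rather than query it directly.

```python
BLOCKLIST = {"protest", "strike"}  # stands in for an adversary's filter

def is_censored(text: str) -> bool:
    """Simulated censor: blocks any message containing a listed term."""
    return any(term in text.lower() for term in BLOCKLIST)

# Hypothetical substitutions: Latin letters swapped for visually
# similar Cyrillic ones, so the text looks the same to a human reader.
HOMOGLYPHS = {"o": "\u043e", "e": "\u0435", "i": "\u0456"}

def evade(text: str) -> str:
    """Rewrite only the words the censor flags, leaving the rest intact."""
    words = []
    for word in text.split():
        if is_censored(word):
            word = "".join(HOMOGLYPHS.get(ch, ch) for ch in word)
        words.append(word)
    return " ".join(words)

msg = "join the protest at noon"
safe = evade(msg)
print(is_censored(msg), is_censored(safe))  # True False
```

Deployed censors, of course, also normalize homoglyphs and use ML classifiers, which is why the paragraph above frames this as an ongoing algorithmic arms race rather than a solved problem.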
The views, opinions, and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Defense Department or the U.S. government.
Dr. Joshua Baron is a program manager in the Defense Advanced Research Projects Agency's Information Innovation Office, where he runs programs in cryptography, privacy, and anonymity. Before joining DARPA, Baron was an Information Scientist at the RAND Corporation, where he worked on cyberspace operations policy and emerging cybersecurity technology policy.