Terrorists Are Going to Use Artificial Intelligence
Machine-learning technology is growing ever more accessible. Let’s not have a 9/11-style ‘failure of imagination’ about it.
Counterterrorism analysts tend to understate, rather than overstate, terrorists' technological adaptations. In 2011 and 2012, most believed that the "Arab Spring" revolutions would marginalize jihadist movements. But within four years, jihadists had attracted a record number of foreign fighters to the Syrian battlefield, in part by using the same social-media mobilization techniques that protesters had employed to challenge dictators like Zine El Abidine Ben Ali, Hosni Mubarak, and Muammar Qaddafi.
Militant groups later combined easy access to operatives via social media with new advances in encryption to create the "virtual planner" model of terrorism. This model lets online operatives provide services that were once the domain of physical networks, including recruitment, coordination of an attack's target and timing, and even technical assistance on topics like bomb-making.
Many analysts, myself included, brushed aside early concerns about the global diffusion of drone technology. The reason? We imagined that terrorists would use drones as we did, and believed that superior American airpower would blast theirs from the sky. But instead of trying to replicate the Predator, the Islamic State (ISIS) and other militant groups cleverly adapted smaller drones to their purposes. In the 2017 battle for Mosul, for example, ISIS dispatched small, agile consumer drones armed with grenades to harry the Iraqi forces assembled to retake the city.
These uses of social media, encryption, and drones illustrate a key pattern: As a consumer technology becomes widely available, terrorists will look for ways to adapt it. Artificial intelligence will almost certainly end up fitting into this pattern.
Like drones, AI will likely become much more widely available in commercial markets at reduced cost, and individuals will be able to modify and repurpose it. AI already has diverse applications, from Apple's Siri to voice-to-text transcription to Facebook's counter-extremism detection systems.
So how might terrorists use AI?
Perhaps they will start with social-network mapping. ISIS's early battlefield victories were enabled, in part, by ex-Baathist intelligence operatives who mapped a city's key players and power brokers, monitored their patterns of life, and then helped ISIS to arrest or kill them. When North African ISIS operatives attacked the Tunisian town of Ben Gardane in March 2016, the available evidence, including the efficient way they assassinated key security officials, suggested that the militants had similarly mapped the human terrain in advance. Will AI-driven social-network mapping reduce the intelligence burden on militant groups and make it easier for them to conquer towns and cities?
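To see how little specialized capability such mapping now requires, consider a minimal sketch using the open-source networkx graph library. The names and relationships below are invented for illustration, and the snippet is my own, not drawn from any actual militant practice: given nothing more than a list of observed ties, a few lines of off-the-shelf code rank the most structurally important people in a network.

```python
# A minimal sketch of commodity social-network mapping.
# The people and ties are invented for illustration;
# networkx is a freely available open-source graph library.
import networkx as nx

# Observed relationships between individuals
# (hypothetically gleaned from public social media).
observed_ties = [
    ("official_a", "broker_b"), ("official_a", "clerk_c"),
    ("broker_b", "merchant_d"), ("broker_b", "officer_e"),
    ("officer_e", "clerk_c"), ("merchant_d", "driver_f"),
    ("broker_b", "driver_f"),
]

graph = nx.Graph(observed_ties)

# Betweenness centrality scores how often a person sits on the
# shortest paths between others -- a standard proxy for "broker"
# or power-broker roles in a network.
centrality = nx.betweenness_centrality(graph)

# Rank individuals by structural importance.
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

The point is not that this snippet is dangerous in itself; social-network analysis is a staple of academic research and of counterterrorism work. The point is that the analytical labor those ex-Baathist officers performed by hand is increasingly a commodity.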
What of the next generation of terror drones? Will they use AI-enabled swarming to become more powerful and deadly? Or think bigger: Will terrorists use self-driving vehicles for their next car bombs and ramming attacks? How about assassinations?
In his book Life 3.0, Max Tegmark recounts the worry of UC Berkeley computer scientist Stuart Russell that the biggest winners from an AI arms race would be "small rogue states and non-state actors such as terrorists," who can access these weapons through the black market. Tegmark writes that after they are "mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone." Would-be assassins could simply "upload their target's photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible."
Thinking beyond trigger-pulling, artificial intelligence could boost a wide range of violent non-state actors' criminal activities, including extortion and kidnapping, by automating social-engineering attacks. The militant recruiters of the near future may augment their online radicalization efforts with chatbots like those that reportedly played a "small but strategic role" in shaping the Brexit vote.
The 9/11 Commission famously devoted a section of its report to how the attacks' success represented, in part, a failure of imagination by authorities. In recent years, we have seen repeated failures of imagination as analysts tried to discern what terrorists would do with emerging technologies. A failure of imagination as artificial intelligence becomes cheaper and more widely available could be costlier still.