Social-Media Companies Are Scanning for Potential Terrorists — Islamic Ones, Anyway
Big platforms like Facebook and others have come a long way in detecting and preventing the spread of Islamic extremist content and tracking potential Muslim terrorists. Why aren’t they doing more about other kinds?
Following the politically motivated shooting in Pittsburgh and the mailing of pipe bombs to political officials and journalists across the country, public outcry has risen against social-media companies. Suspected pipe-bomber Cesar Sayoc and Pittsburgh shooting suspect Robert Bowers used various platforms to post content indicating their potential for ideological violence. Some have asked why social-media companies didn’t do more, sooner, to stop the threat.
It’s a question that Facebook, Twitter, YouTube, and others have faced before, going back to 2014 when the problem was content from extremists of a different sort: violent jihadist groups such as ISIS.
Since then, many social-media giants have developed technological and policy-based ways to help prevent extremist content from proliferating across their sites and even to help law enforcement better track potential violent actors. But those efforts were aimed at foreign Islamic extremists, not domestic threats.
Monika Bickert, Facebook’s head of product policy and counterterrorism, has helped her company move further along in this regard than some others. What role did Facebook play in the events that unfolded last week? A limited one: Robert Bowers, the charged Pittsburgh shooter, kept his threats and violent posts to a relatively obscure right-wing platform called Gab. Soon after he joined Gab in January, he began to post and spread images and content threatening Jews.
Cesar Sayoc had a Facebook profile that he used to advance conspiracy theories. He also threatened people on Twitter, such as political analyst Rochelle Ritchie, who reported the threats to the platform. Last weekend, Twitter apologized for failing to act sooner.
Although Sayoc had a small presence on Facebook, the company might still have had a lot of information about him. Facebook monitors extremist rhetoric and content—of the Islamist variety—on sites that aren’t Facebook. It employs contractors to watch extremist chat rooms and other places so they can be ready to identify and tag threatening language, images, and content on Facebook.
Erin Marie Saltman, a Facebook policy manager who oversees counterterrorism efforts in Europe, Africa, and the Middle East, disclosed this at the GLOBSEC security summit in May.
“There are a lot of people in other parts of the world that are not Facebook and not government,” Saltman said. “They are intel providers that sit and squat on a lot of these other sites and they tell us, in as close to real time as possible, when bad content is being released and so we know about it as soon as possible. So when the [Abū Bakr al-Baghdadi] speech was released a little while ago, and it wasn’t in video form, just audio, we were able to hash[tag] it before it started hitting our site.”
Predicting potentially violent behavior requires as much digitally collected data as possible, precisely the sort of data that intel vendors watching sites like Gab might notice. But when Defense One asked Facebook representatives whether they monitor sites like Gab for such content—or potential indicators of violence—they declined to say.
“As Erin mentioned, we work with intel and research firms who monitor many platforms, but we prefer not to disclose further details as bad actors actively work to circumvent our detection techniques,” a Facebook spokesperson said. “Since the bombing attempts, and the shooting in Pittsburgh, teams across our company have been monitoring developments in real time to understand both situations and how they relate to content on our site,” they added.
In 2011, J. Reid Meloy, a forensic psychologist and consultant to the FBI’s Behavioral Analysis Units at Quantico, identified eight behaviors that can predict lone-wolf attacks based on ideological extremism. Sayoc and Bowers exhibited several of them across multiple social-media sites. If social-media companies could search for these subtle behavioral indicators of a potentially dangerous person, such as fixation or obsession, alongside overtly troubling posts and comments such as direct threats, patterns could emerge that predict an individual’s behavior.
Cross-platform analysis of individuals' data residue is the basis of contemporary advertising microtargeting. It works to predict whether a person might be open to a specific product pitch, but it can also work to predict potentially harmful behavior. Facebook is already using AI to spot suicidal tendencies signaled by text patterns. The same algorithms could be applied to spot violent extremism, as could network analysis and even semantic text analysis. That information, coupled with the identification of violent messages or threats spread on other sites, could go a long way toward predicting and preventing violent behavior and the posting of extremist content. And it is being applied this way, but mostly to violent Islamist behavior and content.
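To make the idea concrete, here is a deliberately simplified sketch of cross-platform indicator aggregation. The indicator patterns, field names, and posts below are invented for illustration; a real system would rely on trained classifiers and far richer signals, not a handful of regular expressions.

```python
import re
from collections import defaultdict

# Hypothetical indicator patterns, invented for this example.
# A production system would use trained models, not keyword regexes.
INDICATORS = {
    "direct_threat": re.compile(r"\b(kill|bomb|shoot)\b", re.IGNORECASE),
    "fixation": re.compile(r"\b(always|every day|can't stop)\b", re.IGNORECASE),
}

def score_user(posts):
    """Aggregate indicator hits for one user across platforms.

    `posts` is a list of (platform, text) tuples; returns a dict
    mapping indicator name -> number of posts that matched it.
    """
    hits = defaultdict(int)
    for platform, text in posts:
        for name, pattern in INDICATORS.items():
            if pattern.search(text):
                hits[name] += 1
    return dict(hits)

# Example: the same user's posts collected from two different sites.
posts = [
    ("site_a", "I can't stop thinking about them."),
    ("site_b", "Someone should bomb that place."),
]
print(score_user(posts))
```

The point of the sketch is the aggregation step: no single post may look alarming, but counting indicator categories across platforms is what lets a pattern emerge.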
Consider the case of Demetrius Nathaniel Pitts, a Cleveland man recently charged with plotting a jihadist-inspired terrorist attack. Authorities monitored Pitts's Facebook posts carefully after he commented on a photo of an al-Qaida training camp. His posts exhorted Muslims to learn how to operate firearms, posts that law-enforcement officials described to USA Today as "disturbing." But pages urging non-Muslims (or people who are not explicitly Muslim) to own and practice with firearms are common on Facebook.
"We continually enforce our Community Standards through a combination of technology, reports from our community, and human review. This includes our hate speech policy that prohibits content that attacks people based on their race, ethnicity, national origin, religious affiliations, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability," said the spokesperson.
Following the public outcry against the proliferation of jihadist extremist messaging, Facebook and other sites tried a technique called hashing: essentially, marking Islamic extremist content so it could be recognized as individuals tried to spread it from one site to another. In 2016, Facebook executives led an effort to share data on hashed images across platforms.
“It creates the equivalent of a digital fingerprint so you can know when these things are coming up. We encourage that type of sharing, the hash sharing. Anybody using types of video, photo matching, would be able to use the hashes we are trying to share,” said Saltman.
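The "digital fingerprint" mechanism can be sketched in a few lines. This is a simplified illustration using an exact cryptographic hash (SHA-256) and an in-memory set standing in for the industry-shared database; production photo- and video-matching systems typically use perceptual hashing so that slightly altered copies of a file still match, which plain SHA-256 cannot do.

```python
import hashlib

# Stand-in for the cross-platform shared database of fingerprints
# of known extremist content; here, just an in-memory set.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying this exact file."""
    return hashlib.sha256(content).hexdigest()

def flag_known_content(content: bytes) -> bool:
    """True if an upload matches a previously shared fingerprint."""
    return fingerprint(content) in shared_hashes

# Platform A hashes a known extremist audio file and shares the hash.
shared_hashes.add(fingerprint(b"example-audio-bytes"))

# Platform B can now recognize the identical file on upload,
# without ever having seen the file itself before.
print(flag_known_content(b"example-audio-bytes"))  # True
print(flag_known_content(b"unrelated-bytes"))      # False
```

Note what is shared: only the digest, not the content, which is why Saltman can describe hash sharing as a "safe tech space" for platforms.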
Could hashed images and data from accounts like Bowers’s and Sayoc’s be relevant to law enforcement? Potentially, but the practice of hash sharing doesn’t involve the government, said Saltman. Instead, she said, the goal was to make a “safe tech space” for technology platforms to use whatever tools they saw fit.
“This is a by-industry, for-industry effort; it doesn’t include government or NGOs. It’s really so we can create a safe space so that some of these smaller platforms that are really scared about talking outside of industry—and admitting you have a problem is step one—can come together in a safe tech space and start operationalizing around some of this.”
In a conversation with New York Times reporters on Sunday, Gab founder Andrew Torba denied that he or any Gab employee should monitor content on the site. “Twitter and other platforms police ‘hate speech’ as long as it isn’t against President Trump, white people, Christians, or minorities who have walked away from the Democratic Party,” he wrote. “This double standard does not exist on Gab.”