Tech Companies Are Deleting Evidence of War Crimes
Algorithms that take down “terrorist” videos could hamstring efforts to bring human-rights abusers to justice.
If grisly images stay up on Facebook or YouTube long enough, self-appointed detectives around the world sometimes use them to reconstruct a crime scene. In July 2017, a video capturing the execution of 18 people appeared on Facebook. The clip opened with a half-dozen armed men presiding over several rows of detainees. Dressed in bright-orange jumpsuits and black hoods, the captives knelt in the gravel, hands tied behind their backs. They never saw what was coming. The gunmen raised their weapons and fired, and the first row of victims crumpled to the earth. The executioners repeated this act four times, following the orders of a confident young man dressed in a black cap and camouflage trousers. If you slowed the video down frame by frame, you could see that his black T-shirt bore the logo of the Al-Saiqa Brigade, an elite unit of the Libyan National Army. That was clue No. 1: This happened in Libya.
Facebook took down the bloody video, whose source has yet to be conclusively determined, shortly after it surfaced. But it existed online long enough for copies to spread to other social-networking sites. Independently, human-rights activists, prosecutors, and other internet users in multiple countries scoured the clip for clues and soon established that the killings had occurred on the outskirts of Benghazi. The ringleader, these investigators concluded, was Mahmoud Mustafa Busayf al-Werfalli, an Al-Saiqa commander. Within a month, the International Criminal Court had charged Werfalli with the murder of 33 people in seven separate incidents—from June 2016 to the July 2017 killings that landed on Facebook. In the ICC arrest warrant, prosecutors relied heavily on digital evidence collected from social-media sites.
Werfalli has thus far evaded justice. But human-rights activists still hail the case as a breakthrough for a powerful new tool: online open-source investigations. Even in no-go combat zones, war crimes and other abuses often leave behind an information trail. By piecing together information that becomes publicly accessible on social media and other sites, internet users can hold the perpetrators accountable—that is, unless algorithms developed by the tech giants expunge the evidence first.
Shortly after the Werfalli arrest warrant was issued, Hadi Al Khatib, a Syrian-born open-source investigator based in Berlin, noticed something that distressed him: User-generated videos depicting firsthand accounts of the war in Syria were vanishing from the internet by the thousands. Khatib is the founder of the Syrian Archive, a collective of activists that, since 2014, has been scouring the internet for digital materials posted by people left behind in Syria’s war zone. The Syrian Archive’s aim is “to build a kind of visual documentation relating to human-rights violations and other crimes committed by all sides during the eight-year-old conflict,” Khatib said in an interview.
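What that documentation work looks like at its most basic can be sketched in a few lines of code. The snippet below is a minimal illustration, not the Syrian Archive’s actual tooling; the file name, URL, and output directory are hypothetical. The point is simply that preserving a clip means recording where it came from, when it was saved, and a cryptographic fingerprint, so a later copy can be matched against the original even after the source post disappears.

```python
# Minimal sketch of evidence preservation (hypothetical, not the Syrian
# Archive's real pipeline): hash a downloaded clip and write a JSON sidecar
# recording its provenance, so the copy remains verifiable after a takedown.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_record(video_path: str, source_url: str, out_dir: str = "archive") -> dict:
    """Fingerprint a saved clip and record where and when it was collected."""
    data = Path(video_path).read_bytes()
    record = {
        "source_url": source_url,                    # where the clip was posted
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),  # lets any copy be matched later
        "size_bytes": len(data),
    }
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    (out / f"{record['sha256']}.json").write_text(json.dumps(record, indent=2))
    return record

if __name__ == "__main__":
    # Hypothetical local copy of a clip pulled from a social-media post.
    print(archive_record("clip.mp4", "https://example.com/post/12345"))
```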
In the late summer of 2017, Khatib and his colleagues were systematically building a case against the regime of Bashar al-Assad in much the same way ICC investigators pursued Werfalli. They had amassed scores of citizen accounts, including video and photos that purportedly showed Assad’s forces targeting hospitals and medical clinics in bombing campaigns. “We were collecting, archiving, and geolocating evidence, doing all sorts of verification for the case,” Khatib recalled. “Then one day we noticed that all the videos that we had been going through, all of a sudden, all of them were gone.”
It wasn’t a sophisticated attack by pro-Assad hackers that wiped out their work. It was the ruthlessly efficient work of machine-learning algorithms deployed by the social networks themselves, particularly YouTube and Facebook.
With some reluctance, technology companies in Silicon Valley have taken on the role of prosecutors, judges, and juries in decisions about which words and images should be banished from the public’s sight. Lately, tech companies have become almost as skilled at muzzling speech as they are at enabling it. This hasn’t gone unnoticed by government entities that are keen to transform social networks into listening posts. Government, in effect, is “subcontracting” social-media platforms to be its eyes and ears on all kinds of content it deems objectionable, says Fionnuala Ní Aoláin, a law professor and special rapporteur for the United Nations Human Rights Council.
But some of what governments ask tech companies to do, such as suppressing violent content, cuts against other legitimate goals, such as bringing warlords and dictators to justice. Balancing these priorities is hard enough when humans are making judgments in accordance with established legal norms. In contrast, tech giants operate largely in the dark. They are governed by opaque terms-of-service policies that, more and more, are enforced by artificial-intelligence tools developed in-house with little to no input from the public. “We don’t even know what goes into the algorithms, what kind of in-built biases and structures there are,” Ní Aoláin said in an interview.
For years, social networks relied on users to flag objectionable content: hate speech, calls to arms, and other incitements to violence. But as that material spread from the fringes into plain view, pressure mounted on Facebook, YouTube, Twitter, and other popular social networks to automate the cleanup. They turned to machine learning, a powerful subset of artificial intelligence that can sift huge amounts of data with little to no oversight from human minders.
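To see why such systems behave as bluntly as investigators describe, consider a deliberately simplified sketch of an automated takedown filter: a text classifier trained to score posts as “extremist” or not. The platforms’ real systems and training data are proprietary; the examples, labels, and library choice below are assumptions made purely for illustration.

```python
# Toy content filter (illustrative only): a classifier trained on a handful of
# invented captions. Real platform systems train on millions of human-labeled
# posts, images, and video frames, and their details are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "join us and take up arms against the unbelievers",      # label 1: remove
    "glorious footage of our fighters executing prisoners",  # label 1: remove
    "cute cat video compilation",                            # label 0: keep
    "recipe for lentil soup",                                 # label 0: keep
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The filter has no notion of intent: a caption documenting an atrocity shares
# its vocabulary with propaganda celebrating one, so it tends to score high too.
evidence_caption = "video shows fighters executing prisoners outside Benghazi"
removal_score = model.predict_proba([evidence_caption])[0][1]
print(f"removal score: {removal_score:.2f}")  # likely above a takedown threshold
```

A model like this learns only which words co-occur with the “remove” label; it cannot tell a war-crimes researcher’s documentation from a militia’s recruitment reel, which is precisely the failure mode that worries groups like the Syrian Archive.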
Designed to identify and take down content posted by “extremists”—“extremists” as defined by software engineers—machine-learning software has become a potent catch-and-kill tool, leaving the world’s largest social networks far more sanitized than they were just a year ago. Google and Facebook break out the numbers in their quarterly transparency reports. Facebook removed 15 million pieces of content it deemed “terrorist propaganda” from October 2017 to September 2018. In the third quarter of 2018, machines performed 99.5 percent of Facebook’s “terrorist content” takedowns; just 0.5 percent of the purged material was reported by users first.
Those statistics are deeply troubling to open-source investigators, who complain that the machine-learning tools are black boxes. Few people, if any, in the human-rights world know how they’re programmed. Are these AI-powered vacuum cleaners able to discern that a video from Syria, Yemen, or Libya might be a valuable piece of evidence, something someone risked his or her life to post, and therefore worth preserving? YouTube, for one, says it’s working with human-rights experts to fine-tune its takedown procedures. But deeper discussions about the technology involved are rare.
“Companies are very loath to let civil society talk directly to engineers,” says Dia Kayyali, a technology-advocacy program manager at Witness, a human-rights organization that works with Khatib and the Syrian Archive. “It’s something that I’ve pushed for. A lot.”
These concerns are being drowned out by a counterargument, this one from governments, that tech companies should clamp down harder. Authoritarian countries routinely impose social-media blackouts during national crises, as Sri Lanka did after the Easter-morning terror bombings and as Venezuela did during the May 1 uprising. But politicians in healthy democracies are also pressing social networks for round-the-clock controls, in an effort to protect impressionable minds from violent content that could radicalize them; platforms that fail to comply could face hefty fines, and their executives could even face jail time. After the March 15 mosque massacre in Christchurch, New Zealand, was streamed live on Facebook, countries including New Zealand, Australia, and the United Kingdom passed or proposed sweeping new online-terror laws. New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron intend to up the ante at a summit next week, calling on tech executives and world leaders to band together to eliminate extremist content online.
A proposed European Union law, in the works for months, would require technology companies to pull down harmful user-generated material—whether words or images—that “incites or solicits the commission or contribution of terrorist offenses, or promotes the participation in activities of a terrorist group.” That standard is extraordinarily broad, and companies that fail to remove such posts within one hour face fines of up to 4 percent of global revenues.
Human-rights advocates worry about the decisions tech giants and their algorithms will make under such outside pressure. “The danger is that governments will often get the balance wrong,” argued Ní Aoláin. “But actually we have the methods and means to challenge governments when they do so. But private entities? We don’t have the legal processes. These are private companies. And the legal basis upon which they regulate their relationships with their users, whether they’re in conflict zones or not, is determined by [the company’s] terms of service. It’s neither transparent nor fair. Your recourse is quite limited.”
In July, she wrote an open letter to Facebook’s founder, Mark Zuckerberg, finding fault with how Facebook defines terrorism-related content, a key determination in what it decides to flag and take down. From what Ní Aoláin can tell, “they just came up with a definition for terrorism that bears no relationship to the global definition agreed by states, which I think is a very dangerous precedent. I made that very clear in my communications with them.”
When I asked Facebook to comment on Ní Aoláin’s complaint, a company spokesperson shared detailed minutes from a December content-standards forum. The minutes are a remarkable document, one that underscores the complexity of the judgments tech companies are being asked to make as they seek to monetize human interactions on a global scale. Is a terrorist organization one that “engages in premeditated acts of violence against persons or property,” or should the definition expand to include any non-state group that “engages in or advocates and lends substantial support” to “purposive and planned acts of violence”? “It would shock me,” one person at the meeting commented, “if in a year we don’t come back and say we need to refine this definition again.” (A company spokesperson said recently that there’s no update on the matter to announce.)
How the tech giants’ algorithms will implement these subtle standards is an open question. But a new crop of anti-terrorism bills, post-Christchurch, will thrust technology companies into an even more assertive enforcement role. Under the threat of massive fines, tech giants are likely to invest more in aggressive machine-learning content filters to suppress potentially objectionable material. All this will have a chilling effect on those who are trying to expose wrongdoing in war zones.
Khatib, at the Syrian Archive, said the rise of machine-learning algorithms has made his job far more difficult in recent months. But the push for more filters continues. (As a Brussels-based digital-rights lobbyist deadpanned in a separate conversation, “Filters are the new black, essentially.”) The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will see to it that nobody sees it. He fears the unintended consequences of such a law: in cracking down on content deemed off-limits in the West, it could make life even harder for people living in repressive societies or, worse, in war zones. Any further crackdown on what people can share online, he said, “would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.”
“On the ground in Syria,” he continued, “Assad is doing everything he can to make sure the physical evidence [of potential human-rights violations] is destroyed, and the digital evidence, too. The combination of all this—the filters, the machine-learning algorithms, and new laws—will make it harder for us to document what’s happening in closed societies.” That, he fears, is what dictators want.
This article is part of “The Speech Wars,” a project supported by the Charles Koch Foundation, the Reporters Committee for Freedom of the Press, and the Fetzer Institute.