Three Steps to Fight Online Disinformation and Extremism
The long-term solution recommended by experts is rarely discussed by pundits and politicians.
It is no exaggeration to say that the last few weeks have reshaped the landscape of information warfare. The online playing field, long tilted toward toxicity by algorithms and social-media executives devoted to keeping people clicking, has begun to be righted. The biggest superspreader of online lies has been deplatformed, tens of thousands of conspiracy theorists and extremists have seen their accounts purged, and the plug has been pulled on hubs of racism like Parler. These actions altered the online back-and-forth between truth and lies, separating the eras of “before Jan. 6” and “after.”
Yet all this has merely halted our nation’s dangerous tumble down the rabbit hole of mis- and disinformation. Next must come short-, medium-, and long-term efforts to help us climb out — and make sure that we don’t fall back in.
Near-term: change the message
It took a violent insurrection seeking to overturn an election, but social-media companies finally moved to close the pathways of disinformation. Now they are being beaten up for doing the right thing.
While there are many valid concerns about these companies’ power, much of the critique of what they did is actually a bad-faith effort to rewrite the narrative. It argues that somehow, in the wake of five deaths and one of the most shameful episodes in American political history, the loss of a Twitter account makes those who were banned the true “victims” of the story.
We must, of course, keep the firms’ feet to the fire to ensure that they continue to confront the problems on their networks. Already, far-right extremists and conspiracy theorists like QAnon are shifting how they message, changing the terms they use, and taking advantage of the leeway given to how certain groups are structured, all in an effort to sneak past detection. But in this critique, it is also important to debunk the increasingly common and inaccurate claims that the recent actions amount to Orwellian-style “censorship.”
Besides proving the rule that those who most frequently cite 1984 are the least apt to have read it, these claims reveal a misunderstanding or deliberate misportrayal of censorship in three key ways.
The first claim is that Trump was deplatformed merely for what he said about a stolen election. The reality is that it was not the lies alone, but their link to multiple past and future threats of violence. While his repeated lies about the election got him suspended (and would have been enough to ban him under the networks’ own rules), the companies finally acted after he went one step too far in the wake of the Capitol takeover. He was permanently banned after he went back on his stilted video promising a return to normal presidential behavior and instead declared that he would not participate in the Inaugural events and the broader “peaceful transition” as it is understood in our democratic norms and traditions. After the riot, and facing multiple specific linked threats, Twitter determined that his new series of tweets suggested to his followers, including those who had planned violence and were planning more, that their cause remained just and their field of action was clear.
The second is a basic misunderstanding of what free speech is and isn’t. There is not, nor has there ever been, 100% free speech in our democracy, online or even in your own home. We as a society have decided that certain kinds of speech violate our norms and laws, be it making child porn or inciting violence. The latter can take the form of what legal scholars call “dangerous speech”: something said by an influential speaker to a susceptible audience that makes mass violence more likely. Just as social-media firms have policed their networks for everything from child porn to pirated movies to terrorist beheadings, they are well within their rights to eject dangerous speech from their platforms, and indeed should be defended when they choose to do so.
The final is an often deliberate misinterpretation of the differing powers and responsibilities of a business versus those of the government. Private entities can choose to take (or not take) actions, even in public spaces, that government cannot. As a parallel way of thinking about it: in America, you have the right to buy and sell a gun or adult porn. But just because you have that right, neither the nicer shopping malls nor Amazon.com is obliged to let you do so in its marketplace. They decide based not just on the law, but on the law of the free market: what they believe is good for their profit margins and brand. Their ownership gives them great latitude over not just what is sold, but even what may be said in their spaces. To continue the example, as an American you can legally utter an obscenity or opine on any product. But Amazon has determined that, on its network, you can’t curse or review products you haven’t bought.
Medium-term: sift the data
Social-media firms’ recent policy shifts are likely to work on far-right extremism the way they did on ISIS: not by completely eradicating it or pushing it offline, but by forcing it into smaller and more covert spaces. The danger is still there, but the movement’s ability to recruit, coordinate, and drive events is drastically reduced.
There is a crucial difference, however, in how the two were treated. ISIS was a known and accepted evil, while far-right extremism was given a free pass by our politics and law for too long. The bad news for us is that there is now a huge number of “hide in plain sight” adherents of this ideology. The bad news for them is that much of what they thought was hidden is now out in the open.
Taking this challenge and opportunity to a new level is certainly the most epic fail in Nazi cybersecurity history, and maybe in all of cybersecurity history. When the various tech companies withdrew the underlying systems that let Parler operate, some 56 terabytes of data on its users made their way into the open, thanks to the site’s abysmal security and privacy practices.
The outcome is that while a key hub of extremists and conspiracy theorists was knocked offline, a huge swath of what they posted is now online. Text, videos, and photos are available in torrents that anyone can download, sometimes with associated identifiers, including even GPS coordinates. Some of it can be matched to military bases, police stations, individual patrol cars, even specific people.
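To make the point concrete: the coordinates at issue are often just the standard EXIF metadata that phones embed in photos and that Parler failed to strip. Below is a minimal, illustrative sketch of how anyone holding such a file could read those tags. It assumes the Pillow imaging library, and “photo.jpg” is a hypothetical file name, not one from the actual dataset.

```python
# Illustrative sketch: reading the GPS tags that phones embed in photo
# metadata (EXIF). Assumes the Pillow library; "photo.jpg" is hypothetical.
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_IFD = 0x8825  # standard EXIF tag pointing at the GPS sub-directory


def extract_gps(path: str):
    """Return (latitude, longitude) in decimal degrees, or None if absent."""
    exif = Image.open(path).getexif()
    gps_raw = exif.get_ifd(GPSINFO_IFD)
    if not gps_raw:
        return None
    # Map numeric tag IDs to readable names like "GPSLatitude"
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    def to_degrees(dms, ref):
        # EXIF stores each coordinate as (degrees, minutes, seconds) rationals
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    try:
        return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))
    except KeyError:  # file had a GPS block but not full coordinates
        return None


if __name__ == "__main__":
    print(extract_gps("photo.jpg"))  # e.g. (38.8899, -77.0091), near the U.S. Capitol
```

Nothing here is exotic, and that is the point: the ordinariness of such tooling is why researchers and reporters could map posts to locations so quickly.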
To be clear, holding a Parler account is neither illegal nor, by itself, “evidence of extremism and terrorist sympathies,” as some have wrongly claimed. So the issue is not whether someone was on Parler, but what they did there, including when they thought there would be no consequences. Groups ranging from researchers to reporters to law enforcement are only now going through the reams of information to find out. Unsurprisingly, they are finding many awful things that go well beyond felony-class participation in the violent takeover of the U.S. Capitol. (Already, one site has published every face captured in the 827 videos of the Capitol riot posted on Parler.)
Perhaps even more important than the what is the who. Among those posting racist or violent things were multiple members of the police, the military, and federal, state, and local government agencies. Their posts show betrayals of public trust, but also violations of professional codes, contracts, and even laws. Libertarianism arose around the same time as the laws of physics, so it may be apt that many users of the ultra-libertarian network are about to experience a Newtonian moment: actions have consequences.
Long-term: inoculate the system
For some years now, pundits and politicians have generally talked about “fixing” social media by changing either the legal code or the software code that governs it. But when the Carnegie Endowment for International Peace reviewed 85 proposals from 51 research and policy organizations, the most frequently recommended action was instead bolstering education in how to use it.
Consumers of online information — that is, all of us — must gain the skills to “learn to discern” between real and fake, manipulative and authentic. Building such “digital literacy” or “cyber citizenship” skills is how we build more resilience into the system. In public-health terms, they inoculate the target against viral online threats. And, importantly, they are topic-agnostic: such skills reduce our individual and societal vulnerability to everything from Russian info ops to COVID-19 anti-vaxxers.
Yet we have done far too little to instill these skills. Across America’s roughly 14,000 school systems, most schools lack digital literacy programs, while those that do have them have been largely left on their own to find and pay for teaching tools that work. The incoming Biden administration must recognize that “fixing” social media, and all the ills connected to it, demands more than traditional tech-policy approaches. Instead, national security policy and education policy now go hand in hand. To help teachers, kids, and parents, as well as our nation’s future, we need federal backing to create and support effective digital literacy programs.
Civil society shares some blame, too. It is notable that the most-recommended action is the part of the topic least funded by foundations and least worked on by think tanks and universities. Just as the platforms needed to right their tilted playing field, so too does the nonprofit sector need to right this imbalance.
In sum, the mess in which our democracy finds itself didn’t happen overnight. It played out over years and will take years to undo. Fortunately, like the Internet itself, the answers for what to do about it are all out there in the open.
P.W. Singer is a strategist at New America and the author of multiple books on technology and security, including Wired for War, Ghost Fleet, Burn-In, and LikeWar: The Weaponization of Social Media.