Fight Deepfakes with Cyberweapons and Sanctions, Experts Tell Congress
Social media companies and the federal government must help fight hyper-realistic misinformation, witnesses told the House Intelligence Committee.
Fighting the spread of malicious deepfakes will require a two-pronged effort by the government and the tech industry, and could involve the use of offensive cyberweapons, tech and national security experts told Congress.
Deepfakes—shockingly realistic but forged images, audio and videos generated by artificial intelligence—make it possible to depict someone doing things they never did or saying things they never said. While the tech can generate some entertaining content, it’s also becoming the latest tactic employed by foreign adversaries to spread misinformation around the globe.
If left unchecked, experts worry deepfakes could ultimately lead people to doubt what’s real and what’s not, which would have significant consequences for the political process, social discourse and national security. And as more manipulated media spreads across the internet, lawmakers are trying to figure out how to help the public separate fact from fiction.
“These tools are readily available and accessible to both experts and novices alike, meaning that attribution of a deepfake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge,” House Intelligence Committee Chairman Adam Schiff, D-Calif., said during a hearing on Thursday.
Witnesses were quick to note that like any tool, deepfakes aren’t inherently good or bad. Manipulating media for art or entertainment can be perfectly healthy for society, University of Maryland Law Professor Danielle Citron said, but when it’s used to deliberately harm an individual or spread misinformation, the government and tech industry need to step in.
Already, adversaries like China, Russia and Iran are experimenting with the tech to sow discord, and federal leaders need to come up with a plan to counter any misinformation campaigns directed their way, according to Clint Watts, a national security expert at the Foreign Policy Research Institute and the German Marshall Fund.
“We should already be building a battle drill, a response plan, for how we would handle [deepfakes]” directed at the 2020 election, as well as other national security targets, Watts said. For instance, he said, adversaries could use deepfakes to incite violence against American diplomats and military personnel based overseas.
When misinformation is detected, Watts said, federal agencies should first alert the public and correct the record, and then officials should launch a counterattack. In the case of nation-state attacks, Watts recommended hitting the perpetrators with sanctions even more sweeping than those levied against GRU hackers after the 2016 election.
“You can move down the chain of command such that hackers and influencers and propagandists don’t want to work at those firms because they could be individually sanctioned,” Watts said. Depending on the situation, he said, offensive cyberattacks would be an appropriate response.
“[It] would send a message out across the world, ‘if you’re pushing on us, there are options that we have,’” he said. “I do think the time for offensive cyber is at hand. If these foreign manipulators … actually knew we were going to respond in a very aggressive way, they would move away.”
While the government is best suited to deter potential attackers, tech companies have a responsibility to flag and demote the malicious content itself, witnesses said.
David Doermann, a University at Buffalo professor and former DARPA program director, said it’s already possible to analyze videos and images to determine whether they’re deepfakes, but it’s difficult to scale those efforts across an enormous platform like Facebook. He and other panelists said Congress should pressure social media companies to build a system for identifying and flagging deepfakes on their sites. The content doesn’t necessarily need to be removed, they said, but there should be some sort of disclaimer added to inform viewers that it’s not real.
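Doermann didn’t describe a specific implementation, but the flag-and-disclaim workflow the panel outlined can be pictured with a minimal sketch. Everything here is illustrative: the `classify_frame` detector stands in for a trained forensic model a platform would supply, and the one-frame-per-second sampling rate and 0.8 threshold are assumptions, not any company’s actual pipeline.

```python
# Sketch of a flag-and-disclaim pipeline: sample frames from an uploaded
# video, score each with a supplied deepfake detector, and attach a
# disclaimer rather than removing the post outright.
from typing import Callable, Optional

import cv2  # pip install opencv-python
import numpy as np


def score_video(path: str,
                classify_frame: Callable[[np.ndarray], float],
                sample_every: int = 30) -> float:
    """Average the per-frame 'probability of manipulation' over sampled frames."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()  # frame is a BGR numpy array
        if not ok:
            break
        if idx % sample_every == 0:  # roughly one frame per second at 30 fps
            scores.append(classify_frame(frame))
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0


def label_post(path: str,
               classify_frame: Callable[[np.ndarray], float],
               threshold: float = 0.8) -> dict:
    """Flag, don't delete: attach a disclaimer when the score is high."""
    score = score_video(path, classify_frame)
    disclaimer: Optional[str] = (
        "This video may contain synthetically manipulated media."
        if score >= threshold else None
    )
    return {"video": path,
            "manipulation_score": round(score, 3),
            "disclaimer": disclaimer}
```

The scaling problem Doermann flagged lives in the detector call: running even a cheap per-frame model over every upload on a Facebook-sized platform is what makes this hard in practice, not the flagging logic itself.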
Citron also floated the idea of amending Section 230 of the Communications Decency Act to incentivize platforms to more closely police malicious content. Today, the measure prevents platforms from being held legally liable for content generated by their users, but Citron suggested revisions that could compel companies to make a concerted effort to monitor their sites. Witnesses also proposed initiatives to build media literacy and teach Americans to better recognize fake content when they see it.
“We are already in a place where the public has deep distrust of the institutions at the heart of our democracy—you have an audience primed to believe things like manipulated videos of lawmakers,” Citron said. “There’s no silver bullet, but we need a combination of law, markets and really societal resilience to get through this.”