Twitter Bots Are Becoming More Human-Like: Study
In 2016, they were mostly retweeters on timers. Now they’re gathering intelligence.
Even as humans get better at recognizing bots (social media personas that are just software disguised as people), the bots themselves are growing more sophisticated and human-like. A new study by researchers at the University of Southern California tracks how, and suggests ramifications for public opinion and the 2020 election.
The study, published in the journal First Monday, looked at 244,699 Twitter accounts that tweeted about politics or the election in both 2016 and 2018. Using Indiana University’s Botometer tool, the researchers determined that 12.6 percent — about 31,000 accounts — were bots, a percentage that aligns with previous research.
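The paper does not reproduce its classification pipeline, but a minimal sketch of that kind of check, using the publicly available botometer Python client, might look like the following. The account handles, placeholder credentials, and the 0.5 cutoff are illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch (not the study's pipeline): score accounts with the public
# Botometer API via the `botometer` Python client and flag accounts whose
# overall bot score exceeds a chosen threshold.
import botometer

# Placeholder credentials: Botometer is served through RapidAPI and also
# needs Twitter app credentials for account lookups.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

accounts = ["@example_account_1", "@example_account_2"]  # hypothetical handles
likely_bots = []

for screen_name, result in bom.check_accounts_in(accounts):
    # The response includes a "complete automation probability" (CAP);
    # 0.5 is an arbitrary illustrative cutoff, not the paper's threshold.
    cap = result.get("cap", {}).get("universal", 0.0)
    if cap >= 0.5:
        likely_bots.append(screen_name)

print(f"{len(likely_bots)} of {len(accounts)} accounts flagged as likely bots")
```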
A look at the bots’ tweets showed that most of their 2016 activity was, well, bot-like, as in rhythmically mechanical and largely composed of retweets. But in 2018, “bots better aligned with humans’ activity trends, suggesting the hypothesis that some bots have grown more sophisticated.” Moreover, the bots did a lot less retweeting.
But so did humans, the researchers found: “Human users significantly increased the volume of replies, which denotes a growing propensity of humans in discussing (either positively and negatively) their ideas instead of simply re-sharing content generated by other users.”
Bots are bad at replies, which require generating original, responsive text rather than recycled content. To compensate, the bots shifted toward more interactive posts, such as polls and questions, that seek information about their followers, according to the study.
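The study's analysis code is not reproduced here, but the kind of activity-mix comparison it describes (how much of each group's output is retweets, replies, or interactive posts, and how that shifts between 2016 and 2018) can be sketched in a few lines of pandas. The column names and sample rows below are invented for illustration.

```python
# Hypothetical sketch: given tweets labeled by account type, year, and tweet
# type, compute each group's activity mix per year.
import pandas as pd

tweets = pd.DataFrame({
    "account_type": ["bot", "bot", "human", "human", "bot", "human"],
    "year":         [2016,  2018,  2016,    2018,    2018,  2018],
    "tweet_type":   ["retweet", "poll", "retweet", "reply", "question", "reply"],
})

# Share of each tweet type within every (account_type, year) group.
mix = (
    tweets
    .groupby(["account_type", "year"])["tweet_type"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)
print(mix)
```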
"Our study further corroborates this idea that there is an arms race between bots and detection algorithms,” wrote lead researcher Emilio Ferrara in a statement. “As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences."
The research was supported, in part, by the U.S. Air Force’s Office of Scientific Research.
Government officials on Thursday also addressed disinformation, and election security generally, at the INSA Summit in Maryland. U.S. adversaries are still trying to manipulate online information, they said, but both the government and the public are better able to respond to and counter such efforts as 2020 approaches.
“That is one major difference, is the awareness piece of it,” said Sujit Raman, an associate deputy attorney general at the U.S. Department of Justice. “All of us are much more aware of what adversaries are trying to do.”
Local, state, and federal agencies are “infinitely” more aware, as are private firms, including social-media companies, of what’s going on in networks and how to share cyber threat indicators, Raman said. “We are in a very different position, I would say a much more positive position, than we were in four years ago.”