How to Detect Sarcasm with Artificial Intelligence
Humans make inferences about tone and meaning, but algorithms can find hidden relationships between words to detect irony and intentional falsehood.
A new AI tool funded in part by the U.S. military has proven adept at a task that has traditionally been very difficult for computer programs: detecting the human art of sarcasm. It could help intelligence officers or agencies better apply artificial intelligence to trend analysis by filtering out social media posts that aren’t serious.
Certain words, used in specific combinations, can be a reliable indicator of sarcasm in a social media post even when there isn’t much other context, two researchers from the University of Central Florida noted in a March paper in the journal Entropy.
Using a variety of datasets of posts from Twitter, Reddit, various dialogues and even headlines from The Onion, Ivan Garibay and his colleague Ramya Akula mapped out how some key words relate to other words. “For instance, words such as ‘just’, ‘again’, ‘totally’, ‘!’, have darker edges connecting them with every other word in a sentence. These are the words in the sentence that hint at sarcasm and, as expected, these receive higher attention than others,” they write.
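To give a sense of what such an attention map looks like, here is a toy sketch that builds a made-up word-to-word weight matrix for a short sarcastic sentence and plots it so heavier links appear as darker cells. The sentence, the cue words and every weight value are invented for illustration; nothing here comes from the paper's actual model.

```python
# Toy illustration only (not the paper's model): build a made-up word-to-word
# attention matrix in which cue words such as "just", "totally" and "!" get
# more weight, then plot it so the heavier links show up as darker cells.
import numpy as np
import matplotlib.pyplot as plt

tokens = ["I", "just", "totally", "love", "waiting", "!"]
cue_words = {"just", "totally", "!"}   # hypothetical sarcasm cues

# Every word sends more attention to cue words than to ordinary words.
raw = np.array([[3.0 if t in cue_words else 1.0 for t in tokens]] * len(tokens))
attention = raw / raw.sum(axis=1, keepdims=True)   # rows sum to 1

fig, ax = plt.subplots()
ax.imshow(attention, cmap="Greys")                 # darker cell = higher weight
ax.set_xticks(range(len(tokens)), labels=tokens)
ax.set_yticks(range(len(tokens)), labels=tokens)
ax.set_title("Toy attention map (illustrative values only)")
plt.tight_layout()
plt.show()
```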
The method relies on what the researchers refer to as a self-attention architecture, a way of training complex artificial intelligence programs called neural networks to give more weight to some words than to others, depending on what other words appear nearby and what the program is tasked to do.
“Attention is a mechanism to discover patterns in the input that are crucial for solving the given task. In deep learning, self-attention is an attention mechanism for sequences, which helps learn the task-specific relationship between different elements of a given sequence to produce a better sequence representation,” Garibay, one of the researchers, told Defense One. (The concept goes back to a 2016 paper by a German and a Canadian researcher.)
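For readers unfamiliar with the mechanism, the sketch below implements the standard scaled dot-product form of self-attention in plain NumPy. It is a generic textbook formulation with random, untrained weights, not the specific architecture described in the Entropy paper.

```python
# Minimal scaled dot-product self-attention in NumPy: a generic sketch of the
# mechanism Garibay describes, not the paper's exact architecture.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: learned projection matrices."""
    q = x @ w_q                      # queries
    k = x @ w_k                      # keys
    v = x @ w_v                      # values
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # how strongly each word attends to each other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v, weights      # new representations + attention map

# Toy usage: 6 tokens, 8-dimensional embeddings, random (untrained) projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
_, attn = self_attention(x, w_q, w_k, w_v)
print(attn.round(2))  # rows sum to 1; larger entries mark stronger word-to-word links
```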
Detecting sarcasm with algorithms may not seem to have much military relevance, but consider how much more time people spend online than just a few years ago. Also consider the growing role of open-source intelligence, like social media posts, in helping the military understand what’s happening in key areas where it might be operating. The work was supported by the Defense Advanced Research Projects Agency, or DARPA, through a program called Computational Simulation of Online Social Behavior. The program seeks a “deeper and more quantitative understanding of adversaries’ use of the global information environment than is currently possible using existing approaches.”
It’s not the first time researchers have tried to use machine learning or artificial intelligence to detect sarcasm in short pieces of text, like social media posts. But the new method improves on previous efforts, many of which trained algorithms to search for a narrow set of specific cues handpicked by the researchers, such as words suggesting particular emotions or even emojis. Those algorithms missed many instances of sarcasm that lacked those features.
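To make that limitation concrete, here is a deliberately naive cue-matching detector of the kind those earlier efforts resemble. The cue list is invented for illustration; any sarcastic post that lacks one of these surface markers slips straight past it.

```python
# Deliberately naive baseline: flag sarcasm only when a handpicked surface cue
# appears. The cue list is invented for illustration; anything subtler is missed.
HANDPICKED_CUES = {"yeah right", "totally", "oh great", "🙄"}

def looks_sarcastic(post: str) -> bool:
    text = post.lower()
    return any(cue in text for cue in HANDPICKED_CUES)

print(looks_sarcastic("Oh great, another Monday."))                 # True - cue present
print(looks_sarcastic("What a fantastic way to lose my luggage."))  # False - sarcasm missed
```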
Other methods used neural networks to find hidden relationships. These tend to perform better, Garibay said, but it’s impossible to tell how such a network reached its conclusion. The key advantage of the new technique, he said, is that it detects sarcasm as well as other neural networks while allowing the user to go back and see how the model arrived at its results, something intelligence officials have said is essential for using artificial intelligence in a national security context.
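As a rough sketch of what "going back to see how the model got there" can look like in practice, the snippet below pulls per-layer attention weights out of an off-the-shelf Hugging Face transformer and ranks the tokens by how much attention they receive. This is a generic interpretability pattern, not the authors' released code, and the pretrained model named here is just a stand-in.

```python
# Generic interpretability sketch (not the authors' code): run a pretrained
# transformer with attention outputs enabled and inspect which tokens draw
# the most attention. The model name is a stand-in, not the paper's model.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("I just totally love waiting in line!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
# outputs.attentions holds one (batch, heads, seq, seq) tensor per layer.
last_layer = outputs.attentions[-1][0]        # (heads, seq, seq)
avg_attention = last_layer.mean(dim=0)        # average over heads -> (seq, seq)

# How much attention each token receives, summed over all querying tokens.
received = avg_attention.sum(dim=0)
for tok, score in sorted(zip(tokens, received.tolist()), key=lambda p: -p[1]):
    print(f"{tok:>10s}  {score:.2f}")
```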
The next big challenge is handling ambiguities, colloquialisms, slang, “and coping with language evolution,” Garibay said.