When Military Robots Can Predict Your Next Move
New research may enable robotic armed guards — or just help self-driving cars get through a four-way stop.
Years from now, when a robot outdraws you in a gunfight, a 2015 algorithm may be the reason why.
The algorithm, developed by two University of Illinois researchers, opens the door to software that can guess where a person is headed, whether reaching for a gun or steering a car into an armored gate, milliseconds before the act plays out. The researchers, Justin Horowitz and James Patton, undertook the work under a National Institutes of Health grant and described it in “I Meant to Do That: Determining the Intentions of Action in the Face of Disturbances,” published in the journal PLOS ONE. The idea was to help robots help humans: taking the steering wheel when a driver makes a bad decision, or activating an exoskeleton when a patient with a weak arm reaches for an object. But the algorithm, broadly speaking, might also help fly a plane or anticipate the next move of a suicide bomber or gunman.
“Imagine that a terrorist runs toward a crowd of VIPs with a bomb strapped to their chest, but they're tackled before they can succeed. This technology (alongside a good deal of supporting technology) would be able to determine who within the crowd they were aiming for,” Horowitz wrote in an email to Defense One.
“We want to temper what we say about security because it has not been tested and would require some restrictions on what needs to be known, but we have also thought about applying this to dynamic security situations. An example might be understanding what a person intends to do,” Patton added.
To test the algorithm, the researchers gave a joystick to five men and three women between the ages of 24 and 30. The subjects reached out with the joystick 730 times under various conditions, including some that obstructed their motion. The tests showed that the algorithm could infer, within tenths of a second, where a subject was headed.
“It doesn't predict the person's goal. It recovers the entire path the person would have taken either as it happens or after the fact. Because it's a path, there's no right/wrong. There's just an estimate and some uncertainty. We had to compare one group of paths (undisturbed reaches) with another (intent during disturbance). The stats couldn't tell them apart for some people, but five of the subjects' intents differed from their undisturbed movements about 150ms after the onset of disturbance,” said Horowitz.
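Horowitz's description suggests a model-based subtraction: if you know the force that knocked a movement off course and you have a model of the limb, you can simulate what the disturbance alone would have done and remove that effect from what you actually observed. The sketch below illustrates that logic in Python with a hypothetical point-mass model; the mass, damping, and force numbers are placeholders, not values from the study.

```python
import numpy as np

# Minimal sketch of intent recovery from a disturbed reaching movement.
# Assumptions (not from the article): the hand is modeled as a point mass
# with known mass and damping, the disturbance force is measured, and the
# intended path is estimated by subtracting the simulated effect of that
# disturbance from the observed path.

DT = 0.001          # time step (s)
MASS = 1.0          # hypothetical effective hand/arm mass (kg)
DAMPING = 5.0       # hypothetical viscous damping (N*s/m)

def disturbance_deviation(disturbance_force):
    """Forward-simulate how the known disturbance alone would push the
    point-mass model off course, starting from rest."""
    pos = np.zeros(2)
    vel = np.zeros(2)
    deviation = []
    for f in disturbance_force:              # f is a 2D force sample (N)
        acc = (f - DAMPING * vel) / MASS
        vel = vel + acc * DT
        pos = pos + vel * DT
        deviation.append(pos.copy())
    return np.array(deviation)

def recover_intended_path(observed_path, disturbance_force):
    """Estimate the path the person meant to take: observed motion minus
    the model-predicted effect of the disturbance."""
    return observed_path - disturbance_deviation(disturbance_force)

# Example: a straight intended reach, pushed sideways halfway through.
t = np.arange(0, 1.0, DT)
intended = np.stack([0.3 * t, np.zeros_like(t)], axis=1)   # 30 cm reach
force = np.zeros((len(t), 2))
force[len(t) // 2:, 1] = 2.0                               # 2 N sideways push
observed = intended + disturbance_deviation(force)          # what a sensor would see
estimate = recover_intended_path(observed, force)
print("max recovery error (m):", np.abs(estimate - intended).max())
```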
What could you use this for? If you’re Russia, which has deployed armed ground robots to monitor missile bases, your security droids just became a lot better at discerning whether they should shoot, and at whom. If the U.S. ever puts armed robots on the ground, something the Defense Department is not currently inclined to do because of the danger to friendly forces, the algorithm could make those robots safer and more capable.
The Pentagon is also on the lookout for software that could help fly aircraft, such as DARPA’s Aircrew Labor In-Cockpit Automation System, or ALIAS.
Of course, we need machines to infer human action and movement for reasons beyond helping them not to shoot us. Inferring intent is one of the thorniest problems facing modern roboticists in healthcare, autonomous driving, and beyond.
Sebastian Thrun, the brain behind Google’s self-driving car program and the team that won DARPA’s 2005 Grand Challenge, has spoken about this problem at length. Currently, Google’s robot cars rely on the company’s extensive map data.
“Everything we do uses maps,” he told a group of artificial intelligence researchers and journalists in 2012. “People do this too. They memorize things. Horses do it better; but none of these memorize in 3D the way self-driving cars do.”
Self-driving cars, which draw upon data from the cloud and 3D topographical maps 64 layers deep, have a lot more information to bring to the task of driving than do we, with our limited human brains. Yet we can outperform artificial intelligence in our ability to process what Thrun calls “momentary perceptual input.”
Four-way stops can stump a self-driving car; they stumped Google’s back in 2012. There are strict rules for who should go first at such an intersection, but in practice a four-way stop among human drivers is a game of chicken. Drivers decide who goes on the basis of very subtle, somewhat primal signals about strength, weakness, and intent. To handle it, Google’s car had to learn to nudge forward when faced with the prospect of being cut off.
“It has to be assertive,” said Thrun. “The only way to get right-of-way in San Francisco is to go.”
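What “assertive” might look like in code could be as simple as a creep-and-commit rule: wait if someone is already in the intersection, edge forward to signal intent, and go once no one contests the gap. The toy policy below is purely illustrative; the thresholds and the Observation fields are invented here, not Google’s actual logic.

```python
from dataclasses import dataclass

# Purely illustrative creep-and-commit policy for a four-way stop.
# None of this reflects Google's real system; the thresholds are made up.

@dataclass
class Observation:
    other_car_moving: bool      # is another stopped car starting to roll?
    my_wait_time: float         # seconds since we reached the stop line
    intersection_clear: bool    # no one is already inside the intersection

def choose_action(obs: Observation) -> str:
    """Return 'wait', 'creep', or 'go' based on simple assertiveness rules."""
    if not obs.intersection_clear:
        return "wait"
    if obs.other_car_moving and obs.my_wait_time < 2.0:
        return "wait"           # yield if another driver has already committed
    if obs.my_wait_time < 1.0:
        return "creep"          # nudge forward to signal intent
    return "go"                 # claim the right-of-way

print(choose_action(Observation(other_car_moving=False,
                                my_wait_time=1.5,
                                intersection_clear=True)))   # -> "go"
```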
When robots can read your intent the way humans spent millions of years evolving to do, they may save your life, or at least give you the right-of-way.