New project wants AI to explain itself
DARPA's XAI project wants machines to be able to explain their reasoning to humans, which would help build trust in autonomous systems.
It’s no secret that the U.S. military sees artificial intelligence in its future, for everything from swarming drones to automated cybersecurity. But despite AI's clear potential for military applications, top Pentagon researchers have acknowledged that it still has significant limitations: machines can parse greater amounts of information more quickly than humans, but they still can’t think like humans.
But while machines can’t really understand the human mind, humans may be falling behind in understanding the machines they’ve created. That concern is part of what’s behind an effort by military researchers called Explainable Artificial Intelligence (XAI), which aims to create tools that let a human on the receiving end of information or a decision from an AI system understand the reasoning that produced it. In essence, the machine needs to explain its thinking.
“The problem of explainability is, to some extent, the result of AI’s success,” the Defense Advanced Research Projects Agency says in a solicitation for the project. Early AI systems followed recognizably logical patterns, DARPA notes, but their cost often outweighed their effectiveness.
More recent efforts employ techniques such as probabilistic graphical models, deep learning neural networks and other complex algorithms that have proved more effective but, because their models are built from the machines’ own internal representations, are far less explainable.
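The contrast is easy to see in a small sketch. The example below assumes Python with scikit-learn and the standard Iris dataset purely as illustrative stand-ins; DARPA's solicitation names no particular tools or techniques.

# Hypothetical illustration of the explainability gap described above.
# A shallow decision tree behaves like an early, rule-based AI system: its
# reasoning can be printed as human-readable rules. A neural network is
# typically more capable on hard problems, but its "reasoning" lives in
# weight matrices with no direct human-readable form.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_iris()
X, y = data.data, data.target

# Rule-like model: the learned decision path is directly inspectable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# Learned black box: accurate, but its internal representation is opaque,
# which is the gap that XAI tools would need to bridge for a human operator.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)
print("prediction for first sample:", net.predict(X[:1]))
print("hidden-layer weight shapes:", [w.shape for w in net.coefs_])

In this sketch the decision tree stands in for the “recognizably logical” early systems DARPA describes, while the network stands in for the newer, more effective but less explainable approaches.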
DARPA is aiming to help create a new generation of AI machines that can explain their machine learning-based reasoning to human users who depend on them. If successful, that would go a long way toward developing trust in man-machine systems, something the military services want to explore. The Air Force, for example, recently awarded SRA International a contract to focus specifically on the trust issues associated with autonomous systems.
The Defense Department has made man-machine teaming a key part of its Third Offset Strategy, designed to help DOD keep up with the ever-changing landscape of asymmetrical threats. Projects tied to the teaming concept include everything from autonomous air and sea vehicles (https://defensesystems.com/articles/2016/05/20/navy-unmanned-systems-future.aspx) to manufacturing.
At the core of successful projects with autonomous systems, as opposed to remotely controlled drones and other robotics, is trust in those systems’ ability to make decisions, and that trust in turn relies on the systems’ ability to explain themselves.
DARPA said the program will start in May 2017 and last four years. Proposers should submit abstracts by Sept. 1, and full proposals by Nov. 1.