Virginia Tech's Team Valor semi-autonomous ESCHER (Electromechanical Series Compliant Humanoid for Emergency Response) robot uses LIDAR laser mapping to create a 3-D image of its surroundings during the second day of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge at the Fairplex June 6, 2015 in Pomona, California. Chip Somodevilla / Getty Images

The big AI research DARPA is funding this year

The Defense Department’s key research arm will experiment with ethical chatbots and new robot super pilots.

In its most recent budget request, the Defense Advanced Research Projects Agency, or DARPA, is asking for increased spending for a handful of key AI projects focused on human-machine teams, AI reasoning, and highly autonomous AIs that follow the Defense Department’s AI ethics principles. 

Perhaps the most important AI program DARPA is looking to fund in FY 2025 is the Rapid Experimental Missionized Autonomy, or REMA, which seeks to “enhance commercially available and stock military drones with a subsystem to enable autonomous operation.” In other words, it would give remotely piloted drones, purchased from any source, new powers to make decisions on their own. DARPA is asking for $13.8 million for the program this year, up from a request for $5 million last year, in order to “continue to develop software, integrate with other performers, test, refine, and retest REMA solution” through multiple development cycles.

Another program, called Autonomy Standards and Ideals with Military Operational Values, or ASIMOV, will test how well Defense Department autonomous weapons adhere to the department’s safety and ethics principles.

DARPA is requesting $22 million this year, up from $5 million last year, to test autonomous weapons software against complex scenarios involving ethical decisions.

A big theme among all the AI programs receiving first-time or increased funding this year is the idea of human-machine interaction, or “symbiosis,” as DARPA calls it. In one example, a new program called Access in AI and Human-Machine Symbiosis will seek to make chatbots “capable of realistic and positive dialog” and “initiate designs for [large, pre-trained models] supplemented with legal sources to propose legal actions to deter adversaries.” DARPA is requesting $13 million for it.

Another new program will study large language models similar to ChatGPT to better understand how well they perform abstract reasoning and “initiate development of techniques to enable transparent and logical communications between humans and AI models.” DARPA is requesting $9.5 million for that.

DARPA also wants to build on its success with AI pilots and is requesting nearly $41 million—nearly double the amount requested last year—for a program called Air Intelligence Reinforcements, or AIR. The program will look to “automate tactical control tasks transforming junior pilots from low-level tacticians into high-level mission commanders. For unpiloted platforms, AIR will enable vehicles to perform missions with minimal human oversight.”

Previous DARPA funding in AI has led to advancements that moved into the commercial world, such as Siri. So military advancements in the newest AI technologies could well shape the field for decades to come.