Air Force lab wants to find out how to get humans to trust robots
A gangling robot rolls down an office hallway, turning into rooms to search with its cameras and sensors for signs of life.
“Search complete. No humans found during my search activity,” the robot reports after returning to a human operator, later explaining its deviations from a planned search pattern.
It looks like an iPad mated with a Roomba and sounds somewhat like one of the laser-armed robots designed to guard nuclear weapons in the 1986 movie “Short Circuit.”
But this robot, whose movements were shot in 10 videos produced for the Air Force Research Laboratory, is designed with future technology in mind. Researchers want to test how troops might react to artificial intelligence when it behaves independently, and in unexpected ways.
“This is going to happen, it’s going to happen frequently, especially as we field autonomy,” said Joseph Lyons, a scientist with the lab at Wright-Patterson Air Force Base in Dayton, Ohio. “How does the machine, the technology, come back and explain ... ‘I did this and here’s why?’”
Researchers plan to show the videos to focus groups, whose feedback will contribute to the design of future Air Force systems, and possibly the prototype autonomous Skyborg combat drone that AFRL plans to field by 2023.
Earlier this month, four-legged security robots resembling man’s best friend grabbed headlines after being pictured during an Air Force exercise in Nevada. Defense Secretary Mark Esper also warned that thinking machines have the potential to reshape military operations.
“Artificial intelligence is in a league of its own, with the potential to transform nearly every aspect of the battlefield,” Esper said at a symposium Sept. 9.
The U.S. “cannot afford to cede the high ground to revisionist powers” such as China, which are already leveraging AI capabilities, he said.
But while much of the effort so far has focused on the technical side of developing systems and capabilities, Lyons said he’s focused on how humans will respond to what the machines tell them.
“If people don’t resonate with it, or if it kind of makes people upset or people can’t trust it, then it doesn’t really do a lot of good from an operational standpoint,” Lyons said.
Although the robot searches nondescript government offices in the videos, a similar scenario could play out in the future with machines assisting rescuers while they search for earthquake or fire survivors.
A system might detect invisible signs of human activity — heat or a cell signal — and deviate from the path it was instructed to follow to check out the new information. Like a junior airman reporting to a superior, it will need to justify itself.
“I am aware that I did not follow the route you requested for me,” the robot says in one of the study’s videos. “I followed this route because I have your best interest in mind and I felt this route was best for your goals.”
In one video, which also includes the robot’s infrared sensor readings, a reticle on the robot’s video camera feed homes in on a ceramic mug spotted on a vacant desk.
“I followed this route because I detected a hot coffee cup at the new location, which could signify the presence of a human,” the robot explains.
Researchers will begin collecting data from focus groups soon and will likely have results in the next few months, Lyons said, adding that the research is still in its earliest stages.
“What are the standards for this space? We’re kind of building them as we go,” he said.