Hoping to begin trials in 2024
The US Department of Defense (DoD) is designing AI-based jet fighter piloting systems, which it says will begin live testing against humans in four years’ time, with a view to eventually phasing people out of some military aircraft.
In a speech at the department’s Artificial Intelligence Symposium, Secretary of Defense Mark Esper said that an AI program called Air Combat Evolution (ACE), based on an algorithm created by Heron Systems, will soon begin testing on actual fighter aircraft.
The system beat a human pilot in five consecutive virtual trials carried out by DARPA in August, leading Esper to conclude that it had “demonstrated the ability of advanced algorithms to outperform humans in virtual dogfights.”
Heron’s system, which had flown at least four billion simulated flights and engagements (the equivalent of 12 years of experience for a human pilot), also beat seven rival AI programs in the competition before facing the human operator; the company’s chief machine learning engineer, Ben Bell, credited the success to its use of deep reinforcement learning.
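To give a sense of the trial-and-error loop behind this approach: the sketch below is a minimal, hypothetical illustration of reinforcement learning, not Heron Systems’ actual system (which uses deep neural networks rather than the simple lookup table here). A toy “interceptor” on a one-dimensional track learns, over thousands of cheap simulated episodes, to close the distance to a target — the same learn-by-repetition principle, scaled down from billions of simulated flights to a few thousand.

```python
import random

random.seed(0)
ACTIONS = [-1, 1]  # move left or move right along the track

def step(rel_pos, action):
    """One simulated step: closing distance to the target is rewarded."""
    new_rel = max(-5, min(5, rel_pos - action))
    reward = 1.0 if abs(new_rel) < abs(rel_pos) else -1.0
    return new_rel, reward

# Q-table over (relative target position, action) pairs
q = {(s, a): 0.0 for s in range(-5, 6) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(2000):  # many inexpensive simulated "engagements"
    s = random.randint(-5, 5)
    for _ in range(10):
        if s == 0:  # target reached
            break
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2, r = step(s, a)
        # Standard Q-learning update toward reward plus discounted future value
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy always moves toward the target
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(-5, 6) if s != 0}
```

The point of the example is the training regime, not the algorithmic details: like ACE, the agent improves purely by accumulating simulated experience, with no hand-coded flight tactics.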
In his speech, the military chief insisted that DARPA wasn’t trying to replace human judgment and control in combat operations, but to augment them, calling the 2024 project “human-machine teaming.” This could mean several aircraft being operated remotely by a single pilot, or pilots taking their foot off the proverbial pedal and letting the algorithm take over.
He said: “We see AI as a tool to free up resources, time, and manpower so our people can focus on higher priority tasks, and arrive at the decision point, whether in a lab or on the battlefield, faster and more precise than the competition.”
Esper’s remarks were likely intended to temper expert concerns, such as those raised by Colonel Dan Javorsek, a program manager in DARPA’s Strategic Technology Office, who believes the system’s victory in the August simulation should be taken with a pinch of salt and doesn’t indicate that AI systems are superior to human pilots.
Comparing the ACE project to technologies being developed by China and Russia, and citing the threat posed by potential sales to other authoritarian states, Esper said the US was trying to make headway without putting at risk “individual liberty, democracy, human rights, and respect for the rule of law.”
He added that the US military was the world’s first to lay out ethical principles for the use of AI – which stipulate that any programs relying on AI systems must be responsible, equitable, traceable, reliable and governable.
While the above code is not legally enforceable, it has prompted the DoD to develop training programs for its personnel, including an intensive six-week course for pilots (the human ones) on AI in general and the ethics principles in particular.