Understanding the decision kill chain

Sebastian Moss

May 26, 2020

3 Min Read

Defense giant Lockheed Martin has teamed up with Canadian ‘explainable AI’ startup DarwinAI.

The strategic collaboration aims to improve Lockheed Martin’s customers’ understanding of, and visibility into, AI-based solutions.

Lockheed joins a growing list of DarwinAI customers that includes Audi, Intel, Nvidia, Honeywell, and Voyage.

Explain yourself

Explainable artificial intelligence is a rapidly developing field focused on making the decision-making process of AI-based systems understandable to humans. Neural networks are often perceived as ‘black boxes,’ producing conclusions that appear arbitrary without insight into how they were reached.

DarwinAI claims that its GenSynth Explain can help shine a light into the box, helping humans understand why an AI system is suggesting a course of action. 
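DarwinAI has not published the internals of GenSynth Explain, but a minimal sketch of one common explainability technique - gradient-based saliency - gives a sense of what such tools produce. The toy network and input below are placeholders for illustration only, not anything from DarwinAI or Lockheed Martin:

```python
# Illustrative sketch only: a generic gradient-saliency explanation.
# It answers the basic explainability question: which parts of the input
# drove the network's decision?
import torch
import torch.nn as nn

# A stand-in classifier (any trained network could take its place).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input "image"

scores = model(x)
predicted = scores.argmax(dim=1)

# Gradient of the winning class score with respect to the input pixels.
scores[0, predicted.item()].backward()

# Saliency map: large magnitudes mark the pixels that most influenced the
# decision, giving a human a first answer to "why did it pick this class?"
saliency = x.grad.abs().squeeze()
print(predicted.item(), saliency.shape)
```

Commercial tools layer far more on top of this, but the goal is the same: turn an opaque prediction into something a human reviewer can interrogate.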

“Explainability is a critical challenge in our industry,” Lee Ritholtz, director and chief architect of applied artificial intelligence at Lockheed Martin, said. “Understanding how a neural network makes its decisions is important in constructing robust AI solutions that our customers can trust.”

Sheldon Fernandez, DarwinAI’s CEO, added: “Negotiating AI’s black box problem in a practical, actionable manner is a key focus for us this year. Our collaboration with a leader in the aerospace industry such as Lockheed Martin underscores the importance of trustworthy AI solutions.”

Lockheed Martin is one of the world’s largest aerospace, information security, and tech companies.

The vast majority of its revenue comes from defense contracts, such as those for the F-35 Lightning II - the lynchpin of the Joint Strike Fighter program, the largest and most expensive military project in history, with a projected average annual cost of $12.5 billion and an estimated program lifecycle cost of up to $1.5 trillion.

Modern defense equipment is increasingly reliant on artificial intelligence, as evidenced by the massive $800m DoD AI contract awarded last week; understanding the decisions of AI systems is likely to become, quite literally, a matter of life or death.

Back in 2017, the US military's research agency, DARPA, launched a five-year initiative simply titled 'Explainable Artificial Intelligence (XAI).'

"The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems," program manager Dr. Matt Turek said in 2018.

"Explainable AI - especially explainable machine learning - will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners."

An IBM-backed study published earlier this year found that the DARPA project served as an important catalyst for explainable AI, but cautioned that a lot more progress was required.

One theoretical danger, when using a model-free approach, is that the explanations provided by the AI system "may no longer be true but rather be whatever users find to be satisfying," the authors said. Put simply, the AI would be lying to you.
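To make that concern concrete: a model-free explainer typically fits a simpler surrogate around the real model, and nothing forces that surrogate to be faithful. A rough, hypothetical sketch of a fidelity check - how often the ‘explanation’ agrees with the model it claims to describe - might look like this (the toy models here are assumptions for illustration, not taken from the study):

```python
# Hedged illustration of the "faithfulness" worry: a post-hoc surrogate can
# produce a tidy, satisfying story that diverges from what the model does.
# A basic sanity check is to measure how often the two agree.
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # The deployed model (internals unknown to the user).
    return (np.sin(3 * x[:, 0]) + x[:, 1] > 0).astype(int)

def surrogate(x):
    # A simple "explanation" model a post-hoc tool might fit.
    return (x[:, 0] + x[:, 1] > 0).astype(int)

samples = rng.uniform(-1, 1, size=(10_000, 2))
fidelity = np.mean(black_box(samples) == surrogate(samples))

# Low agreement means the neat explanation is not what the model actually does.
print(f"surrogate/model agreement: {fidelity:.1%}")
```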
