Releasing an open framework to make AI more secure

Louis Stone, Reporter

October 23, 2020

Microsoft is teaming up with non-profit MITRE and a host of tech businesses to minimize threats to machine learning systems.

The group has released the industry-focused, open-source Adversarial ML Threat Matrix to help detect, respond to, and remediate threats against machine learning systems.

Step one is recognizing that there is a problem

MITRE was formed in 1958 to help run the US Air Force's SAGE project, which drove breakthroughs in computing and networking because it had to collect and process radar data from around the world in search of signs of a Soviet nuclear attack.

MITRE has since expanded into a much larger and more nebulous organization that primarily serves as a US military and intelligence contractor, as well as an operator of federally funded research and development centers (FFRDCs).

As the operator of the National Cybersecurity FFRDC, and a major cybersecurity contractor, the non-profit has long funded research into security. Since 2013, it has operated MITRE ATT&CK, a framework for tracking cyber attacks that has been widely adopted by the security community.

Microsoft and MITRE said that they modeled the Adversarial ML Threat Matrix after ATT&CK, as security analysts are the primary audience for the framework.
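ATT&CK organizes adversary behavior into tactics (what the attacker is trying to achieve) and techniques (how they achieve it). As a rough sketch of what an analyst-facing entry in such a matrix might look like, the record below uses hypothetical field names and values; it is an illustration of the ATT&CK-style structure, not content taken from the published matrix:

```python
# Hypothetical sketch of an ATT&CK-style record for an adversarial ML
# technique. Field names and values are illustrative only and are not
# taken from the published Adversarial ML Threat Matrix.
threat_entry = {
    "tactic": "ML Attack Staging",        # the adversary's goal at this stage
    "technique": "Poison Training Data",  # how that goal is pursued
    "description": (
        "Attacker injects mislabeled or malicious samples into the "
        "dataset used to train the target model."
    ),
    "mitigations": ["data provenance checks", "outlier filtering"],
}

for field, value in threat_entry.items():
    print(f"{field}: {value}")
```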

Microsoft's 'data cowboy' Ram Shankar Siva Kumar and corporate VP Ann Johnson said in a blog post that it was crucial to get security analysts to take the risks to machine learning seriously.

"Our survey pointed to marked cognitive dissonance especially among security analysts who generally believe that risk to ML systems is a futuristic concern," they said. "This is a problem because cyber attacks on ML systems are now on the uptick."

In the coming years, attackers are expected to increasingly steal AI models, poison training data, or use adversarial samples to attack AI-powered systems.
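To make the "adversarial samples" threat concrete, below is a minimal, hypothetical sketch of one well-known technique, the fast gradient sign method (FGSM), applied to a toy logistic-regression model. The weights, input, and epsilon value are all invented for illustration; real attacks target far larger models but follow the same idea of perturbing an input along the loss gradient:

```python
import numpy as np

# Minimal, hypothetical demonstration of an adversarial sample attack
# (fast gradient sign method) on a toy logistic-regression "model".
# Weights, input, and epsilon are invented purely for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # stand-in for a trained model's weights
b = 0.1                  # model bias
x = rng.normal(size=8)   # a legitimate input
y = 1.0                  # its true label

p = sigmoid(w @ x + b)   # model confidence on the clean input

# For logistic regression, the gradient of the cross-entropy loss
# with respect to the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM: nudge each feature by epsilon in the direction that increases
# the loss, yielding an adversarial sample that stays close to the original.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"confidence on clean input:       {p:.3f}")
print(f"confidence on adversarial input: {p_adv:.3f}")
```

Model stealing and data poisoning follow a similar pattern of exploiting a model's exposed interfaces or training pipeline rather than traditional software bugs, which is why a dedicated matrix of tactics and techniques is useful alongside conventional intrusion frameworks.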

The field of adversarial AI is evolving rapidly, so MITRE and Microsoft turned to the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University for help in developing defenses against the most cutting-edge attacks.

Other partners include Bosch, IBM, Nvidia, Airbus, Deep Instinct, Two Six Labs, PwC, and the Berryville Institute of Machine Learning.

In a post on MITRE's website, Mikel Rodriguez, the organization's head of Decision Science research, argued that we're at the same stage with AI that we were with the Internet in the 1980s. Back then, he said, the focus was just on making the Internet work, not on security, "and we've been paying the price ever since."

Today, most of the focus in AI is similarly on just making it work, with too little consideration given to the security implications. "The good news with AI is that it's potentially not too late," Rodriguez said.

"So, I'm excited to work on this matrix on technical challenges around security, privacy, and safety. While there will be plenty of big problems ahead that we aren’t addressing with this initiative – we’ll be addressing the kind of fundamentals that were ignored during the early days of the Internet."

About the Author

Louis Stone

Reporter

Louis Stone is a freelance reporter covering artificial intelligence, surveillance tech, and international trade issues.
