Three projects move under the wing of the open source organization

Louis Stone, Reporter

June 29, 2020



IBM is donating some of the work it’s been doing on trusted artificial intelligence models to LF AI – an umbrella organization within the Linux Foundation that supports open source innovation in machine learning and deep learning.

The LF AI technical advisory committee this month voted to host and incubate three projects.

All three were originally developed at IBM Research: Adversarial Robustness 360, AI Fairness 360, and AI Explainability 360.

The projects

First, there’s the Adversarial Robustness 360 Toolbox (ART), a Python library for machine learning model security. It allows users to evaluate, defend, certify, and verify machine learning models and applications, protecting them against novel threats such as data poisoning, in which malicious users inject false training data to corrupt the learned model.
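
For a flavor of how ART is used, here is a minimal, hypothetical sketch: it wraps a scikit-learn model, crafts adversarially perturbed inputs with the toolbox's Fast Gradient Method attack, and compares accuracy before and after. The dataset, model, and attack strength are illustrative assumptions, not part of IBM's announcement.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Illustrative data and model (any gradient-friendly scikit-learn classifier works similarly)
x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the trained model so ART's attacks and defenses can operate on it
classifier = SklearnClassifier(model=model)

# Evaluate robustness: generate adversarial examples and re-test accuracy
attack = FastGradientMethod(classifier, eps=0.2)  # eps is an assumed perturbation budget
x_adv = attack.generate(x=x)

clean_acc = np.mean(np.argmax(classifier.predict(x), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")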

Then there’s the AI Fairness 360 (AIF360) toolkit, built to help detect and mitigate unwanted bias in machine learning models and datasets. It offers around 70 metrics to test for bias and 11 algorithms to mitigate bias in datasets and models.
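
As a rough sketch of that workflow, the following assumes a tiny made-up hiring dataset with 'sex' as the protected attribute; it measures one of the toolkit's metrics (disparate impact) and applies one of its mitigation algorithms (Reweighing). None of the data or group encodings come from IBM's announcement.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Made-up example data: 'sex' is the protected attribute, 'hired' is the label
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.6, 0.4, 0.8, 0.6, 0.5, 0.3],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# One of the fairness metrics: disparate impact (ratio of favorable-outcome rates)
before = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv, privileged_groups=priv)
print("disparate impact before:", before.disparate_impact())

# One of the mitigation algorithms: Reweighing adjusts instance weights to balance outcomes
transformed = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv).fit_transform(dataset)
after = BinaryLabelDatasetMetric(transformed, unprivileged_groups=unpriv, privileged_groups=priv)
print("disparate impact after:", after.disparate_impact())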

Finally, IBM is donating the AI Explainability 360 (AIX360) toolkit, a collection of diverse algorithms, code, guides, tutorials, and demos that support the interpretability and explainability of machine learning models.
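
As one hedged example of those algorithms, the sketch below uses AIX360's Protodash explainer to pick a few prototypical records that summarize a dataset, one way the toolkit supports interpretability. The dataset and the number of prototypes are placeholder choices, not drawn from IBM's post.

import numpy as np
from sklearn.datasets import load_digits
from aix360.algorithms.protodash import ProtodashExplainer

# Placeholder data: any 2D numeric array can stand in here
x, _ = load_digits(return_X_y=True)

# Select m prototypical examples (and their importance weights) that summarize the data
explainer = ProtodashExplainer()
weights, indices, _ = explainer.explain(x, x, m=5)

print("prototype row indices:", indices)
print("relative importance:", np.round(weights / weights.sum(), 3))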

The projects will be continued under LF AI's Trusted AI Committee, which now includes representatives from IBM, AT&T, Tencent, Huawei's Futurewei, and the Institute for Ethical AI and Machine Learning.

“Donation of these projects to LF AI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation,” IBM said in a blog post, authored by Todd Moore, Sriram Raghavan, and Aleksandra Mojsilovic.

“Our responsibility is to not only make the technical breakthroughs required to make AI trustworthy and ethical, but to ensure these trusted algorithms work as intended in real-world AI deployments.”

The move comes just weeks after IBM said that it would stop developing general-purpose facial recognition amid growing awareness of the bias prevalent in most such tech. The company did not respond to requests for comment on what it meant by “general-purpose.”

This Monday, IBM also announced the latest recipient of its Open Source Community Grant, which seeks to foster new tech opportunities for underrepresented communities. The $25,000 grant will go to PionerasDev, a Colombian nonprofit that helps women and girls learn how to code.

These efforts come as governments around the world increasingly look to place regulations on big tech companies like IBM, particularly in the area of AI.

Earlier this month, fifteen founding members came together to form the Global Partnership on Artificial Intelligence, a group working to support the responsible and human-centric development and use of AI – through legislation, if necessary.

About the Author

Louis Stone

Reporter

Louis Stone is a freelance reporter covering artificial intelligence, surveillance tech, and international trade issues.

