Large businesses launch their own AI ethics initiatives

Comparing approaches to ethics at Microsoft, Bosch, Google, and others

Chuck Martin, Editorial Director AI & IoT

April 22, 2021

3 Min Read

As technological advancements continue to push artificial intelligence forward, numerous businesses are introducing ways to incorporate ethics into their AI-driven operations.

Ahead of any potential regulatory efforts by governments around the world, large companies are launching their own AI ethics initiatives for both internal and external protections. Here are some examples of approaches to responsible AI.

At Microsoft, three groups (the Office of Responsible AI, the AI, Ethics, and Effects in Engineering and Research Committee, and Responsible AI Strategy in Engineering) put the company’s responsible AI principles into practice. The six principles are:

  1. AI systems should treat all people fairly.

  2. AI systems should perform reliably and safely.

  3. AI systems should be secure and respect privacy.

  4. AI systems should empower everyone and engage people.

  5. AI systems should be understandable.

  6. People should be accountable for AI systems.

The Bosch Group, which has been conducting AI research for years, aims to “incorporate AI into many of its products and services, thus making Industrial AI one of its core competencies.”

In 2017, the company aggregated its existing competency centers to create the Bosch Center for Artificial Intelligence, where it conducts “cutting-edge research that focuses on safe, secure, robust, and explainable AI.”

The center is responsible for designing and implementing AI for technologies across Bosch business sectors. Bosch created five principles for AI:

  1. All Bosch AI products should reflect our “Invented for life” ethos, which combines a quest for innovation with a sense of social responsibility.

  2. AI decisions that affect people should not be made without a human arbiter. Instead, AI should be a tool for people.

  3. We want to develop safe, robust, and explainable AI products.

  4. When developing AI products, we observe legal requirements and orient to ethical principles.

  5. Trust is one of our company’s fundamental values. We want to develop trustworthy AI products.

Despite the apparent turmoil within its ethics team, Google created the “Objectives for AI Applications” as internal guidance. The guidance comprises seven goals that its AI applications should strive toward:

  1. Be socially beneficial.

  2. Avoid creating or reinforcing unfair bias.

  3. Be built and tested for safety.

  4. Be accountable to people.

  5. Incorporate privacy design principles.

  6. Uphold high standards of scientific excellence.

  7. Be made available for uses that accord with these principles.

BMW Group, building on fundamental requirements developed by the EU for trustworthy AI, introduced seven principles in 2020 covering the use of AI within the company. The seven principles for the development and application of AI at the BMW Group are:

  1. Human agency and oversight, to ensure human monitoring of decisions made by AI applications, with the ability for humans to overrule algorithmic decisions.

  2. Technical robustness and safety, so AI applications observe relevant safety standards to lower the risk of unintended consequences.

  3. Privacy and data governance, extending BMW’s data privacy and data security measures to AI applications involving storage and processing.

  4. Transparency, to ensure that AI applications can be explained, with open communications related to technologies used.

  5. Diversity, non-discrimination and fairness, with the intent of building AI applications that are fair and preventing non-compliance.

  6. Environmental and societal well-being, committing to creating AI applications that promote the well-being of customers, partners and employees.

  7. Accountability, to ensure that all AI applications work responsibly, identifying, assessing and reporting risks in line with good corporate governance.

Twitter recently created a company-wide initiative called Responsible ML, led by its ML Ethics, Transparency and Accountability (META) team and comprising four pillars:

  1. Taking responsibility for algorithmic decisions.

  2. Equity and fairness of outcomes.

  3. Transparency about our decisions and how we arrived at them.

  4. Enabling agency and algorithmic choice.

While numerous government guidelines and regulatory plans are in the works, savvy companies recognize the need to take steps now to ensure that their AI-related business methods align with traditional, acceptable business practices.

AI business leaders in the VisionAIres community are working to define a roadmap for Ethical AI, starting with the organization’s first industry roundtable held in January and the upcoming discussion scheduled for April 29.

Recent research by Omdia found that the majority (65%) of AI practitioners believe AI should be regulated. While awaiting any potential regulation, smarter business leaders are taking it upon themselves to get started.

AI is not yet at a stage where it can analyze its way towards ethical behavior. That still takes smart people.

About the Author

Chuck Martin

Editorial Director AI & IoT

Chuck Martin, a New York Times Business Bestselling author, futurist and columnist, is Editorial Director at Informa Tech, home of AI Business, IoT World Today and Enter Quantum. Martin has been a leader in emerging digital technologies for more than two decades. He is considered one of the foremost Internet of Things (IoT) experts in the world and his latest book is titled "Digital Transformation 3.0" (The New Business-to-Consumer Connections of The Internet of Things). He hosts a worldwide podcast titled “The Voices of the Internet of Things with Chuck Martin,” where he converses with top executives from the companies driving the Internet of Things.
