Trust at the center: Building an ethical AI framework

Max Smolaks

December 20, 2019


By Beena Ammanath, Deloitte

An analysis of annual reports recently filed with the Securities and Exchange Commission shows a telling trend: according to a Wall Street Journal article, twice as many companies cited a specific risk factor in 2018 as in the previous year.

Which risk factor saw such meteoric growth? The use of artificial intelligence (AI).

As a growing number of organizations and functions adopt AI, it must command the attention and active governance of the C-suite and board of directors. Used unethically, even inadvertently, AI can result in significant revenue loss or stiff fines stemming from faulty automated decision-making, non-compliant behavior, or biased algorithms. And business performance is not the only thing at risk. Unethical AI can also damage a more intangible but priceless asset: an organization’s reputation and the trust of its customers.

Trust, ethics, governance, and related issues were hot topics at the December 2019 AI Summit New York, which brought together enterprise business leaders and AI innovators to discuss the impact of AI on business today. The bottom line: trust is the foundation of corporate reputation. It routinely emerges as a top attribute when brand equity is measured, because any transaction between a brand and its customers is an exchange of value for currency.

Consumers conduct transactions with organizations hundreds or thousands of times a day through such actions as scrolling web pages, banking online, or calling customer service. These transactions seem free of charge, but they aren’t. The currency is consumer data. This can be in the form of personally identifiable information such as a Social Security number, bank account number, or email address. Or it can be much subtler but still very personal information, such as what path an individual took when scrolling through an app, what they asked their voice-controlled assistant to look up, or what they wrote in the resume they submitted. In all these cases, people trust that their information will be used ethically and without bias by organizations and the AI algorithms they employ.

While we are in the early days of commercial AI regulation, organizations cannot sit by and wait for lawmakers to create a roadmap. To do so is to miss out on the gains AI makes possible: the discovery of insights that can lead to innovations benefiting business and society; the intelligent automation of processes that frees human workers to add more strategic value; and the creation of new products and services that fulfill unmet needs and help organizations leapfrog their competitors.

Instead, an organization’s board of directors and C-suite should view the use of AI with integrity as an imperative that cannot be ignored. We recommend that C-suite leaders adopt an AI framework to help tackle this challenge, one that addresses the elements needed to ensure the ethical use of AI and sustain the trust of employees and customers. For example, our Trustworthy AI Framework helps clients evaluate six key steps that guide the ethical use of AI.

These steps include:

  1. Fair and impartial use checks: AI applications must include internal and external checks to ensure equitable application across all participants. Impartial AI, meaning data and algorithms that minimize discriminatory bias and avoid pitfalls such as bias introduced by humans during the coding process, is one of the most frequently discussed issues around AI; left unchecked, bias can lead to unintended, unfair consequences for the recipients of AI-driven decisions. (A minimal fairness-screening sketch appears after this list.)

  2. Implementing transparency and explainable AI: Organizations should be prepared to make algorithms, attributes, and correlations open to inspection so that participants can understand how their data is being used and how decisions are made. What makes this challenging is the growing complexity of machine learning and the popularity of deep-learning neural networks, which can behave like black boxes, offering no explanation of how their results were computed. (A simple explanation sketch follows the list.)

  3. Responsibility and accountability: Policies need to be put in place to determine who is held responsible when AI system outputs go wrong. This issue epitomizes the uncharted aspect of AI: is it the responsibility of the developer, tester, or product manager? Is it the machine learning engineer, who understands the inner workings? Or does ultimate responsibility go higher up the ladder, to the CIO or CEO, who might have to testify before a government body?

  4. Putting proper security in place: AI systems must have sufficient measures in place to be safe from cybersecurity risks that may cause physical and/or digital harm to consumers. As AI systems increasingly show up in our physical world, from driverless cars to smart homes to medical devices, this issue is critical and high on most leaders’ agendas. In fact, cybersecurity vulnerability is the No. 1 concern among early adopters of AI.

  5. Monitoring for reliability: AI systems must have the ability to learn from humans and other systems and produce consistent, reliable outputs. The ability of AI and machine learning systems to get smarter as they interact with humans is core to the promise of this technology, but this same feature creates new levels of potential risk. Organizations will need to ensure that their algorithms continue to produce reliable results as new data is added, understand whether additional human input introduces bias, and know what happens when inconsistencies are discovered. (A basic drift-monitoring sketch appears after this list.)

  6. Safeguarding privacy: Organizations should ensure that consumer privacy is respected, that customer data is not leveraged beyond its intended and stated use, and that consumers can opt in and out of sharing their data. For businesses, protecting consumers’ right to privacy, and communicating about that transparently, while trying to use that data to provide better products and services is a real balancing act. This is the area most likely to see new regulation in the near term; the California Consumer Privacy Act, for example, comes into effect on January 1, 2020. (A minimal consent-enforcement sketch appears after this list.)
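To make the fairness checks in step 1 concrete, here is a minimal sketch of one common screening metric, the disparate impact ratio, which compares favorable-outcome rates across groups. The sample data, the `disparate_impact` helper, and the 0.8 threshold (the informal "four-fifths rule") are illustrative assumptions, not part of Deloitte's framework.

```python
# Minimal fairness screen: disparate impact ratio across groups.
# Illustrative sketch only; the data and the 0.8 threshold (the informal
# "four-fifths rule") are assumptions, not prescribed by any framework.

from collections import defaultdict

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs.
    Returns (ratio of lowest to highest group approval rate, per-group rates)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan decisions: (applicant group, approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

ratio, rates = disparate_impact(sample)
print(f"approval rates: {rates}, disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # below the four-fifths threshold: flag for human review
    print("WARNING: possible disparate impact; route for review")
```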
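For the transparency goal in step 2, one lightweight approach is to decompose an interpretable model's score into per-feature contributions (or to train an interpretable surrogate that mimics a black-box model and explain that instead). The sketch below assumes a hypothetical linear scoring model; the feature names and weights are invented for illustration.

```python
# Explaining a linear score by decomposing it into per-feature contributions.
# The model (weights, features) is hypothetical; in practice an interpretable
# surrogate is often fitted to approximate a black-box model's behavior.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1

def score_with_explanation(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    # Rank features by absolute influence on this individual decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 3.0}
score, ranked = score_with_explanation(applicant)
print(f"score = {score:.2f}")
for feature, contribution in ranked:
    print(f"  {feature}: {contribution:+.2f}")
```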
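The reliability monitoring in step 5 can start with a simple drift check that compares incoming data against the distribution the model was validated on. This sketch uses the population stability index (PSI) over fixed bins; the scores, bin edges, and the 0.2 alert threshold are conventions assumed here for illustration, not prescribed by the framework.

```python
# Drift-monitoring sketch: population stability index (PSI) between the
# distribution a model was validated on and newly arriving production data.
# Bins, data, and the 0.2 alert threshold are illustrative assumptions.

import math

def psi(reference, current, bin_edges):
    def proportions(values):
        counts = [0] * (len(bin_edges) + 1)
        for v in values:
            i = sum(v > edge for edge in bin_edges)  # bin index for v
            counts[i] += 1
        n = len(values)
        # Small floor avoids log(0) for empty bins.
        return [max(c / n, 1e-6) for c in counts]
    ref_p, cur_p = proportions(reference), proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_p, cur_p))

reference = [0.2, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7]   # validation-time scores
current   = [0.5, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]  # recent production scores

drift = psi(reference, current, bin_edges=[0.25, 0.5, 0.75])
print(f"PSI = {drift:.3f}")
if drift > 0.2:  # common rule of thumb: >0.2 suggests a significant shift
    print("ALERT: input distribution has shifted; trigger human review")
```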
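Finally, step 6's opt-in/opt-out requirement ultimately comes down to enforcing consent at the point of data use. Below is a minimal sketch, assuming a hypothetical in-memory consent registry keyed by user and purpose; real systems would back this with durable storage and audit logging.

```python
# Consent-enforcement sketch: data may only be used for purposes the
# consumer has opted into. The registry, user IDs, and purposes are
# hypothetical, invented for illustration.

CONSENT = {
    "user-123": {"service_delivery", "analytics"},
    "user-456": {"service_delivery"},  # opted out of analytics
}

def use_data(user_id, purpose):
    allowed = purpose in CONSENT.get(user_id, set())
    if not allowed:
        # Refuse and record the refusal rather than silently proceeding.
        print(f"denied: {user_id} has not consented to '{purpose}'")
    return allowed

use_data("user-456", "analytics")  # denied
use_data("user-123", "analytics")  # permitted
```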

Organizations ready to embrace artificial intelligence and thrive in the Age of With must start by putting trust at the center. They must thoroughly assess whether they meet the criteria for trustworthy and ethical AI; it is a necessary step toward capturing the returns and managing the risks that come with the transformational promise of artificial intelligence.

About Deloitte: Deloitte provides industry-leading audit, consulting, tax and advisory services to many of the world’s most admired brands, including nearly 90% of the Fortune 500 and more than 5,000 private and middle market companies. Our people work across the industry sectors that drive and shape today’s marketplace — delivering measurable and lasting results that help reinforce public trust in our capital markets, inspire clients to see challenges as opportunities to transform and thrive, and help lead the way toward a stronger economy and a healthy society.

Deloitte is proud to be part of the largest global professional services network serving our clients in the markets that are most important to them. Our network of member firms spans more than 150 countries and territories. Learn how Deloitte’s more than 312,000 people worldwide make an impact that matters at www.deloitte.com.

