Four Essential Questions For Developing An Ethical AI Framework

Ciarán Daly

May 27, 2019

6 Min Read

by Sanjay Srivastava

PALO ALTO - In February, an Executive Order was signed launching the American Artificial Intelligence (AI) Initiative, which is partly designed to promote trust in AI among U.S. citizens. Then in April, U.S. lawmakers introduced the Algorithmic Accountability Act, which would require companies to assess their automated decision-making systems for bias. That same month, the European Union released its own ethical guidelines for evaluating the fairness of AI applications.

All of these initiatives point to the need for ethical frameworks for AI, especially as more Fortune 500 companies use the technology to change the way they make decisions and serve customers. According to findings from Genpact’s latest AI 360 study, a quarter of senior executives say they plan to fundamentally reimagine their businesses with AI, and 54 percent plan to use it to transform their processes.

Such frameworks would ensure that AI leads to sound decisions without negative effects on groups or individuals. As our AI 360 study shows, a large majority (78 percent) of consumers expect companies to actively address potential biases and discrimination from AI. By having these frameworks in place, organizations can build greater trust with consumers, who know that their data and information are being put to good and fair use.

As we look to establish ethical frameworks for AI, it is
important to consider these four essential questions:

1. Are we using AI for the right reasons?

Before any AI deployment, we should ask ourselves, “Are we using AI for the right reasons?” After all, AI, as a tool, is neither good nor bad. What distinguishes ethical applications is the intended use. For instance, AI can accelerate the path we take to arrive at a decision, leading to better customer experiences and outcomes. This is why many industries once plagued with lengthy decision-making processes, such as banking, lending, and insurance, have gravitated towards the technology.

In recruiting, an AI program can review job descriptions to eliminate unintended gender biases by removing words that may be construed as more masculine or feminine and replacing them with more neutral terms. In effect, HR departments can use AI to prevent bias in the hiring process.
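As a minimal sketch of how such a screening step might work, the Python snippet below swaps gender-coded terms in a job posting for neutral alternatives. The word list here is an illustrative assumption for demonstration, not a validated lexicon of gender-coded language.

```python
import re

# Illustrative mapping of gender-coded terms to neutral alternatives.
# These pairs are assumptions for demonstration, not a vetted lexicon.
GENDER_CODED_TERMS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "dominant": "confident",
    "aggressive": "proactive",
}

def neutralize(text: str) -> str:
    """Replace gender-coded terms with neutral alternatives,
    matching whole words case-insensitively."""
    pattern = re.compile(
        r"\b(" + "|".join(GENDER_CODED_TERMS) + r")\b", re.IGNORECASE
    )
    return pattern.sub(lambda m: GENDER_CODED_TERMS[m.group(0).lower()], text)

print(neutralize("Seeking a dominant, aggressive sales ninja."))
# -> "Seeking a confident, proactive sales expert."
```

A production system would go further, for example by scoring whole postings against a curated lexicon rather than doing simple substitutions, but the principle is the same.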

We should incorporate an ethical evaluation of intended use prior to deployment. Moreover, continuous monitoring of the models is necessary to ensure they do not deviate from the approved use case towards anything unethical.

Related: How to use AI responsibly

2. Can we explain the reasoning path?

The purpose of using AI, as well as the data used to develop the models and algorithms, should be fully transparent to affected consumers. Explainability of how a machine arrives at its recommendation is also part of that transparency, particularly in highly regulated industries.

While today’s AI applications do not yet have full explainability, some can provide breadcrumb trails, allowing us to trace a decision back to a single data point. For example, in commercial lending, AI can take thousands of balance sheets, even those in different accounting standards and languages, and understand all of their contents to calculate a risk score that informs the decision to approve or deny a loan.

Rather than just deny a loan, which can result in a poor customer experience and trigger compliance concerns, loan officers can use AI applications with built-in tracking. They can click and drill down to the specific document or footnote that led to the score and recommendation. If an auditor requests documentation or the customer has an inquiry, the officer can show exactly where and how the system came to its decision. Such transparency and explainability instill trust among all parties.
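As a rough sketch of this breadcrumb idea, the snippet below shows how a risk score might carry references to the documents that produced it. All field names, document names, and figures are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    document: str        # source file, e.g. a scanned balance sheet
    location: str        # where in the document, e.g. a footnote
    contribution: float  # signed effect on the risk score

@dataclass
class RiskDecision:
    score: float
    evidence: List[Evidence] = field(default_factory=list)

    def explain(self) -> str:
        """List the evidence behind the score, largest effect first."""
        lines = [f"Risk score: {self.score:.2f}"]
        for e in sorted(self.evidence, key=lambda x: -abs(x.contribution)):
            lines.append(f"  {e.contribution:+.2f} from {e.document} ({e.location})")
        return "\n".join(lines)

decision = RiskDecision(
    score=0.72,
    evidence=[
        Evidence("balance_sheet_2023.pdf", "footnote 4", +0.30),
        Evidence("income_statement_2023.pdf", "line 12", -0.05),
    ],
)
print(decision.explain())
```

The design choice that matters is that the evidence travels with the score, so an auditor or customer inquiry can be answered from the decision record itself rather than by reverse-engineering the model.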

3. Can we recognize and mitigate AI bias?

The two main sources of bias are data and teams. With data, this often means underrepresented and imbalanced datasets. For instance, if an HR department uses personnel data from a homogeneous group and tries to use that dataset for recruiting, then the algorithm will be biased towards the initial sample and might only recommend similar people for positions.

Bias from teams arises when we only have a small group of people train the machines. The algorithms then end up unknowingly reflecting the thinking of a select few.

As part of an ethical framework for AI, we should encourage diversity in data and teams to prevent biases. The goal is to have comprehensive training datasets that can address all possible scenarios and users. Thus, we can minimize potential discrimination against, or favoritism toward, certain groups based on race, gender, sexuality, or ideology. If we lack comprehensive data, external data sources and synthetic data can fill in the gaps.

Likewise, we should strive for diverse teams made up of people with varying skills and backgrounds, including digital and domain talent—or better yet, “bilinguals,” i.e. people who can think from both sides of the equation. A diverse team can serve as an ethics committee to look for unethical use from multiple perspectives and monitor for unwanted outcomes.

Related: Businesses can now test AI for explainability and bias

4. How secure are the data and applications?

When we use data to feed algorithms, the information needs to be secure. Otherwise, we run the risk of tampering or corruption that can skew the machine’s output at the expense of customers. We should take active measures to protect the data and applications, as well as continually assess for new vulnerabilities.

Security ties back to the need for governance over AI. While
our AI 360 study shows that 95 percent of companies say they are taking steps
to combat bias, only a third have the governance and internal control
frameworks to do so. For AI to provide ethical benefits for all, we need such frameworks
to monitor the models.

We have to look out for the issues discussed above: deviation from intended use, initial or emerging biases, and potential detriment to people. One potential solution is a “visualization dashboard” that oversees all automation. A dashboard can provide a single view of how all AI applications, robots, and other intelligent automation are performing, and whether everything is safe and ethically sound.
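As a toy sketch of what such a dashboard might aggregate, the snippet below rolls per-model health checks into a single status view. The model names, metrics, and thresholds are invented for illustration.

```python
# Hypothetical health metrics per deployed model; in practice these
# would be computed continuously from production traffic.
MODELS = {
    "loan_risk_scorer": {"drift": 0.02, "bias_gap": 0.01},
    "resume_screener":  {"drift": 0.11, "bias_gap": 0.06},
}

# Illustrative alert thresholds, chosen arbitrarily for this sketch
THRESHOLDS = {"drift": 0.05, "bias_gap": 0.03}

def dashboard_status():
    """Print a one-line status per model, flagging breached thresholds."""
    for name, metrics in MODELS.items():
        alerts = [k for k, limit in THRESHOLDS.items() if metrics[k] > limit]
        status = "ALERT: " + ", ".join(alerts) if alerts else "OK"
        print(f"{name:<18} {status}")

dashboard_status()
# loan_risk_scorer   OK
# resume_screener    ALERT: drift, bias_gap
```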

As more Fortune 500 companies set their sights on AI, governments, regulatory bodies, and consumers will all be paying more attention to ethical use. We should be able to explain clearly why and how AI uses data to arrive at decisions—in a fair and secure way. Further, we need to monitor the machines’ activities to prevent ethical complications and unintended scenarios. By developing an ethical framework for AI, we can protect customers and build their trust in our companies and technology.

Catch up with Sanjay and the Genpact team at The AI Summit London, June 12-13. Find out more about how you can attend.

Sanjay Srivastava is Chief Digital Officer of Genpact

