Nearly 80% of executives have encountered some type of ethical issue in AI

Deborah Yao, Editor

March 1, 2022

4 Min Read

Artificial intelligence is being implemented across organizations all around the world – bringing vast opportunities but also opening the door to new risks.

Nearly 80% of executives have encountered some type of ethical issue in AI, according to Manolo Almagro, managing partner at consulting firm Q Division, speaking during a session on ethical AI at Mobile World Congress 2022.

That is why designing an AI system to be ethical from the start is crucial to mitigating those risks. But what is the right way for a company to put such an ethical framework in place?

First, make sure development of the ethics around AI is not done in isolation but as part of a larger corporate mandate, such as ESG, said Richard Benjamins, chief AI and data strategist at Spanish telecom giant Telefonica.

“That’s a key component,” he said. Otherwise, “you won’t have enough momentum to push it forward.”

The next step is to determine the set of AI principles that works for your company or sector: how transparent systems should be, how they can avoid discrimination, and other considerations.

Then provide AI training for employees to raise awareness. “Employees have heard of it at a high level, but they don’t know what it means,” Benjamins said.

Next, develop a governance model to determine such things as who is responsible for what aspect of ethical AI, the escalation process if something goes wrong, whether the approach should be problem-based, and other factors, according to Benjamins.

Australian telecoms giant Telstra has several levels of governance – including a risk council for AI and data and a group composed of executives from each business division, according to Noel Jarrett, its chief data and AI officer. Top management is also kept apprised on a regular basis.

Diversity of stakeholders is important because it can help the organization become more aware of potential biases.

Data handling should be front and center, Jarrett said. What information is being collected? How is it collected? How is it being stored and managed? Who owns the data?

Benjamins said it is helpful to run an ethical AI pilot with a few business units first to see how people in the organization will react: managers, technical people, ESG folks and other stakeholders.

Top down or bottom up?

Benjamins takes different approaches depending on whom he is addressing: top management or frontline workers.

At Telefonica, top management already believes in the ethical and responsible use of technology, he said.

However, for most businesses the traditional view is that implementing ethics could come at a business cost in terms of lost profits and missed sales opportunities. But that is changing as companies increasingly embrace ESG and other social mandates as part and parcel of their for-profit endeavors, Benjamins said.

Also, scandals over ethical lapses at rival companies are a wake-up call to management that they could be next if they do not act. “That can help drive senior management to take it seriously,” he said.

As for frontline employees, such as customer service agents, make it easy for them to comply with the framework and provide tooling to streamline the process, Jarrett said.

Manage their view of ethical AI practices so they do not feel this is yet another requirement for them, another exam to pass – in addition to what they already do to ensure privacy, accessibility and security.

“This creates a defensive dynamic, which you don’t want,” Benjamins added. “You want to uncover problems and solve them.”

Instead, focus on the good that using AI can bring. “How do you use this technology to actually produce good things?” Benjamins said. Getting employees energized about the good AI can do fosters inclusiveness, brings a sense of purpose and ultimately helps them accept it.

Note that a company’s common set of AI values should be acceptable across geographies, he added. However, the approach to disseminating those values will differ depending on whether the culture is more top-down or bottom-up: some cultures want management mandates, while others would rather take responsibility themselves.

In either case, developing ethical AI practices does not have to be a build-from-scratch endeavor. There are guidelines for AI principles one can consider, as well as many open source tools, the panelists said. The GSM Association (GSMA) offers an AI Ethics Playbook as a guide.

In the future, ethical AI will be so ingrained in corporate culture that it will be automatically assumed to be part of business practices, predicted Mojca Cargo, senior market engagement manager, AI4I, at the GSMA. “We won’t be talking much about it,” she said.

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
