Some industries had a head start in building AI with ethical considerations in mind.
To create an ethical AI-augmented organization, long-term approaches and strategies are important, and the responsibility for ethics in AI should be shared among stakeholders.
These were among the viewpoints shared in the most recent VisionAIres meeting, marking the second AI ethics discussion in our monthly virtual roundtable series.
In the first discussion on ethics in January, AI business leaders agreed that there should be a level playing field for ethics in AI, which they said is likely to emerge from external regulation.
Some industries are ahead on this front, particularly where AI ethics was a logical fit.
For example, VisionAIres members pointed out that biotech and healthcare companies have the strongest governance relating to AI, since oversight expectations and structures were already in place.
“In financial services there are controls on controls and there are a lot of protections in place,” said Bob Compton, IS director at RCI Financial Services. “It’s the misuse away from business that’s the biggest issue.”
Another example cited was medical device development, since evaluating potential unintended consequences forms a natural part of the process.
“The healthcare industry, in general, is shifting from an insurance care mentality to a healthcare company mentality,” said one VisionAIres community member.
While controversies in ethics can create short-term benefits because they lead to clicks and exposure, the business leaders agreed that companies need to focus on ethical behavior for long-term success.
“Sustainability is important,” said Tyler Folkman, head of artificial intelligence at Branded Entertainment Network (BEN). “You cannot just run a system of clickbait marketing, because at some point, that doesn’t work.
“At BEN, we have something we call the consensus triangle, and there are three parties in that triangle: the advertiser, the people creating the content, and the audience consuming that content. If all three aren’t happy, then that’s a loss.”
Several executives mentioned the need to focus on customer engagement.
“We need to focus on the customer experience, and we need to focus on the customer behaviors,” said Shawn Xuewu Wang, head of the digital innovation hub at China Eastern Airlines. “But how to balance is the key thing we need to do.”
Another issue raised was where AI ethics should reside within an organization. “Absolutely everybody who is engaged in producing data or creating training models, everybody engaged in design, engagement, deployment, and use,” said Natalia Modjeska, research director for AI and intelligent automation at Omdia.
“Ultimately, the responsibility is with the CEO and the board, but everybody engaged should be asking questions like ‘what are we building’ and ‘what problems are we trying to solve,’” she said. “The more people ask the questions, the better the governance will be.”
Bogdan Grigorescu, AI implementation manager at Combined Intelligence and operations and QA lead at Apple, suggested the corporate environment does not permit too many questions and that the board of an enterprise “can’t know everything.” For that reason, he argued, it needs an ethicist in an advisory role, a point Modjeska agreed with.
“We have to censor ourselves at some point,” Grigorescu said. “Anything that is too much is bad. Too much food is bad, too much money is bad, all extremes are bad. The most sustainable model is to know when to say no.”
“It’s AI ethics, but it’s more than AI ethics,” said one AI business executive. “It’s ethics in general.”