When it comes to building an ethical AI-augmented organization, AI business leaders want a level playing field.
This message became crystal clear in the VisionAIres January meeting, the first in the monthly VisionAIres Roundtables series.
That level playing field is also more likely to emerge from external regulation, rather than from within an organization, the members said.
The majority (65%) of AI practitioners believe AI should be regulated, according to research by Omdia, and the roundtable participants could not have agreed more.
“The government and regulators are responsible for making sure organizations are following the rules,” said one of the AI business leaders. “An organization can’t control this.”
Said another: “The industry needs stricter regulation from outside the organization. It has to be a legislative responsibility.”
“There will be regulation,” said another executive.
The interactive program (“How Do You Build an Ethical, AI-Augmented Organization That People Want to Work For?”) featured senior executives from around the world, representing industries ranging from banking and finance to energy and education.
In addition to regulation, the AI-enabled tech systems themselves need oversight.
“AI systems need governance in their own right,” said one VisionAIres executive. “Building a trustworthy algorithm is the challenge.”
The potential risk, of course, is that without external regulation, organizations may cut corners or fail to apply appropriate oversight.
“It’s very easy to create a bad model,” said one participant.
Another AI leader suggested that as advancements occur, monitoring the AI systems in operation can be challenging. “You can’t do it with learning systems, since you don’t know what they have learned.”
One of the more interesting points addressed was the age-old issue of the quality of data going in versus the quality of the results coming out.
“We have to worry seriously about garbage out,” said one AI executive. “The latest version of ‘garbage in, garbage out’ is ‘garbage in, gospel out.’”
“The output may be good, but that doesn’t mean useful good,” said another.
The underlying concern is that humans tend to trust computers, perceiving machines as unbiased.
“The typical class of people training these systems doesn’t have experience of wider society. We cannot allow black boxes to make decisions about human life, like the loan-approvals process,” said another AI business leader.
The ethics challenges related to AI are numerous, some more difficult than others. Those noted included privacy, inclusiveness, security, safety, transparency, explainability, and accountability.
As for who owns AI ethics in the enterprise, there is no one-size-fits-all answer: suggestions included those responsible for delivering the platform, an ethics committee within the organization, or the chief executive.
“It’s a cultural issue,” said one executive.
Some noteworthy insights from the VisionAIres business leaders:
A machine doesn’t have empathy
Need to be careful not to cut corners
It’s about the context around the data
Need data ethics principles
Everybody in the organization is responsible
Everyone is accountable, but not responsible
Participation in the monthly roundtable discussions is exclusively for VisionAIres members.
While ethics in AI will remain an ongoing topic for the VisionAIres Council, the executives in the virtual discussion were all realistic about the overall AI business proposition.
It’s about using AI to solve business problems.
This content has been shared through the VisionAIres network, a worldwide collective that enables business and technical leaders to connect, collaborate, and share, while leveraging, in a single place, Informa Tech’s premium AI-for-enterprise content.
To inquire about joining, please click here.