VisionAIres: ‘AI will always be unfair’
AI business leaders participating in a roundtable discussion recently hosted by the VisionAIres community suggested that, to one degree or another, bias will always be found in AI
June 16, 2021
The notions of bias and fairness play an important role in the development and deployment of artificial intelligence.
“AI is biased by nature,” said one of the VisionAIres members. “In the end, you’re looking for an end result.
"A very simple scenario is, I want to make money, versus I don’t want to make money. There’s a very fine line, but in the end, it is a biased model.”
“AI will always be unfair, just like life,” another member said. “It’s because there are multiple definitions of what is fair. Even if you built an algorithm that through your lenses is fully fair, you’re choosing a definition of fairness. We have to choose what to be fair about.”
This was a consistent message from the second virtual roundtable, which focused on best practices in talent development and internal alignment and featured AI leaders from sectors including media, automotive, finance, airlines, insurance, research, and healthcare.
Built by humans
AI leaders at the first roundtable in March focused on the best approaches to getting started.
“Bias is likely because AI is built by humans,” said KV Dipu, president for operations and customer experience at Bajaj Allianz General Insurance. “However, AI gives you efficiency and it gives you consistency.”
Regarding trust in AI, Dipu noted that customer contact is generally related to a customer having an issue to resolve, and that better conversational agents can increase consumer trust in the technology.
Some organizations also find that bias is only discovered or identified as a project moves forward.
“We’re about 18 months into our journey,” said Simon Mortimore, assistant director for business information at South Central Ambulance Service NHS Foundation Trust. “We didn’t do it as a research project, we went and got real-world problems. Having real-world problems to develop our models really helped.
“As to internal buy-in, a lot of my internal customers don’t really care. They just know it works. It’s my job to understand the details and put the governance around it,” Mortimore said. “AI doesn’t introduce the bias, it just reflects the biases that were always there.”
VisionAIres members agreed that issues around bias are not unique to artificial intelligence.
“Ethics and bias is not an AI problem in and of itself, it’s a problem in general,” said Tyler Folkman, head of artificial intelligence at Branded Entertainment Network (BEN). “It did not start with AI and I’m sure it won’t end with AI.
“The issue is, what is the bar and what is the measurement, and how do you decide when something is not biased enough, while obviously you don’t want to be biased at all.”
Business leaders also focused on transparency, and offered some tips.
“We need to disclose how we build our AI systems and applications we put out there into the wild,” said Natalia Modjeska, research director for AI and intelligent automation at Omdia.
For internal alignment, VisionAIres members have consistently advocated for organization-wide approaches to AI.
Mark Beccue, principal analyst for AI and NLP at Omdia, suggested that AI projects can run into organizational issues, such as executives not understanding the AI lifecycle or not realizing that it differs from other technologies such as software and requires adaptations that may affect corporate culture.
“It’s not just what goes on in businesses, in the AI groups, or the analytics groups, it’s particularly visible in the research environment,” said Richard Self, senior lecturer in governance of advanced and emerging technologies at the University of Derby.
“It’s very much the feel of ‘the technology says we can do this, so we’ll go and do it.’ This is where the alignment starts. How do we get the message to researchers, doers, and users of this technology to think much more around the position of ‘yes, we can. But should we?’”
Weighing what can or should be done still comes down to humans participating in the process.
“AI systems have an output that can and will impact anyone on the planet in a different way at some point in time,” said Bogdan Grigorescu, AI implementation manager at Combined Intelligence and Operations and QA lead at Apple. “It could be a big way or a small way, but it will.”
“If we don’t question ourselves, good things will not come out.”
VisionAIres is a curated community dedicated to the advancement of AI for positive change. It brings together business leaders, both virtually on the community platform and in person, to collaborate, learn, and exchange ideas that lead to action.