AI Bias Isn't A Data Issue - It's A Diversity Issue

Ciarán Daly

May 8, 2019

9 Min Read
Black and white graphic showing male mars symbol at the center surrounded by smaller female venus symbols


LONDON - Bias is everywhere. It’s an issue that crosses the boundaries between culture, mindset, data, design, and system architecture. It spans from unconscious decisions to overt prejudice.

Bias is also an increasingly pressing issue in the field of AI. A Genpact survey finds that 78% of consumers believe it’s important for companies to fight AI bias, while 67% express concern about an AI discriminating against them. However, companies have yet to meet this demand: the same survey found that only 34% of companies have established internal frameworks to mitigate AI bias.

What does this bias look like? There are already many well-documented cases out there. One flawed facial recognition tool from Amazon exhibited an error rate of over 30% for dark-skinned women, while recruitment tools continue to favour certain candidates over others based on their gender or race.

While there is a case to be made against consciously biased algorithms, many biased AI decisions stem directly from the datasets the algorithms have been fed. If too many white male faces are fed into a facial recognition algorithm at the expense of other demographics, for example, then the algorithm learns to associate ‘faces’ with ‘whiteness’ or ‘maleness’.
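
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It uses purely synthetic data and is not a reconstruction of the Amazon tool or any real facial recognition system; it simply shows how a classifier trained on a dataset dominated by one demographic group ends up with a much higher error rate on the under-represented group.

```python
# Hypothetical sketch with synthetic data: a classifier trained on examples
# that are 95% "group A" performs markedly worse on "group B".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # `shift` stands in for group-specific appearance differences the model
    # must learn (e.g. skin tone or lighting in a face dataset).
    X = rng.normal(size=(n, 5))
    X[:, :2] += shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: 1,900 examples from group A, only 100 from group B.
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets for each group: the decision boundary
# fitted to group A carries over poorly to group B.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("error rate, group A:", round(1 - model.score(Xa_test, ya_test), 3))
print("error rate, group B:", round(1 - model.score(Xb_test, yb_test), 3))
```

The model fits the majority group well and simply carries that decision boundary over to the minority group, which is the same dynamic as a face dataset teaching an algorithm to associate ‘faces’ with ‘whiteness’ or ‘maleness’.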

Getting AI right for public use, then, means getting the data right. The overwhelmingly male-dominated nature of the field, coupled with institutional recruiting bias and the lack of a supportive environment for women and ethnic minorities, can correlate directly with incidences of bias in AI decision-making. As Payal Jain—Managing Director of JCURV and Chair of Women In Data—puts it, you’ve got to have diverse teams to even spot bias in the first place.

“There’s three things that are really important when we start thinking about AI and machine learning,” explains Payal. “It’s not so much about data—it’s all about people. Firstly, we’ve got to be aware of our own biases. Secondly, we need diverse teams to work with the technology. With 78% of people working in AI being male, there are biases that they naturally will not spot. Finally, we’ve got to make sure we’re giving the machines non-biased datasets.”


These considerations aren’t just interrelated—they’re institutional problems. While bias may not always be the product of overt discrimination, diversity ‘blind spots’ in recruitment and management practices have created the conditions for a dramatically unbalanced AI talent landscape. Last year’s Global Gender Gap Report from the World Economic Forum found that 78% of AI professionals are male and just 22% are female.

Initiatives to address this disparity—and the wider gender disparity in STEM subjects—are gaining momentum from the classroom up to the boardroom. However, with AI deployments growing by the month, it’s time for enterprises and tech firms to start taking concrete steps today—before this issue becomes a crisis.

Photograph of Payal Jain, MD at JCURV

Change business culture to address AI bias

For the last 19 years, Payal has worked in banking and finance, holding various executive roles at FTSE 100 banks running both analytics and commercial teams. After leaving banking last year, she became a Managing Director at JCURV, a consultancy that helps organisations with their data strategy. Off the back of an impressive career, she’s pursuing her passion of getting more women into analytics, data, and AI through her role as Chair of Women In Data (WID), a non-profit which aims to encourage and promote women working in data science.

“Lots of organisations regularly approach [WID] saying they want to recruit more women in data, but that they simply don’t exist. We know this isn’t true—we’ve got 20,000 women in our network,” Payal says. “I went into one particular company that said this and asked them to show me their recruitment process. They were so proud, saying they’d eliminated bias in their recruitment process by using AI to filter CVs.”


Although the company in question believed it had eliminated recruitment bias thanks to AI, its team remained male-dominated. What the machine was actually doing, Payal explains, was examining the performance of people who had previously joined the team, most of whom were men. It therefore inferred that men were more likely to get the job than women, because the historical data appeared to indicate that men were more capable in ML and AI.

“If a human had reached that conclusion, it just wouldn’t be
right—but because a machine did it, we think it’s eliminated bias. In fact, what
the machine learned was the human biases that were already present in the
process,” says Payal. “I think we need to rip up the rulebook on how we’ve been
recruiting.”
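
As an illustration of the failure mode Payal describes, here is a minimal, hypothetical sketch in Python. The data is synthetic and the feature names are invented; it is not a reconstruction of the company’s actual CV-screening tool. A model trained on past hiring decisions that favoured men ends up scoring a male candidate above an equally skilled female candidate.

```python
# Hypothetical sketch: a CV-screening model trained on biased historical
# hiring outcomes reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

skill = rng.normal(size=n)            # genuinely job-relevant signal
is_male = rng.integers(0, 2, size=n)  # 1 = male, 0 = female

# Historical label: past hiring favoured men, so "hired" depends on gender
# as well as skill. This is the human bias already present in the process.
hired = ((skill + 1.0 * is_male + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in gender.
candidates = np.array([[1.0, 1], [1.0, 0]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(hire | male candidate)   = {probs[0]:.2f}")
print(f"P(hire | female candidate) = {probs[1]:.2f}")
# The model scores the male candidate higher despite identical skill: it has
# learned the human biases that were already present in the process.
```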

Diversity is critical to meeting the needs of your customer base

The AI skills crisis is by now well-documented, and the wheels are already rolling on large-scale initiatives to quickly cultivate AI talent. At the industry level, a slew of online courses (MOOCs) free to the public have been released with an eye to retraining and reskilling. Talk of teaching kids AI in schools is growing, while contests in which young girls can compete to develop AI for good are already underway. These efforts are buttressed by community organisations in the industry like Black in AI, Queer in AI, and Payal’s own Women in Data.

Slowly, we’re witnessing a cultural shift, and over time,
these efforts should hopefully go some way in addressing the issues in the
industry. But what can businesses do today?

“Of course, we need to ensure we’re not going to run out of talent in the future, but there are some things we can start acting on,” Payal argues. “For instance, look at your own teams that are working with AI and ML today. What is the profile of that team? This issue is much broader than gender—it crosses ethnicity, class, education, and background. You need people from all levels, because having diverse teams helps solve the key challenge, which is meeting the needs of your customer base. That customer base is much more representative of the population as a whole—and AI needs to work for them.”


“Fundamentally, this needs to start receiving recognition at the leadership level,” Payal says. “I love the recent launch of the government’s Centre for Data Ethics and Innovation, which has raised the profile of ethics and diversity in AI in terms of this being a key responsibility of working with data. Could we start seeing Chief Data Ethics Officers in the future? I think it’s an interesting idea.”

“Secondly, there’s ensuring we’ve got diverse teams in place. The two can run in parallel, but eventually they need to come together. For any CDO out there, you’ve got to challenge yourself and understand that the more diversity you have on your data teams, the more balance there’ll be in its usage and outputs.”

Diversity once hired: a supportive environment?

Addressing this crisis may be possible, but it raises a new problem: supporting people from diverse backgrounds through the rest of their careers. Today, explains Payal, the average AI data scientist stays in their role for roughly 12 months. Three years ago, they’d stay an average of 4-5 years.

“That’s a massive change—there’s a proper war for talent,” she argues. “Each time you go and recruit, it probably costs 7-9k. How many organisations are honestly putting that money back into individual development and their career path? How many are investing in mentors and coaches, or skills training to help people build a career in the organisation?”


Beyond individual career development, there’s also the
question of culture. What happens once someone is hired into an environment in
which they may realistically be the only woman or the only person from a minority
background in the room? What companies are often lacking, argues Payal, is an
environment in which these people are allowed to both succeed and fail—but where
their talent is continually nurtured.

“We need to look at how we can educate other colleagues to
make things less difficult, because it’s hard enough as it is for individuals
who may end up feeling and looking really different. I remember walking into a
board meeting where it was 49 men and then me,” Payal says. “Even if you’re
confident, credible, and know what you’re talking about, you still think ‘oh my
gosh, what am I doing here?’. In those situations, you think, ‘I’ve just got to
go for it because I know what I’m doing, I’m good at what I do.’ You’ve got to
crack on, but there is still a role for education to make things easier.”

Addressing the issues in the next 12 months

With 20% of business executives saying that their companies will deploy AI across their whole business in 2019, adoption is growing every day—and so is the risk of biased AI decision-making. This needs to be addressed in the next 12 months, or companies will risk losing public, customer, and staff trust in AI.

“What we’ve seen in the last 18 years is a four-fold increase in the number of men entering the industry, but only a 68% increase in the number of women,” explains Payal. “That’s sad, because it means far more men are entering than women. It’s important to start at the grassroots and get more girls interested in STEM subjects, because AI and ML are the future of this world. It’s only those organisations that get their recruitment process right that will enable the humanisation of ML outputs. If we don’t get more girls coming into the industry, we’re potentially going to have to live with the negative outcomes.”

Based in London, Ciarán Daly is the Editor-in-Chief of AIBusiness.com, covering the critical issues, debates, and real-world use cases surrounding artificial intelligence – for executives, technologists, and enthusiasts alike. Reach him via email.
