It’s no secret that the more diverse and inclusive the workforce, the richer the business outcomes.
A 2021 McKinsey report highlighted that companies in which women were “well represented” in senior roles were up to 50 percent more profitable than those in which diversity was not a consideration.
With such a clear link between Diversity, Equity and Inclusion (DE&I) and business success, it’s no wonder that businesses are turning to technology to help drive the kind of change we need to create workplaces where everyone is welcome and can thrive.
Just as a hammer or a car allows people to do things they couldn’t do bare-handed, so too does artificial intelligence technology.
AI allows humans to use data in ways they couldn't alone. However, just as with the hammer or the car, the thumb or the driver is at risk without proper safeguards – particularly when it comes to driving diversity with AI.
AI doesn't work alone, though: humans are a key part of the process of building and training successful models.
When machine learning and deep learning algorithms are trained on historical data, biases in that data can produce models that encode past human decisions and unintentionally discriminate.
Even when many fields that directly relate to potential biases are removed (such as gender or race), AI can still replicate the historic challenges through inferred information.
Amazon, for example, canceled an AI recruiting algorithm it was testing after finding it was biased against women: the model had been trained on legacy CVs, most of which came from men.
Used correctly, AI has transformative potential as an enabler of DE&I by providing the data-driven insights needed, but as with any new undertaking, if the core foundations are unstable, the end result will be unreliable.
The human element – and by extension the biased data these AI projects can be fed – can hinder real change by adding unintentional human bias back into the mix.
The Diversity Paradox
Before we dig into the challenges with AI technologies in terms of possible bias, it's important to understand that we, as humans, are all biased in some way, shaped by individual circumstances, upbringing, and emotional reactions to different stimuli.
It’s because we’re human that we have those biases.
Using data to acknowledge that these inherent biases exist, and working to mitigate them, is the first step in any truly successful diversity initiative.
Equally, it’s not the AI technology itself that discriminates. Unlike human beings, machines lack the natural biases that can encourage or inhibit DE&I.
The problem is the historical data they have been fed as part of the machine education process.
The risk of unconscious bias is most pressing when we consider the datasets and algorithmic features selected by the very people designing and educating these algorithms.
Humans can decide what to believe; AI cannot.
As a result, in the training and programming stage, AI is at the mercy of its creator and their own inherent views, experiences, and personal filters.
It’s not that alternative points of view aren’t readily available – far from it. People tend to allow an algorithm to pick sources of information which resonate with what they already believe.
For example, in the HR field, this could mean that AI recognizes that it was typically men of a certain age who held specific positions within a company.
If a new manager is sought, we have an unintentional data bias in which that same group of men are seen as – statistically – far more likely to succeed in these roles.
Other qualified candidates can then find themselves filtered out of the application process and removed from the running entirely.
Without the right safeguards in place, AI could keep that company in the dark ages of diversity.
Even if we filter gender and age out of the raw data fed to the algorithm, the model may still find proxies: gaps in employment, or other inferred signals that reveal age or gender indirectly, can perpetuate the bias that was present in the raw data.
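The proxy problem described above can be seen in a minimal, hypothetical sketch. The data, the "employment_gap" feature, and the screening rule below are all invented for illustration; the point is only that a screen which never sees gender can still reproduce a gender skew when it selects on a feature correlated with gender.

```python
# Toy illustration with synthetic data: even after dropping the
# protected attribute, a correlated proxy lets bias back in.
import random

random.seed(0)

# Synthetic historical CVs. Gender is never shown to the screen,
# but in this invented history, career gaps correlate with gender.
cvs = []
for _ in range(1000):
    gender = random.choice(["F", "M"])
    gap = random.random() < (0.6 if gender == "F" else 0.1)
    cvs.append({"gender": gender, "employment_gap": gap})

def screen(cv):
    """A naive filter that only looks at the proxy feature."""
    return not cv["employment_gap"]  # reject CVs with a gap

selected = [cv for cv in cvs if screen(cv)]

def rate(pool, g):
    """Share of a pool belonging to group g."""
    return sum(cv["gender"] == g for cv in pool) / len(pool)

print(f"Women in applicant pool: {rate(cvs, 'F'):.0%}")
print(f"Women among selected:    {rate(selected, 'F'):.0%}")
```

Running this shows women making up roughly half the applicant pool but a markedly smaller share of those selected, even though the filter never touched the gender field.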
The key thing with AI is that it doesn’t make judgments – it can’t. It does what you ask it to. Nothing more, nothing less.
The only way to detect and stop unintentional bias is to ensure diversity of thought – the ability to see problems from different perspectives – in the teams building and training the AI.
Big data needs context and human intelligence when applied to recruitment. There are often variables at play that only humans can interpret and understand.
The difference between success and failure rests on addressing and removing these conscious and unconscious biases in the data, and on ensuring that the approach leads to fair and just outcomes.
To be successful, businesses need to ensure that more people are AI literate and can contribute to the process.
The more a diverse range of employees are able to “speak data” and approach problem solving with unique points of view, the greater the chance to build fairer, more equitable AI applications for the future.
The journey starts with your people
AI driven by data science can have an immense impact – finding innovative solutions to old problems by analyzing data for transformative patterns that can help catapult a business to success.
At best, and used correctly, AI can help mitigate bias, diversify talent pools, and benchmark diversity.
To reach that goal, however, these programs need not only the right environment but also trustworthy data.
A bias-proof data strategy requires investing in the right technologies and people to lay the foundations. More importantly, it requires a diverse group of people at the center of the strategy, so that AI is developed in genuine partnership with humans.
Diversity, equity, inclusion and belonging are key to helping businesses thrive in increasingly data-rich environments, and data literacy is one of the most powerful tools for developing the next generation of data science and AI talent.
Data is used to understand and influence our businesses, consumers, and society; it is imperative to have a pool of data-literate talent that reflects that society.
It is not enough to merely have diverse viewpoints – businesses need diversity in their decision-making, too.
Only by making the data and decisions understandable to a broad audience – diverse teams that incorporate AI experts, data scientists, and line-of-business analysts – will you be able to develop a more effective approach to AI development.
A business developed on a foundation of multiple, diverse viewpoints is more prepared to thrive in today’s hyperglobal environment.
Ultimately, businesses need to develop a culture that not only allows for the unique differences between people, but celebrates them and uses those differences for more effective, impactful decision-making.
Through these strategies, businesses can effectively mitigate the risk of bias in AI while pursuing valuable new business strategies.
Alan Jacobson is the chief data and analytics officer at Alteryx. In his role, Alan leads the company’s data science practice and is responsible for data management and governance, product and internal data and use of the Alteryx Platform to drive continued growth.