Addressing biases in AI: An ongoing process

Diversity within AI teams should be the first major step towards smashing algorithmic biases

November 8, 2021

4 Min Read


In 2015, Jacky Alcine, a web developer in Brooklyn, noticed that Google Photos had introduced a new automatic tagging feature: all his photos came up with tags like ‘bikes’ or ‘planes’ when those objects were present in the images.

On coming across photographs of himself and his friend, Alcine was shocked to see that Google Photos had tagged them "Gorillas".

Alcine and his friend are both African-American, and Google's software had labeled the two of them with one of the most offensive racial epithets in existence.

This mislabeling of people based on their race is not something that only Google is guilty of.

Joy Buolamwini, a researcher and coder, faced discrimination straight from a machine multiple times.

When she was an undergraduate at the Georgia Institute of Technology, facial recognition systems would work on her white classmates but would fail to recognize her face.

She dismissed it as a glitch and was sure it would be fixed soon.

However, she encountered the same bias again a few years later, this time at MIT's Media Lab.

The facial analysis software that Buolamwini was using for her project once again failed to detect her face, while detecting the faces of her lighter-skinned colleagues.

Buolamwini had to complete her research wearing a white mask so that the software would detect her face.

Buolamwini went on to complete her MIT thesis, 'Gender Shades', in which she examined the commercial facial recognition systems of IBM, Microsoft, and Face++ and documented the biases they encode.

These biases are a result of the kind of datasets that most machine learning models are trained on.

When it comes down to it, AI is only as intelligent or aware as the people training and creating the technology.

Many open source datasets consist heavily of Caucasian, male faces; algorithms developed on top of these datasets inherit that imbalance, producing results for underrepresented groups that are inconsistent, wrong, and often offensive.

The models can accurately detect white faces but fail when they encounter the faces of people from other races.

This is an indirect result of workplaces dominated by white, male engineers who fail to see the problem with such datasets.
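One concrete way to surface this kind of skew is to audit a dataset's demographic composition before any training happens. Below is a minimal sketch in Python; the labels.csv metadata file and its skin_tone and gender columns are hypothetical placeholders, since most open source face datasets would first need such annotations added.

```python
import pandas as pd

# Minimal dataset audit: how is each demographic group represented?
# 'labels.csv', 'skin_tone' and 'gender' are hypothetical placeholders
# for one row of annotated metadata per image.
df = pd.read_csv("labels.csv")

counts = df.groupby(["skin_tone", "gender"]).size()
shares = counts / len(df)
print(shares.sort_values(ascending=False))

# If one group accounts for the bulk of the rows, a model trained on
# this data will see far more examples of that group than of any other.
```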

As a first step to mitigating these biases, the datasets that machine learning models are trained on need to be more balanced and inclusive when it comes to representing different races.

If machine learning models are trained on datasets that consist of a significantly higher percentage of white faces, there is bound to be a heavy bias when these models are then deployed on unseen images.

Not only do such models fail to detect faces across races; when estimating the age of people in images, models trained on these datasets perform fairly well on white faces but fail badly on all others, simply because they were never trained on appropriate data in the first place.
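As a rough illustration of that first step, one simple (if blunt) mitigation is to resample the training data so that every group contributes equally. The sketch below downsamples each group to the size of the smallest one, reusing the hypothetical metadata from the audit above; in practice, collecting more data for underrepresented groups is preferable to discarding data.

```python
import pandas as pd

df = pd.read_csv("labels.csv")  # hypothetical per-image metadata

# Downsample every skin-tone group to the size of the smallest group,
# so each group is equally represented in the training set.
target = df["skin_tone"].value_counts().min()
balanced = (
    df.groupby("skin_tone", group_keys=False)
      .apply(lambda g: g.sample(n=target, random_state=42))
)

print(balanced["skin_tone"].value_counts())  # now equal across groups
```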

While building models that are entirely free of bias is a challenging task, it is important, moving forward, to consistently revisit and reevaluate models to ensure they are as accurate as possible.

This is an ongoing process, with the necessary inputs and feedback coming not only from the data scientists and AI experts on the team, but also from team members in other departments within an organization, such as marketing.
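In practice, that reevaluation can start with something as simple as reporting a model's accuracy per demographic group instead of as a single aggregate number. The sketch below assumes an annotated test set and a detect_face predictor; both names are illustrative assumptions, not any specific library's API.

```python
from collections import defaultdict

def per_group_detection_rate(samples, detect_face):
    """Compute the face detection rate separately for each group.

    `samples` yields (image, group_label) pairs from an annotated
    test set; `detect_face` is the model under evaluation. Both are
    assumptions made for this sketch.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        if detect_face(image):
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# A large gap between groups, e.g. {'lighter': 0.99, 'darker': 0.65},
# is the signal to go back and fix the training data.
```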

A company's core values are particularly important in matters such as this; values that prioritize inclusivity in every aspect of the workplace are imperative.

Creating a diverse team at any company is the first step; at tech companies, the second significant step is to ensure that the recommendations and suggestions made by each team member are noted and referred to while creating new machine learning models.

By taking different opinions and perspectives into account, the aim of creating technology that is sensitive to any possible mislabeling of visual data becomes far more achievable.

Trisha Mandal manages all things related to content and communications at Mobius Labs. Equipped with an MA degree from Humboldt Universität, Berlin, Trisha works the fine line of presenting technical content with a creative flair. She is also the proud owner of Sabina, the black ukulele, and her old film camera (still waiting to be named).
