Female Leadership in the Age of AI

Organizations and teams building and training AI systems need to reflect a diverse society

Sara Portell, Vice President of User Experience at Unit4

August 8, 2024

A recruiting tool that dismisses women applicants for technical jobs; a re-offending prediction system that is far more likely to identify Black defendants as a potential risk than their white counterparts; facial recognition technology that has a 99% accuracy rate with white male faces, but only 65% for the faces of Black women; and a hiring system that automatically throws out the CVs of women over 55 and men over 60. All these are real-world examples of the bias built into certain AI systems.

The biases are present in these AI applications not because the software itself is inherently sexist, racist or ageist. AI systems learn from data that often contains historical and societal biases, which they can inadvertently replicate. When algorithms are trained on non-representative datasets, how can they reflect the needs of all populations and groups?

Consequences of Biased AI

Biased AI systems create a range of unwanted and unintended problems. The bluntest consequence is the prospect of hefty fines: in the case of the ageist hiring system outlined above, the company ended up paying $365,000 to settle a lawsuit.

If organizations base important decisions around hiring, promotions and performance evaluations on AI systems that are unfair or inaccurate, they risk perpetuating inequalities and discrimination. Conversely, unbiased AI systems support better decision-making because they rely on relevant, accurate data that reflects diverse perspectives, ensures adequate representation and addresses existing biases and errors.


We live in a globalized economy. Companies have employees situated all around the world, creating very diverse workforces. If AI systems being used by global businesses have bias built in, they won’t be able to represent all those employees’ diverse needs, perspectives and capabilities. 

Trust is vital here. How can you trust and adopt an AI system if you perceive it as unfair? The future of work will be humans working alongside AI, and that will be difficult to achieve if the workforce holds a negative attitude towards the technology because of ongoing examples of bias, or because it doesn't work transparently.

Aside from potential fines and distrust among users, regulations are arriving that require organizations to apply AI fairly. The EU AI Act and GDPR include rules on AI safety, transparency, fairness and accuracy that companies must follow.

Beyond these legal requirements, organizations have a moral responsibility to prevent AI systems from causing harm, which is exactly what happens when those systems discriminate against any group.


Diverse Development Teams Lead to Fair AI

To ensure diversity is at the core of AI systems, the organizations and teams building and training the systems need to reflect our diverse society. When homogeneous groups work on an application, they are likely to miss the specific needs of a diverse target audience, leading to products that do not effectively serve all the intended user groups. This is sadly the case today, where most AI developers are men.

Women represent 50% of the population, so it is worrying that they are so poorly represented in the design and development of such important technology. AI is going to help us transform and reimagine aspects of society and shape future organizations. With women making up only 22% of AI workers, we are missing an entire perspective: different experiences, different needs and different views.

This is where it is critical to have more women in AI leadership roles. They can bring unique perspectives and experiences and play a crucial role in advocating for practices that prioritize social responsibility and data privacy.

With women in key roles, it is more likely teams working on AI technology will recognize and correct any biases during the design and development phases, thanks to their different perspectives. This will result in datasets and testing protocols that aim for inclusivity and fairness.

AI technologies must perform equally well across all demographics within the target audience of the intended application to prevent the perpetuation of exclusion and discrimination. Women leaders are at the forefront of ensuring this happens.

Many businesses are aware of this. According to recent research, 73% of EMEA business leaders believe that increased female leadership in the sector is important for mitigating gender bias in AI, while 74% view it as important for ensuring the economic benefits of AI are equally felt in society.

The underrepresentation of women in AI risks limiting the scope of problems that AI is developed to solve. With more women in leadership, AI development can be steered toward solving a broader range of societal issues, reflecting the diversity of challenges different populations face.

Increasing women's participation in AI leadership is not just about preventing bias, but also about embracing the full potential of innovation. Diversity of thought leads to more innovative and effective AI solutions that can meet a more comprehensive array of societal needs and challenges.

Inclusive AI

Having women at all points in the AI design and development process, including decision-making roles, helps ensure inclusive, fair systems. Take a company using machine learning for its hiring process. The system is built from historical data reflecting a male-dominated workforce. It discards women's CVs because of gendered language differences, favoring the assertive language typically used by men over the collaborative language more often used by women, and it writes job descriptions with male-oriented terms like “aggressive” or “dominant.” The result is a corporate hiring system that perpetuates gender bias.
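The dynamic described above can be sketched in a few lines of Python. This is a deliberately toy model, not any real vendor's system: the CV snippets, word lists and scoring rule are all invented for illustration. The point is simply that a model "trained" on historical outcomes that favored male-coded wording will reproduce that preference.

```python
# Toy illustration (all data invented): a scoring model trained on biased
# historical hiring outcomes learns to penalize female-coded language.
from collections import Counter

# Historical outcomes: CVs using assertive (male-coded) wording were hired,
# CVs using collaborative (female-coded) wording were not -- a pattern that
# reflects who was historically hired, not candidate merit.
history = [
    ("led aggressively drove results", 1),
    ("dominant competitive leader", 1),
    ("collaborated supported mentored team", 0),
    ("facilitated inclusive teamwork", 0),
    ("drove aggressive growth targets", 1),
    ("nurtured collaborative culture", 0),
]

# "Train": count how often each word co-occurred with a hire decision.
hired, rejected = Counter(), Counter()
for text, label in history:
    for word in text.split():
        (hired if label else rejected)[word] += 1

def score(cv: str) -> float:
    """Higher = more hire-like under the biased historical data."""
    words = cv.split()
    pos = sum(hired[w] for w in words)
    neg = sum(rejected[w] for w in words)
    return (pos - neg) / max(len(words), 1)

# Two equally qualified candidates, phrased differently:
assertive_cv = "drove aggressive competitive results"
collaborative_cv = "collaborated mentored inclusive team"
print(score(assertive_cv) > score(collaborative_cv))  # the historical bias carries through
```

Nothing in the model is "sexist" in itself; the skew lives entirely in the training data, which is why diverse teams auditing that data matter.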

Similarly, an AI-based performance management system might favor patterns associated with male performance, giving lower scores to women who balance caregiving responsibilities against availability for overtime or high-visibility projects.

Women working on applications like hiring and performance management systems will be able to identify and mitigate this type of bias, ensuring inclusive data collection and gender-neutral language.

The more diverse the group building an AI system, the broader its applicability, because it will consider specific needs from a multitude of perspectives. Leaving women out of the AI development process results in applications that target specific problems and areas for a specific group but are inadequate for the wider population.

Having diversity on development teams helps to ensure that various perspectives are considered in the development and testing process, leading to more inclusive and comprehensive protocols. The outcome is AI applications that meet the unique requirements, perspectives and expectations of a company’s entire user base.

Success here is not just appointing one woman to lead a team of men from the same demographic. To remove bias from AI systems, development teams need to include a wide range of diverse talent from different social and ethnic groups. But having more women in AI leadership roles will have a trickle-down effect, encouraging more diverse talent to join and stay in the industry. The result will be AI technology that is truly inclusive, meeting the needs of all current and potential users, and, hopefully, AI that discriminates against certain groups will become a thing of the past.

About the Author

Sara Portell

Vice President of User Experience at Unit4

Sara Portell is vice president of user experience at Unit4, a European-headquartered SaaS provider. She is a key member of the company's AI Steering Committee, which oversees how AI is integrated into the company's internal processes as well as the software delivered to customers.

Sara has an academic background in ethnography, which has informed her approach to improving the way users interact with technology. That background has become particularly important as she works to ensure AI reflects the needs of all Unit4's employees and customers, and it is why she argues that female leadership is critical to deploying AI effectively: she understands how important diversity is to ensuring that AI models are chosen and implemented ethically and without bias.
