DEI in AI: With Great Potential Comes Great Responsibility

An interview with Helen Kelisky, managing director of Google Cloud UK and Ireland

Deborah Yao, Editor

February 16, 2024


One of the most talked-about risks of AI deployment is its potential to make biased predictions or generate discriminatory outcomes. As such, diversity within AI teams is a matter of social responsibility, said Helen Kelisky, managing director of Google Cloud UK and Ireland. And as excited as people and companies are about this technology, "with great potential comes great responsibility," she said.

The following is a transcript of our full interview with Kelisky on Google's DEI efforts, which was conducted through email.

Why is it important for AI companies to embrace diversity in their work culture?

It is not only important, it is essential. A work culture that embraces diversity is the only way to build AI that is both responsible and ethical. By underpinning AI development with principles of diversity, equity and inclusion, AI companies are able to create more inclusive AI tools that better represent the diverse community that will be using their technology.

Essentially, the more perspectives that AI companies are able to integrate into their teams, the stronger their products will be. Investing in the growth of a diverse workforce will therefore enable companies to have greater impact on a much wider scale.

Our work at Google Cloud has always been focused on technology that empowers rather than excludes, so it is important that DEI principles continue to play a pivotal role in our cloud company culture.


What are the risks of not having a diverse workspace when it comes to AI development?

AI tools can only be as ‘good’ as the data they are trained on. This means that everything from AI’s facial recognition features to its speech and language capabilities is limited by the demographic makeup of the technical team. If the data used to train AI models lends itself to biases based on race, gender, ability, etc., then these biases will be inherited by the AI model and further reinforced by use. The more limited the team is, the more limited the technology will be.

Therefore, diversity within AI teams should be prioritized as a matter of social responsibility. The risks and implications of AI development that neglects diversity are not only limiting, they are dangerous, as they perpetuate issues of discrimination and injustice. In the U.S., for example, reports have shown that Black people experience twice as many errors as white people when using automated speech recognition (ASR) technology. Similarly, AI image recognition tools have been found to incorrectly label Asian people as blinking in photographs.
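As a rough illustration of how the kind of disparity Kelisky describes is typically surfaced (this is a minimal sketch, not a description of Google's evaluation methodology), the Python snippet below compares ASR word error rates across demographic groups on a labelled test set. The group labels, transcripts and helper functions are hypothetical placeholders.

# Minimal sketch: compare word error rate (WER) across demographic groups.
# Group labels and transcripts below are illustrative placeholders only.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between reference and ASR output,
    normalized by reference length (standard WER definition)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples):
    """samples: iterable of (group, reference_transcript, asr_output)."""
    per_group = defaultdict(list)
    for group, reference, hypothesis in samples:
        per_group[group].append(word_error_rate(reference, hypothesis))
    return {group: sum(rates) / len(rates) for group, rates in per_group.items()}

# Illustrative only: real evaluations use large, carefully sourced test sets.
samples = [
    ("group_a", "turn the lights off please", "turn the lights off please"),
    ("group_b", "turn the lights off please", "turn the light of peas"),
]
print(wer_by_group(samples))  # a large gap between groups flags bias worth investigating

A persistent gap in error rates between groups in an evaluation like this is the concrete signal behind the ASR statistic cited above, and the reason test data needs to reflect the full diversity of the people who will use the technology.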


There is no doubt that AI will continue to transform industries. It will redefine how we work, create and interact with each other. But with great potential comes great responsibility. The more we rely on this technology to drive business and deliver information, the more that DEI becomes an urgent priority. This is why the human element of AI will always be crucial. Responsible artificial intelligence only happens alongside human intelligence.

How does Google Cloud UKI approach DEI practically in day-to-day operations and what tangible impact has this had on the business?

At Google Cloud, we understand that building AI solutions that are equally bold and responsible starts with a strong and diverse work culture. At the core of this culture is a willingness to have disruptive and reflective conversations. When it comes to our recruitment process, we are constantly reevaluating interview techniques and expanding our flexible working options.

These proactive changes help to attract fresh talent to the Google Cloud team and create more equal opportunities for underrepresented groups. What's more, by creating these opportunities, we are opening up our team to more diverse perspectives, and becoming a stronger unit as a result.

What role does training have in improving DEI in AI companies? Can you talk about any training programs Google has on DEI?

Training and development programs are an essential element of DEI as they encourage us to challenge our own bias and limitations. In such a fast-paced industry, we must never stop learning about ways we can better understand our customers, and each other.

Within our team, we have a growing portfolio of education and development programs, including training in subconscious biases. After all, DEI is not just about opening up opportunities to external groups, but also about investing in the growth and development of existing employees. It is about cultivating a work environment that continues to challenge and inspire employees long into their careers.

To drive continued learning at Google Cloud, even in the most senior positions, we established the #ItsUpToMe campaign, which encourages managers to take a more active role in diversity and inclusion in the workplace. Similarly, our AI Principles Ethics Fellowship saw 16 Google executives across 10 product areas engage in a tailored leadership program on responsible AI development, designed to help them become better advocates for diversity and inclusion in their own communities.

As a female leader in tech, training and development programs also form a key part of my role. Being actively involved in initiatives such as Women in Telecoms and Technology (WiTT) has helped me to define the type of leader I want to be. Supported by this community, I am able to help empower the voices of other women in tech, and advocate for more seats at the executive table.

The technology industry definitely has more work to do in this department, but having worked in this industry for more than 30 years, I am witnessing more women excelling in tech now than ever before. Championing equality and diversity in the tech industry is a part of my job that I am incredibly proud of, on both a personal level, and a professional one.

How does Google Cloud engage with outside diverse communities and how does this impact your DEI efforts?

At a cultural level, it is vital that AI companies remain open to novel ideas, and approach AI with a balanced sense of responsibility and curiosity. One way that we have taken on this responsibility at Google Cloud is through our commitment to racial equity. We have already achieved our 2025 goal of a 30% increase in the number of Black, Latino and Native Americans in leadership positions at Google, and we are continually gearing up for further growth.

Google is also proud to have the highest-ever representation of women in tech, non-tech and leadership roles globally and in the U.S. In addition to these milestones and commitments, we are always learning from our customers, and making sure that we are investing in the two-way exchange of ideas and inspiration.

Looking ahead, how can companies integrate DEI principles into their AI development strategy?

DEI principles play an essential role in the safe and responsible deployment of AI technology. In order for AI to reach its true potential, it is important that no one is excluded from the conversation. Therefore, the responsibility lies with companies to radically rethink their approach to DEI strategy.

Leaders of the industry should focus their attention on opening up the recruitment process, challenging internal dialogues, and championing diverse perspectives within existing teams. Essentially, a diverse work culture is the key to developing AI that is engineered for ethical deployment.

Whilst 2023 represented an exploratory phase of AI technology, 2024 will be focused on refining AI to be more inclusive and transparent. Building technology that inspires positive progress is the turning point that will drive the AI revolution forward.

About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, the Wharton School and the Associated Press.
