WAICF ’23: ChatGPT Needs ‘Bias Bounties’
Eminent computer scientist Michael Kearns takes an idea from the OS world
At a Glance
- UPenn’s Michael Kearns says ChatGPT needs ‘bias bounties,’ which rewards developers for finding bias.
- Ikea’s global vice president for responsible AI warns against putting company information into ChatGPT.
- ChatGPT was the talk of the conference.
ChatGPT was arguably the biggest talking point at the World Artificial Intelligence Cannes Festival (WAICF). But some experts warned about potential negative impacts of OpenAI’s popular conversational AI chatbot.
Michael Kearns, a distinguished computer scientist and professor in the Computer and Information Science department at the University of Pennsylvania, warned that achieving fairness in unsupervised and complex machine learning models, like ChatGPT, could prove to be a problem and risks perpetuating bias in the text it generates.
Kearns proposed a concept to potentially alleviate this issue: bias bounties.
“In simple terms, it is the idea of crowdsourcing the identification of bias so that you get the widest talent pool and the widest perspectives on the bias,” he said.
That means “you would open up your trade model via an API to communities, and reward them for finding biases in your model, which might be very rich or subtle biases. They may not even correspond to simple demographic groups.”
Kearns explained that the idea of bias bounties comes from the operating system world, where developers would begin bug bounties and invite communities to try and find issues and circumventions to their OS.
Kearns, who is also an Amazon Scholar, was joined in reflecting on ChatGPT’s metric rise by Diya Wynn, senior practice manager for responsible AI, emerging technologies and intelligent platforms at AWS.
Wynn said ChatGPT acts as a reminder of the importance of responsibility at the beginning of creating a product and during design.
Right: Diya Wynn, AWS senior practice manager for responsible AI, emerging technologies and intelligent platforms
"The ChatGPT conversation is making the case for those of us that are concerned about fairness and responsibility and trustworthy AI. This is a great example of why we need to establish practices in the beginning and make that core to how we build, design and deploy and use artificial intelligence, machine learning and new emerging technologies as well.”
In another talk on how to build trustworthiness in AI, Nozha Boujemaa, Ikea’s global vice president for digital ethics and responsible AI, warned of the possible risks that could arise from using tools like ChatGPT daily.
Boujemaa warned against putting company information or sensitive data into chatbots like ChatGPT, as well as personal data.
She said there are opportunities to use model-based tools to gain insights and Ikea was investing in large language models long before ChatGPT was announced to “better understand latent customer needs.”
But since Ikea is developing its own solutions, Boujemaa said the company can control the tech’s boundaries, something not currently possible if it deploying an existing solution like a ChatGPT.
“ChatGPT is its own full universe. There is no control over what's going on and the number of data of users that will limit the use of such models in a given business context,” she said.
About the Author
You May Also Like