December 7, 2022
Working in tandem with internal teams and scrutinizing practices are among the ways to mitigate biases in data sets, speakers at the AI Summit New York said.
Speaking at the AI Summit New York, Estée Lauder’s Sowmya Gottipati stressed the importance of working alongside legal and privacy teams to avoid potential issues.
Drawing on her own experience, Gottipati said that launching a fragrance-focused application in China required working with 12 lawyers.
Gottipati acknowledged that the extra layers of scrutiny can frustrate development teams by delaying deployment, but emphasized the importance of remaining vigilant.
She stressed the need to apply the same level of vigilance when approaching a project internally as well as when working with third parties.
“You really have to scrutinize,” said Gottipati, adding that brands should be asking questions about what they are doing with the data they collect.
The VP for global supply chain technologies spoke about the need to mitigate issues around bias, especially for her sector of cosmetics.
“We do not want to be stereotypical about pigmentation or associate the wrong ethnicity with the wrong product,” she said.
“People try to do the right thing but unconsciously, they may not understand what kind of bias is introduced into the dataset. It's extremely important to consider the disparate data sets.”
Speaking on the panel alongside Gottipati was Matthew Quint, director of the Center on Global Brand Leadership at Columbia Business School.
He agreed with Gottipati’s point on data bias, saying that if brands do not build on inclusive data, their models will fail.
Quint said brands are increasingly building ethical considerations into the applications and tools they develop, but warned that it is still early days as businesses continue to experiment.