EY, Unilever, Ocado Share Strategies for Ethical AI Deployments
The companies are enhancing governance and fostering cross-functional collaboration to ensure responsible AI deployments following the EU AI Act
At the AI Summit London, experts from EY, Unilever and Ocado discussed the implications of the recently passed EU AI Act, which mandates safe and responsible AI deployments.
Sofia Ihsan, EY’s responsible AI consulting leader for the U.K. and Ireland, urged businesses to maintain an inventory of their AI use cases from across the organization to ensure compliance and ethical implementation.
“You can’t govern what you can’t know about,” Ihsan said.
Ihsan suggested AI teams communicate the foundational elements to other parts of the business: explaining what AI is and working out how the organization defines it.
The EY consultant said managing AI risk also needs to be brought into the fold alongside other areas like third-party relationships and cybersecurity: “AI doesn’t exist in a vacuum.”
Also on the panel was Myrna Macgregor, a principal data strategist and lead for responsible AI and robotics at Ocado Technology.
She said that businesses wanting to deploy AI responsibly should build more support and “muscle” around monitoring risks.
“What I found really works is issuing a call to action, and volunteers from across the business get involved in [the responsible AI] process,” Macgregor said. “Then you find lots of interesting people with ideas coming out of the woodwork that you might not have identified in any of your stakeholder mapping.”
Macgregor explained that her team decided to meet the requirements of what she described as the highest regulatory level for responsible AI, the EU AI Act, which places AI systems into categories based on their risk levels.
She said her team is working to understand the Act’s compliance implications for its technology, noting that the law’s language is “quite general.”
Ocado Technology’s use of generative AI could require the company to take on certain transparency disclosures, Macgregor said, adding that some use cases could potentially be high risk and that her team is working to understand the impact such categorizations might have.
Monika Robeva, Unilever’s AI and privacy governance manager, also spoke on the panel.
She described responsible AI considerations as the “ultimate cross-business exercise,” requiring input from teams including legal, privacy, employment law, cybersecurity and procurement, as well as the technology teams.
Robeva suggested that sustainability teams are a potentially overlooked business group that could be involved, saying, “It really is quite a varied stakeholder map. There’s a bunch of people that really need to get involved in this and feel like they have a stake in it.”
Read more about: AI Summit London 2024