EY, Unilever, Ocado Share Strategies for Ethical AI Deployments

The companies are enhancing governance and fostering cross-functional collaboration to ensure responsible AI deployments following the EU AI Act

Ben Wodecki, Jr. Editor

June 14, 2024

2 Min Read

At the AI Summit London, experts from EY, Unilever and Ocado discussed the implications of the recently passed EU AI Act, which mandates safe and responsible AI deployments.

Sofia Ihsan, EY’s responsible AI consulting leader for the U.K. and Ireland, emphasized that businesses should maintain an inventory of their AI use cases across the organization to ensure compliance and ethical implementation.

“You can’t govern what you can’t know about,” Ihsan said.

Ihsan suggested AI teams communicate foundational concepts to the rest of the business to help colleagues understand what AI is and how the organization defines it.

The EY consultant added that managing AI risk also needs to be folded into adjacent areas such as third-party relationships and cybersecurity: “AI doesn’t exist in a vacuum.”

Also on the panel was Myrna Macgregor, a principal data strategist and lead for responsible AI and robotics at Ocado Technology.

She said that businesses wanting to deploy AI responsibly should employ more support and “muscle” around monitoring risks.

“What I found really works is issuing a call to sort of action and all the volunteers of people from across the business get involved in [the responsible AI] process,” Macgregor said. “Then you find lots of interesting people with ideas coming out of the woodwork that you might not have identified in any of your stakeholder mapping.”


Macgregor explained that her team decided to meet the requirements of what she described as the highest regulatory level for responsible AI, the EU AI Act, which places AI systems into categories based on their risk levels.

She said her team is working to understand the compliance issues for their technology for the Act, noting that its language is “quite general.”

Ocado Technology’s generative AI usage could require the company to take on certain transparency disclosures, Macgregor said, adding that some use cases could potentially be high risk and that her team is working to understand the impact such categorizations might have.

Monika Robeva, Unilever’s AI and privacy governance manager, also spoke on the panel.

She described responsible AI considerations as the “ultimate cross business exercise” – requiring input from teams including legal, privacy, employment law, cybersecurity and procurement, as well as the technology teams.

Robeva suggested that sustainability teams are a potentially overlooked business group that could be involved, saying, “It really is quite a varied stakeholder map. There’s a bunch of people that really need to get involved in this and feel like they have a stake in it.”



About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

