AI Business is part of the Informa Tech Division of Informa PLC



AI Summit London: AI experts share best responsibility practices

Regulatory and consumer considerations were among the ideas proposed by Google and Shell.

Computer vision is one of the core uses of AI: it allows systems to extract information from images or video and turn it into insights for users.

It is used in everything from autonomous vehicles to Industry 4.0. For example, multinational energy giant Shell uses computer vision to monitor its structures.

Speaking on a panel at the AI Summit London, Amjad Chaudry, Shell’s capability center manager for data science and ML, said that his company uses computer vision to guide robots that navigate plants to monitor gauges.
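Shell's actual pipeline was not described in detail on the panel. As an illustration of the underlying idea, here is a toy sketch (NumPy only, on a synthetic image; the function names and the gauge geometry are assumptions, not Shell's implementation) of how a system might read an analog gauge: threshold the bright needle pixels, estimate the needle's angle from the gauge center, and map that angle onto the dial's value range.

```python
import numpy as np

def render_gauge(angle_deg, size=101):
    """Draw a synthetic gauge image: a bright needle line from the center."""
    img = np.zeros((size, size), dtype=np.uint8)
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for r in range(c):
        x = int(round(c + r * np.cos(theta)))
        y = int(round(c - r * np.sin(theta)))  # image y axis points down
        img[y, x] = 255
    return img

def read_needle_angle(img):
    """Estimate the needle angle: threshold bright pixels, then take the
    arctangent of their mean offset from the gauge center."""
    c = img.shape[0] // 2
    ys, xs = np.nonzero(img > 128)
    dx = (xs - c).mean()
    dy = (c - ys).mean()  # flip y back to the usual math convention
    return np.rad2deg(np.arctan2(dy, dx)) % 360

def needle_to_reading(angle_deg, lo_angle=225.0, hi_angle=-45.0, lo=0.0, hi=10.0):
    """Map a needle angle to a gauge value, assuming a 270-degree sweep
    from lo_angle (minimum reading) to hi_angle (maximum reading)."""
    frac = (lo_angle - angle_deg) / (lo_angle - hi_angle)
    return lo + frac * (hi - lo)

img = render_gauge(90.0)                    # needle pointing straight up
angle = read_needle_angle(img)
print(round(angle, 1))                      # ~90.0
print(round(needle_to_reading(angle), 2))   # ~5.0, mid-scale on a 0-10 dial
```

A production system would face skewed camera angles, glare, and varied dial faces, which is where learned detection models replace the hand-rolled geometry above.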

Google and the Bank of England were among the other brands represented on the panel, which discussed future-proofing computer vision implementations and offering best practices to attendees.

Chaudry said the essentials for his business are embedding security throughout a project and keeping an eye on the ever-changing regulatory landscape.

Joining Chaudry on the panel, Arunita Roy, senior data and ML scientist at the Bank of England, said her team looks at other regulators to see what they’re doing and how they’re incorporating AI.

Impact: Safety and society

Also on the panel was Toju Duke, program manager for responsible AI at Google. Duke spoke about the need to ensure that products built on AI and ML do not have a harmful impact on society.

She cited a facial recognition example of an action Google took to ensure safety: users can turn off Face Grouping, Google's facial recognition feature, in apps such as Google Photos.

Duke said it is important to try to get things right, but that businesses cannot get it right all the time. Acting responsibly, she said, requires transparency, explainability, maintaining data quality, and using fairness tools and metrics.
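Duke did not name specific metrics, so as one concrete example of the kind of fairness metric she alludes to, here is a minimal sketch (not Google's tooling; the function name is an assumption) of demographic parity difference: the gap in positive-prediction rates between two groups, where zero means both groups receive positive predictions at the same rate.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

# Toy predictions for two demographic groups, "a" and "b"
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.5 (0.75 vs 0.25)
```

In practice a team would track several such metrics across many groups; libraries such as Fairlearn ship ready-made versions of this and related measures.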

“We should be trying to be responsible not just because of regulation,” she said.

Eitan Anzenberg, chief data scientist at, supported Duke's point, saying businesses should not wait for governments to act and that it is up to brands to "do the right thing."

Describing privacy considerations as a “complex issue,” Anzenberg said that he often tries to take consumers’ perspectives when building models, asking himself how they might feel about what he’s doing end to end.

“Personal responsibility is very important,” he concluded.
