DeepMind’s chief business officer talks about responsible AI
When Google-owned DeepMind began working with the Royal Free London NHS Foundation Trust in 2015, the company likely did not expect to make headlines for the wrong reasons.
DeepMind was brought in to develop an app wrapper for an NHS algorithm to alert clinicians to the early signs of acute kidney injury.
But in 2017, the U.K.’s data watchdog, the Information Commissioner’s Office (ICO), would go on to find that the Royal Free Trust breached data protection laws through its work with DeepMind.
Despite the trust’s sanction, DeepMind itself escaped censure, as the Royal Free was held responsible for sharing patient data with the company.
With a greater focus today on responsible and ethical deployments of AI, it’s important for companies like DeepMind to exercise “exceptional care” to unlock the “exceptional promise” the technology can bring, according to DeepMind’s chief business officer, Colin Murdoch.
Speaking at the AI Summit London, Murdoch stressed the need to think long and hard about responsibility – deploying AI responsibly, he argued, is too big and too important to rush.
“What’s important is making sure when we start projects, that we’re able to review what we’re doing and can go through step by step and ensure we’ve got the right eyes on the work,” he said when asked about lessons learned from the Royal Free case.
Murdoch highlighted the company’s decision to bring in external experts to review AlphaFold, its deep-learning neural network that can accurately determine a protein’s 3D shape.
He also referenced DeepMind’s decision to invest in scholarships and sponsorships for groups like Women in ML, saying, “it takes a strong community to enable an ethical community.”
If companies apply ethical approaches to developing and deploying AI, even bigger technological breakthroughs will be on the horizon, Murdoch added.
“AI isn’t hype, it has the potential to change the lives of billions – and it’s happening today.”