Responsible AI Needs Radical Transparency

Ciarán Daly

August 17, 2018

LONDON—It was intended to provide “the most intense public scrutiny” since Microsoft’s hearing in the late 1990s. Mark Zuckerberg was to be dragged in front of a Congressional hearing and, with the world watching, be forced to answer to the public following the devastating Cambridge Analytica revelations.

It didn’t quite go this way. Many believe Zuckerberg got off lightly. The questions he faced demonstrated a complete ignorance among senior U.S. public officials of even basic concepts surrounding user data and technology. If this was the height of public scrutiny of technology companies, it betrayed the state’s inability to actually hold them to account.

What the Facebook scandal demonstrates is that, in lieu of regulatory efficacy, tech companies will fall back on what they know best to address user concerns: technical solutions. Facebook have been able to introduce tools which, while not wholly allaying those concerns, at least give users the ability to see exactly how much of their data is stored and how it is being used.

From GDPR to judicial hearings, the problem of transparency and accountability around data affects nearly every business today. When the algorithms you’re using are inherently opaque, however, it’s not so easy to apply a technical band-aid.

Nowhere is this truer than for AI companies, many of which deploy ‘black box’ deep-learning solutions to generate results for their customers. Deep-learning outcomes are largely inexplicable: decisions emerge from millions of learned parameters trained on data at a scale no human can audit by hand. A deep-learning algorithm can’t be dragged in front of a Congressional hearing, and introducing explainability into one is no mean feat.
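To make that concrete, here is a minimal sketch of one generic, model-agnostic explainability technique, permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Everything here, from the toy ‘black box’ to the data, is a hypothetical illustration rather than any particular company’s method.

```python
# A minimal sketch of permutation feature importance: probe a black-box
# model by destroying one feature's information at a time and measuring
# the resulting loss of accuracy. The model and data are stand-ins.
import numpy as np


def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's information
            drops.append(baseline - metric(y, model(X_perm)))
        importances[j] = np.mean(drops)
    return importances  # larger = the model leans more on that feature


# Toy black box: the "model" secretly uses only the first feature.
X = np.random.default_rng(1).normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
black_box = lambda X: (X[:, 0] > 0).astype(int)
accuracy = lambda y_true, y_pred: np.mean(y_true == y_pred)

print(permutation_importance(black_box, X, y, accuracy))
# feature 0 shows a large accuracy drop; features 1 and 2 stay near zero
```

Techniques like this only probe a model from the outside, which is part of why genuine explainability remains such a hard problem.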

Beyond the algorithms

Beyond algorithms, though, transparency is fundamentally a human issue. Personal data belongs to, well, people. Equally, the human impact of poor design reminds us that there are people behind the interface, and those people must be held accountable. Every algorithm has a designer, and those designers have names and addresses.

Enterprises may not yet have reached a stage where AI has forced a widespread cultural or organisational shift internally. For the few companies deploying AI technologies at scale, however, an organisational mechanism for overseeing their platforms can substantially strengthen their claims to accountability.

Take DeepMind Health, the health and pharma wing of the startup famously snapped up by Google in 2014. With Alphabet and Google at the heart of the ‘right to be forgotten’ and, later, the GDPR debates in Europe, DeepMind naturally faced questions from the outset about how their solutions would be used by Google.

When DM Health announced its partnership with the Royal Free London NHS Foundation Trust back in 2015 and gained access to the personal data and health records of 1.6 million patients, the firm anticipated searching questions: about its aims, about the power of Alphabet and Google, and about the ethics of a technology company gaining access en masse to personal health data.

AI calls for radical transparency

DeepMind’s response? To turn the microscope in on themselves. On the day that DM Health launched, DeepMind adopted what Cambridge Fellow and former MP Julian Huppert describes as a ‘radical transparency approach’.

Nine reviewers, including Huppert, were invited to launch the DeepMind Health Independent Review, which scrutinises, audits, and investigates all of DeepMind Health’s work. The board’s rotating membership is drawn from a range of public figures, from the Chief Executive of the Royal Society of Arts to medical professors at UCL. Their sole aim is to provide oversight in the public interest across all of DM Health’s activities.

“There’s a fundamental question facing any technology company now and anybody working in healthcare: how do you demonstrate to the public that you are trustworthy?” Huppert says. “The fiasco of Cambridge Analytica shows what happens when people don’t understand the very real concerns others have about what happens to their personal data. I think all companies should look at doing something like this.”

Part of the reason this approach is so bold, explains Huppert, is that the Review board have ‘completely free rein’ to investigate what the company are doing. They are not bound by any confidentiality agreement; they are free to speak publicly about the company’s work; and they have an annual budget with which to supplement their investigations.

Significantly, the Review panel produces an annual report on its findings, opening DeepMind up to public scrutiny. Given the reach of DM’s work, the report focuses in particular on public trust and the use of health data, examining the effectiveness of the initiatives DM Health already has in place in these areas.

Responsible AI begins with transparent data practices

In many ways, DMH’s commitment to radical transparency goes beyond the Independent Review. One example is DMH’s announcement of an auditing system for health data back in March 2017. The system would record and verify every event related to hospital health data. Using a technique called ‘immutable logging’, it would make any misuse of data detectable and provide clear, direct reassurance to patients and regulators about how data was actually being managed. Similarly, the firm launched a hospital data simulation with which to test new systems.
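The announcement gave few technical details, but the general idea behind ‘immutable logging’ is a tamper-evident, append-only record in which each entry cryptographically commits to everything before it. Here is a minimal sketch of that idea; the class and field names are hypothetical, and the code assumes nothing about DeepMind’s actual implementation.

```python
# A minimal sketch of tamper-evident, append-only ("immutable") logging.
# Illustrative only: names are hypothetical, not DeepMind's system.
import hashlib
import json
import time


class AppendOnlyLog:
    """A hash-chained log: each entry commits to the one before it,
    so any later alteration breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": prev_hash,
        }
        # Hash the record together with the previous hash, chaining entries.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry invalidates the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            record = {k: entry[k] for k in ("timestamp", "event", "prev_hash")}
            payload = json.dumps(record, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


log = AppendOnlyLog()
log.append({"action": "read", "dataset": "ward_admissions", "user": "clinician_42"})
log.append({"action": "export", "dataset": "lab_results", "user": "researcher_7"})
assert log.verify()  # True while the log is intact

log.entries[0]["event"]["user"] = "someone_else"  # simulate tampering
assert not log.verify()  # the broken chain exposes the change
```

Because each hash covers the previous entry’s hash, editing or deleting any historical record changes every hash after it, so an auditor who holds only the latest hash can detect tampering.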

To monitor the efficacy and reliability of these initiatives, the Independent Review commissioned independent legal advice from the law firm Mills & Reeve to analyse DMH’s data-sharing arrangements. As the report outlines, the key questions were whether:

  • DMH was functioning as a data controller or a data processor

  • DMH had breached the Data Protection Act, and what the implications could be for the firm if so

  • the redactions in DMH contracts were appropriate

  • DMH had breached patient and public confidence

With DMH providing full access to redacted documents, the Review was able to investigate and deliver a series of commendations, concerns, and recommendations across its first year of work.

Noting a lack of public engagement, the Review further recommended that the firm develop a clear strategy and set of principles for effective public engagement going forward, and launch education programmes about AI to assuage public fears.

What are you trying to hide?

While the Review ultimately found only a number of ‘minor vulnerabilities’, Huppert is clear that it is the overall approach that should prove worthwhile to any company using AI or processing sensitive data: “To me, one of the key questions is, if a company isn’t prepared to expose itself like this to an open enquiry because there is the risk that we find problems and criticisms, what is it trying to hide?”

DeepMind is working in a sector yet to be fully digitised, and the conversations around its use of data have clear implications for implementing AI successfully in healthcare. As a case study for all enterprises, it demonstrates clearly that third-party scrutiny of data practices is necessary to pave the way for the safe and transparent deployment of AI technology in real-world settings.

“Throughout all of this, you have to make sure you protect patient confidentiality, that you have a protected marketplace, and that you do things in the right way,” Huppert argues. “DeepMind Health have not been perfect – no company ever is – but the fact that they have been prepared to open themselves up to expert scrutiny and independent oversight shows how confident they are that they’re trying to do the right thing. That is the model I would love to see more companies adopt.”


Based in London, Ciarán Daly is the Editor-in-Chief of AIBusiness.com, covering the critical issues, debates, and real-world use cases surrounding artificial intelligence - for executives, technologists, and enthusiasts alike.
