LONDON, UK – The British government will honour many of the recommendations made by the Lords AI Report earlier this year, its official response to the report reveals.
Following the publication of the Lords AI Report, ‘AI In The UK: Ready, Willing, And Able?’, back in April of this year, the UK Government has released its official response in full – as is customary with the work of all parliamentary select committees. The report, produced under the chairmanship of Lord Clement-Jones and Dame Wendy Hall, made the case for the UK to take a global lead on establishing standards for the ethical and responsible application of AI.
Signed by the Rt Hon Greg Clark MP, Secretary of State for Business, Energy and Industrial Strategy, the government’s response crosses departmental lines to give the report a broadly supportive reception. The response covers a vast range of issues around AI, from ethics and accountability to data transparency and public trust, and is supported by the founding of a number of AI-specific public bodies. Notably, the response outlines the government’s commitment to an ‘artificial intelligence sector deal’ as part of its wider Industrial Strategy.
“To successfully address the Grand Challenge on AI and Data outlined in the Industrial Strategy white paper, it is critical that trust is engendered through the actions Government takes and the institutions it creates,” the statement says. Through the Digital Charter, the future Centre for Data Ethics & Innovation, as well as the newly-founded Office for AI and the AI Council, the government says it will work towards a ‘more constructive narrative around AI’ while ensuring that ‘governance measures are aligned and respond to public concerns around data-driven technologies, and address businesses’ needs for greater clarity and certainty around data use’.
A hands-off approach: businesses to take the lead on public trust
Despite initiatives to build a strong advisory foundation for AI in the UK, the government’s response falls largely in line with a hands-off approach to regulating the technologies. It clearly pushes for widespread public education, arguing that people must be made aware of how and when AI is being used to make decisions about them. “This clarity, and greater digital understanding, will help the public experience the advantages of AI, as well as opt out of using such products should they have concerns.”
The government goes on to argue, however, that it is ultimately industry which should ‘take the lead’ in establishing ‘voluntary’ mechanisms for informing the public when AI is being used for significant or sensitive decisions in relation to consumers: “The newly-formed AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms. In the meantime, the decision to inform the public of how and when AI is being used to make decisions about them will be left to individual businesses to decide on whether and in what way to inform consumers of AI’s deployment.”
Balance AI transparency concerns with usefulness, says government
Much of the government’s approach is also situated within wider discussions surrounding data transparency and access. The Office for AI, the AI Council, and the soon-to-be-founded Centre for Data Ethics and Innovation will collaborate to create data trusts to facilitate the ethical sharing of data between organisations. These will be balanced by provisions for the representation of people whose data is stored, such as through personal data representatives or regular public consultation. These data trusts will enable British SMEs to access large, high-quality datasets from the public sector, such as from the NHS, in order to remain competitive with the large US-based tech companies.
The response also highlights the importance of explainable AI. “We believe that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society. Whether this takes the form of technical transparency, explainability, or indeed both, will depend on the context and the stakes involved,” the statement says. It points to safety-critical scenarios where technical transparency is imperative, calling on regulators in those domains to mandate the use of more transparent forms of AI – ‘even at the potential expense of power and accuracy’: “We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.”
However, the government adds the caveat that achieving full technical transparency is difficult – and may even be impossible and undesirable. Explainability considerations, it argues, must be balanced against the positive impacts that AI can bring. “An overemphasis on transparency may be both a deterrent and, in some cases, such as deep learning, prohibitively difficult.”
The government’s full response dives further into data monopolies, social bias, investment in AI development in the UK, and much more.