
Bringing Accountability, Responsibility, and Transparency to AI

by Ciarán Daly

By Virginia Dignum

As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical and societal implications of their actions. However intelligent, autonomous, and efficient AI systems may become, they are artefacts: tools that support us in daily tasks, improve our lives, and increase our wellbeing. But we are the ones responsible for them. We are the ones determining the questions that AI systems can answer, and answer they will.

Here, ‘we’ and ‘our’ carry two meanings: they refer to the moral, societal, and legal responsibility of those who develop, manufacture, and use AI systems, and they indicate that AI will affect all humankind. “AI for Good” demands that we truly consider all of humankind when asking whose lives and wellbeing AI can improve. Now is the time to decide. Are we following the money, or humankind’s best interests? Are we basing AI developments on shareholder value or on human rights and human values?

Responsible AI rests on three main pillars: Accountability, Responsibility, and Transparency. Together, these form the A.R.T. principles for AI. Responsibility is core to AI development. It refers to the role of people as they develop, manufacture, sell, and use AI systems, but also to the capability of AI systems to answer for their decisions and to identify errors or unexpected results. As the chain of responsibility grows, means are needed to link an AI system’s decisions to the fair use of data and to the actions of the stakeholders involved in those decisions, and to link moral, societal, and legal values to technological developments in AI. Responsible AI is more than ticking some ethical ‘boxes’ or developing add-on features for AI systems; rather, responsibility is fundamental to intelligence and to action in a social context. Education also plays an important role here, both in ensuring that knowledge of the potential of AI is widespread and in making people aware that they can participate in shaping societal development.

A second pillar, Accountability, is the capability to explain and answer for one’s own actions, and is associated with liability. Who is liable if an autonomous car harms a pedestrian? The builder of the hardware (sensors, actuators)? The builder of the software that enables the car to decide on a path autonomously? The authorities that allow the car on the road? The owner who personalised the car’s decision-making system to match their preferences? The car itself is not accountable; it is an artefact, but it represents all of these stakeholders. Models and algorithms are needed that will enable AI systems to reason about and justify decisions based on principles of accountability. Current deep-learning algorithms are unable to link decisions to inputs, and therefore cannot explain their acts in meaningful ways. Ensuring accountability in AI systems requires both the function of guiding action (by forming beliefs and making decisions) and the function of explanation (by placing decisions in a broader context and classifying them in terms of social values and norms).
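
To make the idea of linking decisions to inputs concrete, here is a minimal, illustrative sketch, not the method of any particular system: it fits a local linear surrogate around a black-box model’s prediction, in the spirit of local-explanation techniques such as LIME. The toy model, feature names, and parameters are all invented for the example.

```python
import numpy as np

def local_explanation(predict, x, n_samples=500, scale=0.1, seed=0):
    """Attribute a black-box prediction at x to individual features by
    fitting a linear surrogate to predictions on perturbed inputs."""
    rng = np.random.default_rng(seed)
    # Sample inputs in a small neighbourhood around x.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = np.array([predict(row) for row in X])
    # Least-squares fit: prediction ~ intercept + weights . features.
    A = np.hstack([np.ones((n_samples, 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]  # per-feature contribution weights

# Hypothetical "black box": a hand-wired scoring function standing in
# for a trained model.
def toy_model(features):
    speed, distance, visibility = features
    return 0.8 * speed - 0.5 * distance - 0.3 * visibility

weights = local_explanation(toy_model, np.array([1.2, 0.4, 0.9]))
for name, w in zip(["speed", "distance", "visibility"], weights):
    print(f"{name}: {w:+.2f}")
```

For this linear toy model the surrogate recovers the true weights almost exactly; for a real deep network it would only approximate the decision locally, which is precisely the gap the author points to.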

The third pillar, Transparency, refers to the need to describe, inspect, and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, and to the governance of the data used or created. Current AI algorithms are essentially black boxes, so methods are needed to inspect algorithms and their results. Moreover, transparent data-governance mechanisms are needed to ensure that the data used to train algorithms and guide decision-making is collected, created, and managed in a fair and clear manner, minimizing bias and enforcing privacy and security. Developing new and more ambitious forms of governance is one of the most pressing needs if the inevitable advances in AI are to serve the societal good.
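
As one small illustration of the kind of inspection such governance calls for, the sketch below computes a demographic-parity gap: the difference in positive-decision rates between groups in a dataset. The data and group labels are invented for the example, and real audits use many richer metrics; this only shows that a first transparency check can be simple to run.

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Return the spread in positive-outcome rates across groups,
    plus the per-group rates; a gap of 0.0 means parity on this metric."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    values = list(rates.values())
    return max(values) - min(values), rates

# Invented example data: 1 = favourable decision, 0 = unfavourable.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups    = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)               # per-group positive-decision rates
print(f"gap = {gap:.2f}")  # a large gap flags decisions worth inspecting
```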

The development of AI algorithms has so far been driven by the goal of improving performance, leading to efficient but very opaque algorithms. Putting human values at the core of AI systems calls for a mindset shift among researchers and developers, toward ensuring Accountability, Responsibility, and Transparency rather than focusing on performance alone. I am sure that this shift will lead to novel and exciting techniques and applications, and will prove to be the way forward in AI research. For more information about how AI can help solve humanity’s greatest challenges, go to ai.xprize.org/AI-For-Good.

Virginia Dignum is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft. She holds a PhD from Utrecht University. Prior to her PhD, she worked for more than 12 years in business consultancy and systems development in the areas of artificial intelligence and knowledge management. Dignum is Executive Director of the Delft Design for Values Institute, a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems, and director of the new MSc program on AI and Robotics at TU Delft.


