Bringing Accountability, Responsibility, and Transparency to AI

Ciarán Daly

November 14, 2017


By Virginia Dignum

As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical and societal implications of their actions. However intelligent, autonomous, and efficient they may become, AI systems are artefacts: tools that support us in our daily tasks, improve our lives, and increase our wellbeing. But we are the ones responsible for them. We are the ones who determine the questions that AI systems can answer, and answer them they will.

‘We’ and ‘our’ carry two meanings here: they refer to the moral, societal, and legal responsibility of those who develop, manufacture, and use AI systems, and they indicate that AI will affect all humankind. “AI for Good” demands that we truly consider all of humankind when deciding whose lives and wellbeing AI should improve. Now is the time to decide. Are we following the money, or following humankind’s best interests? Are we basing AI developments on shareholder value or on human rights and human values?

Responsible AI rests on three main pillars: Accountability, Responsibility, and Transparency. Together, these considerations form the A.R.T. principles for AI. Responsibility is core to AI development. It refers to the role of people as they develop, manufacture, sell, and use AI systems, but also to the capability of AI systems to answer for their decisions and to identify errors or unexpected results. As the chain of responsibility grows, means are needed to link an AI system’s decisions to the fair use of data and to the actions of the stakeholders involved in those decisions, and to link moral, societal, and legal values to technological developments in AI. Responsible AI is more than the ticking of some ethical ‘boxes’ or the development of a few add-on features in AI systems. Rather, responsibility is fundamental to intelligence and to action in a social context. Education also plays an important role here, both in ensuring that knowledge of the potential of AI is widespread and in making people aware that they can participate in shaping its societal development.

A second pillar, Accountability, is the capability to explain and answer for one’s own actions, and is associated with liability. Who is liable if an autonomous car harms a pedestrian? The builder of the hardware (sensors, actuators)? The builder of the software that enables the car to autonomously decide on a path? The authorities that allow the car on the road? The owner who personalised the car’s decision-making system to meet their preferences? The car itself is not accountable; it is an artefact, but it represents all of these stakeholders. Models and algorithms are needed that enable AI systems to reason about and justify their decisions based on principles of accountability. Current deep-learning algorithms are unable to link their decisions to their inputs, and therefore cannot explain their actions in meaningful ways. Ensuring accountability in AI systems requires both the function of guiding action (by forming beliefs and making decisions) and the function of explanation (by placing decisions in a broader context and classifying them in terms of social values and norms).
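To make concrete what “linking decisions to inputs” could look like in practice, consider a minimal, purely illustrative Python sketch. The braking rule, thresholds, and stakeholder names below are hypothetical assumptions, not taken from any real vehicle system; the idea is simply that every decision is recorded together with the inputs it was based on, a human-readable justification, and the parties who share responsibility for it, so that the decision can later be answered for.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

@dataclass
class DecisionRecord:
    """One traceable decision: what was decided, from which inputs, under whose responsibility."""
    inputs: Dict[str, float]        # the readings the decision was based on
    decision: str                   # the action taken, e.g. "brake"
    justification: str              # human-readable reason linking inputs to the action
    responsible_parties: List[str]  # the stakeholders in the chain of accountability
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide_braking(obstacle_distance_m: float, speed_kmh: float) -> DecisionRecord:
    """Toy decision rule, kept deliberately simple so that it can be explained afterwards."""
    safety_margin_m = 0.5 * speed_kmh  # hypothetical rule of thumb, not a real safety standard
    must_brake = obstacle_distance_m < safety_margin_m
    return DecisionRecord(
        inputs={"obstacle_distance_m": obstacle_distance_m, "speed_kmh": speed_kmh},
        decision="brake" if must_brake else "maintain speed",
        justification=f"obstacle at {obstacle_distance_m} m vs. safety margin of {safety_margin_m} m",
        responsible_parties=["software vendor", "vehicle owner", "road authority"],
    )

record = decide_braking(obstacle_distance_m=12.0, speed_kmh=50.0)
print(record.decision, "-", record.justification)
```

Real systems would of course need far richer models of causation and responsibility, but even this level of record-keeping makes the question “why did the car brake, and who answers for it?” answerable after the fact.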

The third pillar, Transparency, refers to the need to describe, inspect, and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, and to the governance of the data they use or create. Current AI algorithms are essentially black boxes, so methods are needed to inspect both the algorithms and their results. Moreover, transparent data-governance mechanisms are needed to ensure that the data used to train algorithms and guide decision-making is collected, created, and managed in a fair and clear manner, taking care to minimize bias and to enforce privacy and security. Developing new and more ambitious forms of governance is one of the most pressing needs if the inevitable advances in AI are to serve the good of society.
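As one way to picture transparent data governance, the sketch below attaches a datasheet-style provenance record to a training set. All names, fields, and values here are illustrative assumptions rather than a prescribed standard; the point is that consent, known biases, and intended use can be inspected, and even checked automatically, before the data is used to train a model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DatasetProvenance:
    """Datasheet-style metadata kept alongside a training set so its governance can be inspected."""
    name: str
    collected_by: str
    collection_method: str
    consent_obtained: bool
    personal_data_removed: bool
    known_biases: List[str]
    intended_use: str
    retention_policy: str

loan_data = DatasetProvenance(
    name="loan_applications_2017",  # hypothetical dataset
    collected_by="Example Bank data office",
    collection_method="customer applications submitted via an opt-in web form",
    consent_obtained=True,
    personal_data_removed=True,
    known_biases=["under-representation of applicants under 25"],
    intended_use="training a credit-risk model; not for marketing",
    retention_policy="delete raw records after 24 months",
)

# A reviewer, or an automated check in the training pipeline, can refuse to use data
# whose governance is unclear before any model is trained on it.
assert loan_data.consent_obtained and loan_data.personal_data_removed
```

The particular fields matter less than the principle: governance information travels with the data and can be verified mechanically as well as by human reviewers.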

The development of AI algorithms has so far been driven by the goal of improving performance, leading to efficient but very opaque algorithms. Putting human values at the core of AI systems calls for a shift in the mindset of researchers and developers: toward ensuring Accountability, Responsibility, and Transparency rather than focusing on performance alone. I am sure that this shift will lead to novel and exciting techniques and applications, and will prove to be the way forward in AI research. For more information about how AI can help solve humanity’s greatest challenges, go to ai.xprize.org/AI-For-Good.

Virginia Dignum is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft. She holds a PhD from Utrecht University. Prior to her PhD, she worked for more than 12 years in business consultancy and systems development in the areas of artificial intelligence and knowledge management. Dignum is Executive Director of the Delft Design for Values Institute, a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems, and director of the new MSc program on AI and Robotics at TU Delft.

 
