DeepMind: Can AI settle the debate over redistribution of wealth?

Researchers, however, caution against government by AI.

Ben Wodecki

July 5, 2022



Researchers from DeepMind recently pondered whether AI can design “fair and prosperous cities” through public policies that will be supported by the majority of people.

It launched an experiment to find out.

DeepMind decided to tackle the redistribution of wealth, which has long been a hot-button political issue pitting free-market supporters against socialists, according to a paper published this week in Nature.

The Google-owned company embarked on the task by training a system it calls ‘Democratic AI’ on responses from 4,000 people as well as computer simulations from an online game.

“We deploy Democratic AI to address a question that has defined the major axes of political agreement and division in modern times: when people act collectively to generate wealth, how should the proceeds be distributed?” wrote the authors, who were mostly from DeepMind but also Oxford University, University College London and University of Exeter.

They gauged the outcome of three economic policies: egalitarianism, in which everyone received equal payouts; libertarianism, in which people who contributed more also stood to receive more; and liberal egalitarianism, which is somewhere in between.
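The paper does not publish the payout formulas in this article, but the three policies can be illustrated with a stylized sketch. All function names, the contribution values, and the 0.5 blend weight below are hypothetical, chosen only to show how the three rules differ when splitting a shared pot:

```python
# Illustrative sketch (not DeepMind's actual mechanism): three stylized
# redistribution rules applied to a shared pot built from contributions.

def egalitarian(contributions, pot):
    """Everyone receives an equal share, regardless of contribution."""
    n = len(contributions)
    return [pot / n] * n

def libertarian(contributions, pot):
    """Payouts are proportional to each player's contribution."""
    total = sum(contributions)
    if total == 0:
        return [0.0] * len(contributions)
    return [pot * c / total for c in contributions]

def liberal_egalitarian(contributions, pot, weight=0.5):
    """A blend: part equal split, part proportional to contribution."""
    eq = egalitarian(contributions, pot)
    lib = libertarian(contributions, pot)
    return [weight * e + (1 - weight) * l for e, l in zip(eq, lib)]

contributions = [10, 2, 0]        # hypothetical player contributions
pot = 1.5 * sum(contributions)    # contributions grow the shared pot
print(egalitarian(contributions, pot))          # [6.0, 6.0, 6.0]
print(libertarian(contributions, pot))          # [15.0, 3.0, 0.0]
print(liberal_egalitarian(contributions, pot))  # [10.5, 4.5, 3.0]
```

Note how the blended rule partially redresses the initial imbalance while still rewarding the biggest contributor, which is the intuition behind "somewhere in between."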

Respondents voted on their preferred methods for distributing public wealth; the AI system then analyzed the results to develop a policy.

The result: Liberal egalitarianism won out.

Democratic AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. “By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation,” according to the paper.

Minds from the makers of AlphaFold and AlphaGo were seeking to research how AI technologies can benefit humans – specifically, whether a deep reinforcement learning system could be used to design an economic mechanism that is “measurably preferred by groups of incentivized humans.”

Instead of feeding the system human-centric values prior to the study, DeepMind trained the AI to maximize a democratic objective by designing a policy that humans would likely vote for in a majoritarian election.
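A minimal sketch of what such a majoritarian objective could look like, assuming (hypothetically) that each simulated voter simply backs whichever mechanism pays them more; the function name and payout figures are illustrative, not taken from the paper:

```python
# Illustrative sketch (hypothetical, not the paper's training loop): a
# majoritarian objective scores a candidate mechanism by how many
# simulated voters prefer its payouts over a rival mechanism's.

def majority_preference(payouts_a, payouts_b):
    """Count votes: each voter backs whichever mechanism pays them more."""
    votes_a = sum(1 for a, b in zip(payouts_a, payouts_b) if a > b)
    votes_b = sum(1 for a, b in zip(payouts_a, payouts_b) if b > a)
    return votes_a, votes_b

payouts_redistributive = [10.5, 4.5, 3.0]  # hypothetical payouts
payouts_proportional = [15.0, 3.0, 0.0]
print(majority_preference(payouts_redistributive, payouts_proportional))
# (2, 1): two of three voters prefer the redistributive split
```

A mechanism trained against an objective like this is pushed toward policies that win elections, rather than toward values specified in advance by its designers.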

Results from the research suggest that an AI system can be trained to satisfy a democratic objective and achieve consensus among a group of decision-makers.

Tyranny of the many?

Despite its success, DeepMind’s paper raises the possibility that building consensus among the many could disenfranchise the few.

“This is particularly pertinent given the pressing concern that AI might be deployed in a way that exacerbates existing patterns of bias, discrimination or unfairness in society,” according to the paper. “We acknowledge that if deployed as a general method, without further innovation, there does exist the possibility that it could be used in a way that favors the preferences of a majority over a minority group.”

One potential solution to avoid this, according to DeepMind’s researchers, would be to augment the cost function in ways that redress this issue, much as protections for minorities are often enshrined in law.

The paper also touches on whether humans will trust AI systems to design such mechanisms in place of humans, but it stops short of implying support for an ‘AI government.’

“We see Democratic AI as a research methodology for designing potentially beneficial mechanisms, not a recipe for deploying AI in the public sphere,” the paper said. “We hope that further development of the method will furnish tools helpful for addressing real-world problems in a truly human-aligned fashion.”
