AI helps judges decide court cases in China

Judge who disagrees with AI has to explain why – in writing.

Ben Wodecki

July 18, 2022

2 Min Read

China claims to have used AI systems to reduce the workloads of judges by more than a third.

The machine learning-powered ‘smart court SoS’ screens court cases for references and recommends relevant laws and regulations to judges. It also drafts documents and can correct errors in verdicts.

The success of the system was documented in Strategic Study of CAE, an official journal run by the Chinese Academy of Engineering, according to the South China Morning Post.

Xu Jianfeng, who heads the information center of China’s Supreme People’s Court in Beijing, said in the journal that the tool “connects to the desk of every working judge across the country.”

The AI system purportedly saved the Chinese legal system $45 billion (300 billion yuan) between 2019 and 2021.

China has been attempting to add automation to its legal system since 2016. Smart court SoS started as a database but has since become much more. Today, if a judge disagrees with the system’s findings, they are required to provide a written explanation.

China is not the first nation to have incorporated AI technologies into its legal system.

The U.K., the Netherlands and Latvia have implemented automated solutions for online dispute resolution.

In the U.S., various police departments and law enforcement agencies have been using predictive algorithms and facial recognition systems, although the use of these technologies has been scrutinized by several rights groups over privacy concerns.

Government by AI?

The news comes after DeepMind researchers recently trained an AI system to devise policy capable of winning majority support in a vote.

A study by the Google-owned company saw 4,000 people vote on their preferred methods for distributing public wealth; the AI system then analyzed the results to develop a policy.

Liberal egalitarianism won: the top-ranked mechanism redressed initial wealth imbalances and sanctioned free riders.

Despite the success of its work, DeepMind raised the concern that building consensus among the many could disenfranchise the few.

The company also cautioned against government by AI, suggesting the study was better suited to “designing potentially beneficial mechanisms” than serving as a “recipe for deploying AI in the public sphere.”
