AI Helps Judges Decide Court Cases in China

Judge who disagrees with AI has to explain why – in writing.

Ben Wodecki, Jr. Editor

July 18, 2022

2 Min Read

China claims to have used AI systems to reduce the workloads of judges by more than a third.

The machine learning-powered ‘smart court SoS’ (system of systems) screens court cases for references and recommends applicable laws and regulations to judges. It also drafts legal documents and corrects errors in verdicts.

The success of the system was documented in Strategic Study of CAE, an official journal run by the Chinese Academy of Engineering, according to the South China Morning Post.

Xu Jianfeng, who heads the information center of China’s Supreme People’s Court in Beijing, said in the journal that the tool “connects to the desk of every working judge across the country.”

The AI system purportedly saved the Chinese legal system $45 billion (300 billion yuan) between 2019 and 2021.

China has been attempting to add automation to its legal system since 2016. Smart court SoS started as a database but has since grown into much more. Today, if a judge disagrees with the system’s findings, they are required to provide a written explanation.

China is not alone in incorporating AI technologies into its legal system.

The U.K., the Netherlands and Latvia have implemented automated solutions for online dispute resolution.

In the U.S., various police departments and law enforcement agencies have been using predictive algorithms and facial recognition systems, although the use of these technologies has been scrutinized by several rights groups over privacy concerns.

Government by AI?

The news comes after DeepMind researchers trained an AI system to devise a policy capable of winning a majority vote.

In a study by the Google-owned company, 4,000 people voted on their preferred methods for distributing public wealth; the AI system then analyzed the results to develop a policy.

Liberal egalitarianism won: a mechanism that redressed initial wealth imbalances and sanctioned free riders came out on top.

Despite the success of its work, DeepMind raised the concern that building consensus among the many could disenfranchise the few.

The company also cautioned against government by AI, suggesting the study was better suited to “designing potentially beneficial mechanisms” than serving as a “recipe for deploying AI in the public sphere.”

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

