The U.S. Department of Justice's new initiative, Justice AI, aims to modernize crime-fighting while adhering to ethical standards.
The U.S. Department of Justice has launched Justice AI: a collaborative project to collect information on how AI can accelerate enforcement efforts.
Justice AI was unveiled by Deputy Attorney General Lisa Monaco during a speech at the University of Oxford. Over the next six months, this initiative will convene experts from civil society, academia, science and industry to provide insights on how AI will affect the DOJ’s efforts.
The findings of Justice AI will form the basis of a report for President Joe Biden about AI and the criminal justice system. The endeavor's ultimate goal is to “ensure we accelerate AI’s potential for good while guarding against its risks,” Monaco said.
“Our work at the Department of Justice is to make sure that whatever comes now or next adheres to the law and is consistent with our values,” she added.
The DOJ is already making use of AI in its enforcement efforts, with Monaco citing use cases that include classifying and tracing the source of drugs and sifting through public tips submitted to the FBI, the department's main investigative arm.
The DOJ is also using AI solutions to help comb through huge volumes of evidence, including in high-profile cases like the Jan. 6 Capitol riot in Washington.
Monaco said AI has the potential to be “indispensable” in helping identify and deter criminals, but warned it could also empower bad actors.
“It can arm nation-states with tools to pursue digital authoritarianism, accelerating the spread of disinformation and repression. And we’ve already seen that AI can lower the barriers to entry for criminals and embolden our adversaries.”
The Justice AI project is part of the DOJ’s wider efforts to embrace AI. This week, the department appointed its first chief AI officer, Jonathan Mayer, an assistant professor at Princeton University’s Department of Computer Science and School of Public and International Affairs. He holds a doctorate in computer science from Stanford and is also a graduate of Stanford Law School.
Mayer will advise the attorney general and Justice Department leadership on matters related to AI and cybersecurity. He previously advised Vice President Kamala Harris on tech when she was a senator.
Mayer’s appointment comes a year after the DOJ launched a Disruptive Technology Strike Force to prevent advanced technologies from being unlawfully acquired by foreign adversaries.
One year on from its formation, the Strike Force has brought charges in 14 cases involving alleged sanctions and export control violations and unlawful transfers of sensitive information and military-grade technology to Russia, China or Iran. Cases include individuals who allegedly attempted to procure semiconductor components for the Russian military and to steal source code from iPhone maker Apple for a China-based company.
To neutralize adversaries, Monaco said authorities “need to zero in on AI” to ensure the technology is not used to threaten U.S. national security.