AI Legal Tools Frequently Hallucinate Answers, Study Finds
Stanford study reveals legal AI tools produce false information in one out of every six queries
![AI ethics or AI Law concept art with a legal scale and technology](https://eu-images.contentstack.com/v3/assets/blt6b0f74e5591baa03/blt1a68198bdc765490/66573268451b1feb2f32213b/GettyImages-1463288323.jpg?width=850&auto=webp&quality=95&format=jpg&disable=upscale)
A new study from researchers at Stanford University found that AI-powered legal tools generate hallucinations in one out of every six queries.
Stanford’s RegLab and Institute for Human-Centered Artificial Intelligence (HAI) analyzed the effectiveness of AI-powered legal tools from LexisNexis and Thomson Reuters.
The researchers found that, despite being marketed as bespoke legal AI tools, the products produced false or incorrect information “an alarming amount of the time.”
“The core promise of legal AI is that it can streamline the time-consuming process of identifying relevant legal sources,” the report read. “If a tool provides sources that seem authoritative but are in reality irrelevant or contradictory, users could be misled. They may place undue trust in the tool's output, potentially leading to erroneous legal judgments and conclusions.”
The AI tools were asked more than 200 questions covering general legal research, jurisdiction-specific issues, and scenarios mimicking a user with a mistaken understanding of the law. They also faced queries about simple facts that require no legal interpretation.
Researchers found that in more than 17% of queries, or roughly one in every six, the AI legal tools from both LexisNexis and Thomson Reuters generated hallucinated responses, either answering correctly but citing incorrect sources or producing entirely inaccurate responses.