A New York lawyer whose legal brief was filled with cases made up by ChatGPT is facing sanctions.

Deborah Yao, Editor

June 9, 2023


At a Glance

  • Personal injury attorney Steven Schwartz is facing sanctions after he used ChatGPT for a legal brief.
  • ChatGPT made up six past cases, which Schwartz used in his legal brief that he filed with a New York district court.
  • Judge P. Kevin Castel was not amused.

A New York lawyer who used ChatGPT for a legal brief he filed in court is facing sanctions after some of the cases cited were found to be fake.

This week, personal injury attorney Steven Schwartz asked for leniency from Judge P. Kevin Castel of the U.S. District Court for the Southern District of New York, according to Bloomberg Law.

“There were many things I should have done to assure the veracity of these cases,” Schwartz said. “I failed miserably at that.”

The judge grilled him and his partner, Peter LoDuca, about the fake cases, since both of their names were on the brief. Both are from the Manhattan law firm of Levidow, Levidow & Oberman.

Judge Castel did not provide a timeframe for his ruling on any sanctions.


Schwartz represents a traveler who is suing Avianca Airlines over a 2019 trip from El Salvador to New York, claiming that a metal serving cart hit him in the left knee and caused “severe personal injuries.”

Avianca sought to dismiss the lawsuit because the statute of limitations had expired. In response, Schwartz and his team filed a 10-page brief that cited six non-existent cases, such as Martinez v. Delta Air Lines.

But Avianca’s lawyers could not find the cited cases. It turned out that Schwartz had used ChatGPT for his research and the AI model had simply made them up. Schwartz said he never imagined ChatGPT could fabricate cases.


However, the judge noted that even after Schwartz and LoDuca learned the citations were fake, Schwartz continued to rely on ChatGPT.

“I doubt we would be here today if the narrative had ended there,” Judge Castel said.

The judge walked Schwartz through each of the cases and asked whether he thought to double-check the facts on legal research databases, books, law libraries or even Google. Schwartz said “no” each time. “I continued to be duped by ChatGPT,” Schwartz said. “It’s embarrassing.”

As a result of this case, a Texas judge now requires lawyers to certify either that they did not use generative AI in their filings or, if they did, that a human verified the accuracy of the output.


About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

