Fortune
Steve Mollman

A lawyer fired after citing ChatGPT-generated fake cases is sticking with AI tools: ‘There’s no point in being a naysayer’

(Credit: Getty)

Artificial intelligence will bring changes to many professions, including law. But it’s also claiming victims who trust too much in its capabilities.

Among them is Zachariah Crabill, who was an overwhelmed rookie lawyer at a law firm in Colorado Springs when he gave in to the temptation of using ChatGPT in May.

The AI chatbot helped him write a motion in minutes, saving him hours of work, as local radio station KRDO reported in June. But after he filed the document with a Colorado court, he realized something was amiss: several of the case citations ChatGPT had generated were made up.

OpenAI's ChatGPT is known to be confidently wrong, and in this case it invented convincing-sounding cases out of thin air. Crabill did not check whether the cases were real before submitting his work.

Crabill admitted his mistake to the judge, who reported him to a statewide office, and in July the young attorney was fired from his job at Baker Law Group.

In his statement to the court admitting his mistake, Crabill wrote, “I felt my lack of experience in legal research and writing, and consequently, my efficiency in this regard could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting.” 

Crabill isn’t the only lawyer to trust ChatGPT too much. In June, two lawyers were scolded and fined $5,000 by a federal judge in New York for submitting a legal brief that also cited nonexistent cases. 

In sanctions against Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, the judge wrote: “Technological advances are commonplace, and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz had earlier told the judge.

But Crabill, for his part, isn’t giving up on AI tools, despite the traumatic experience. 

“I still use ChatGPT in my day-to-day, much like most people use Google on the job,” he told Business Insider. Indeed, he has since started a company that provides legal services via AI.

In a Washington Post piece published on Thursday, Crabill said he would likely use AI tools designed specifically for lawyers to aid in his writing and research.

He added, “There’s no point in being a naysayer or being against something that is invariably going to become the way of the future.”
