Lawyers Blame ChatGPT for Courtroom Error


Vocabulary
fictitious
The TV series "Game of Thrones" is set in a fictitious land called Westeros.
precedent
Before COVID-19, governments had very little precedent for how to deal with a pandemic.
fabricate
The officer was fired for fabricating evidence in several major cases.
humiliate
Amy felt humiliated when her date didn't show up at the restaurant.
remorseful
The prime minister seemed genuinely remorseful during his public apology following the scandal.
bad faith
My client acted out of ignorance, not bad faith.
Article
Lawyers Blame ChatGPT for Courtroom Error

Two lawyers responding to an angry judge in a New York court blamed ChatGPT for tricking them into
including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit
against an airline that included references to past court cases that Schwartz thought were real but that
were actually invented by the AI-powered chatbot.

Schwartz explained that he used the program as he looked for legal precedents supporting a client's
case against the Colombian airline Avianca for an injury that happened on a 2019 flight.

The chatbot, which has fascinated the world with its ability to answer prompts from users, suggested
several cases that Schwartz hadn't been able to find through the usual methods used at his law firm.
The problem was that several of those cases weren't real or involved airlines that didn't exist.

"I did not comprehend that ChatGPT could fabricate cases," Schwartz said. "I would like to sincerely
apologize," he added.

He said that he had suffered personally and professionally as a result and felt "embarrassed, humiliated
and extremely remorseful."

He said that he and the firm where he worked had put safeguards in place to ensure nothing similar
happens again.

Ronald Minkoff, an attorney for the law firm, told the judge that the submission was caused by
"carelessness, not bad faith" and should not result in punishment.

"Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought
he was dealing with a standard search engine," Minkoff said. "What he was doing was playing with live
ammo."

Daniel Shin, an adjunct professor at the Center for Legal and Court Technology at William & Mary Law
School, said that this was the first time something like this had happened. He added that the case
"highlights the dangers of using promising AI technologies without knowing the risks."

The judge said he'll rule on punishments at a later date.
