– Lawyers in New York City used ChatGPT to generate legal documents, despite knowing its tendency to fabricate information
– The court discovered that six of the cited legal cases were imaginary
– The lawyers lied to the court and offered contradictory explanations when questioned about the made-up citations
– Judge P. Kevin Castel accused the lawyers and their firm, Levidow, Levidow & Oberman, P.C., of acting in bad faith and lying to cover up their mistakes
– The lawyers abandoned their responsibilities and continued to stand by the fake opinions even after judicial orders called their existence into question
By now you’ve probably heard that ChatGPT has a penchant for fabricating information and stating it as fact, what experts refer to as an AI “hallucination.” That didn’t stop two lawyers in New York City from having ChatGPT spit out some legal documents. Unsurprisingly, the AI conjured quotes and citations out of thin air, but the lawyers submitted the documents anyway, apparently without doing any due diligence to verify the facts.
The shenanigans came to light when the court noticed six of the legal cases used as citations were imaginary. The judge wrote that lawyers Peter LoDuca and Steven A. Schwartz then made matters worse for themselves by lying to the court. When the court questioned their made-up case law, the judge said Schwartz offered “shifting and contradictory explanations” and LoDuca pretended to be on vacation in a bid for more time.
Federal Judge P. Kevin Castel wrote that the lawyers and their firm, Levidow, Levidow & Oberman, P.C., acted in bad faith and lied to the court to cover up their mistakes. The respondents “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question,” Judge Castel wrote in the decision.