Two New York attorneys have been sanctioned by a Manhattan court for submitting legal briefs containing fictitious case citations generated by ChatGPT. The lawyers, Steven Schwartz and Peter LoDuca of the personal injury law firm Levidow, Levidow & Oberman, were fined $5,000 by the presiding judge, who found they had acted in bad faith, engaging in acts of conscious avoidance and making false and misleading statements to the court.

The lawyers had used ChatGPT to assist their research in a personal injury case against Avianca, a Colombian airline. Last month, Schwartz admitted to using the AI chatbot but claimed he was unaware that it could supply false information. The fabrications came to light when Avianca’s attorneys tried, and failed, to locate the cases the duo had cited.

Even after the court and the airline questioned whether the cases existed, the lawyers continued to vouch for the fabricated opinions. ChatGPT had even attributed the fake cases to real judges. As part of the ruling, the lawyers must notify each of the judges falsely named as an author of the six fabricated opinions about their AI-related error.

The judge acknowledged that there was nothing inherently improper about attorneys using ChatGPT for assistance but emphasized that ethical guidelines impose a gatekeeping role on attorneys to ensure the accuracy of their filings.

Following the court’s decision, Levidow, Levidow & Oberman released a statement on behalf of the lawyers, saying they had made a good-faith mistake in failing to believe that a piece of technology could fabricate cases. The firm said it respectfully disagreed with the ruling and was reviewing it.

Separately, the personal injury case against the Colombian airline at the center of the controversy was dismissed by the judge in another order.

The incident has added to concerns about OpenAI, the creator of the language model, over hallucinations, misinformation, data processing, and privacy practices. Nor is this the first time ChatGPT has supplied false information in a legal context: OpenAI is currently facing a defamation lawsuit after ChatGPT falsely named a Georgia resident as a suspect in an embezzlement case involving a pro-gun foundation in Washington State, a matter entirely unrelated to him. Experts describe ChatGPT’s hallucinations as the generation of plausible, realistic-sounding information built on fictional “facts.”
