Man Sues OpenAI After ChatGPT Allegedly Falsely Accuses Him Of Embezzlement
By Alexa Heah, 08 Jun 2023
While artificial intelligence chatbots have made life easier by generating information at a rapid pace, the technology is far from perfect. Recently, one such mistake led radio host Mark Walters to sue OpenAI, the company behind ChatGPT.
When a journalist for a gun website asked the chatbot to summarize The Second Amendment Foundation v. Robert Ferguson, the tool came back with an answer that pulled Walters into the legal melee, claiming he had been accused of embezzling money.
However, none of that is true. Walters was not involved in the aforementioned lawsuit and was likely the victim of what researchers dub an AI “hallucination”—which occurs when generators spit out “facts” that are simply false.
As such, Walters has now filed what appears to be the first-ever libel lawsuit against the chatbot's maker, citing damage to his reputation. Could this filing open Pandora's box to more and more individuals taking AI companies to court over fabricated information?
“Every statement of fact in the summary pertaining to Walters is false,” per the suit, which was filed in Gwinnett County Superior Court. The complaint alleged that OpenAI was negligent in publishing “libelous material regarding Walters” in the passage it provided to the journalist.
According to Futurism, after taking a deeper look into the matter, the hallucination by ChatGPT appears even more confusing. It’s unclear why the chatbot identified Walters as the Foundation’s Chief Financial Officer and Treasurer when his name was never mentioned in the initial prompt.
Worse still, when the journalist asked the generator to point to the exact passage within the lawsuit that mentioned Walters, it doubled down on its allegations. Here's the twist: the text "cited" by ChatGPT does not appear anywhere in the actual filing.
Such falsehoods, while disturbing, shouldn't come as a complete surprise. After all, OpenAI CEO Sam Altman has admitted the company needs to address the problem of hallucinations, and a recent company blog post outlined research aimed at cutting down on fabrications.
“In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still produce logical mistakes, often called hallucinations. Mitigating hallucinations is a critical step towards building aligned AGI,” the post said.
[via Gizmodo and Futurism, cover image via Ryan Deberardinis | Dreamstime.com]