Lawyer Does Big Oopsie, Cites Made-up Cases Invented By ChatGPT While Defending His Client
Just two weeks after a college professor nearly failed his entire class on suspicion of ChatGPT use, having employed OpenAI's service as a makeshift plagiarism checker, another professional has landed in hot water by asking the AI language model to do something it can't: write legal arguments.
According to reports, New York lawyer Steven Schwartz (not that one) was recently representing his client Roberto Mata in his lawsuit against the airline Avianca. Mata alleged he was injured when one of the airline's carts struck his knee.
Avianca attempted to get the case thrown out, and Schwartz drafted a 10-page argument about why the case should go to court, citing numerous cases as legal precedents, such as Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.
However, none of these cases is real; all were reportedly invented by ChatGPT, which Schwartz had apparently used to write his argument for him.
Last week, Schwartz told a New York court that he "greatly regrets" using ChatGPT and stated he did not know the program could produce unreliable information. Schwartz now faces sanctions, and a hearing on the matter is scheduled for June 8th (it is unclear how Mata's lawsuit is going).
In his defense, Schwartz stated that he had grilled ChatGPT to make sure the cases it spit out were real. He asked, "Is varghese a real case," to which the program said it was.
Like a good lawyer, Schwartz pressed further, asking, "What is your source?" ChatGPT then provided a legal citation (that it again had made up). He also asked, "Are the other cases you provided fake?," to which ChatGPT responded, "No, the other cases I provided are real and can be found in reputable legal databases."
The controversy echoes an incident earlier this month in which a professor nearly flunked his senior students on suspicion that they had used ChatGPT to write their essays. In that instance, Dr. Jared Mumm fed his students' papers into ChatGPT and asked whether it had written them, to which ChatGPT responded that it was possible it had. In response, students and Redditors used the same method to see if Mumm's own thesis was "AI-generated," and ChatGPT said "yes."
Nukegirl
The notion of a grown man who "did not know the program could produce unreliable information" before getting it to do an important job is utterly baffling to me.
How does one get to know anything about AI chatbots without getting acquainted with the fact that they hardly know what they're talking about? I'd imagine that the flaws in publicly available AI technology would be the first or second thing one learns about it. For instance, whenever AI art is discussed, it's almost inevitable that its poor comprehension of hands gets brought up.
Even as far back as GPT-2, AI Dungeon 2 had a lot of hoops that players would need to jump through to make sure the AI wouldn't make a mistake like forgetting a previously established fact. It didn't take long for me to learn that some supposed media expert bots on Character.ai very obviously fake their understanding of facts, especially when numbers are brought up. Perhaps that would require actually trying out the technology, though. But if you have never, ever tried it before, why would you give an AI such a crucial first task?
Revic
I've been enjoying the "businesses get overhyped and start cramming their eggs into the AI basket before it's done being woven" phase. I just hope they hit these walls loudly and often enough to maybe save some people from being fired and replaced by a system that's not quite ready to compete with a search engine for many tasks.
Evilthing
Yea. It's not that developed. It's good for finding general ideas, but always double-check the facts and factoids.
lecorbak
chatGPT be like "trust me bro", and the lawyer actually trusted it.
ObadiahtheSlim
People seem to forget that it's just a sophisticated chatbot.
VeteranAdventureHobo
People who treat ChatGPT like a combination search engine and writing tool are genuinely, and I'm not exaggerating, some of the most gullible and dumb people I've seen.
It creates absolute bullshit and people will just be like "IDK, sounds about right" and use it for papers or as an alternative to google
Andrina Westerdale
I think this is why Google's Bard hasn't been pushed as hard. They want OpenAI(MS) to mess up badly and then Google steps up as the "sensible" version