A New York lawyer has apologized after using ChatGPT to prepare a filing for a civil case he was arguing, after it emerged that the artificial intelligence had supplied him with entirely fabricated material.
“I had no idea that ChatGPT was capable of fabricating entire court cases or opinions in a way that made them appear authentic,” attorney Steven Schwartz wrote in a document recently added to the court file.
In the case at hand, the lawyer was representing a client suing the Colombian airline Avianca in federal civil court in Manhattan. Roberto Mata sought compensation for what he said was a knee injury sustained when a metal serving cart struck him during an August 2019 flight from El Salvador to New York.
The airline asked the court to dismiss the case, but the passenger’s lawyer responded with a memorandum citing several case-law precedents supporting his position, including earlier suits filed against carriers from several countries, among them Iran, Egypt and China.
The problem arose when neither the opposing party’s lawyers nor the judge handling the case could find any trace of these precedents in the case law.
The judge noted in writing that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes.” Steven Schwartz had to admit that ChatGPT, the AI tool developed by OpenAI, had fabricated all of these elements.
In the apology he filed after the judge summoned him to a hearing on possible sanctions, the lawyer insisted that he had not sought to deceive the court.
“When I did the legal research for this case, I thought ChatGPT was a reliable search engine. I now realize that is not the case,” he said.
ChatGPT has achieved enormous success in recent months thanks to its remarkable ability to generate content that closely resembles human work, such as poems or articles.
But it has also drawn a torrent of criticism, amid widespread fears that it could be used to spread disinformation, manipulate elections, eliminate large numbers of jobs, or even pose an existential threat to humanity.