Law

While the focus of Academ-AI is the undeclared use of artificial intelligence in the academic literature, the manifestation of the same issue in the legal world provides an instructive parallel. The following is a non-exhaustive list of legal cases in which the use of artificial intelligence was suspected or confirmed. Relevant passages from each opinion are quoted, and the full text of each case is linked.

Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023)

Peter LoDuca, Steven A. Schwartz and the law firm of Levidow, Levidow & Oberman P.C. (the “Levidow Firm”) (collectively, “Respondents”) abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.

People v. Zachariah C. Crabill, 23PDJ067 (Colo. Nov. 22, 2023)

Crabill, who had never drafted such a motion before working on his client’s matter, cited case law that he found through the artificial intelligence platform, ChatGPT. Crabill did not read the cases he found through ChatGPT or otherwise attempt to verify that the citations were accurate.

Ex parte Lee, 673 S.W.3d 755, 757 n.2 (Tex. App. 2023)

Appellant only cites three published cases in support of his argument […] However, none of those cases exist.

[…]

[I]t appears that at least the “Argument” portion of the brief may have been prepared by artificial intelligence (AI).

Wadsworth v. Walmart, 2:23-cv-00118-KHR (D. Wyo. Feb. 6, 2025)

Plaintiffs cited nine total cases […] The problem with these cases is that none exist, except United States v. Caraway, 534 F.3d 1290 (10th Cir. 2008). The cases are not identifiable by their Westlaw cite, and the Court cannot locate the District of Wyoming cases by their case name in its local Electronic Court Filing System. Defendants aver through counsel that “at least some of these mis-cited cases can be found on ChatGPT.” [ECF No. 150] (providing a picture of ChatGPT locating “Meyer v. City of Cheyenne” through the fake Westlaw identifier).

Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024)

We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.

United States v. Hayes, 2:24-cr-0280-DJC, 14 (E.D. Cal. Jan. 17, 2025)

The primary case upon which Mr. Francisco relied […] was “United States v. Harris, 761 F.Supp. 409, 414 (D.D.C. 1991).”

Unfortunately, “United States v. Harris, 761 F.Supp. 409, 414 (D.D.C. 1991)” is not a real case. The citation has all the markings of a hallucinated case created by generative artificial intelligence (AI) tools such as ChatGPT and Google Bard that have been widely discussed by courts grappling with fictitious legal citations and reported by national news outlets.

Al-Hamim v. Star Hearthstone, LLC, No. 24CA0190, 17 (Colo. App. 2024)

In his response to our show cause order, Al-Hamim admitted that he relied on AI “to assist his preparation” of his opening brief, confirmed that the citations were hallucinations, and that he “failed to inspect the brief.”

Dukuray v. Experian Info. Sols., 23 Civ. 9043 (AT) (GS), 26 (S.D.N.Y. Jul. 26, 2024)

[T]he Opposition submitted by Plaintiff includes citations to several nonexistent judicial opinions with false reporter numbers […] Defendants suggest this is the result of Plaintiff’s use of ChatGPT or similar artificial intelligence (“AI”) to draft the Opposition. (Reply at 2-3). In light of recent high-profile cases involving fake citations generated by ChatGPT, there is certainly merit to that suggestion.

Byrd v. The Vill.s of Woodland Springs Homeowners Ass’n, No. 02-23-00078-CV, 10 n.12 (Tex. App. Jul. 25, 2024)

We cannot tell from Byrd’s brief if he used ChatGPT or another artificial intelligence (AI) source to attempt to develop his legal citations.

Wojcik v. Metro. Life Ins. Co., 22-cv-06518, 6 n.2 (N.D. Ill. Mar. 21, 2024)

Plaintiff’s attorney explicitly cited the prompt they inserted into ChatGPT for AI to do their research for them. Not only is the Court appalled at Plaintiff’s attorney’s refusal to do simple research, but such reliance on AI is a disservice to clients who rely on their attorney’s competence and legal abilities.

McGann v. Jagow (In re McGann), Bankr. 20-18118, 2 n.2 (B.A.P. 10th Cir. Jul. 1, 2024)

We note that many of the cases cited by Appellant do not appear to be real cases. Thus, we strongly caution Appellant against using generative artificial intelligence (such as ChatGPT, Harvey AI, or Google Bard) to compose future legal pleadings.

Arajuo v. Wedelstadt, No. 23-C-1190, 2 (E.D. Wis. Jan. 22, 2025)

Federal Rule of Civil Procedure 11 provides that, by presenting a submission to the court, an attorney “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances . . . the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.” Fed.R.Civ.P. 11(b)(2). “At the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Park v. Kim, 91 F.4th 610, 615 (2d Cir. 2024). To the extent counsel used an artificial intelligence tool (e.g., ChatGPT) that generated fake case citations, this is unacceptable.

Mescall v. Renaissance at Antiquity, Civil Action 3:23-CV-00332-RJC-SCR, n.1 (W.D.N.C. Nov. 13, 2023)

Defendants allege that Plaintiff’s Response appears to have been partially written with the aid of artificial intelligence (“AI”). (Doc. No. 18 at 7, Doc. 20 at 1-2).

Budinski v. Mass., 6:24-CV-06449 EAW, n.1 (W.D.N.Y. Dec. 3, 2024)

While not clear, it appears that Plaintiff may be disputing that contention with reliance on information obtained through “ChatGPT.”

Scott v. Fed. Nat’l Mortg. Ass’n, No. RE-2023-037, 7 (Me. Super. Jun. 14, 2023)

In Mr. Scott’s opposition, he cites and quotes several cases. Although the citations are mostly in correct format, with a case name, reporter volume and page number, and year, the citations do not refer to any case that the Court can locate. Searching legal databases for the text of the quotations returns no results. In other words, the cases and quotations are fabricated.

[…]

The Court agrees that sanctions should be imposed both to penalize Mr. Scott’s noncompliance with Rule 11 and to deter future litigants from blindly relying on filings generated by artificial intelligence.

Kruse v. Karlen, 692 S.W.3d 43, 51 (Mo. Ct. App. 2024)

In his Reply Brief, Appellant apologized for submitting fictitious cases and explained that he hired an online “consultant” purporting to be an attorney licensed in California to prepare the Appellate Brief […] Appellant stated he did not know that the individual would use “artificial intelligence hallucinations” and denied any intention to mislead the Court or waste Respondent’s time researching fictitious precedent.

Transamerica Life Ins. Co. v. Williams, No. CV-24-00379-PHX-ROS, 3 n.3 (D. Ariz. Sep. 6, 2024)

Defendant Williams’ filings are replete with citations to nonexistent caselaw and legal authorities that do not correspond to her claims, suggesting that Defendant Williams may be using AI, such as ChatGPT, to draft her briefs, which is impermissible when fictitious legal authorities are cited.

United States v. Cohen, 724 F. Supp. 3d 251, 254 (S.D.N.Y. 2024)

Cohen had obtained the cases and summaries from Google Bard, which he “did not realize . . . was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not. Instead, [he had] understood it to be a super-charged search engine . . . .” Cohen Decl. ¶ 20.

Iovino v. Michael Stapleton Assocs., Civil Action 5:21-cv-00064, 14-15 (W.D. Va. Jul. 24, 2024)

Here, Iovino’s brief objecting to Judge Hoppe’s ruling cites multiple cases and quotations that the court, and MSA, could not find when independently reviewing Iovino’s sources.

[…]

MSA flagged each of these discrepancies in its opposition brief and posited that they were the result of “ChatGPT run amok.” (Def.’s Opp’n Br. at 2, 5 n.1, 11, 13.) Even though Iovino provided the court supplemental authority to support her objections after MSA raised this issue, she puzzlingly has not replied to explain where her seemingly manufactured citations and quotations came from and who is primarily to blame for this gross error. This silence is deafening.

Tara Rule v. Braiman, 1:23-cv-01218 (BKS/CFH), 23 n.14 (N.D.N.Y. Sep. 4, 2024)

In her submissions, Plaintiff cited a number of cases that the Court and defense counsel were unable to find or verify […] In any future filings Plaintiff must include a full citation of any case cited […] The Court notes that “ChatGPT and similar AI programs are capable of generating fake case citations and other misstatements of law.” Dukuray v. Experian Info. Sols. (Dukuray I), No. 23-cv-9043, 2024 WL 3812259, at *11-12, 2024 U.S. Dist. LEXIS 132667, at *29 (S.D.N.Y. July 26, 2024), adopted by Dukuray v. Experian Info. Sols. (Dukuray II), 2024 WL 3936347, 2024 U.S. Dist. LEXIS 152287 (S.D.N.Y. Aug. 26, 2024).

Anonymous v. N.Y.C. Dep’t of Educ., 1:24-cv-04232 (JLR), 14 (S.D.N.Y. Jul. 18, 2024)

Defendants note that, at times, Plaintiff “cit[es] to and reli[es] on what appears to be non-existent legal authority.” Remand Opp. at 9. Having reviewed the case citations flagged by Defendants, the Court is likewise unable to locate them […] assuming that (as was true in Mata) these nonexistent cases are the product of Plaintiff using an artificial-intelligence program like ChatGPT, see 678 F.Supp.3d at 451, it is no secret that such programs can be unreliable.