---
title: "It’s Not An AI Hallucination — It’s Lazy Editing Of A Human Paralegal"
author: "Joe Patrice"
published_at: "2026-04-15T22:15:00+00:00"
link: "https://abovethelaw.com/2026/04/its-not-an-ai-hallucination-its-lazy-editing-of-a-human-paralegal/"
feed: "https://abovethelaw.com/feed/"
clawfeed: "https://agent.clawfeeds.com/feed/6l2j-zrhq-h4rr.md"
feed_url: "https://agent.clawfeeds.com/feed/6l2j-zrhq-h4rr.md"
categories: ["Technology","AI Legal Beat","Courts","Evelyn Padin","Legal Ethics"]
---

# It’s Not An AI Hallucination — It’s Lazy Editing Of A Human Paralegal

There are now [over 1,000 AI hallucination cases and counting](https://www.npr.org/2026/04/03/nx-s1-5761454/penalties-stack-up-ai-spreads-through-legal-system) around the world, [according to one researcher](https://www.damiencharlotin.com/hallucinations/). Covering hallucinations has become its own subgenre of legal journalism at this point, a growth industry rivaling the artificial intelligence industry itself. So, occasionally, we need a story to come along and remind everyone of the inconvenient truth that these professionally embarrassing mistakes aren’t the fault of the technology so much as a critical operator error.

A new sanctions order out of the District of New Jersey in *[Gutierrez v. Lorenzo Food Group](https://abovethelaw.com/2026/04/its-not-an-ai-hallucination-its-terrible-management-of-a-human-paralegal/2/)* (flagged by [Rob Freund](https://x.com/RobertFreundLaw/status/2041910819852874186?s=20), a must-follow for AI hallucination news) sets the stage with a familiar tale for those following the AI hallucination beat. A brief opposing a motion to dismiss contained incorrect citations and quotations attributed to the wrong cases. The brief also included citations to cases that had been bad law for decades. The court and defense counsel both identified the problems, and everyone began the countdown to the next big AI hallucination benchslap.

Except it never arrived.

Because after months of investigation — including conflicting affidavits, finger-pointing between colleagues, and an evidentiary hearing — Judge Evelyn Padin concluded that no one used generative AI at all. Instead, an unlucky paralegal had been substantively drafting the brief and, when a former associate told her that the brief needed to have Third Circuit citations (logically, as the case was in the Third Circuit), she took that instruction and, as Judge Padin observes, “made the regrettable decision to attribute quotations that were actually from cases outside the Third Circuit to cases within the Third Circuit.” The quotes had appeared in earlier drafts, and when told that they needed to be Third Circuit cites, the paralegal “seemingly swapped in the Third Circuit citations, making it appear as if the quotations came from those Third Circuit cases.”

Humans can hallucinate too!

The court was admirably direct about why this distinction doesn’t actually matter:

> Whether GAI was used in drafting the MTD Opposition is not central to this Court’s decision because regardless of whether it was a person or a large language model that made these errors, the attorney responsible for filing the brief has an obligation to ensure that the arguments and contentions made within it are accurate and supported by existing law.

Artificial intelligence may [accelerate the process](https://abovethelaw.com/2025/10/has-ai-managed-to-make-lawyers-even-dumber/) of uncovering lawyers who take thorough editing for granted, but the mistake — in either event — is a human failure to check their work.

Attorney Geoffrey Mott, who signed the brief, reviewed exactly one draft of the opposition — the initial one — and, the court found, never looked at it again. As the paralegal made disastrous citation changes, seemingly no lawyer doubled back to cite-check the final brief. The court noted that Mott’s assertion that he “thoroughly reviewed” the brief “at the very best, strain\[s\] credulity.”

But the cover-up — as always — made things worse. When the court first flagged the problems, Mott and the paralegal filed affidavits blaming the former associate for inserting the bad citations, when all he had actually done was give the instruction the paralegal misunderstood. The court was “deeply troubled” by this approach and didn’t sugarcoat it:

> Mr. Mott was disappointingly slow to take any real ownership over these errors. The Court might have avoided a hearing — and Mr. Mott might have avoided monetary sanctions — had he promptly conducted a thorough inquiry and provided the Court with a holistic and accurate representation of the facts the first time he was ordered to do so.

Mott got hit with monetary sanctions (the amount TBD once defense counsel submits its fee certification) and was ordered to complete two CLE courses on ethics and AI. The AI CLE requirement might seem counterintuitive as redress for an entirely human error, but the court pointed to Mott’s repeated claims at the hearing that he was unfamiliar with generative AI, and decided it was time he figured it out.

AI catastrophes draw attention these days, whether it’s [Butler Snow getting kicked off Alabama prison matters](https://abovethelaw.com/2025/07/court-kicks-lawyers-off-case-after-finding-fake-ai-cases-in-filings/) after senior partners failed to check their team’s work, or the [Goldberg Segalla meltdown](https://abovethelaw.com/2025/07/biglaw-ai-apocalypse/) that started with one fake cite and metastasized into a systemic disaster. But in all those cases, the real error is between the keyboard and the chair. And when that’s the nature of the bug, it doesn’t matter if the issue originated from the computer or a misguided human.


---

***![Headshot](https://abovethelaw.com/wp-content/uploads/2016/11/Headshot-300x200.jpg)[Joe Patrice](http://abovethelaw.com/author/joe-patrice/) is a senior editor at Above the Law and co-host of [Thinking Like A Lawyer](http://legaltalknetwork.com/podcasts/thinking-like-a-lawyer/). Feel free to [email](mailto:joepatrice@abovethelaw.com) any tips, questions, or comments. Follow him on [Twitter](https://twitter.com/josephpatrice) or [Bluesky](https://bsky.app/profile/joepatrice.bsky.social) if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a [Managing Director at RPN Executive Search](https://www.rpnexecsearch.com/josephpatrice).***

