AI hallucinations news moved from courtroom embarrassment to career consequence when the Nebraska Supreme Court indefinitely suspended Omaha attorney Greg Lake after a brief he submitted in a divorce appeal contained 57 defective citations out of 63, including 20 fabricated case references and four citations to cases that do not exist in any jurisdiction.
- Lake initially told justices he had uploaded the wrong version of the brief while traveling on his wedding anniversary with a broken computer, but later admitted to using AI, which the Nebraska Counsel for Discipline found constituted a failure of candor toward the court.
- The case originated from a 2025 divorce trial disputing the timing of asset division and child custody. Lake filed the appeal brief on behalf of the husband; the brief was riddled with fictitious details attributed to real Nebraska cases and with wholly invented authorities.
- The Nebraska Supreme Court’s opinion noted that the mistakes “could have been easily discovered using traditional legal research services” and described the case as presenting “a novel issue for Nebraska courts,” serving as a public warning to all attorneys in the state.
AI hallucinations news has produced its most severe professional sanction in the United States to date: the Nebraska Supreme Court handed down an indefinite suspension of attorney Greg Lake on April 15, capping months of proceedings that began when justices at oral argument in February noticed errors in the brief they could not reconcile with any published Nebraska case law.
The brief had been filed in a divorce appeal disputing the effective date for dividing marital assets and child custody. Of the 63 citations Lake made, 57 contained some form of defect. Twenty were what courts now call hallucinations: realistic-looking but entirely fabricated references, generated by an AI model that guessed plausibly at what the user wanted and produced convincing citations to cases that do not exist.
What Happened at Oral Argument and After
When a justice asked Lake during the February hearing how the errors had occurred, he said he had been traveling on his 10th wedding anniversary, his computer had broken, and he had uploaded the wrong version of the brief. The justices found the explanation unconvincing. The Counsel for Discipline investigated and uncovered a different account: Lake had used AI to draft the brief and then denied it to the court, a violation of professional conduct rules requiring candor toward the tribunal.
The Nebraska Supreme Court’s unanimous opinion rejecting the brief and referring Lake for discipline in March stated plainly: “AI, like other technological tools, can be a benefit to the legal community, but it must be used with caution and humility.” The court called the errors easily preventable with basic verification through standard legal research platforms and found that Lake had shown a failure of his duty of candor.
The Broader Sanctions Landscape
Nebraska is not an isolated case. Researcher Damien Charlotin at HEC Paris, who maintains a database of AI hallucination cases in legal proceedings, now tracks more than 1,200 such cases globally, with approximately 800 from US courts. He has described the pace as reaching “ten cases from ten different courts on a single day.”
Oregon holds the largest aggregate sanction tied to a single attorney for AI-related filing errors, at $109,700. The Sixth Circuit imposed a $30,000 fine on two Tennessee attorneys, the largest federal appellate sanction yet linked to fabricated citations. Nebraska's indefinite suspension would be the first US bar discipline action to suspend a law practice entirely over AI-related filing errors, escalating the consequences from financial penalties to career suspension.
Why This Matters for the AI Sector
Each high-profile legal sanction tied to AI models sends a regulatory signal that calibrates how AI tools are permitted to be used in professional settings. For AI risk assessments across the investment and technology landscape, the legal profession's response to hallucinations is the canary in the coal mine: it defines what "responsible deployment" means in the first high-stakes regulated environment to impose real consequences. The AI tokens and AI infrastructure markets will face analogous regulatory frameworks as the same logic extends to financial advice, medical decisions, and government applications of the same underlying models.