The Nebraska Supreme Court has temporarily suspended Omaha attorney Greg Lake from practicing law after he submitted an appellate brief riddled with citations that did not exist, citations the court ultimately concluded were generated by artificial intelligence. The one-page suspension order, signed by Nebraska's chief justice on April 15, 2026, has quickly become one of the most prominent AI accountability rulings of the year and a warning shot to the legal profession.
What the brief contained
Lake argued the appeal, a divorce case, before the state's highest court in February. According to the court's findings, the brief contained 57 defective citations out of 63. Twenty of those were fully fabricated case references, four of which pointed to cases that exist in no jurisdiction at all. Justices noticed the irregularities during oral argument and pressed Lake on why the brief contained so many errors.
From a broken-computer excuse to an AI admission
Lake initially told the court he had been celebrating his 10th wedding anniversary and that his computer had broken while he was traveling, suggesting he had inadvertently uploaded the wrong version of the brief. He later admitted he had used AI to draft the document, calling his earlier explanation a "grave error of judgment" and acknowledging that he had not been forthright with the court.
The length of the suspension will depend on the outcome of a full disciplinary investigation. A court-appointed referee will recommend how long Lake should be barred from practice.
A pattern, not an isolated incident
The Nebraska ruling lands amid a wider crackdown on AI-induced "hallucinations" in legal filings. U.S. courts have reportedly imposed at least $145,000 in sanctions against attorneys for AI citation errors in the first quarter of 2026 alone. Bar associations have begun publishing more aggressive guidance, and several state courts now require attorneys to certify whether generative AI was used to draft filings — and to verify every citation independently.
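Verifying every citation independently does not have to be entirely manual. Here is a minimal first-pass sketch in Python, assuming the requests library and CourtListener's public citation-lookup service; the endpoint URL, payload, and response fields shown are assumptions to confirm against the current API documentation.

```python
import requests

# Assumed endpoint: CourtListener's citation-lookup API, which parses a block
# of text, detects reporter-style citations, and reports whether each matches
# a known opinion. Treat the URL and response shape as assumptions to verify.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def check_brief_citations(brief_text: str) -> list[tuple[str, bool]]:
    """Return (citation, found) pairs for every citation detected in the text."""
    resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    # Assumed response: one JSON object per detected citation, carrying a
    # per-citation status (200 = matched a real opinion, 404 = no match).
    return [(item["citation"], item.get("status") == 200)
            for item in resp.json()]

if __name__ == "__main__":
    draft = "Compare Smith v. Jones, 123 F.3d 456 (8th Cir. 1997)."
    for cite, found in check_brief_citations(draft):
        print(("OK     " if found else "MISSING"), cite)
```

A check like this catches phantom cases, but not real cases cited for propositions they do not support; that still requires a lawyer to read the opinion.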
Why this matters
For the legal industry, the case crystallizes a tension that has been building since the first wave of ChatGPT-fueled filings in 2023: AI tools can dramatically accelerate research and drafting, but they remain capable of producing confident, fluent prose anchored to references that simply do not exist. Lake's story is a textbook example of what happens when that risk is not actively managed.
For AI vendors, it is another reminder that liability does not stop at the model. Courts are signaling that responsibility lives with the human professional who signs the filing — but the reputational damage from high-profile hallucination cases will continue to push enterprise customers, especially in regulated fields, toward tools with stronger citation grounding, retrieval guardrails, and verifiable source links.
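One concrete reading of "retrieval guardrails": a drafting tool lets a citation through only if it appears verbatim in the material its retrieval layer actually returned, so a hallucinated case has nothing to match against and gets flagged. A toy sketch of that check follows, with the regex and function names invented purely for illustration.

```python
import re

# Loose pattern for reporter-style citations such as "123 F.3d 456".
# Illustrative only; real citation parsing is far more involved.
REPORTER_CITE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.\s]{0,20}?\s\d{1,5}\b")

def unguarded_citations(draft: str, retrieved_sources: list[str]) -> list[str]:
    """Return citations in the draft that never appear in the retrieved corpus.

    The guardrail idea: the model may cite only what retrieval returned, so
    anything on this list is presumptively hallucinated and needs human review.
    """
    corpus = "\n".join(retrieved_sources)
    return [m.group(0) for m in REPORTER_CITE.finditer(draft)
            if m.group(0) not in corpus]
```

Verifiable source links are the natural complement: the tool stores, alongside each citation, the retrieved document it came from, so a reviewer can click through instead of taking the model's word for it.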
Lake's case is unlikely to be the last. It may, however, be the one that finally pushes the legal profession to treat verification of AI-drafted work as malpractice prevention rather than an optional add-on to a productivity tool.