160 times.
That's how many documented cases exist of lawyers submitting AI-fabricated citations to courts. Roughly two new ones a week since Mata v. Avianca blew up in 2023. And it just happened again in a California dog custody case.
The pattern never changes. Lawyer uses ChatGPT to draft a brief. ChatGPT invents cases that don't exist. Opposing counsel or the judge catches it. Sanctions follow.
And these aren't solo practitioners fumbling with new tech. Morgan & Morgan, K&L Gates, firms with hundreds of attorneys. Courts have handed down fines up to $31,000, suspended lawyers from practice, and started contempt proceedings. One ruling even said opposing counsel now has a duty to catch the other side's fake citations. That's how bad it's gotten.
The tool sounds confident. It writes like a lawyer. It is still making things up.
This isn't a lawyer problem. It's an AI problem. Anyone using these tools for work that carries consequences should be doing three things:
- Verify every source, citation, and statistic before it leaves your desk. If the AI wrote it, confirm it exists.
- Build a check step into your workflow. Don't treat AI output as a first draft. Treat it as a rough suggestion.
- Know what the tool is bad at. LLMs fabricate references, invent data points, and hallucinate quotes. Plan for that.
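One concrete way to act on that list: mechanically pull every citation-shaped string out of a draft so none slips past the manual check. Here's a minimal Python sketch; the regex, the helper name `extract_citations`, and the sample draft are all illustrative, and the pattern covers only a few common federal reporter formats (real legal citations are far more varied):

```python
import re

# Rough pattern for common federal reporter citations, e.g. "123 F.3d 456"
# or "45 F. Supp. 3d 789". This is a simplification: state reporters,
# parallel cites, and short forms need a real citation parser.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                    # volume number
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)\s+"
    r"\d{1,5}\b"                                       # first page
)

def extract_citations(text: str) -> list[str]:
    """Return citation-like strings from a draft, deduplicated in order
    of first appearance, as a checklist for manual verification."""
    seen, out = set(), []
    for match in CITATION_RE.finditer(text):
        cite = match.group(0)
        if cite not in seen:
            seen.add(cite)
            out.append(cite)
    return out

# Hypothetical AI-drafted passage; the case names and cites are invented.
draft = (
    "As this Court held in Smith v. Jones, 123 F.3d 456, and "
    "reaffirmed in Doe v. Roe, 45 F. Supp. 3d 789, relief is warranted."
)
for cite in extract_citations(draft):
    print("VERIFY:", cite)  # each one gets looked up by a human
```

The point isn't that a regex solves the problem. It's that the check step should be a step, not a vibe: a list of every citation in the draft, each one confirmed against a real database before filing.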