If You’ve Been Accused of a Crime, Be Careful What You Tell AI
People now ask ChatGPT and Claude everything, including what to do after an arrest, whether the police can prove a case, and how to explain suspicious facts. If you have been accused of a crime, that can be a serious mistake. A recent federal court opinion shows why people should be very cautious before typing case facts, strategy, timelines, or explanations into a consumer AI platform.
A recent opinion from the Southern District of New York, United States v. Heppner, addressed whether a criminal defendant’s communications with the AI platform Claude were protected by the attorney-client privilege or the work-product doctrine. On the facts before it, the court said no. The Harvard Law Review’s discussion of the decision is worth reading, and helped inspire this article.
The practical lesson is straightforward. AI is not your lawyer. A public AI platform is not the same thing as a confidential legal channel. If you are under investigation, worried about charges, or already facing prosecution, you should assume that discussing your case with AI can create risks your lawyer would rather you had avoided.
What did the court decide in United States v. Heppner?
Judge Jed Rakoff described the issue as a question of first impression nationwide: whether communications with a publicly available AI platform made in connection with a criminal investigation are protected by attorney-client privilege or the work-product doctrine. The court held that they were not, at least on the facts presented there.
According to the opinion, the defendant used Claude on his own initiative. His lawyers argued that he had entered information he learned from counsel and that he later shared the AI-generated material with counsel. But counsel also conceded that they did not direct him to run the Claude searches. The court treated that fact as important.
The court rejected both privilege and work-product protection, reasoning that the communications were not protected lawyer-client communications and were not prepared by or at counsel's direction.
The part most people will miss is this: later sharing the material with your lawyer may not fix the problem. The court explained that non-privileged communications do not become privileged just because they are later provided to counsel.
Are ChatGPT conversations protected by attorney-client privilege?
Not automatically, and that is the point people need to understand. Attorney-client privilege protects confidential communications between lawyer and client made for the purpose of obtaining legal advice. A consumer AI platform is not your lawyer. If you voluntarily disclose facts, strategy, admissions, or documents to a third-party AI system, a court may conclude that you did not make a privileged legal communication at all. That is the warning Heppner delivers.
This does not mean every AI-related communication in every setting will always be treated the same way. The Heppner court noted that things might at least arguably look different if counsel had directed the use of the tool as part of legal representation. But that was not the situation there, and it is not a safe assumption for someone to make on their own. If the privilege question is even debatable after you have already disclosed damaging facts to AI, you have created a problem your lawyer may now need to manage.
Why this matters even more in criminal cases
Criminal cases often turn on statements, timing, intent, knowledge, credibility, and inconsistency. People under stress often use AI as a private sounding board. They type in what happened, what they think the police know, why they said something, why they deleted something, or what defense might work. That can be dangerous because your own words are often the evidence.
What feels like harmless brainstorming can later look like something very different. A prosecutor may try to frame it as an admission, a changing story, consciousness of guilt, preparation for witness tampering, or strategic tailoring of facts. In a criminal case, those risks matter whether the allegation involves drunk driving, a sex offense, a gun charge, or a federal investigation.
If you are facing one of those situations, these pages explain the broader defense context: Michigan criminal defense lawyer, Michigan DUI lawyer, Michigan sex crimes lawyer, Michigan gun crimes lawyer, and Michigan federal criminal defense lawyer.
What should you never share with ChatGPT or Claude about a criminal case?
- Do not paste your version of events into AI and ask whether you committed a crime.
- Do not upload police reports, witness statements, text messages, photos, body-camera summaries, medical records, search histories, or discovery.
- Do not ask AI to rewrite an explanation you plan to give to police, a prosecutor, probation, an employer, a licensing board, or a witness.
- Do not test alternate timelines or theories and assume those experiments are private.
- Do not use AI as a substitute for confidential legal advice.
The core problem is not that AI is always inaccurate, though it can be. The problem is that you may be disclosing legally significant information to a third-party system outside the attorney-client relationship, and a court may treat that very differently from a conversation with counsel. Heppner shows exactly why that matters.
What readers usually miss about AI and criminal defense
Most people assume privacy based on how the interaction feels. They are alone, typing into a box, asking questions they would be embarrassed to ask another person. That creates a false sense of confidentiality.
But courts do not decide privilege based on how private something feels. They look at the legal character of the communication, who the communication was with, whether confidentiality was preserved, and whether the communication was made within a protected legal relationship. The Heppner court analyzed the issue through ordinary privilege principles, not through marketing language about AI assistants. That is what makes the opinion important.
Our firm’s view
At Barone Defense Firm, we view this as an early-warning issue. Long before a case gets to trial, people accused of crimes often do avoidable damage by trying to work the case out on their own. Sometimes they talk too much to police. Sometimes they text the wrong person. Now, increasingly, they disclose facts and strategy to AI.
That is not prudent damage control. It is often the opposite. The better course is disciplined early intervention. If you are under investigation or have been charged, stop discussing the facts with AI and speak directly with defense counsel.
That is especially true in higher-stakes matters involving operating while intoxicated allegations, sex crimes, gun offenses, or federal charges, where words, timelines, and state of mind can become central to the prosecution’s theory.
What should you do if you already used AI after an arrest or investigation?
Stop using it for that purpose. Do not continue refining prompts. Do not try to clean up what you already wrote. Do not assume deletion solves the problem. Do not make independent decisions about what matters and what does not.
Instead, tell your lawyer promptly and completely what you shared, when you shared it, what platform you used, and whether you uploaded any documents or images. Your lawyer can then assess the problem as part of the defense strategy. What usually makes things worse is continued unsupervised use of AI after the legal issue has already started.
Can police or prosecutors get your AI conversations?
That question depends on facts, platform practices, legal process, and the procedural posture of the case. A careful lawyer should not overstate the answer. But the safe assumption is not that your AI chats are protected. The lesson from Heppner is that you should not rely on attorney-client privilege or work-product doctrine to rescue self-directed disclosures to a public AI system.
Final takeaway
If you would hesitate to say it to the police, do not type it into AI. If you are under investigation, have been arrested, or are worried that something you already shared with AI could become part of the government’s case, get legal advice before taking another step.
At Barone Defense Firm, early intervention often makes the difference between a manageable problem and a much more difficult one. Contact us before you make the case harder than it needs to be.
Frequently asked questions about ChatGPT and attorney-client privilege
Are ChatGPT conversations privileged?
Not automatically. A recent federal decision in United States v. Heppner held that, on the facts before the court, a defendant’s communications with the AI platform Claude were not protected by attorney-client privilege or the work-product doctrine.
Can police subpoena ChatGPT conversations?
That depends on the platform, the facts, and the legal process involved. The safer assumption is that you should not treat AI chats as if they were confidential lawyer-client communications.
What if I already told AI about my criminal case?
Stop using AI for that purpose and tell your lawyer what you shared. Do not try to handle the issue yourself.
Is AI ever a substitute for a criminal defense lawyer?
No. AI can generate language, summarize information, and imitate analysis, but it is not a confidential legal advisor and should not be treated like one in a criminal case.
About the Author
Patrick T. Barone is a Michigan criminal defense lawyer and senior partner at Barone Defense Firm. He is an IACP/NHTSA certified SFST instructor and practitioner, a judicially qualified SFST court expert, believed to be the only Michigan attorney so qualified, and the author of five books including Defending Drinking Drivers. He has been recognized by Michigan Super Lawyers, Best Lawyers in America, and Leading Lawyers. This article is intended as general information, not legal advice for any specific case.
For broader commentary on AI, evidence, and legal strategy, readers can also visit Patrick Barone’s AI on Trial on Substack.
Michigan Criminal Defense Lawyer Blog