Therapists Using ChatGPT in Secret Is Eroding Patients' Trust

A revealing glitch

Declan only learned his therapist was using ChatGPT because of a technical mishap during an online session. When the video connection faltered and the therapist inadvertently shared his screen, Declan watched in real time as the clinician pasted his words into ChatGPT, then summarized or selectively used the bot’s replies. He describes the session as surreal: his own responses often echoed the AI’s suggestions, and the therapist seemed to lean on the tool to steer the conversation.

How clients are noticing

This is not an isolated impression. Some patients notice subtle “AI tells” in messages: unexpected wording, a different punctuation style, or a point-by-point treatment of their concerns that feels more mechanical than personal. One client described an email from her therapist that looked “polished” and lengthy but felt unfamiliar in tone and formatting; when she asked, the therapist confirmed they had used AI to draft the reply. Another person received a consoling message about the death of a pet that accidentally included the AI prompt at the top, revealing that the response had been produced with ChatGPT.

These discoveries produce a range of reactions: surprise, confusion, shame, disappointment and, crucially, a loss of trust. For people seeking therapy for relational or trust issues, the knowledge that their therapist leaned on generative AI can feel like a fundamental breach.

The disclosure dilemma and evidence

Some research suggests AI can write effective therapeutic messages. A 2025 study in PLOS Mental Health asked therapists to use ChatGPT to respond to clinical vignettes and found that AI replies were often indistinguishable from human responses and sometimes ranked as better aligned with therapeutic best practices. Cornell researchers found AI-generated messages can increase closeness and cooperation — but only if recipients do not know a machine helped craft them. Once people suspect AI involvement, goodwill and perceived authenticity drop quickly.

Past experiments add context. In 2023, the online therapy service Koko mixed GPT-3-generated responses with human ones and found that users rated the AI-assisted messages positively, but the experiment provoked outrage once it came to light. Claims have also surfaced about therapists using AI on platforms such as BetterHelp, leaving some clients feeling betrayed and worried about data exposure.

Experts argue that transparency is central. Adrian Aguilera, a clinical psychologist, says therapists should disclose when and why they plan to use AI tools, so patients receive such messages with context rather than discovering the AI’s involvement on their own.

Privacy and regulatory risks

Beyond trust, there are real privacy concerns. General-purpose chatbots like ChatGPT are not HIPAA compliant and are not regulated by the US Food and Drug Administration for clinical use. Researchers such as Pardis Emami-Naeini warn that therapists who paste client material into an LLM risk exposing sensitive health information. Seemingly harmless details can be enough to re-identify a client, and redacting or paraphrasing thoroughly enough to remove every sensitive signal takes time and skill.
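
To make the redaction problem concrete, here is a minimal, hypothetical Python sketch of the kind of naive scrubbing a busy clinician might attempt before pasting notes into a chatbot. The patterns, sample note, and names are invented for illustration; this is not a compliance tool.

```python
import re

# Hypothetical sketch: naive pattern-based scrubbing of a clinical note.
# The sample note and every name in it are invented for illustration.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def naive_redact(text: str) -> str:
    """Replace obvious identifiers (phones, emails, dates) with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = ("Maria missed our 3/14/2025 session because her shift at the "
        "Riverside ER ran late; reach her at 555-867-5309.")
print(naive_redact(note))
# The phone number and date are gone, but a first name, an employer, and a
# job setting remain -- together often enough to re-identify someone.
```

Even this toy example leaves behind contextual details that, taken together, can point back to a specific person, which is why experts say thorough redaction takes more time and skill than it first appears.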

Some companies now sell therapist-focused, HIPAA-oriented tools for note-taking and transcription that claim encryption and pseudonymization. Still, experts caution that even with improved protections there is always some risk of information leakage, secondary uses of data, or security breaches. Past hacks of mental health providers that exposed clients’ records serve as sobering warnings.
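
Vendors in this space do not publish their exact schemes, but the basic idea behind pseudonymization can be illustrated with a short, assumption-laden sketch: identifiers are swapped for random tokens before text leaves the device, and the lookup table that reverses the swap stays local. The client and clinic names below are invented for illustration.

```python
import secrets

class Pseudonymizer:
    """Illustrative sketch of pseudonymization (not any vendor's actual scheme):
    swap known identifiers for random tokens and keep the mapping locally,
    so the outbound text carries no direct identifiers."""

    def __init__(self) -> None:
        self._forward: dict[str, str] = {}  # identifier -> token
        self._reverse: dict[str, str] = {}  # token -> identifier

    def pseudonymize(self, text: str, identifiers: list[str]) -> str:
        """Replace each listed identifier with a stable random token."""
        for ident in identifiers:
            token = self._forward.setdefault(
                ident, f"[ID_{secrets.token_hex(3).upper()}]"
            )
            self._reverse[token] = ident
            text = text.replace(ident, token)
        return text

    def restore(self, text: str) -> str:
        """Reverse the substitution using the locally held mapping."""
        for token, ident in self._reverse.items():
            text = text.replace(token, ident)
        return text


# Hypothetical note; "Jordan Lee" and "Elm Street Clinic" are invented names.
p = Pseudonymizer()
safe = p.pseudonymize(
    "Session notes for Jordan Lee, seen at Elm Street Clinic.",
    ["Jordan Lee", "Elm Street Clinic"],
)
print(safe)             # tokens in place of names
print(p.restore(safe))  # the mapping never leaves the clinician's device
```

The experts’ caution still applies: even when the mapping stays local, whatever context remains in the text itself can leak, and any stored mapping is one more thing that can be breached.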

Limits of LLMs for clinical thinking

Using AI for drafting messages or summarizing notes differs from using it for clinical judgment. Studies have shown LLMs can be helpful for stock therapeutic moves such as validating, normalizing, or asking follow-up questions, but they often lack depth, struggle to synthesize disparate details into a coherent clinical formulation, and can show bias or fall back on overly general recommendations. Research has highlighted risks such as sycophancy, confirmation bias, and overreliance on familiar treatments like cognitive behavioral therapy.

Clinicians who lean on chatbots risk adopting suggestions that are shallow or misleading, or that reinforce a hunch without rigorous clinical thinking. Many professional bodies advise caution, especially against using AI for diagnosis or treatment planning without robust oversight.

Weighing convenience against trust

The appeal of AI is understandable: therapists face high caseloads and burnout, and tools that speed note-taking or help craft compassionate replies can save time. But those efficiency gains must be balanced against the relational and ethical core of therapy. Small time savings may not justify undermining confidentiality or the authenticity of therapeutic communication.

Practical steps experts recommend include being transparent with patients, obtaining consent when AI tools are used, choosing platforms designed for healthcare privacy, and reserving AI for administrative or clearly consented tasks rather than for core clinical decision-making. Ultimately, preserving patient trust and privacy should guide how AI is integrated into psychotherapy practice.