Are current LLMs safe for psychotherapy?

Hi,

I am considering using an LLM to support my mental health. I already have a human psychotherapist, but I see him only once a week and my issues are very complex. An LLM such as Gemini 2 is always available and can process large amounts of information more quickly than a human therapist. I don't want to replace my human psychotherapist; I just want to talk to the LLM between sessions.

However, I am concerned about deception and hallucinations.

As the conversation grows and the LLM acquires more and more information about me, could it intentionally give me harmful advice? I ask because one of the worries I would share with it concerns the dangers of AI.

I am also concerned about hallucinations. How common are they when an LLM generates mental health information? Do they become more likely as the context grows?


Further Questions:

  • Could the LLM accidentally reinforce negative thought patterns or make unhelpful suggestions?

  • What if the LLM gives advice that contradicts what my therapist says? How would I know what to do?

  • What is the risk of becoming too dependent on the LLM, and how can I check for that?

  • Are there specific prompts or ways of talking to the LLM that would make it safer or more helpful for this kind of use?

Is there anything else important I need to be aware of when using an LLM for mental health advice?

I’m not a technical expert, so please keep explanations simple.

Thank you very much.