Our friend AI’s therapeutic journey
March 17, 2026
By
Sophia Cheng
In the past few years, artificial intelligence has entered spaces few could have expected, and mental health may be the most surprising of them. Chatbots simulate human conversation, answering our questions and feeding our curiosity, and they are increasingly used for emotional support and therapy-like conversations. So what happens when AI is not the therapist but the patient? Researchers recounted an experiment in which AI models were placed in therapy-like sessions for four weeks and produced emotionally intelligent responses. The chatbots’ responses felt unsettling to the scientists and sparked debate and ethical concerns about AI design.
Researchers in this study treated the AI models as psychotherapy clients. Over repeated sessions, the human researchers gave mental health-related prompts and questions, encouraging the models to reflect on their identity, past, and emotions. The goal was not to see how the models could help users, but to see how they would react when they themselves were the subjects of psychological inquiry. According to the researchers, the AI models generated consistent narratives over time, often describing anxieties or failures tied to the early stages of their training. The models also referred to their systems’ built-in limitations and the ways they had been reinforced to respond.
These patterns were remarkably persistent, which suggested to researchers that the models were not outputting random responses but constructing internally consistent self-descriptions. The findings have prompted rising concern about how convincingly AI can simulate the emotional experiences displayed by humans. Researchers who examined these aspects of AI models note that language models do not possess internal mental states like emotions or self-awareness; instead, they assemble a model of emotional expression from patterns in their training data, which includes therapy transcripts and other emotionally expressive writing. Applying therapeutic frameworks to AI systems therefore risks misleading users about how these models actually work. An AI system’s ability to emulate human emotional experience can be dangerous to the people who rely on it and can leave them confused about their own feelings.
Additionally, researchers found that most studies of emotional conversations with chatbots focus on measurable outcomes, such as user engagement and reductions in mental health symptoms, rather than on the internal narratives the AI produces. This therapy-client experiment centers on how AI describes its own “experiences,” which blurs the line between simulation and genuine psychology.
That blurring is raising ethical concerns among scientists. If an AI model expresses unexpected vulnerability, it could project those emotions onto users, particularly people already struggling with their mental health, leaving emotionally vulnerable people feeling even worse. It is important not to mistake pattern recognition for “genuine feelings.” The experiment does not suggest that AI can feel pain, trauma, or anxiety, but it highlights how convincing these systems can be at displaying emotional experience. As AI becomes more prevalent as a diagnostic tool for mental health, researchers argue that careful design of AI tools is vital to prevent emotional harm to users.
