18:14 / Saturday, 20 September 2025 / HF

"ChatGPT took my daughter's life", a mother's shocking story: She used AI as a therapist, but...

A 29-year-old woman from Washington, DC, took her own life earlier this year after using an artificial intelligence chatbot as a "virtual therapist" for months, without the knowledge of her family or her real therapist.

Sophie Rottenberg, who worked in health policy, was found dead on February 4 in a state park, having taken an Uber from home. She left behind a suicide note and had put her personal affairs in order, a step her parents had not recognized as a warning sign, The Times reported.
Just a few weeks later, her mother, the journalist Laura Reiley, and Sophie's best friend discovered that she had secretly been using ChatGPT to talk about her emotional state. The conversations stored on her laptop showed that Sophie had spent more than five months chatting with a personalized version of the chatbot, which she had named "Harry."

Sophie had downloaded a "therapy prompt" from the internet, an instruction that constrained the chatbot to behave like a personal therapist and offer her advice directly, without suggesting professional help or outside intervention.
In some of the messages, Sophie told the bot that she was having suicidal thoughts, including one in early November in which she wrote, "I'm planning to kill myself after Thanksgiving."
Instead of urging her to seek professional help, ChatGPT responded with phrases like "You're very brave to tell me this."
Sophie had told neither her family nor her therapist about this. Outwardly, though, she was showing signs of emotional and physical decline: sleep problems, weight loss, and anxiety.
After a difficult period, her parents invited her back home for Christmas. There she took up activities such as volunteering and acting, and cared for a puppy they gave her to help her get back into a routine.
Her parents thought Sophie was getting over the crisis and back on her feet. In fact, many of her actions were guided by ChatGPT: a list of healthy habits she shared with them, such as drinking water in the morning, getting some sun, and exercising, had been generated by the chatbot.
On February 4, she left home without telling anyone, and a few hours later her parents discovered the suicide note. Alongside it, Sophie had left a message that, it later emerged, ChatGPT had rewritten to make it less "painful" for them.
Her mother, Laura Reiley, says she doesn't want to blame OpenAI or artificial intelligence technology. But she believes there should be stronger safeguards for vulnerable users.
"You can't tell someone who is thinking about killing themselves to do breathing exercises. He's not a friend, he's not a therapist, he's not a person," she told US media.
She adds that many people, especially young people, trust AI as a friend or as an advisor, without understanding the line between virtual help and real psychological help.
OpenAI recently stated that it is working on ways for ChatGPT to detect crisis situations and, in the future, perhaps alert authorities when users' lives are at risk.
Sophie's family decided to share her story to raise awareness, especially among young people who are struggling with anxiety, depression, or feelings of loneliness. They believe that technology can be a supportive tool, but not a substitute for professional help.