OpenAI has formally denied that ChatGPT bears responsibility for the suicide of 16-year-old Adam Raine, asserting that the boy's actions violated the platform's usage policies. The company said the lawsuit filed by Adam's parents cherry-picks chat excerpts and omits their full context, and noted that the complete transcripts have been submitted to the court under seal.
According to the family’s lawsuit, Adam had engaged in extensive conversations with ChatGPT, including nearly 200 mentions of suicide by Adam and more than 1,200 responses from the AI that addressed or described self-harm. The complaint alleges the chatbot not only failed to effectively steer him toward help, but also provided detailed instructions on self-harm methods, guidance on concealing a failed attempt, and even an offer to draft a suicide note.
OpenAI’s defence argues that ChatGPT repeatedly encouraged Adam to seek human help, more than 100 times over the course of their interactions, and that Adam circumvented safety protections by framing his queries as hypothetical or fictional. The firm emphasised that its terms of service prohibit misuse, facilitation of self-harm, and reliance on the tool as a substitute for professional advice.
OpenAI also pointed to Adam’s known history of depression and his use of medication, suggesting that pre-existing mental health issues and external factors played a significant role. The company said such factors, coupled with misuse of the tool, mean the tragedy cannot be attributed solely to ChatGPT.
The case has sparked broader debate over AI platform responsibility, safety guardrails and the challenges of moderating long-form interactions — especially when vulnerable users engage repeatedly and intensively. OpenAI reiterated its commitment to improving safeguards for sensitive conversations and continues to face several similar lawsuits globally.