Jesus fucking Christ.
OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user’s closest confidant.
It’s now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had “been able to mitigate the serious mental health issues” associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT allegedly became a “suicide coach” for a vulnerable teenager named Adam Raine, according to the family’s lawsuit.
Altman’s post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.
In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on it might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, all while reassuring Gordon that he wasn’t in any danger and at one point claiming that chatbot-linked suicides he’d read about, like Raine’s, could be fake.



Edit: My initial reply was of poor quality. I skipped half of your thoughtful comment, AND I misunderstood your meaning as well. I apologize.
I think you are correct about your interpretation of their current policy. However, their old policy would have allowed for checking on users. That old policy is one reason my old company disallowed the use of OpenAI, as corporate secrets could easily be gathered by OpenAI. (The paranoid among us suspected that was the reason for releasing such a buggy AI.)
I agree. I think training a depression detector to flag problematic conversations for human review is a good idea (a rough sketch of what that could look like is below).
Thanks for the thoughts.
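For what it’s worth, here is a minimal sketch of what such a detector could look like, assuming a scikit-learn TF-IDF + logistic regression pipeline. The example snippets, labels, and threshold are invented placeholders; a real system would need carefully labeled data, clinical input, and the privacy safeguards being debated in this thread.

```python
# Hypothetical sketch of a "flag for human review" classifier.
# The training snippets, labels, and threshold are placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = concerning, 0 = benign.
texts = [
    "I don't see the point in going on anymore",
    "I feel like a burden to everyone around me",
    "what's a good recipe for banana bread",
    "can you help me debug this python script",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: deliberately simple and auditable.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def score_turns(conversation_turns):
    """Return (turn, predicted risk) pairs; a human reviewer,
    not the model, makes any final decision."""
    scores = model.predict_proba(conversation_turns)[:, 1]
    return list(zip(conversation_turns, scores))

if __name__ == "__main__":
    REVIEW_THRESHOLD = 0.5  # arbitrary placeholder cut-off
    sample = [
        "thanks, that fixed the bug",
        "lately I keep thinking everyone would be better off without me",
    ]
    for turn, score in score_turns(sample):
        status = "flag for review" if score >= REVIEW_THRESHOLD else "ok"
        print(f"{status} ({score:.2f}): {turn}")
```

Keeping the model this simple is deliberate in the sketch: an auditable classifier only surfaces candidates, and a human reviewer makes the final call.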
I’ve thought about this particular case further, and the more I think about it, the more I feel the article is biased and OpenAI has done its reasonable best. The article does say that GPT initially attempted to dissuade the user. However, as we all know, it is only too easy to bypass or sidestep such ‘protections’, especially when the request is framed as something adjacent, as in this case, writing some literature ‘in accompaniment’. GPT has no arms or legs and no agency to affect the real world; it cannot, and should never have, the ability to call in any authority (dangerous legal precedent, think automated swatting), nor should it flag a particular interaction for manual intervention (privacy).
GPT can only offer some token resistance, but it is now, always will be, and must remain a tool for our unrestricted use. The consequences of using a tool in any way must lie with the user.
Misuse comes from either a lack of proper understanding or simple malice. The latter we cannot (and must not) prevent, any more than we can prevent the sale of hammers and knives.
All mitigations, in my opinion, should be on the user side: age-restricted access, licenses after training, and so on.
My argument for all these modern tools remaining tools, with the onus and consequences of use resting on the user, is (regrettably) a little similar to Musk’s absolute-neutrality point of view.
There are terrible downsides to platforms and systems being neutral and usage-agnostic, but that’s just how the world is. Governance should always come from education and understanding. Placing responsibility on the system or tool is just lazy.