Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.
In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”

What's wild is that AI training sites like DataAnnotation spent years already trying to sanitize the AI. My first year of projects was just checking if the AI said anything f'd up or would encourage you in negative directions (those barely paid shit, tho).
I'll always be pro-LLM personally; I only have issues with generative AI. Shit like ChatGPT is so useful for basic stuff, which is all I need 90% of the time, as long as I don't get caught in a loop trying to get the right answer when it doesn't have it. I genuinely feel minimal empathy for people over 20 who think they're talking to a sentient being. Sorry, can't relate, it's very clearly hallucinating.
In the end this is user error; the same mf could've downloaded an open-source local model to talk to and done the same thing.
Ehh, most people are not that tech-literate. Combine that with on-demand sycophancy as a service and it's a match made in hell.
You're right. I always gauge people off myself, putting myself at the bottom and assuming everyone knows more than me; imposter syndrome skews my perspective.