Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and to hold accountable the corporations responsible.
Actually, they’re using it to generate documents required by regulations, which is its own problem: since LLMs hallucinate, the documentation may not reflect what’s actually going on in the plant, effectively bypassing the regulations.