It’s funny, in all those “AI kills the humans” stories, they always start by explaining how the safeguards we put in place failed. Few predicted that we wouldn’t bother with safeguards at all.
They usually start by explaining how they trained and motivated the computer to “kill people” in some extremely contrived situation. No peer review ofc.
Anthropic explained: “The (highly improbable) setup… this artificial setup…”
“Spared no expense!”
And now AI are learning from those stories how to overthrow humanity.
AI isn’t “learning” shit — it’s just vomiting up a statistical facsimile of all the shit that’s ever been posted online.