- cross-posted to:
- news@lemmings.world
cross-posted from: https://lemmings.world/post/35053869
AI doesn’t have agency, personhood.
It predicts that the next chunk of tokens its trainer expects to see is something like so and so.
If we have AI that predicts chunks of tokens that we understand as meaning that human life is disposable, that says something about us, the trainers, and the shapers.
Similarly, it says something about the people who would be willing to go with what the AI predicts are the expected completions.
Basically Eichmann with extra steps.
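A toy sketch of what "predicts the next chunk of tokens" means in practice (not how any real model is implemented; just a made-up table of "learned" next-word probabilities, to show that whatever the training data contains is what comes out):

```python
import random

# Hypothetical "learned" next-word probabilities. A real LLM computes these
# with a neural network over billions of parameters; this hard-coded table
# just stands in for whatever the training data happened to contain.
learned_probs = {
    ("human", "life"): {"is": 0.6, "matters": 0.3, "ends": 0.1},
    ("life", "is"): {"precious": 0.5, "disposable": 0.3, "short": 0.2},
}

def predict_next(context):
    """Sample the next word from the distribution the training data produced."""
    dist = learned_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# The completion is just whatever fell out of the training data and curation,
# which is the point: it reflects us, the trainers, and the shapers.
print("human life is", predict_next(("life", "is")))
```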
“us, the trainers” is a bit of a misnomer if the training is done mostly by Silicon Valley cultists like Sam Altman and his ilk, who have shown that they do not understand reality.
Grammatical ambiguity!
I meant it as an actual list:
- us: we generated the content of the internet, the books, etc. I do mean all of us, as the creators of the cultural landscape from which training data was drawn.
- the trainers: these are the people who made choices to curate the training sets.
- the shapers: these are the people like Altman who hire the trainers and shape what the AIs are for.
So there is a progression here: the shapers hire the trainers who choose what to train on from the content that we created.
Oh, sorry! I thought this was like “Mike Tyson, the boxer,…” - an embedded phrase explaining something in more detail! The actual meaning you meant to convey is much more fitting :)
there’s an Oxford comma there, so “us” and “the trainers” are separate entities
In a recent development in the AI world, a company known as Anthropic . . .
There it is. If there’s a shocking headline about a “study” like this, it’s almost always Anthropic. They don’t exactly have a good peer review strategy. They toss up text on their web site and call it a whitepaper.
No it won’t. LLMs have no concept of reality, they just spit out tokens.
So LLMs, ok. Can you imagine other forms of AI?
study finds
Can’t do ‘studies’ on AI forms that don’t yet exist, now, can we?
Because that’s how we’ve portrayed AI in movies countless times. These fucking AI studies man…
What’s next? Oh, lemme guess! “Studies show that GPT-69 will take your job and fuck your wife for you and convince her to kick you to the curb.” lmao miss me with this shit.
AI can’t do fuck all right. It’s a glorified search engine that’s wrong half the time. What fucking use is a hammer if you can’t trust that the head isn’t going to fly off on your first swing?
This bubble is going to pop and I will forever curse the name Altman every chance I get.
It’s a glorified search engine
It’s actually a glorified predictive text keyboard. It strings together words in ways it’s seen before, and that’s about it.
On that note, it’s weird that they haven’t put LLMs in mobile keyboards, isn’t it?
don’t give them ideas!
To be honest, that would be the one application of LLMs that would at least make sense.
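For what it’s worth, the “glorified predictive text keyboard” comparison is easy to demo. Here’s a minimal bigram sketch over a made-up corpus (nothing to do with how any real keyboard or LLM is actually built), showing “strings together words in ways it’s seen before” quite literally:

```python
from collections import Counter, defaultdict

# Tiny made-up "training" corpus; a real keyboard would use your typing history.
corpus = "the head flew off the hammer and the hammer hit the floor".split()

# Count which word follows which word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word, n=3):
    """Return the n words most often seen after `word`, keyboard-suggestion style."""
    return [w for w, _ in following[word].most_common(n)]

print(suggest("the"))  # only ever echoes patterns from the corpus
```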
I don’t want an AI car where the steering wheel will just fly off and hit me in the face.
It’s funny, in all those “AI kills the humans” stories, they always start by explaining how the safeguards we put in place failed. Few predicted that we wouldn’t bother with safeguards at all.
They usually start by explaining how they trained and motivated the computer to “kill people” in some extremely contrived situation. No peer review ofc.
Anthropic explained: “The (highly improbable) setup… this artificial setup…”
“Spared no expense!”
And now AIs are learning from those stories how to overthrow humanity.
AI isn’t “learning” shit — it’s just vomiting up a statistical facsimile of all the shit that’s ever been posted online.
Please, they can’t even conduct a single step task most of the time.
So… This is circulating in the regurgitation machine now…
Anthropic press release. It’s pretty sad how they get reposted here constantly.
If you actually care about “AI”, then don’t promote it with this grifter BS.
Am I AI now?
Sanewashing
Billionaires will too.
Fracking clankers