- cross-posted to:
- technology@lemmy.world
- science@lemmy.world
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
AIs are really just Axe body spray, but for tech-illiterate executives. When they say AI these days, they really mean LLMs. LLMs are not deterministic; everything they produce comes down to chance. It may be next to impossible to get conscious intelligence out of that.
Oh good yes, pleeeaaaase! Every time I read something talking about LLMs as if they are sentient I cringe. It’s so stupid.
For those of you who want a simplified ELI5 on how AI works:
Pretend I’m going to write a sentence. Statistically, most sentences start with the word “I”. What word typically follows “I”? Looking at Lemmy, I’ll pick “use” since that gives me the most options. Now what word typically follows the word “use” but also follows the phrase “I use”? With some math, I see “Arch” is statistically popular so I’ll add that to my sentence.
Scale this out for every combination of words and sentences and you suddenly have AI.
It’s just math. All the way down.
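A minimal sketch of that “pick the statistically likely next word” idea in Python; the tiny corpus and the lookup-table approach are made up for illustration, and a real LLM learns a neural probability distribution over tokens instead of counting word pairs, but the sampling step works the same way:

```python
# Minimal sketch of "guess the statistically likely next word".
# Real LLMs use learned neural weights over tokens, not a lookup table,
# but the idea of sampling the next word from a probability distribution
# is the same. The tiny corpus is made up.
import random
from collections import Counter, defaultdict

corpus = [
    "I use Arch by the way",
    "I use Debian on my server",
    "I use Arch on my laptop",
]

# Count which word follows which across the whole corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:
        return None  # no known continuation: stop the sentence
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a sentence starting from "I".
word, sentence = "I", ["I"]
while word is not None and len(sentence) < 8:
    word = next_word(word)
    if word is not None:
        sentence.append(word)
print(" ".join(sentence))  # e.g. "I use Arch on my laptop"
```

Run it a few times and you get different sentences from the same statistics, which is roughly why an LLM’s answers also vary between attempts.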
Well, in the gay corners of Lemmy, people have a habit of starting their sentences with “And”, but it’s also usually followed by “I” as with the general population. And you’d think the next word would make sense since it’s a statistical model after all, but we’re still wrestling with the medical mystery of why the last word is always “oop”.
I didn’t see it coming until I almost went through your first paragraph. Well played.
It’s not even smart enough to be stupid: Measuring it by that scale would imply it’s capable of thought and genuine self-awareness.
It is not. That is a misrepresentation given by people trying to make money off the hype around what has been labeled as “AI”.
Actual AI does not exist and at the rate we’re going, it seems unlikely that we will survive as a species long enough to create true general artificial intelligence.
I’m glad to see they used the word anthropomorphize in the article. I think there is a certain amount of animism as well, although animism is generally a spiritual thing, so I call this neo-animism, maybe digitanimism. I dunno, I’m just making this up.
You could see it as a modern form of animism, or pantheism/panentheism. I actually subscribe to the latter as it seems clear that matter is an emergent property of consciousness (not the other way around), but I would ascribe AI as much consciousness as the silicate minerals it’s derived from. Sentience can only truly be self-identified so we do have to go off the honor system to some degree, but if we look around at everything else that self-identifies as conscious, AI doesn’t even remotely resemble it.
People who agree should check out the newsletter and / or podcast of Ed Zitron. He rails against AI pretty hard.
Thanks, I’ve been trying to decide on what CZM series to check out next since I caught up on Behind the Bastards.
Enjoy!
Would be nice to unplug all this shit, or destroy the data centers that run it. AI is a waste of energy and time, and it’s designed to spy.
Hello LLMentalist, my old friend…
That is a good take. Sadly it suffers from some of the same shortcomings as OP’s article, mainly shitting on statistics, since it’s not just LLMs that run on maths; humans do too, and so does the entire universe… But looking at it this way explains a lot of things: why it blabbers and repeats a lot, why lots of people tell me how good it is at programming while I think it sucks… I’ll have to bookmark this for the next person who doesn’t believe me.
It does go a bit overboard, but the main point is pretty simple:
LLMs don’t understand anything.
People will believe anything when it’s convenient for them.
Or lock themselves into a garden from which they cannot escape, apparently.
I mainly love the picture with the mentalist and us in the audience. And the dynamics at play. That’s a really good figure and space to wander in and draw parallels. A lot about LLMs is a con.
I don’t think the part about “LLMs don’t understand anything” is factually correct, though. They’re specifically designed to generalize, that is, to “understand” things rather than memorize them. There are a bunch of papers about it, and it’s the main difference and advancement between the chatbots of the 1980s and today’s.
Yeah, and people love to see what they want to see, or anticipate to see or whatever.
“Understanding”, as much as “““reasoning”””, still sounds like a mentalist trick, trying to anthropomorphize the LLM.
Grouping words together based on their usage is far from intelligent.
Right. I’ll see if I write a longer article some day and post it in this community. All the anthropomorphization is kind of an issue, in my opinion. We humans are programmed to see faces, intent and such things; that’s why illusions work. Or religion. Or mentalists… But it’s also not the opposite. Grouping words together based on usage… that’s Markov-chain chatbots from the 1980s, and that’s not what happens here. Modern LLMs are a different beast. They’re specifically designed to do more than that, and they do. We have some understanding of how, but it’s a long story and very technical. And I believe we first have to give a good definition of words like “understanding” or “reasoning”. Lots of people seem to deduce what they mean by looking at humans, and even with humans it’s more complicated than just saying they “reason”. In my experience a lot of being human includes not being reasonable; we even need tools like maths and logic if we want to reason properly and find something that is true. And by seeing how effective illusions are on us, we can see that smoke and shadows are part of how we operate.
I think AI that is created by breeding thousands of programs over hundreds of thousands of generations, each with a little bit of random corruption to allow for evolution, is real AI. Each generation has criteria that must be met, and the programs that don’t pass are selected out of existence. It basically teaches itself to walk in a physics simulation, and eventually the program can be transferred into a real robot that can walk around. Nobody knows how it works, and it would be hard to teach it anything, especially if the neuron count is small, but it’s real AI. It’s not really very intelligent, but then technically plants have intelligence too.
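A toy sketch of that kind of evolutionary loop (mutate, score against criteria, select) in Python; the fitness function and all the parameters here are made-up stand-ins, since a real “learn to walk” setup would score each candidate inside a physics simulator:

```python
# Toy sketch of an evolutionary loop: mutate, score against criteria, select.
# In a real "learn to walk" setup the fitness would come from a physics
# simulation; the quadratic fitness here is just a stand-in.
import random

GENOME_LEN = 10       # e.g. 10 joint-control parameters (made up)
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for "how far did the robot walk in the simulator":
    # reward genomes whose values approach 1.0.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome):
    # "A little bit of random corruption to allow for evolution."
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Candidates that don't meet the bar are selected out of existence;
    # the survivors breed (with mutation) the next generation.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 4]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.4f}")
```

In real setups the genome usually encodes neural-network weights, which is part of why nobody can easily explain what the evolved controller is actually doing.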
Just mind that genetic algorithms, or more broadly evolutionary algorithms, are just one form of computational intelligence. There’s a bunch of other methods and this is not how current large language models or ChatGPT work.
People love to scream and cry about how we are “anthropomorphizing animals” by saying no they do have actual emotions or feelings, and in every likelihood they do, yet are fully on board with an AI just being a digital super human brain
It’s more intelligent than most people, which is sad.
Can we also select to not call a large swath of humanity conscious or intelligent?
“AI” (read: LLM) isn’t even in the same class as stupid people. It doesn’t think at all; to suggest otherwise is a farce. It’s incapable of actual thought. Think of it more as autocorrect applied differently.
That many people don’t think at all is the position I’m taking.
No
Too bad for you.
What category of person would you say has the same level of cognition as an LLM?
People who speak in meaningless corporate bullshit lingo and phrases, for one.
ASI is the real problem. https://intelligence.org/the-problem/
That train has left the station. It’s literally in the name…
And this article contains a lot of debunked arguments, like the stochastic parrot and so on. Also, they can’t just look at today’s models plus their crystal ball and conclude that intelligence is ruled out forever; that’s not how science or truth works. And yes, it has no consciousness, but no, it does in fact have something like knowledge, and no again, it does have goals; that’s a fundamental principle of machine learning…
Edit: So, I strongly agree with the headline. The article itself isn’t good at all; it’s riddled with misinformation.
No it in fact has something like knowledge
Lots of training data isn’t knowledge. You have to be able to reason and think to have knowledge, otherwise you just have data points.
No, it does have goals
Humans who make LLMs have goals. An LLM or image generator can’t have goals any more than a sewing machine has a goal, because they aren’t conscious.
Ahem, the way it works is that a model gets trained. That works by giving it a goal(!), and the training process then modifies the weights to try to match that goal. By definition, every AI or machine learning model has a goal. With LLMs it’s producing legible text, text resembling the training dataset, by looking at the previous words. That’s the goal of the LLM. The humans designing it have goals as well, and making the model’s goal match what they actually want is called “the alignment problem”.
Simpler models have goals as well: whatever is needed to regulate your thermostat, or to score high in Mario Kart. And goals aren’t tied to consciousness. Companies, for example, have goals (profit), yet they’re not alive. A simple control loop has a goal, and it’s a super simple piece of tech.
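A control loop really is about the simplest thing you can call goal-directed. Here is a minimal thermostat-style sketch in Python; the setpoint, the crude heating model and the numbers are made up for illustration:

```python
# Minimal sketch of a control loop with a goal (a setpoint).
# The "goal" is just the target temperature; the loop nudges the system
# toward it. No consciousness required. All numbers are made up.
SETPOINT = 21.0      # desired room temperature in °C
temperature = 15.0   # current room temperature
heater_on = False

for minute in range(60):
    # The entire "goal-directed behaviour": compare state to the setpoint.
    heater_on = temperature < SETPOINT
    # Crude stand-in physics: heating warms the room, otherwise it cools.
    temperature += 0.5 if heater_on else -0.1
    if minute % 10 == 0:
        state = "on" if heater_on else "off"
        print(f"minute {minute:2d}: {temperature:.1f} °C, heater {state}")
```

Nothing in that loop is conscious, yet it behaves in a goal-directed way; that’s the (limited) sense in which a training objective is a “goal” too.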
Knowledge and reasoning are two different things. Knowledge is being able to store and retrieve information, and it can do that. If I ask it what an Alpaca is, it’ll give me an essay / the Wikipedia article, nothing else. And it can even apply knowledge: I can tell it to give me an animal like an Alpaca for my sci-fi novel in the outer rim, and it’ll make up an animal with similar attributes, simultaneously knowing how sci-fi works, the tropes in it, and how to apply them to the concept of an Alpaca. It knows how dogs and cats relate to each other, what attributes they have and what category they belong to. I can ask it about the paws or the tail and it “knows” how that’s connected and will deal with the detail question. I can feed it two pieces of example computer code and tell it to combine both projects, despite no one ever having done it that way, and it’ll even know how to use some of the background libraries.
It has all of that: knowledge, the ability to apply it, to transfer it to new problems… You just can’t anthropomorphize it. It doesn’t have intelligence or knowledge the same way a human does; it does it differently. But that’s why it’s called Artificial something.
Btw, that’s also why AI in robots works. They form a model of their surroundings, and then they’re able to maneuver there, or move their arms not just randomly but actually to pick something up. They “understood”, i.e. formed a model. That’s also the main task of our brain, and the main idea of AI.
But yeah, they have a very different way of doing knowledge than humans. The internal processes to apply it are very different, and the goals are entirely different. So if you mean it in the sense of human goals or human reasoning, then no, it definitely doesn’t have that.
Relevant Hossenfelder link: https://youtu.be/fRssqttO9Hg
🏆 “Haters gonna hate, predetermined by the big bang.”