I watched this entire video just so that I could have an informed opinion. First off, this feels like two very separate talks:
The first part is a decent breakdown of how artificial neural networks process information and store relational data about that information in a vast matrix of numerical weights that can later be used to perform some task. In the case of computer vision, those weights can be used to recognize objects in a picture or video stream, such as whether something is a hotdog or not.
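To make that concrete, here's a toy sketch of my own (not anything from the talk): a single artificial "neuron" whose numerical weights decide hotdog vs. not-hotdog from made-up image features. The features and weight values are entirely hypothetical; a real vision network has millions of learned weights arranged in layers, but the basic operation is the same weighted sum.

```python
# Toy illustration: one "neuron" classifying hotdog vs. not-hotdog.
# Feature values and weights are invented for the example; a trained
# network would learn its weights from labeled images.

features = {
    "hotdog":  [0.9, 0.8, 0.1],  # hypothetical: elongated, bun-colored, round
    "frisbee": [0.1, 0.2, 0.9],
}

weights = [1.5, 1.0, -2.0]  # hypothetical learned weights
bias = -0.5

def is_hotdog(x):
    # Weighted sum of features plus a bias, thresholded at zero.
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return score > 0

print(is_hotdog(features["hotdog"]))   # True
print(is_hotdog(features["frisbee"]))  # False
```

The point is only that "recognition" here is arithmetic over stored numbers, which is the foundation the talk builds on.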
As a side note, if you look up Hinton’s 2024 Nobel Prize in Physics, you’ll see that he won based on his work on the foundations of these neural networks and specifically, their training. He’s definitely an expert on the nuts and bolts of how neural networks work and how to train them.
He then goes into linguistics and how language can be encoded in these neural networks, which is how large language models (LLMs) work… by breaking down words and phrases into tokens and then using the weights in these neural networks to encode how those tokens relate to each other. These connections are later used to generate text output related to the text given as input. So far so good.
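The mechanism can be caricatured in a few lines. This is my own toy sketch, not how production LLMs actually work (they use learned dense weights over subword tokens, not raw counts), but it shows the "statistical regularities predict the next token" idea in its simplest possible form:

```python
# Toy "language model": predict the next token purely from
# co-occurrence statistics in a tiny corpus. Real LLMs replace these
# raw counts with billions of learned numerical weights.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(token):
    # Return the statistically most likely next token.
    return following[token].most_common(1)[0][0]

print(predict("the"))  # "cat" — the most frequent continuation here
```

Chain `predict` on its own output and you get generated text: input text in, statistically related text out, which is the core loop the rest of this comment is about.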
At that point he points out that these foundational building blocks led to where we are now, at least in a very general sense. He then has what I consider the pivotal slide of the entire talk, labeled Large Language Models, which you can see at 17:22. In particular he has two questions at the bottom of the slide that are most relevant:
Are they genuinely intelligent?
Or are they just a form of glorified auto-complete that uses statistical regularities to pastiche together pieces of text that were created by other people?
The problem is: he never answers these questions. He immediately moves on to his own theory about how language works using an analogy to LEGO bricks, and then completely disregards the work of linguists in understanding language, because what do those idiots know?
At this point he brings up “The long-term existential threat,” and I would argue the rest of the talk is science fiction from here on, because it presupposes that understanding the relationships between words is all that is necessary for AI to become superintelligent and therefore a threat to all of us.
Which goes back to the original problem in my opinion: LLMs are text generation machines. They use neural networks encoded as a matrix of weights that can be used to predict long strings of text based on other text. That’s it. You input some text, and it outputs other text based on that original text.
We know that different parts of the brain have different responsibilities. Some parts are used to generate language, other parts store memories, still other parts are used to make our bodies move or regulate autonomic processes like our heartbeat and blood pressure. Still other bits are used to process images from our eyes, others reason about spatial awareness, while others engage in emotional regulation and processing.
Saying that having a model for language means we’ve built an artificial brain is like saying that because I built a round shape called a wheel, I invented the modern automobile. It’s a small part of a larger whole, and although neural networks can be used to solve some very difficult problems, they’re a specific tool suited to specific tasks.
Although Geoffrey Hinton is an incredibly smart man who mathematically understands neural networks far better than I ever will, extrapolating that knowledge out to believing that a large language model has any kind of awareness or actual intelligence is absurd. It’s the Underpants Gnomes’ business plan, but instead of:
Collect underpants
?
Profit!
It looks more like:
Use neural network training to construct large language models.
?
Artificial general intelligence!
If LLMs were true artificial intelligence, they would be learning at an increasing rate as we give them more capacity, leading to the singularity as their intelligence hit hockey-stick exponential growth. Instead, we’ve been throwing a growing amount of resources at these LLMs for increasingly smaller returns. We’ve thrown a few extra tricks into the mix, like “reasoning”, but beyond that, I believe it’s clear we’re headed toward a local maximum: one far enough from intelligence that would be truly useful (and represent an actual existential threat), yet one that resembles human output well enough to fool human decision makers into trusting these models to solve problems they are incapable of solving.
believing that a large language model has any kind of awareness or actual intelligence is absurd
I (as a person who works professionally in the area and tries to keep up with the current academic publications) happen to agree with you. But my credences are somewhat reduced after considering the points Hinton raises.
I think it is worth considering that there are a handful of academically active models of consciousness; some well-respected ones, like the Conscious Turing Machine (CTM), are not at all inconsistent with Hinton’s statements.