Well I have a pretty grim outlook on humanity, but I do have one hope: That if you were able to read all the books and articles and papers humanity has produced and understand them rationally, plus some fundamental values like equality, justice and fairness (!), you arrive at a pretty good mindset.
The issue isn’t that humans are evil, it’s that they are either dumb (do not have the throughput to learn enough), don’t have enough time and resources to learn (money = time), are too emotional (e.g. angry, psychological damage), and/or are brainwashed by some ideology as a result of frustration from the former reasons. Also see this article: Why some of the smartest people can be so very stupid
That “benevolent AI through broad knowledge” idea is an untested hypothesis of course (or maybe speculation), and it only has a chance of happening under the right circumstances. I want to believe haha. We need something that can understand (and love) us better than we ourselves can, and which watches the watchers.
As to how intelligent or creative GPT or DeepSeek currently is, or what future advancements will bring, I don’t think there is any point arguing about it any further. I say there is clear evidence of intelligence, you say it’s just copying. I say there is emergent behavior, you say the basic functional building blocks are known and couldn’t possibly produce intelligence (Chinese room thought experiment / fallacy).
That sucks, I’m sorry. I think humans are actually pretty dang cool and good.
The rest of your response is pretty nonsense, I gotta say. I think I need to stop talking to you. Good luck with your future life, I legitimately hope it’s good. I don’t know what I hoped to get out of this interaction, but hey, it’s happened, so, neat, I guess.
One thing I should have been more clear about during our interactions is that I’m aware that simple building blocks can lead to complex emergent behavior, fucking of course they can, but I never said that explicitly, so that’s on me. I don’t believe the building blocks of so-called “AI” will lead to actual intelligence, but that doesn’t mean I don’t believe in complex emergent behavior, we’re all made of atoms, aren’t we?
It worries me that you didn’t respond, even a little, to my meanest two paragraphs. My arguments about objective measures of intelligence didn’t make any impact, I guess? Anyway, it doesn’t matter, I’ve said my piece. Please be skeptical of IQ and other “objective” measures of intelligence.
If I could leave you with one thought for the future, it would be: believe in humanity more. Humans are awesome and intelligent and worth believing in. Sure, it doesn’t feel like that these days, we’re killing the earth and causing untold amounts of suffering, for humans, non-human animals, and every other living thing on this earth, but I still think it’s true. The only hope for humanity is that humans find a way through, that we find a way to kill capitalism before it kills us.
please be skeptical of IQ and other “objective” measures of intelligence
Haha that is a bit ironic when I’m arguing for and you against GPT showing any signs of intelligence.
And academically there is nothing wrong with trying to objectively measure one of the many aspects of intelligence. The reason it’s problematic in general is, ironically, that people are too stupid and draw biased conclusions from negligible differences. And I guess you are implying I have some such deplorable or immature “mental infrastructure”. I’m only interested in understanding the “anti AI” thinking better.
And yeah humans are awesome and intelligent and worthy - in the right conditions! It’s the rules, systems, institutions, education, (mis)information and material conditions and power imbalances that are fucking us up. AI might be a lever that can help us.
God damn you’re infuriating. You think I’m using “objective” measures of intelligence when I say so-called “AI” isn’t intelligent? Those “objective” measures of intelligence would agree with you, no? An LLM would do better on an IQ test than many humans, and yet I believe that humans truly think, whereas LLMs only regurgitate. Isn’t that true? (To be clear, I don’t expect you to agree that LLMs don’t think, I’m asking, rhetorically, whether the previous sentence is a fair summary of the facts and my point.)
Tell me, what are the “aspects” of intelligence you want to “objectively” measure? Also, historically, measuring intelligence is problematic because of racism and sexism. It’s fucking bigotry, not stupidity, fucking hell. Unless you’re going to argue that bigotry arises from stupidity, in which case, well, you’ve got a lot to learn.
I don’t think you’re deplorable, although I do think you might be a little immature, but I’m not going to push on that point, because I don’t really care. I don’t think you’re lesser in any way. I think you’re mistaken, but that doesn’t mean lesser. You’re as deserving of a decent life as I am, and I truly hope you’re living one, and continue to do so in the future.
But I’m really done with this conversation. Feel free to get the last word in, I likely won’t respond. Please know I bear you no ill will, even though I firmly believe you’re entirely and completely wrong about so-called “AI”.
Anthropic has developed an AI ‘brain scanner’ to understand how LLMs work and it turns out the reason why chatbots are terrible at simple math and hallucinate is weirder than you thought