• Are_Euclidding_Me [e/em/eir]@hexbear.net
    2 days ago

    they clearly do more than copy existing texts

    No kidding. They chop existing texts into tiny pieces and use statistics to decide which piece to print next. It doesn’t group text “rationally”; it groups text in a way that convinces you it happened rationally. I’ve seen enough absolute nonsense to know there’s no rationality happening.

    it substitutes or fools by having at least some inkling of the meaning of words, or can “intuit” a good response.

    Once again, no. It has no idea what words mean, and the only reason it can (sometimes) give a good response is that it looks at which words and phrases tend to follow which other words and phrases in its massive, and ever-increasing, training data sets.
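The “which words tend to follow which” idea can be sketched with a toy bigram model. To be clear, this is a deliberate oversimplification (real LLMs use neural networks over subword tokens, not raw word counts), and the corpus and function names here are made up for illustration:

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=5):
    """Pick each next word weighted by how often it followed the previous one."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # no recorded successor; stop
        words, counts = zip(*nxt.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(generate(model, "the"))  # prints a random plausible-looking continuation
```

The output can look locally fluent while carrying no understanding at all, which is the point the comment above is making, just at a vastly smaller scale than an actual LLM.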

    Maybe what you should conclude is that humans are less intelligent than you think. Or as Obi-Wan Kenobi said, the ability to speak doesn’t make you intelligent haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans on the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI haha.

    This paragraph is fucked, and implies some pretty nasty things about your worldview. You might be correct that LLMs can write better text than a portion of humanity, but to jump from that to saying LLMs are more intelligent than the portion of humanity who don’t write as well is incredibly shitty! Writing ability is strongly correlated with education (obviously), so what you’re saying is that people who have had less opportunity for education are less intelligent. They aren’t, they just have less privilege. And bringing up the notoriously racist IQ test as a proxy for intelligence is, uh, not a good look.

    I suspect you might be young, because I used to believe similar things about some sort of “objective intelligence”. I used to think that some people were just smarter than others and there was probably some objective way to measure that. (Unsaid, of course, is that I was one of the “smart ones”, it really flattered my ego.) As I’ve grown up I’ve realized that’s not fucking true, people have all sorts of different capabilities, and people who I once would have dismissed as “stupid”, well, they aren’t. They have less education than I do, not less intelligence.

    I also assume that it won’t take too long to create models that can combine both and add the ability to do math and boolean reasoning.

    If it were so straightforward, this would have happened by now. It hasn’t. I don’t believe it will.

    without an emotional or tribal bias.

    Everything humans make has an emotional or tribal bias. LLMs are no different. They pick up the biases of their training sets, and it’s impossible to have a “bias-free” training set. Anyone promising “unbiased” or “objective” anything is someone you should watch out for. They’re lying, but they may not know that they’re lying.

    • LarmyOfLone@lemm.ee
      2 days ago

      Well, I have a pretty grim outlook on humanity, but I do have one hope: that if you were able to read all the books and articles and papers humanity has produced and understand them rationally, plus some fundamental values like equality, justice and fairness (!), you would arrive at a pretty good mindset.

      The issue isn’t that humans are evil, it’s that they are either dumb (don’t have the throughput to learn enough), don’t have enough time and resources to learn (money = time), are too emotional (e.g. anger, psychological damage), and/or are brainwashed by some ideology as a result of frustration from the former reasons. Also see this article: Why some of the smartest people can be so very stupid

      That “benevolent AI through broad knowledge” idea is an untested hypothesis, of course (or maybe speculation), and there is only a chance of this happening under the right circumstances. I want to believe haha. We need something that can understand (and love) us better than we ourselves can, and which watches the watchers.

      As to how intelligent or creative GPT or DeepSeek currently is, or what future advancements will bring, I don’t think there is any point arguing about it any further. I say there is clear evidence of intelligence, you say it’s just copying. I say there is emergent behavior, you say the basic functional building blocks are known and couldn’t possibly produce intelligence (Chinese room thought experiment / fallacy).

      • Are_Euclidding_Me [e/em/eir]@hexbear.net
        2 days ago

        Well I have a pretty grim outlook on humanity,

        That sucks, I’m sorry. I think humans are actually pretty dang cool and good.

        The rest of your response is pretty much nonsense, I gotta say. I think I need to stop talking to you. Good luck with your future life, I legitimately hope it’s good. I don’t know what I hoped to get out of this interaction, but hey, it’s happened, so, neat, I guess.

        One thing I should have been more clear about during our interactions is that I’m aware that simple building blocks can lead to complex emergent behavior, fucking of course they can, but I never said that explicitly, so that’s on me. I don’t believe the building blocks of so-called “AI” will lead to actual intelligence, but that doesn’t mean I don’t believe in complex emergent behavior, we’re all made of atoms, aren’t we?

        It worries me that you didn’t respond, even a little, to my meanest two paragraphs; my arguments about objective measures of intelligence didn’t make any impact, I guess? Anyway, it doesn’t matter, I’ve said my piece. Please be skeptical of IQ and other “objective” measures of intelligence.

        If I could leave you with one thought for the future, it would be: believe in humanity more. Humans are awesome and intelligent and worth believing in. Sure, it doesn’t feel like that these days, we’re killing the earth and causing untold amounts of suffering, for humans, non-human animals, and every other living thing on this earth, but I still think it’s true. The only hope for humanity is that humans find a way through, that we find a way to kill capitalism before it kills us.

        • LarmyOfLone@lemm.ee
          2 days ago

          please be skeptical of IQ and other “objective” measures of intelligence

          Haha, that is a bit ironic when I’m arguing for, and you against, GPT showing any signs of intelligence.

          And academically there is nothing wrong with trying to objectively measure one of the many aspects of intelligence. The reason it’s problematic in general is, ironically, that people are too stupid and infer cognitive biases from negligible differences. And I guess you are trying to infer that I have some such deplorable or immature “mental infrastructure”. I’m only interested in understanding the “anti-AI” thinking better.

          And yeah, humans are awesome and intelligent and worthy - in the right conditions! It’s the rules, systems, institutions, education, (mis)information, material conditions and power imbalances that are fucking us up. AI might be a lever that can help us.

          • Are_Euclidding_Me [e/em/eir]@hexbear.net
            2 days ago

            God damn you’re infuriating. You think I’m using “objective” measures of intelligence when I say so-called “AI” isn’t intelligent? Those “objective” measures of intelligence would agree with you, no? An LLM would do better on an IQ test than many humans, and yet I believe that humans truly think, whereas LLMs only regurgitate. Isn’t that true? (To be clear, I don’t expect you to agree that LLMs don’t think, I’m asking, rhetorically, whether the previous sentence is a fair summary of the facts and my point.)

            Tell me, what are the “aspects” of intelligence you want to “objectively” measure? Also, historically, measuring intelligence is problematic because of racism and sexism. It’s fucking bigotry, not stupidity, fucking hell. Unless you’re going to argue that bigotry arises from stupidity, in which case, well, you’ve got a lot to learn.

            I don’t think you’re deplorable, although I do think you might be a little immature, but I’m not going to push on that point, because I don’t really care. I don’t think you’re lesser in any way. I think you’re mistaken, but that doesn’t mean less than. You’re as deserving of a decent life as I am, and I truly hope you’re living one, and continue to do so in the future.

            But I’m really done with this conversation. Feel free to get the last word in, I likely won’t respond. Please know I bear you no ill will, even though I firmly believe you’re entirely and completely wrong about so-called “AI”.