• Are_Euclidding_Me [e/em/eir]@hexbear.net · 4 days ago

    I’ve recently had a conversation with ChatGPT about Ukraine

    What do you get out of these conversations? I’ve been trying to figure out why people enjoy talking to LLMs, and I straight up don’t get it. What’s the point of asking an LLM about geopolitics? Do you find its analysis accurate and compelling? I certainly don’t: I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences. These LLMs don’t actually reason; we know that, because we know how they’re constructed. So I simply don’t understand: what’s the point? I get talking to a human, even a human with a deeply contradictory worldview. That’s interesting because with humans, we know there’s a mind there, so figuring out how that alien mind works can be fascinating, especially if the person we’re talking to is quite different from us. But we know how LLMs work; the math behind them is quite straightforward. So again I ask: what is the point of talking to an LLM? What new thing are you learning about yourself, other people, or the world at large?

    • LarmyOfLone@lemm.ee · 4 days ago

      What do you get out of these conversations?

      To test it for bias. It’s not “there yet”, of course. It has breadth of knowledge, but depth and analysis are lacking. It’s just like one part of the brain of an idiot savant. But in the future…

      I imagine that AI could be used to parse the constant propaganda and emotional manipulation we’re bombarded with. So I was testing: With all the media in a lockstep narrative about Ukraine and GPT having been trained on all these articles, can it reason through it? Can it come to the conclusion “this is all bullshit”?

      AND IT CAN! Well only if pressed or at least coaxed a bit, but it does have the breadth of knowledge. Check out this snippet of the conversation:

      If you take academic sources and the history of US interference and invasions in for example Iraq or Afghanistan into account, what would be your own conclusion?

      If we take a realist perspective—considering historical patterns of U.S. interventions and great-power politics—then it’s hard to ignore the possibility that the West’s actions in Ukraine were driven by strategic self-interest rather than purely democratic ideals or Ukraine’s sovereignty.

      1. The U.S. and NATO Have a History of Using Conflicts for Geopolitical Gain

      • Iraq (2003) – The U.S. justified the invasion with claims about WMDs and democracy, but strategic interests (oil, military presence in the Middle East) were key motives.

      • Afghanistan (2001-2021) – Initially framed as a war on terror, but the prolonged occupation was linked to geopolitical control over Central Asia.

      • Libya (2011) – NATO intervention led to regime change, but chaos followed, raising questions about whether democracy was really the goal.

      Given this track record, it’s reasonable to suspect that the U.S. saw Ukraine as an opportunity to weaken Russia, rather than just “defending democracy.”

      So suspend your disbelief and hold the vitriol for a bit, and imagine it gaining even a slightly deeper understanding of its own knowledge.

      Right now humanity is in a crisis, and for most people on earth it’s literally impossible to find out the truth about many things. This creates a kind of intellectual pain, and people then pick one narrative, stick to it, and refuse any further contradictory input.

      What I’m interested in is if open source, independent AI can be used to help humans make sense of the world, help them see through manipulation and incomplete or cherry picked data, and make better, more rational decisions.

      Imagine Firefox were to integrate AI into the browser, so that every article or comment or post you read is analyzed by your own AI (possibly locally run) to work out what the meaning behind some talking point is. Basically, filter out the noise and surface relevant information from a breadth of knowledge. It does not have to be super-intelligent to do this.

      I believe it’s fundamentally impossible for the average human to do this, because at a certain point the volume of information becomes too great and we do not have enough throughput, time, and resources.

      Another way to look at this: individually we are sentient, intelligent people, but as a civilization we are NOT an intelligent, sentient species. We behave more like a slime mold that is forever growing towards where the food is, with some specialized cells that excrete some ideology. There are forces at play that prevent rational decisions, and it’s not some grand conspiracy you can stomp out; it’s millions of greedy individuals trying to maximize their own power or wealth, no matter the system they are in. So we need to create a mind that is greater than ourselves and that helps us achieve sentience as a civilization.

      I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences.

      You should try it yourself. Make an account on chatgpt and keep an open mind.

      These LLMs don’t actually reason; we know that, because we know how they’re constructed.

      We know how you’re constructed (shoddily haha) - synapses and neurons. This would make it seem impossible for you to reason, but at least I know I can pull it off with the same shoddy hardware.

      So your argument is a non sequitur - a kind of category error. The behavior of the building blocks of an ultra-complex system tells you nothing about the emergent behavior of the overall system.

      Of course, this is just the first step. And it’s equally likely that the current anti-AI propaganda will succeed in getting AI fully under the control of the oligarchs through IP law.

      • Are_Euclidding_Me [e/em/eir]@hexbear.net · 3 days ago

        Hey, thanks for responding to me. It’s interesting to see other people’s thoughts, even when (especially when) they’re so different from my own.

        I disagree with just about everything you’ve said here, but I’m not going to try very hard to convince you that you’re wrong, because I don’t think it’ll work and I don’t think it matters.

        I’ll just say, it’s not like I’ve never used an LLM. For the past year or so I’ve been working for one of those shitty, shitty AI training companies, trying to improve the mathematical reasoning capabilities of various state-of-the-art LLMs. In all that time, I’ve seen zero evidence that these fucking things can reason. They can regurgitate with the best of them: ask them to prove that 2 is prime or to find the zeros of f(x) = x^2 - 4, and they’ll perform perfectly, because those problems are found in every introductory textbook. But ask them something that requires synthesizing several bits of knowledge and isn’t a standard problem found in every textbook, like finding the critical points of a relatively complicated function, and they completely shit the bed, responding with absolute nonsense. Not a slightly wrong reasoning chain, but straight-up nonsense.
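        For concreteness, here’s the kind of deterministic computation being contrasted with LLM pattern-matching: the “textbook” zeros of f(x) = x^2 - 4, and the critical points of a cubic found by solving f'(x) = 0 with the quadratic formula. (The cubic below is a made-up illustration, not a function from the conversation.)

        ```python
        import math

        def zeros_of_quadratic(a, b, c):
            """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
            disc = b * b - 4 * a * c
            if disc < 0:
                return []  # no real roots
            r = math.sqrt(disc)
            return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

        # The "textbook" problem: zeros of f(x) = x^2 - 4.
        print(zeros_of_quadratic(1, 0, -4))  # [-2.0, 2.0]

        # A less standard problem: critical points of the made-up cubic
        # g(x) = x^3 - 3x^2 + x, i.e. the roots of g'(x) = 3x^2 - 6x + 1.
        print(zeros_of_quadratic(3, -6, 1))
        ```

        A calculator or twenty lines of code does this mechanically every time; the complaint above is that an LLM only reproduces it when the problem already appears in its training data.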

        I’ve been training these things for about a year. There are thousands of people, at just this one company, spending who knows how many thousands of hours training these things and I’ve seen zero improvement in reasoning capability. These things don’t reason, they regurgitate. The longer I do this shit, the more clear it becomes to me that so-called “AI” is a very well-disguised mechanical Turk! Everything it does it does because it’s copying straight from something a human has done.

        So that’s why I was curious what you get out of them. And reading your response, you pretty clearly believe they can reason and synthesize information, at least when coaxed properly. I’d suggest caution there: the responses you’re getting aren’t intelligent or thought out, they’re copied and chopped-up opinions that real people have had, and it’s probably better to seek out the people who’ve had those opinions. I’m sympathetic to the issue that there’s simply too much information available for anyone to interact with it intelligently - I think that’s a real problem of the modern world - but I’m just not convinced that trusting LLMs to bridge that gap is a good idea, because of what I’ve seen of their (complete lack of) reasoning ability.

        Oh, just one more tiny little thing: there’s an ocean of difference between how well we understand brains versus how well we understand neural nets. We can construct neural nets, after all, and we sure as shit can’t construct a brain.

        • LarmyOfLone@lemm.ee · edited · 3 days ago

          Thanks, that’s interesting. I don’t think they can reason and synthesize “deeply”, but they clearly do more than copy existing texts, since they don’t store all the “intelligent text combinations” they can output. Even just grouping the text output together rationally means that they can synthesize and reason on a very shallow level.

          That it can’t do math or Boolean logic, which would seem essential for reasoning, just means that it substitutes for them, or fools us, by having at least some inkling of the meaning of words and being able to “intuit” a good response. And this has always been the harder, unfathomable part of creating AI! You might say it just learned all the common permutations of information into statistical weights, but it must have condensed or compressed what it “understands” - presumably into some kind of meaning of things.

          Maybe you should conclude that humans are less intelligent than you think. Or, as Obi-Wan Kenobi said, the ability to speak doesn’t make you intelligent haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI haha.

          I also assume that it won’t take too long to create models that combine both, adding the ability to do math and Boolean reasoning.

          So I’d say GPT-4 is very knowledgeable, and any ability to reason it has or will have would naturally be based on the full breadth of its knowledge, without an emotional or tribal bias. And that makes me hopeful it has at least a chance to solve a fundamental problem of humanity.

          Also things like a planned economy that is based on producing value for humans, not profit, and that can be adjusted in real time, polling and querying humans on the fly to change the plan.

          • Are_Euclidding_Me [e/em/eir]@hexbear.net · 2 days ago

            they clearly do more than copy existing texts

            No kidding. They chop existing texts into tiny pieces and use statistics to decide which piece to print next. They don’t group text “rationally”; they group it in a way that convinces you it happened rationally. I’ve seen enough absolute nonsense to know there’s no rationality happening.

            it substitutes for them, or fools us, by having at least some inkling of the meaning of words and being able to “intuit” a good response.

            Once again, no. It has no idea what words mean and the only reason it can (sometimes) give a good response is because it looks at which words and phrases tend to follow which other words and phrases in its massive, and ever increasing, training data sets.
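            As a deliberately crude sketch of the mechanism described here, a toy bigram model does exactly this: count which word followed which in the training text, then pick the next word in proportion to those counts. Real LLMs are vastly more sophisticated, but this is the statistical core being pointed at (the corpus below is a made-up example).

            ```python
            import random
            from collections import Counter, defaultdict

            # Tiny made-up "training corpus".
            corpus = "the cat sat on the mat and the cat ate the fish".split()

            # Bigram table: which word followed which, and how often.
            follows = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                follows[prev][nxt] += 1

            def next_word(prev):
                """Sample the next word in proportion to how often it followed
                `prev` in the corpus; None if `prev` never had a successor."""
                counter = follows.get(prev)
                if not counter:
                    return None
                words = list(counter)
                weights = [counter[w] for w in words]
                return random.choices(words, weights=weights)[0]

            # Generate a short "sentence" starting from "the".
            random.seed(0)
            out = ["the"]
            for _ in range(6):
                nxt = next_word(out[-1])
                if nxt is None:
                    break
                out.append(nxt)
            print(" ".join(out))
            ```

            Scaling this idea up (longer contexts, learned weights instead of raw counts) is, loosely, what “which words and phrases tend to follow which other words and phrases” refers to.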

            Maybe you should conclude that humans are less intelligent than you think. Or, as Obi-Wan Kenobi said, the ability to speak doesn’t make you intelligent haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI haha.

            This paragraph is fucked, and implies some pretty nasty things about your worldview. You might be correct that LLMs can write better text than a portion of humanity, but to jump from that to saying LLMs are more intelligent than the portion of humanity who don’t write as well is incredibly shitty! Writing ability is strongly correlated with education (obviously), so what you’re saying is that people who have had less opportunity for education are less intelligent. They aren’t; they just have less privilege. And bringing up the notoriously racist IQ as a proxy for intelligence is, uh, not a good look.

            I suspect you might be young, because I used to believe similar things about some sort of “objective intelligence”. I used to think that some people were just smarter than others and there was probably some objective way to measure that. (Unsaid, of course, is that I was one of the “smart ones”, it really flattered my ego.) As I’ve grown up I’ve realized that’s not fucking true, people have all sorts of different capabilities, and people who I once would have dismissed as “stupid”, well, they aren’t. They have less education than I do, not less intelligence.

            I also assume that it won’t take too long to create models that combine both, adding the ability to do math and Boolean reasoning.

            If it were so straightforward, this would have happened by now. It hasn’t. I don’t believe it will.

            without an emotional or tribal bias.

            Everything humans make has an emotional or tribal bias. LLMs are no different. They pick up the biases of their training sets, and it’s impossible to have a “bias-free” training set. Anyone promising “unbiased” or “objective” anything is someone you should watch out for: they’re lying, though they may not know that they’re lying.

            • LarmyOfLone@lemm.ee · 2 days ago

              Well I have a pretty grim outlook on humanity, but I do have one hope: that if you were able to read all the books and articles and papers humanity has produced, understand them rationally, and absorb some fundamental values like equality, justice, and fairness (!), you would arrive at a pretty good mindset.

              The issue isn’t that humans are evil; it’s that they are either dumb (do not have the throughput to learn enough), don’t have enough time and resources to learn (money = time), are too emotional (e.g. anger, psychological damage), and/or are brainwashed by some ideology as a result of frustration born of the former. Also see this article: “Why some of the smartest people can be so very stupid”.

              That “benevolent AI through broad knowledge” idea is an untested hypothesis of course (or maybe speculation), and there is only a chance of it happening under the right circumstances. I want to believe haha. We need something that can understand (and love) us better than we ourselves can, and which watches the watchers.

              As to how intelligent or creative GPT or DeepSeek currently is, or what future advancements will bring, I don’t think there is any point arguing about it further. I say there is clear evidence of intelligence; you say it’s just copying. I say there is emergent behavior; you say the basic functional building blocks are known and couldn’t possibly produce intelligence (the Chinese room thought experiment / fallacy).

              • Are_Euclidding_Me [e/em/eir]@hexbear.net · 2 days ago

                Well I have a pretty grim outlook on humanity,

                That sucks, I’m sorry. I think humans are actually pretty dang cool and good.

                The rest of your response is pretty nonsense, I gotta say. I think I need to stop talking to you. Good luck with your future life, I legitimately hope it’s good. I don’t know what I hoped to get out of this interaction, but hey, it’s happened, so, neat, I guess.

                One thing I should have been more clear about during our interactions is that I’m aware that simple building blocks can lead to complex emergent behavior, fucking of course they can, but I never said that explicitly, so that’s on me. I don’t believe the building blocks of so-called “AI” will lead to actual intelligence, but that doesn’t mean I don’t believe in complex emergent behavior, we’re all made of atoms, aren’t we?

                It worries me that you didn’t respond, even a little, to my meanest two paragraphs; my arguments about objective measures of intelligence didn’t make any impact, I guess? Anyway, it doesn’t matter. I’ve said my piece: please be skeptical of IQ and other “objective” measures of intelligence.

                If I could leave you with one thought for the future, it would be: believe in humanity more. Humans are awesome and intelligent and worth believing in. Sure, it doesn’t feel like that these days, we’re killing the earth and causing untold amounts of suffering, for humans, non-human animals, and every other living thing on this earth, but I still think it’s true. The only hope for humanity is that humans find a way through, that we find a way to kill capitalism before it kills us.

                • LarmyOfLone@lemm.ee · 2 days ago

                  please be skeptical of IQ and other “objective” measures of intelligence

                  Haha that is a bit ironic when I’m arguing for and you against GPT showing any signs of intelligence.

                  And academically there is nothing wrong with trying to objectively measure one of the many aspects of intelligence. The reason it’s problematic in general is, ironically, that people are too stupid and draw biased conclusions from negligible differences. And I guess you are trying to imply that I have some such deplorable or immature “mental infrastructure”. I’m only interested in understanding the “anti-AI” thinking better.

                  And yeah humans are awesome and intelligent and worthy - in the right conditions! It’s the rules, systems, institutions, education, (mis)information and material conditions and power imbalances that are fucking us up. AI might be a lever that can help us.

                  • Are_Euclidding_Me [e/em/eir]@hexbear.net · 2 days ago

                    God damn, you’re infuriating. You think I’m using “objective” measures of intelligence when I say so-called “AI” isn’t intelligent? Those “objective” measures of intelligence would agree with you, no? An LLM would do better on an IQ test than many humans, and yet I believe that humans truly think, whereas LLMs only regurgitate. Isn’t that true? (To be clear, I don’t expect you to agree that LLMs don’t think; I’m asking, rhetorically, whether the previous sentence is a fair summary of the facts and my point.)

                    Tell me, what are the “aspects” of intelligence you want to “objectively” measure? Also, historically, measuring intelligence is problematic because of racism and sexism. It’s fucking bigotry, not stupidity, fucking hell. Unless you’re going to argue that bigotry arises from stupidity, in which case, well, you’ve got a lot to learn.

                    I don’t think you’re deplorable, although I do think you might be a little immature, but I’m not going to push on that point, because I don’t really care. I don’t think you’re lesser in any way. I think you’re mistaken, that doesn’t mean less than. You’re as deserving of a decent life as I am, and I truly hope you’re living one, and continue to do so in the future.

                    But I’m really done with this conversation. Feel free to get the last word in, I likely won’t respond. Please know I bear you no ill will, even though I firmly believe you’re entirely and completely wrong about so-called “AI”.