• kureta@lemmy.ml · ↑12 · 6 hours ago

    People should understand that words like “unaware” or “overconfident” are not even applicable to these pieces of software. We might build intelligent machines in the future but if you know how these large language models work, it is obvious that it doesn’t even make sense to talk about the awareness, intelligence, or confidence of such systems.

    • turmacar@lemmy.world · ↑4 · 4 hours ago

      I find it so incredibly frustrating that we’ve gotten to the point where the “marketing guys” are not only in charge but are believed without question: what they say is treated as true until proven otherwise.

      “AI” becoming the colloquial term for LLMs and them being treated as a flawed intelligence instead of interesting generative constructs is purely in service of people selling them as such. And it’s maddening. Because they’re worthless for that purpose.

  • Baggie@lemmy.zip · ↑11 · 13 hours ago

    Oh god I just figured it out.

    It was never that they are good at their tasks, faster, or more cost-efficient.

    They just sound confident to stupid people.

    Christ, it’s exactly the same failing upwards that produced the C-suite. They’ve just automated the process.

    • Snot Flickerman@lemmy.blahaj.zone · ↑6 · 12 hours ago

      Oh good, so that means we can just replace the C-suite with LLMs then, right? Right?

      An AI won’t need a golden parachute when it inevitably fucks it all up.

  • jj4211@lemmy.world · ↑12 · 19 hours ago

    They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the ‘uncanny valley’ of the mistakes is all the more striking, but they are just generating content with no concept of ‘mistake’ or ‘success’, or of the content being a model for something else rather than just a blend of stuff from the training data.

    For example:

    Me: Generate an image of a frog on a lilypad.
    LLM: I’ll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.

    <includes a perfectly credible picture of a frog on a lilypad, request successfully processed>

    Me (lying): That seems to have produced a frog under a lilypad instead of on top.
    LLM: Thanks for pointing that out! I’m generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.

    <includes another perfectly credible picture>

    It didn’t know anything about the picture; it just took the input at its word. A human would have stopped to say “uhh… what do you mean? The lilypad is on the water and the frog is on top of that.” Or, if they were really trying to fulfill the request without clarification, they might have thought “maybe he wanted it from the perspective of a fish, with the frog underwater?” A human wouldn’t have said “you are right, I made a mistake, here I’ve tried again” and included almost exactly the same thing.

    But the training data isn’t predominantly people blatantly lying about such obvious things, or second-guessing work that was so obviously done correctly.

    • vithigar@lemmy.ca · ↑13 · 14 hours ago

      The use of language like “unaware” when people are discussing LLMs drives me crazy. LLMs aren’t “aware” of anything. They do not have a capacity for awareness in the first place.

      People need to stop talking about them using terms that imply thought or consciousness, because it subtly feeds into the idea that they are capable of such.

      • LainTrain@lemmy.dbzer0.com · ↑1 ↓2 · 10 hours ago

        Okay, fine, the LLM does not take into account in the context of its prompt that yada yada. Happy now, word police, or do I need to pay a fine too? The real problem is that people are replacing their brains with chatbots owned by the rich, so soon their thoughts, and by extension the truth, will be owned by the rich. But go off, pat yourself on the back because you preserved your holy sentience spook for another day.

  • Perspectivist@feddit.uk · ↑54 ↓1 · 1 day ago

    Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

    • shalafi@lemmy.world · ↑7 ↓19 · 1 day ago

      Neither are our brains.

      “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

      ― Peter Watts, Blindsight (fiction)

      Starting to think we’re really not much smarter. “But LLMs tell us what we want to hear!” Been on Facebook lately, or Lemmy?

      If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

      • jj4211@lemmy.world · ↑4 · 18 hours ago

        It’s not that they may be deceived, it’s that they have no concept of what truth or fiction, mistake or success even are.

        Our brains know the concepts and may fall for deceit without recognizing it, but we at least recognize that the concept exists.

        An AI generates content that is a blend of material from the training data, consistent with extending the given prompt. It only seems to introduce a concept of lying or mistakes when the human injects that into their half of the prompt. And it will “correct” things just as readily whether the human points out a genuine mistake or tells it to fix something that was already correct (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).

        An LLM can consume more input than a human could gather in multiple lifetimes and still be wonky in generating content, because it needs enough to credibly blend content to extend every conceivable input. It’s why so many people used to judging human content get derailed when judging AI content. An AI generates a fantastic answer to an interview question that only solid humans get right, only to falter “on the job”, because the utterly generic interview question looks like millions of samples in the training data while the actual job was niche.

      • Perspectivist@feddit.uk · ↑13 ↓2 · 1 day ago

        There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
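
        To illustrate what “patterns and probabilities” means at its most stripped-down, here’s a toy bigram generator - real LLMs use learned weights over enormous corpora rather than raw counts, but the flavor is the same:

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in for "training data".
corpus = "the frog sat on the lilypad and the frog croaked at the moon".split()

# Record which word tends to follow which (the "patterns").
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    # Sample a continuation in proportion to how often it was seen (the "probabilities").
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text by repeatedly picking a plausible next word - no meaning involved.
word, output = "the", ["the"]
for _ in range(6):
    if not following[word]:
        break  # no observed continuation for this word
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```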

      • aesthelete@lemmy.world · ↑11 ↓3 · 1 day ago

        Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

      • greygore@lemmy.world · ↑7 · 14 hours ago

        I watched this entire video just so that I could have an informed opinion. First off, this feels like two very separate talks:

        The first part is a decent breakdown of how artificial neural networks process information and store relational data about that information in a vast matrix of numerical weights that can later be used to perform some task. In the case of computer vision, those weights can be used to recognize objects in a picture or video streams, such as whether something is a hotdog or not.

        As a side note, if you look up Hinton’s 2024 Nobel Prize in Physics, you’ll see that he won it for his work on the foundations of these neural networks and, specifically, their training. He’s definitely an expert on the nuts and bolts of how neural networks work and how to train them.

        He then goes into linguistics and how language can be encoded in these neural networks, which is how large language models (LLMs) work… by breaking down words and phrases into tokens and then using the weights in these neural networks to encode how these words relate to each other. These connections are later used to generate other text output related to the text that is used as input. So far so good.
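
        To make the “weights encode how these words relate to each other” part concrete, here’s a toy sketch of my own (not from the talk) with hand-written embedding vectors - real models learn these during training:

```python
import numpy as np

# Made-up 4-dimensional embeddings for three tokens; real models use
# hundreds or thousands of learned dimensions.
embeddings = {
    "frog":    np.array([0.9, 0.1, 0.3, 0.0]),
    "toad":    np.array([0.8, 0.2, 0.4, 0.1]),
    "invoice": np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: how closely two embedding vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["frog"], embeddings["toad"]))     # high - related tokens
print(cosine(embeddings["frog"], embeddings["invoice"]))  # low - unrelated tokens
```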

        At that point he points out these foundational building blocks have been used to lead to where we are now, at least in a very general sense. He then has what I consider the pivotal slide of the entire talk, labeled Large Language Models, which you can see at 17:22. In particular he has two questions at the bottom of the slide that are most relevant:

        • Are they genuinely intelligent?
        • Or are they just a form of glorified auto-complete that uses statistical regularities to pastiche together pieces of text that were created by other people?

        The problem is: he never answers these questions. He immediately moves on to his own theory about how language works using an analogy to LEGO bricks, and then completely disregards the work of linguists in understanding language, because what do those idiots know?

        At this point he brings up “The long term existential threat” and I would argue the rest of this talk is now science fiction, because it presupposes that understanding the relationship between words is all that is necessary for AI to become superintelligent and therefore a threat to all of us.

        Which goes back to the original problem in my opinion: LLMs are text generation machines. They use neural networks encoded as a matrix of weights that can be used to predict long strings of text based on other text. That’s it. You input some text, and it outputs other text based on that original text.

        We know that different parts of the brain have different responsibilities. Some parts are used to generate language, other parts store memories, still other parts make our bodies move or regulate autonomic processes like our heartbeat and blood pressure. Still other bits process images from our eyes and reason about spatial awareness, while others engage in emotional regulation and processing.

        Saying that having a model for language means we’ve built an artificial brain is like saying that because I built a round shape called a wheel, I invented the modern automobile. It’s a small part of a larger whole, and although neural networks can be used to solve some very difficult problems, they’re only a specific tool for solving very specific tasks.

        Although Geoffrey Hinton is an incredibly smart man who mathematically understands neural networks far better than I ever will, extrapolating that knowledge out to believing that a large language model has any kind of awareness or actual intelligence is absurd. It’s the underpants gnome economic theory, but instead of:

        1. Collect underpants
        2. ?
        3. Profit!

        It looks more like:

        1. Use neural network training to construct large language models.
        2. ?
        3. Artificial general intelligence!

        If LLMs were true artificial intelligence, they would be learning at an increasing rate as we give them more capacity, leading to the singularity as their intelligence hits hockey-stick exponential growth. Instead, we’ve been throwing a growing amount of resources at these LLMs for increasingly smaller returns. We’ve thrown a few extra tricks into the mix, like “reasoning”, but beyond that, I believe it’s clear we’re headed toward a local maximum that falls far short of intelligence that would be truly useful (or represent an actual existential threat), yet resembles what a human can output well enough to fool human decision makers into trusting these systems to solve problems they are incapable of solving.

      • Snot Flickerman@lemmy.blahaj.zone · ↑21 ↓1 · 13 hours ago

        Interesting talk but the number of times he completely dismisses the entire field of linguistics kind of makes me think he’s being disingenuous about his familiarity with it.

        For one, I think he is dismissing holotes, the concept of “wholeness”: that when you cut something apart into its individual parts, you lose something about the bigger picture. This deconstruction of language misses the larger picture of the human body as a whole, and how every part of us, from our assemblage of organs down to our DNA, impacts how we interact with and understand the world. He may have a great definition of understanding, but it still sounds (to me) like it’s potentially missing aspects of human and animal biologically based understanding.

        For example, I have cancer, and about six months before I was diagnosed, I had begun to get more chronically depressed than usual. I felt hopeless and I didn’t know why. Surprisingly, that’s actually a symptom of my cancer. What understanding did I have that changed how I felt inside and how I understood the things around me? Suddenly I felt different about words and ideas, but nothing had changed externally; something had changed internally. The connections in my neural network had adjusted, and the feelings and associations with words and ideas were different, but I hadn’t done anything to make that adjustment. No learning or understanding had happened. I had a mutation in my DNA that made that adjustment for me.

        Further, I think he’s deeply misunderstanding (possibly intentionally?) what linguists like Chomsky are saying when they say humans are born with language. They mean that we are born with a genetic blueprint to understand language. Just like animals are born with a genetic blueprint to do things they were never trained to do. Many animals are born and almost immediately stand up to walk. This is the same principle. There are innate biologically ingrained understandings that help us along the path to understanding. It does not mean we are born understanding language as much as we are born with the building blocks of understanding the physical world in which we exist.

        Anyway, interesting talk, but I immediately am skeptical of anyone who wholly dismisses an entire field of thought so casually.

        For what it’s worth, I didn’t downvote you and I’m sorry people are doing so.

      • Obinice@lemmy.world · ↑8 ↓9 · 1 day ago

        People really do not like seeing opposing viewpoints, eh? There’s disagreeing, and then there’s downvoting to oblivion without even engaging in a discussion, haha.

        Even if they’re probably right, in such murky uncertain waters where we’re not experts, one should have at least a little open mind, or live and let live.

        • THB@lemmy.world · ↑25 ↓2 · 1 day ago

          It’s like talking with someone who thinks the Earth is flat. There isn’t anything to discuss. They’re objectively wrong.

          Humans like to anthropomorphize everything. It’s why you can see a face on a car’s front grille. LLMs are ultra advanced pattern matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being utilized as if they do. Let’s see how it works out for the world, I guess.

          • saimen@feddit.org · ↑1 ↓1 · 19 hours ago

            I think so too, but I am really curious what will happen when we give them “bodies” with sensors so they can explore the world and make individual “experiences”. I could imagine they would act much more human after a while and might even develop some kind of sentience.

            Of course they would also need some kind of memory and self-actualization processes.

            • jj4211@lemmy.world · ↑4 · 18 hours ago

              Interaction with the physical world isn’t really required for us to evaluate how they deal with ‘experiences’. They have in principle access to all sorts of interesting experiences in the online data. Some models have been enabled to fetch internet data and add them to the prompt to help synthesize an answer.

              One key thing is that they don’t bother until directed to. They don’t have any desire; they just have “generate a search query from the prompt, execute the query and fetch results, treat the combination of the original prompt and the results as the context for generating more content, and return it to the user” - roughly the loop sketched below.
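
              Here llm_generate and web_search are stand-ins for whatever model call and search API are actually wired up:

```python
# Hypothetical orchestration around an LLM: the model never "decides" to
# search; the surrounding code does, and then stuffs the results back
# into the prompt.
def answer_with_search(user_prompt, llm_generate, web_search):
    # 1. Have the model turn the prompt into a search query.
    query = llm_generate(f"Write a web search query for: {user_prompt}")

    # 2. Fetch results with ordinary, non-LLM code.
    results = web_search(query)

    # 3. Combine prompt and results into a new context and generate the reply.
    context = f"Question: {user_prompt}\nSearch results: {results}\nAnswer:"
    return llm_generate(context)
```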

              LLM is not a scheme that credibly implies that more LLM == sapient existence. Such a concept may come, but it will be something different from LLM. LLM just looks crazily like dealing with people.

        • fodor@lemmy.zip · ↑8 ↓1 · 1 day ago

          I think there are two basic mistakes you made. First, you think that we aren’t experts, but it’s definitely true that some of us have studied these topics for years in college or graduate school, and surely many other people are well read on the subject. Obviously you can’t easily confirm our backgrounds, but we exist. Second, people who are somewhat aware of the topic might realize that it’s not particularly productive to engage in discussion on it here because there’s too much background information that’s missing. It’s often the case that experts don’t try to discuss things because it’s the wrong venue, not because they feel superior.

  • melsaskca@lemmy.ca · ↑5 · 18 hours ago

    If you don’t know you are wrong when you have been shown to be wrong, you are not intelligent. So A.I. has become “Adequate Intelligence”.

    • MonkderVierte@lemmy.zip · ↑4 · 18 hours ago

      That definition seems a bit shaky. Trump & co. are mentally ill but they do have a minimum of intelligence.

    • jol@discuss.tchncs.de · ↑2 ↓3 · 15 hours ago

      Like any modern computer system, LLMs are much better and smarter than us at certain tasks while terrible at others. You could say that having good memory and communication skills is part of what defines an intelligent person. Not everyone has those abilities, but LLMs do.

      My point is, there’s nothing useful coming out of the arguments over the semantics of the word “intelligence”.

  • Modern_medicine_isnt@lemmy.world · ↑21 · 1 day ago

    It’s easy, just ask the AI “are you sure?” until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    • jj4211@lemmy.world · ↑5 · 18 hours ago

      I kid you not, early on (mid-2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes, and he said he fixed it by always including in his prompt: “Do not hallucinate content and verify all data is actually correct.”

      • Passerby6497@lemmy.world · ↑4 · 18 hours ago

        Ah, well then, if he tells the bot to not hallucinate and validate output there’s no reason to not trust the output. After all, you told the bot not to, and we all know that self regulation works without issue all of the time.

        • jj4211@lemmy.world · ↑5 · 18 hours ago

          It gave me flashbacks when the Replit guy complained that the LLM deleted his data despite being told in all caps not to multiple times.

          People really really don’t understand how these things work…

          • Modern_medicine_isnt@lemmy.world · ↑1 · 15 hours ago

            The people who make them don’t really understand how they work either. They know how to train them and how the software works, but they don’t really know how they come up with the answers they come up with. They just do a ton of trial and error. Correlation is all they really have. Which of course is how a lot of medical science works too, so they’re in good company.

    • Lfrith@lemmy.ca · ↑10 ↓1 · 1 day ago

      They can even get math wrong, which surprised me. I had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages of a list of numbers I had given it.

      • jj4211@lemmy.world · ↑4 · 18 hours ago

        Fun thing: when it gets the answer right, tell it it was wrong and then watch it apologize and “correct” itself to give the wrong answer.

      • GissaMittJobb@lemmy.ml · ↑9 ↓2 · 1 day ago

        Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You’re likely to see a lot more success using this approach.
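
        Something like this, roughly - ask_llm here is a stand-in for whatever chat API you’re calling, and you’d want to sandbox the execution properly rather than running the generated code raw:

```python
import re
import subprocess
import sys
import tempfile

def solve_math_via_code(question, ask_llm):
    # Ask the model for a program instead of a direct answer.
    reply = ask_llm(
        "Write a self-contained Python program that prints the answer to: "
        + question
    )

    # Pull out the first fenced code block, or fall back to the whole reply.
    fence = "`" * 3  # avoid writing a literal fence inside this snippet
    match = re.search(fence + r"(?:python)?\n(.*?)" + fence, reply, re.DOTALL)
    code = match.group(1) if match else reply

    # Run it and return whatever it prints. A real setup should sandbox this.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.stdout.strip()
```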

        • jj4211@lemmy.world · ↑4 · 19 hours ago

          Also, generally the best interfaces for an LLM will combine non-LLM facilities transparently. The LLM might translate the prose into the format the math engine expects, and then an intermediate layer recognizes a tag, submits the excerpt to the math engine, and substitutes the chunk with the math engine’s output.
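
          As a sketch of what such an intermediate layer could look like - the {{calc: ...}} tag and the bare eval are made up for illustration; a real system would hand the excerpt to an actual math engine:

```python
import re

def expand_math_tags(llm_output):
    # Find hypothetical {{calc: ...}} tags the model was instructed to emit,
    # evaluate each expression outside the LLM, and splice the result back in.
    def evaluate(match):
        expression = match.group(1)
        # Stand-in for a real math engine; eval with no builtins keeps the
        # sketch short, not safe.
        return str(eval(expression, {"__builtins__": {}}, {}))

    return re.sub(r"\{\{calc:\s*(.+?)\}\}", evaluate, llm_output)

print(expand_math_tags("A leap year has {{calc: 366 * 24}} hours."))
```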

          Even for servicing a request to generate an image, the text generation model runs independent of the image generation, and the intermediate layer combines them. Which can cause fun disconnects like the guy asking for a full glass of wine. The text generation half is completely oblivious to the image generation half. So it responds playing the role of a graphic artist dutifully doing the work without ever ‘seeing’ the image, but it assumes the image is good because that’s consistent with training output, but then the user corrects it and it goes about admitting that the picture (that it never ‘looked’ at) was wrong and retrying the image generator with the additional context, to produce a similarly botched picture.

      • saimen@feddit.org · ↑2 · 19 hours ago

        I once gave it some kind of math problem (how to break a certain amount of money down into bills) and the LLM wrote a Python script for it, ran it, and thus gave me the correct answer. Kind of clever, really.
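
        The generated script was presumably something like this greedy breakdown (my guess at it, not the actual output):

```python
def break_into_bills(amount, denominations=(100, 50, 20, 10, 5, 1)):
    # Use the largest bill that still fits, working from largest to smallest.
    breakdown = {}
    for bill in denominations:
        count, amount = divmod(amount, bill)
        if count:
            breakdown[bill] = count
    return breakdown

print(break_into_bills(287))  # {100: 2, 50: 1, 20: 1, 10: 1, 5: 1, 1: 2}
```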

  • rc__buggy@sh.itjust.works · ↑26 · 1 day ago

    However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations

    This is what everyone with a fucking clue has been saying for the past 5, 6? years these stupid fucking chatbots have been around.

  • Lodespawn@aussie.zone · ↑16 ↓1 · 1 day ago

    Why is a researcher with a PhD in the social sciences studying the accuracy confidence of predictive text? How has this person gotten to where they are without understanding that LLMs don’t think? Surely that came up when he started even considering this brainfart of a research project?

      • Lodespawn@aussie.zone · ↑7 · 1 day ago

        I guess, but it’s like proving your phone’s predictive text has confidence in its suggestions regardless of accuracy. Confidence is not an attribute of a math function; they are attributing intelligence to a predictive model.

        • FanciestPants@lemmy.world · ↑3 · 1 day ago

          I work in risk management, but don’t really have a strong understanding of LLM mechanics. “Confidence” is something that I quantify in my work, but it has different terms associated with it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving our budget objective are 60%. Again, I’m not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then representing its result with 100% confidence means that the LLM didn’t model any other possible outcomes than the answer it is providing, which does seem troubling.
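
          In code terms that kind of confidence is just a tail probability over modeled outcomes - a toy Monte Carlo version with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost model: budget is 10.0M, simulated project costs in $M.
simulated_costs = rng.normal(loc=9.8, scale=0.8, size=100_000)

# "Confidence" = share of modeled outcomes that meet the budget objective.
confidence = (simulated_costs <= 10.0).mean()
print(f"Confidence of achieving budget: {confidence:.0%}")  # roughly 60%
```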

          • Lodespawn@aussie.zone · ↑2 · 1 day ago

            Nah, their definition is the classical “how confident are you that you got the answer right”. If you read the article, they asked a bunch of people and four LLMs a bunch of random questions, then asked each respondent whether they/it were confident the answer was correct, and then checked the answers. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs’ confidence increased while people’s tended to decrease, mitigating the overconfidence.

            But the study still assumes enough intelligence to review past results and adjust accordingly, and disregards the fact that an AI isn’t an intelligence; it’s a word-prediction model based on a data set of written text tending to infinity. It’s not assessing the validity of results, it’s predicting what the answer is based on all previous inputs. The whole study is irrelevant.

            • jj4211@lemmy.world · ↑2 · 18 hours ago

              Well, not irrelevant. A lot of our world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break in that context.

              So as weird as it may seem to treat a statistical content-extrapolation engine as a subject of social science, a great deal of real-world attention and investment wants to treat its output as “person equivalent”, so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered “weird”.

  • RoadTrain@lemdro.id · ↑2 ↓1 · 17 hours ago

    About halfway through the article they quote a paper from 2023:

    Similarly, another study from 2023 found LLMs “hallucinated,” or produced incorrect information, in 69 to 88 percent of legal queries.

    The LLM space has been changing very quickly over the past few years. Yes, LLMs today still “hallucinate”, but you’re not doing anyone a service by reporting, in 2025, on the state of the field from more than two years earlier.