• Lodespawn@aussie.zone · 2 days ago

      I guess, but it’s like proving your phone’s predictive text has confidence in its suggestions regardless of accuracy. Confidence is not an attribute of a math function; they’re attributing intelligence to a predictive model.

      • FanciestPants@lemmy.world · 2 days ago

        I work in risk management, but don’t really have a strong understanding of LLM mechanics. “Confidence” is something I quantify in my work, though different terms get attached to it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving the budget objective are 60%. Again, I’m not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then reporting its result with 100% confidence means the LLM didn’t model any possible outcomes other than the answer it is providing, which does seem troubling.
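
        To make that concrete, here’s a minimal sketch of that risk-management sense of “confidence”: the probability a modeled outcome distribution (read off its CDF) assigns to meeting the objective. The lognormal model and every number below are invented purely for illustration.

        ```python
        import numpy as np

        # Hypothetical example: "confidence" as P(outcome meets objective),
        # read off a simulated outcome distribution. All figures are made up.
        rng = np.random.default_rng(0)

        budget = 1_000_000
        # Assume project costs are lognormal around ~$950k (pure assumption).
        costs = rng.lognormal(mean=np.log(950_000), sigma=0.15, size=100_000)

        # Empirical CDF evaluated at the budget: the fraction of simulated
        # outcomes that come in at or under budget.
        confidence = (costs <= budget).mean()
        print(f"Confidence in achieving budget: {confidence:.0%}")  # ~60%
        ```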

        • Lodespawn@aussie.zone · 1 day ago

          Nah, their definition is the classical “how confident are you that you got the answer right?” If you read the article, they asked a bunch of people and four LLMs a bunch of random questions, asked each respondent how confident they were that their answer was correct, and then checked the answers. The LLMs initially lined up with the people (overconfident), but as they iterated, shared results, and asked further questions, the LLMs’ confidence increased while people’s tended to decrease to mitigate the overconfidence.
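
          As a rough sketch of the comparison being described (all data below is invented), the overconfidence gap is just stated confidence minus measured accuracy:

          ```python
          import numpy as np

          # Invented data standing in for one round of the study described
          # above: each respondent states a confidence, answers are graded.
          stated = np.array([0.9, 0.8, 0.95, 0.7, 0.85])  # self-reported
          correct = np.array([1, 0, 1, 0, 1])              # 1 = correct

          accuracy = correct.mean()
          overconfidence = stated.mean() - accuracy

          print(f"accuracy {accuracy:.0%}, stated {stated.mean():.0%}, "
                f"gap {overconfidence:+.0%}")
          # Per the article as summarized here: over iterations this gap
          # reportedly widened for the LLMs and narrowed for the humans.
          ```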

          But the study still assumes enough intelligence to review past results and adjust accordingly, and disregards the fact that an AI isn’t intelligent; it’s a word-prediction model built on a data set of written text tending to infinity. It’s not assessing the validity of its results, it’s predicting what the answer is based on all previous inputs. The whole study is irrelevant.
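
          For what it’s worth, that prediction step is nothing more than a probability distribution over candidate next tokens. A toy sketch (logits and tokens invented):

          ```python
          import numpy as np

          # Toy illustration: a language model emits scores (logits) over
          # candidate next tokens; softmax turns them into probabilities.
          logits = np.array([3.2, 1.1, 0.3, -0.5])
          tokens = ["Paris", "Lyon", "France", "banana"]

          probs = np.exp(logits - logits.max())
          probs /= probs.sum()  # softmax

          for tok, p in zip(tokens, probs):
              print(f"{tok:>7}: {p:.1%}")

          # The "answer" is just the most probable continuation. That number
          # is a statistic of the training distribution, not a self-assessment
          # of whether the answer is factually correct.
          print("prediction:", tokens[int(np.argmax(probs))])
          ```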

          • jj4211@lemmy.world · 1 day ago

            Well, not irrelevant. Much of the world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break in that context.

            So as weird as it may seem to study a statistical content-extrapolation engine with the tools of social science, a great deal of real-world attention and investment wants to treat its output as “person equivalent,” so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered “weird.”