• Lodespawn@aussie.zone · 1 day ago

    Nah, so their definition is the classical “how confident are you that you got the answer right?”. If you read the article, they asked a bunch of people and four LLMs a bunch of random questions, then asked each respondent how confident they/it were that the answer was correct, and then checked the answer. The LLMs initially lined up with people (overconfident), but when they iterated, shared results and asked further questions, the LLMs’ confidence increased while people’s tended to decrease to mitigate the overconfidence. Here’s roughly what that confidence-vs-accuracy comparison amounts to, as shown below.
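    A toy sketch of the measure the study is tracking, with made-up numbers (not the study’s data): grade the answers, average the self-reported confidence, and the gap between the two is the overconfidence.

    ```python
    # Hypothetical graded answers and self-reported confidence for one respondent.
    answers_correct = [True, False, True, False, False]
    stated_confidence = [0.9, 0.8, 0.95, 0.7, 0.85]

    accuracy = sum(answers_correct) / len(answers_correct)
    mean_confidence = sum(stated_confidence) / len(stated_confidence)
    overconfidence = mean_confidence - accuracy

    print(f"accuracy: {accuracy:.2f}")                 # 0.40
    print(f"mean confidence: {mean_confidence:.2f}")   # 0.84
    print(f"overconfidence gap: {overconfidence:.2f}") # 0.44
    ```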

    But the study still assumes enough intelligence to review past results and adjust accordingly, and disregards the fact that an LLM isn’t intelligent; it’s a word-prediction model built on a data set of written text tending to infinity. It’s not assessing the validity of its results, it’s predicting what the answer looks like based on all previous inputs. The whole study is irrelevant.
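    For anyone who hasn’t looked under the hood, a minimal sketch of that “word prediction” point, using the Hugging Face transformers library with GPT-2 as a small stand-in model: the only “confidence” the model has is a probability distribution over next tokens, not any check on whether the continuation is true.

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "gpt2"  # small stand-in; any causal LM behaves the same way
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "The capital of Australia is"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # The top candidates are just statistically likely continuations of the
    # prompt; nothing here verifies that any of them is factually correct.
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
    ```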

    • jj4211@lemmy.world · 1 day ago

      Well, not irrelevant. A lot of the world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break in that context.

      So as weird as it may seem to study a statistical content-extrapolation engine in a social-science context, a great deal of real-world attention and investment wants to treat its output as “person equivalent”, so it has to be studied in that context, if for no other reason than to demonstrate to people that it should be considered “weird”.