• shalafi@lemmy.world · 1 day ago

    Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we’re really not much smarter. “But LLMs tell us what we want to hear!” Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

    • jj4211@lemmy.world · 19 hours ago

      It’s not that they may be deceived; it’s that they have no concept of what truth or fiction, mistake or success even are.

      Our brains know these concepts and may fall for deceit without recognizing it, but we at least recognize that the concepts exist.

      An AI generates content by blending material from its training data in a way that plausibly extends the given prompt. It only appears to have a concept of lying or mistakes when the human injects one into the human half of the prompt. And it will “correct” itself just as readily when the human points out a genuine mistake as when the human disputes something that was already correct (unless the training data happens to include a lot of reaffirmation of that material in the face of such doubts).
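
      To make that concrete, here’s a toy sketch of generation as pure pattern-extension. This is nothing like a real LLM (no neural network, just a bigram table over a made-up corpus), but the shape is the same: the continuation is sampled from statistics of the training text, and nowhere in it is there a representation of truth or error. The corpus and prompts are invented for illustration.

      ```python
      # Toy sketch (not a real LLM): generation as pattern-extension.
      # The "model" is just counts of which token follows which.
      import random
      from collections import defaultdict

      corpus = (
          "the sky is blue . the sky is green . "
          "you are right , sorry . you are right , sorry ."
      ).split()

      # "Training": record every observed next token for each token.
      follows = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev].append(nxt)

      def generate(prompt, n_tokens=6):
          out = prompt.split()
          for _ in range(n_tokens):
              candidates = follows.get(out[-1])
              if not candidates:
                  break
              out.append(random.choice(candidates))  # sample by frequency
          return " ".join(out)

      # True or false, the continuation comes from the same statistics:
      # the model "apologizes" after "you are" because that pattern is
      # in the data, not because it detected a mistake.
      print(generate("the sky is"))
      print(generate("you are"))
      ```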

      An LLM can consume more input than a human could gather in multiple lifetimes and still be wonky at generating content, because it needs enough data to credibly blend a continuation for every conceivable input. It’s why so many people used to judging human output get derailed when judging AI output. An AI generates a fantastic answer to an interview question that only solid humans get right, only to falter ‘on the job’, because the utterly generic interview question looks like millions of samples in the training data while the actual job was niche.

    • Perspectivist@feddit.uk · 1 day ago

      There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
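
      To make “patterns and probabilities” concrete: at each step a model scores every candidate next token, turns the scores into a probability distribution, and samples one. Here’s a minimal sketch of that single step; the tokens and scores are made up, not taken from any real model.

      ```python
      # Minimal sketch of one generation step: scores -> probabilities -> sample.
      import math
      import random

      def softmax(logits):
          m = max(logits)  # subtract the max for numerical stability
          exps = [math.exp(x - m) for x in logits]
          total = sum(exps)
          return [e / total for e in exps]

      # Hypothetical scores for tokens that could follow some prompt.
      vocab = ["Paris", "Lyon", "pizza"]
      logits = [5.0, 2.0, -1.0]

      probs = softmax(logits)
      token = random.choices(vocab, weights=probs)[0]
      print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", token)
      ```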

    • aesthelete@lemmy.world · 1 day ago

      Every thread about LLMs has to have some guy like yourself saying LLMs are like humans, or even smarter than humans, for some reason.