• Passerby6497@lemmy.world · 4 points · 1 day ago

    Ah, well then, if he tells the bot not to hallucinate and to validate its output, there’s no reason not to trust the output. After all, you told the bot not to, and we all know that self-regulation works without issue all of the time.

    • jj4211@lemmy.world · 5 points · 1 day ago

      It gave me flashbacks to when the Replit guy complained that the LLM deleted his data despite being told, multiple times and in all caps, not to.

      People really, really don’t understand how these things work…

      • Modern_medicine_isnt@lemmy.world · 1 point · 1 day ago

        The people who make them don’t really understand how they work either. They know how to train them and how the software works, but they don’t really know how the models arrive at the answers they give. They just do a ton of trial and error; correlation is all they really have. Which, of course, is how a lot of medical science works too, so they’re in good company.