
  • KnilAdlez [none/use name]@hexbear.net · 89 points · 2 days ago

    I saw on Reddit that ChatGPT 5 will hallucinate before actually searching the web or opening documents, most likely as a cost-saving measure by OpenAI. The bubble is looking awfully shaky.

    • The_hypnic_jerk [he/him]@hexbear.net · 55 points · edited · 2 days ago

      I did some training work for way too much money for OpenAI this past year, as a pro contractor in my field. And let me tell you, it's a long way off, if it's even possible, for even basic report writing, much less anything more complicated.

      These things cannot be trusted with technical work. They just produce things that "sound" like they make sense, but if you know anything at all about the work, it's laughable.

      If you wouldn't outsource technical work to a Reddit forum, you definitely shouldn't be giving it to the robot.

      • fox [comrade/them]@hexbear.net · 25 points · 2 days ago

        All they do is hallucinate; it's just a coin flip whether it's total nonsense or truth-shaped. The same process that makes it answer wrong is the one that makes it answer right.

        • invalidusernamelol [he/him]@hexbear.net · 6 points · 2 days ago

          Yep, I work in a moderately niche programming sector and it was truly awful when I tried the "co-programming" stuff. It got to the point where if I gave it a clear spec, all I'd get back was "call the function that does what you asked for."

          • PolarKraken@lemmy.dbzer0.com · 3 points · 1 day ago

            Slight improvement over telling you to call functions it just silently made up (my experience using it with something niche).

            See, they’re learning, the hype is real! Any day now they will expertly clue you in to when they don’t know shit. After that, AGI can only be 12-18 months away!

            • invalidusernamelol [he/him]@hexbear.net · 3 points · edited · 1 day ago

              Oh, that's what I meant when I said it told me to "call the function that does what I want." It would just hallucinate that function, then I'd go write it, then it would hallucinate more stuff, and by the time I was done the whole program was nonsense.

              Ended up being faster at getting stuff done by just fully dropping it. Sure, I don't have super autocomplete, but who cares. Now my program is structured by me, and all the decisions were mine, meaning I actually kinda understand how it works.
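              For flavor, here's a made-up sketch of the pattern I kept hitting (names, spec, and log format all invented, not my actual code). The assistant's "solution" was basically a wrapper around a function it invented, so the real work, the helper itself, was still all mine:

              ```python
              from collections import Counter

              def count_errors_by_module(log_lines):
                  # The part I had to write by hand: tally "ERROR" lines per module,
                  # assuming a "LEVEL module: message" line format (invented for this sketch).
                  counts = Counter()
                  for line in log_lines:
                      parts = line.split()
                      if len(parts) >= 2 and parts[0] == "ERROR":
                          counts[parts[1].rstrip(":")] += 1
                  return counts

              def summarize_errors(log_lines):
                  # The assistant's entire contribution, more or less:
                  # "call the function that does what you asked for."
                  # That function didn't exist until I wrote it above.
                  return count_errors_by_module(log_lines)

              logs = [
                  "ERROR auth: bad token",
                  "INFO db: ok",
                  "ERROR auth: expired",
                  "ERROR db: timeout",
              ]
              print(summarize_errors(logs))  # Counter({'auth': 2, 'db': 1})
              ```

              Multiply that by every function in the program and you can see how the structure stopped being mine.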

              • PolarKraken@lemmy.dbzer0.com · 3 points · 23 hours ago

                Lol, oof, sounds like real "draw the rest of the owl" energy, but with an unhelpful "unfuck the owl I drew" step added first.

                • invalidusernamelol [he/him]@hexbear.net · 2 points · 19 hours ago

                  Yep, the whole process was a pain. I can't imagine having to lead a team where people are using AI assistants. That has to be a nightmare, and I'd ban it instantly. It was hard enough parsing the hallucinations it introduced from my own prompts; it would be 1000x worse doing a code review where you have to find hallucinations introduced by other people's prompts.