I thought that version was supposed to reduce hallucinations?
Hopefully it’s reducing the hallucinations of future profits that investors have been clinging onto
All they do is hallucinate; it’s just a coin flip whether the output is total nonsense or merely truth-shaped. The same process that makes it answer wrong is the one that makes it answer right.
Yep, I work in a moderately niche programming sector, and it was truly awful when I tried the “co-programming” stuff. It got to the point where I’d give it a clear spec, and all I’d get back was “call the function that does what you asked for.”
Slight improvement over telling you to call functions it just silently made up (my experience using it with something niche)
See, they’re learning, the hype is real! Any day now they will expertly clue you in to when they don’t know shit. After that, AGI can only be 12-18 months away!
Oh that’s what I meant when I said it told me to “call the function that does what I want”. It would just hallucinate that function, then I’d go write it, then it would hallucinate more stuff. And by the time I was done the whole program was nonsense.
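To make that concrete, the suggestions were shaped roughly like this (a made-up sketch, not real assistant output; every name here is invented):

    # the "solution": delegate all the actual work to helpers
    # that don't exist anywhere in the codebase
    def generate_report(orders):
        summary = summarize_orders(orders)   # hallucinated, so I'd go write it
        return render_report_pdf(summary)    # also hallucinated

Then once I’d written summarize_orders myself, the next suggestion would lean on yet another invented helper, and so on until I’d effectively written the whole thing anyway.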
Ended up being faster at getting stuff done by just dropping it entirely. Sure, I don’t have super-autocomplete, but who cares. Now my program is structured by me, and all the decisions were mine, meaning I actually kinda understand how it works.
Lol, oof, sounds like real “draw the rest of the owl” energy but adding an unhelpful “unfuck the owl I drew” step first.
Yep, the whole process was a pain. I can’t imagine having to lead a team where people are using AI assistants; that has to be a nightmare, and I’d ban them instantly. It was hard enough parsing the hallucinations it introduced from my own prompts. It would be 1000x worse doing a code review where you have to find the hallucinations introduced by other people’s prompts.
Only if you pay for premium or whatever. Then it takes extra time to “think” and you’ll get a more accurate answer. But “hallucinations” can only be reduced, never eliminated.
It would need to have been trained on the data to have a chance of not hallucinating, which isn’t possible for new webpages or custom documents.
I too always trust marketing departments!