• Ech@lemmy.ca
    1 month ago (edited)

    That’s the crux of the issue: it’s not AI. These models aren’t “sure” of anything; they don’t know anything. That’s why they can’t simply be patched to behave as if they do.

    “Hallucinating” is what LLMs were built to do. They generate the most statistically plausible next token, not verified facts, so a confident falsehood is ordinary output, not a malfunction. At their very core that’s what they still do, and without a ground-up redesign, that’s what they’ll do forever.
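
    To make that concrete, here’s a minimal sketch (purely illustrative; the toy corpus and function names are made up): a bigram “model” that picks each next word by frequency alone. Nothing in the loop ever checks whether the output is true, which is the point: fluent-but-wrong text is the mechanism working as designed, not a failure mode.

```python
import random

# Toy "language model": a table of which word follows which,
# learned from a tiny made-up corpus. Everything here is illustrative.
corpus = "the moon is made of rock . the moon is made of cheese .".split()
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=8):
    """Sample a continuation one token at a time.
    The only criterion is 'what tends to come next' --
    no step ever checks whether the output is true."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # prints "... made of cheese" as readily as "... of rock"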