• 4am@lemm.ee · 28 days ago

    An LLM can summarize the rules of chess because it predicts, with impressive accuracy, the sequence of words such a summary needs. That's also why it's so weird when it goes wrong: if one part of the output is off, it throws the rest of what it's generating out of balance.

    But all it is doing is statistical analysis of all the writing it has been trained on, then picking the most likely next word (some later models predict words in groups, or out of order).
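
    In rough terms the whole loop is just "score every candidate next word, take a likely one, append it, repeat." Here's a toy sketch in Python; the vocabulary, scoring function, and probabilities are made up purely for illustration and have nothing to do with any real model's weights or API:

    ```python
    import random

    # Toy vocabulary; a real model has tens of thousands of tokens.
    VOCAB = ["the", "knight", "pawn", "to", "e4", "captures", "."]

    def next_word_scores(context):
        # A real LLM computes these scores from billions of learned
        # weights conditioned on the context; here we just invent them.
        random.seed(" ".join(context))  # deterministic per context
        return {w: random.random() for w in VOCAB}

    def generate(prompt, n_words=6):
        words = prompt.split()
        for _ in range(n_words):
            scores = next_word_scores(words)
            # Greedy decoding: always append the highest-scoring word.
            words.append(max(scores, key=scores.get))
        return " ".join(words)

    print(generate("the best move is"))
    ```

    The point of the sketch: nothing in that loop knows what a legal chess position is; it only knows which word tends to follow which.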

    That doesn’t tell it fuck-all about how to make a chess move. It’s not ingesting information in a way that lets it create a model to tell you what the next best chess move is, how to solve linear algebra, or any other activity that requires procedural thought.

    It's just a chatterbox that tells you whatever you want to hear. No wonder the chuds love it.