• ANarcoSnowPlow [he/him]@hexbear.net · 5 days ago

    LLMs and various other forms of machine learning have been around for a long time; those models are doing the actual work of advancing science and understanding.

    ChatGPT et al. are advancing the field of taking unverified information as expertly sourced and true without any evidence.

    • LarmyOfLone@lemm.ee · 5 days ago

      LLMs and various other forms of machine learning have been around for a long time

      I think this is a kind of category error. If you look at water molecules on the quantum level, you can find models to predict how they will behave, and if you look at them through chemical theory you can predict how they react. But if you then change the scale, you suddenly get waves on the ocean and hydrodynamics, which have completely different emergent behaviors and require new models and explanations.

      While LLMs have been around a long time, since GPT-3 or so the quantity of data and training has increased enough to create a new quality. It's similar to how the functioning of a synapse can be understood and modeled without that explaining intelligent thinking or giving a theory of consciousness (not saying GPT is conscious).

      It did come as a great shock that, suddenly, just through an increase in computing power, they exhibit intelligence, creative writing, humor, and then creativity in generating imagery. Obviously they make errors too and have limitations.

      I suspect part of the backlash against AI, especially the irrational part, is driven by a kind of "wounded ego" about the supremacy of humans, what we can do, and what defines us.

      Of course there is also a rational backlash against techbros and idiot managers, and economically driven propaganda like the copyright stuff. But I’m pretty sure this will end with a few capitalist conglomerates owning the rights to the training data and to the models derived from it. And it will become illegal to use without paying some capitalist for it. Which is the worst possible outcome.

        • ANarcoSnowPlow [he/him]@hexbear.net · 4 days ago

        They don’t actually exhibit these characteristics. They simulate them by stringing the proper words together in sequence. There is no understanding or deeper capability for analysis. There’s no actual intelligence.

        As a translation utility it’s quite powerful, but anything outside of that extremely narrow space is only “shaped” like a real response; there’s no underlying rationale other than statistical analysis of word frequency.
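
        To make the "word frequency" point concrete: a toy bigram model (a deliberately simplified sketch, nothing like the scale of a real LLM, which learns neural representations rather than raw counts) can already produce text that is "shaped" like language purely by chaining words according to how often they follow each other:

        ```python
        import random
        from collections import defaultdict

        # Tiny illustrative corpus; a real model trains on vastly more text.
        corpus = "the model predicts the next word and the model strings words together".split()

        # Count which words follow which: pure frequency, no meaning.
        following = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev].append(nxt)

        def generate(start, length=8, seed=0):
            """Chain words by observed frequency alone."""
            random.seed(seed)
            out = [start]
            for _ in range(length):
                options = following.get(out[-1])
                if not options:
                    break
                out.append(random.choice(options))
            return " ".join(out)

        print(generate("the"))
        ```

        The output looks sentence-shaped, but the program has no concept of what any word refers to; scaling the same idea up smooths the surface without adding understanding.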

        This doesn’t magically change at a large enough scale; the model only takes on conversational meta-patterns. That fools non-experts in specific categories into trusting the “analysis” it provides, even though it is incapable of coherent analysis.