We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is simply predicting which token — a word or word fragment — is most likely to come next in the sequence, based on the data it has been trained on.
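That “guess the next word from past data” mechanism can be illustrated with a deliberately tiny sketch — a bigram model, not a real LLM, with a made-up toy corpus — just to show that the prediction is pure counting, with no understanding involved:

```python
# Toy illustration (NOT a real LLM): predict the next word purely from
# counts observed in training text -- probability, not understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it followed "the" most often in the corpus
```

A real model works over billions of parameters and long contexts rather than word-pair counts, but the principle the article describes is the same: the output is whatever the training data makes statistically likely.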

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • chicken@lemmy.dbzer0.com · 6 hours ago

    But I believe that if AIs are passing the Turing test, we need to update the test.

    Uhh that’s kind of not how tests are supposed to work. If you want non-falsifiable conviction in human specialness, maybe try religion instead.

  • DeuxChevaux@lemmy.world · 12 hours ago

    So why is a real “thinking” AI likely impossible? Because it’s bodiless.

    Why would anything need a body to be intelligent? Just because we have bodies, and whoever said that cannot imagine different forms of life/intelligence? Not that I think current LLMs have the experience and creativity to be called intelligent. I just don’t think that everything that’s intelligent needs an arse :)

  • Ceedoestrees@lemmy.world (edited) · 2 hours ago

    The title is accurate, but the article doesn’t really provide explanations beyond personal anecdotes. The few quotes and concepts are gestured to, rather than used to build an argument.

    The comparison to greenhouse gas warnings came out of left field since they didn’t bring up any direct relationship between the two subjects.

    It reads like they expect readers to agree with them.

    Any argument about AI and consciousness should point out the difference between “true” AI and the LLMs we’re calling AI, and how they work.

    https://hackaday.com/2024/05/15/how-ai-large-language-models-work-explained-without-math/

    Here’s more information on AI and consciousness:

    https://pmc.ncbi.nlm.nih.gov/articles/PMC9582153/

  • dogerwaul@pawb.social · 15 hours ago

    we need to stop calling it AI, first of all. it isn’t intelligent. these are large language models. adopt this phrase and refer to them as LLM bots or something. stop with this AI misnomer.

    • auraithx@lemmy.dbzer0.com · 11 hours ago

      We had AI before LLMs and that was even dumber. AI is fine as a name if people stop equating it with human intelligence.

    • Yorick@sh.itjust.works · 10 hours ago

      Mass Effect really had the perfect name for LLMs and the like: Virtual Intelligence

      Wish it’d catch on more than AI

      • Ragnor@feddit.dk (edited) · 14 minutes ago

        LLMs aren’t intelligent. “Intelligence” is the word that has to be cut. “Artificial” is accurate.

        They cannot reason about things they haven’t seen before, for instance. Ask it about “your new physics theory” and it will tell you that it is interesting and could revolutionize the world, basically regardless of how ridiculous and nonsensical it is.

        That is because when new ideas do make it into the news and get significant coverage, it is because they have the potential to be genuinely revolutionary. Since those ideas take up most of the space, they make up most of what the LLM is trained on. That means the default response to any claimed physics idea is that it is great. Posts that prove this trend show up constantly on physics subreddits: the theories never have math that makes sense, and they make claims that don’t correlate with the data we already have about our universe.

  • deadcatbounce@reddthat.com (edited) · 13 hours ago

    What things are and what the masses choose to call them, and use them for, are usually two different things.

    Asking the masses to understand a complex subject for themselves, and ascribe to it appropriate nomenclature, when all they actually want is something that echoes what they already think with more eloquence is folly.

    Reference: social media - a method to collect personal data from the masses to use against them, which they willingly and greedily supply, without recompense.

  • JoShmoe@ani.social · 15 hours ago

    AI is currently in development hell. There is no visible endpoint, its developers are being shortchanged, and lawsuits will neuter and decapitate its current primary use case, which is a new form of piracy in disguise. Your mom.