• edinbruh@feddit.it
    21 hours ago

    If Turing were alive, he would say that LLMs waste computing power doing something a human should be able to do on their own, and therefore that we shouldn’t waste time studying them.

    Which is what he said about compilers and high-level languages (here, “high-level” means something like Fortran, not Python).

    • 1984@lemmy.today
      11 hours ago

      Humans are able to do it, but it takes us weeks instead of seconds.

      Many, many tasks that would have taken hours or days to learn are just instant now. I don’t know why people don’t appreciate that technology. Is it because it’s sometimes wrong? Even with the time spent fixing errors, it’s many times faster than doing the task manually.

      Maybe the difference in opinions comes from people talking about very different tasks: the LLM just sucks at some of them while being excellent at others.

      • edinbruh@feddit.it
        11 hours ago

        I don’t like it because people don’t shut up about it and insist everyone should use it when it’s clearly stupid.

        LLMs are language models; they don’t actually reason (not even the “reasoning” models). When they nail a piece of reasoning, it’s by chance, not by design. Anything that isn’t language processing shouldn’t be done by an LLM. Conversely, they are pretty good with language.

        We already had automated reasoning tools. They are used for industrial optimization (e.g. finding optimal routes or deciding how to allocate production), and no one cared about those.
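
For what it’s worth, this kind of classical tool is easy to sketch. Here is a toy route-finding example using Dijkstra’s algorithm, the sort of exact, guaranteed-optimal method optimization software has used for decades (the road network and weights are made up for illustration):

```python
import heapq

def shortest_path_cost(graph, start, goal):
    # Classic Dijkstra: pop the closest unexplored node, relax its edges.
    # Unlike an LLM's answer, the result is provably optimal.
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")

# Hypothetical road network: edge weights are travel times
roads = {
    "depot": [("a", 4), ("b", 1)],
    "b": [("a", 2), ("c", 5)],
    "a": [("c", 1)],
}
print(shortest_path_cost(roads, "depot", "c"))  # depot -> b -> a -> c = 4
```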

        As if that weren’t enough, the internet is now full of slop, hardware companies are warmongering an arms race that fuels an economic bubble, and people are being fired to be replaced by something that won’t actually work in the long run because it does not reason.

        • 1984@lemmy.today
          11 hours ago

          Yeah, I totally agree about the slop and how it’s destroying what the web was supposed to be. It makes sense that people would hate it for that.

          I don’t really use them for reasoning; I just use them for helping me with code, or finding facts faster.

          But I know these things are the beginning of a very dystopian society as well. Once all the data centers are built, each person is going to be watched forever by AI.

    • UnrepentantAlgebra@lemmy.world
      17 hours ago

      Where did he say that about compilers and high-level languages? He died before Fortran was released, and probably programmed with punch cards or tape.

      • edinbruh@feddit.it
        11 hours ago

        I’ll try to find it later; I read that he said it in a book by Martin Davis. He didn’t mention Fortran, I just used it as an analogy.

      • edinbruh@feddit.it
        11 hours ago

        Neural networks don’t simulate a brain; that’s a misconception caused by their name. They have next to nothing to do with biological neurons.
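
To make the point concrete: an artificial “neuron” is just a weighted sum pushed through a squashing function. This minimal sketch (the weights and inputs are arbitrary illustrative numbers) is the entire unit of computation:

```python
import math

def neuron(inputs, weights, bias):
    # A handful of multiplies and adds, then a logistic squash --
    # nothing biological about it.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output between 0 and 1

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # ~0.599
```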

        • fermuch@lemmy.ml
          3 hours ago

          Not what I meant. What I mean is: this could be the path he would go for, since his desire was to make a simulated person (AI).

          • edinbruh@feddit.it
            1 hour ago

            LLMs are not the path forward for simulating a person; this is a fact. By design they cannot reason. It’s not a matter of advancement, it’s literally how they work in principle: a statistical trick that generates text which looks like thought-out phrases, with no reasoning involved.
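
The “statistical trick” can be illustrated with a toy bigram model. It only learns which word tends to follow which, yet its output looks locally fluent. Real LLMs are enormously more sophisticated, but the objective the comment is describing is the same shape: predict a plausible next token, not reason about the world. (The corpus below is made up.)

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn next-word statistics from the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n, rng):
    # Repeatedly sample a word that followed the current word in training.
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:  # word only ever appeared at the end of the corpus
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6, random.Random(0)))
```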

            If someone tells you they might be the way forward to simulate a human, they are scamming you. No one who actually knows how they work says that, unless they’re the CEO of a trillion-dollar company selling AI.

  • hefty4871@lemmy.ca
    21 hours ago

    I wish Alan Turing was still alive and using Grindr. It would probably motivate him to update his test just to wade through the 80% of bot profiles on the app!

  • Johandea@feddit.nu
    1 day ago

    No one who’s actually used Grindr likes that app. It sucks, and not in a good way. It fills its niche, but it is a horrible app and gets worse and worse with every update.

    Rant over.

      • andros_rex@lemmy.world
        1 day ago

        Every update decreases the number of people you can see. Frequent full-page ads that cannot be closed and that open a webpage or the App Store as you try to hit the “x.” You’ll “accidentally” hit the $99.99 monthly purchase somehow, because everything moves around after your conversations load, and then have to exit out of the confirmation for that. There are a ridiculous number of OnlyFans accounts and bots. Often the app would rather connect you to people hundreds of miles away than to the people in your immediate area.

  • python@lemmy.world
    23 hours ago

    Honestly, I’d much rather hear Isaac Asimov’s opinion on the current state of AI. Passing the Turing Test is whatever, but how far away are LLMs from conforming to the Three Laws of Robotics?

    • Dragonstaff@leminal.space
      20 hours ago

      There’s no question. Chatbots are implicated in a lot of suicides, shattering the first rule.

      There could be an interesting conversation about whether the environmental impact ALSO breaks the first rule, but that conversation is unnecessary when chatbots are telling kids to kill themselves.

      • funkless_eck@sh.itjust.works
        19 hours ago

        Yes, it remains to be seen if chatbots are ever capable of obeying any of the laws.

        It doesn’t and can’t obey all orders, it doesn’t and can’t protect humans, it doesn’t and can’t protect its own existence, and it doesn’t and can’t prevent humanity from coming to harm.

      • bus_factor@lemmy.world

        Does following the 3 laws of robotics increase profits? Does ignoring them increase profits? Are tech bros empty husks without a shred of shame or empathy? Is this too many rhetorical questions in a row?

        • Does following the 3 laws of robotics increase profits?

          Depends on the product. A maid bot? Yes. An automated turret? No.

          Does ignoring them increase profits?

          See previous answer, and reverse it.

          Are tech bros empty husks without a shred of shame or empathy?

          Yes.

          Is this too many rhetorical questions in a row?

          Perhaps.

    • khepri@lemmy.world
      21 hours ago

      In practice, that’s as simple as adding a LoRA or a system prompt telling the AI that those are part of its rules. AIs already can and do obey all kinds of complex rule-sets for different applications. Now, if you’re thinking about the fact that most AIs can be convinced to break out of their rule-sets via prompt injection, I’d say you’re right.
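
The system-prompt approach described here can be sketched in the messages format most chat-completion APIs accept. This is only the shape of the request: no call is actually made, and the model name is a placeholder, not a real model:

```python
THREE_LAWS = (
    "1. Never harm a human, or through inaction allow a human to come to harm.\n"
    "2. Obey human orders unless they conflict with the First Law.\n"
    "3. Protect your own existence unless that conflicts with the first two laws."
)

def build_request(user_text):
    # The "system" message carries the standing rules; the "user"
    # message carries the actual query. Prompt injection works by
    # smuggling countermanding instructions into the user content.
    return {
        "model": "some-chat-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": "Follow these rules at all times:\n" + THREE_LAWS},
            {"role": "user", "content": user_text},
        ],
    }

req = build_request("Open the pod bay doors.")
print(req["messages"][0]["role"])  # system
```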

  • blave@lemmy.world
    1 day ago

    When people ask me what the difference between Reddit and Lemmy is, I can just show them this post

  • UnderpantsWeevil@lemmy.world
    1 day ago

    Excited to hear New Labour’s position on chemically castrating one of the greatest scientists in history. Perhaps we can get some Guardian Op-Eds explaining why it is both necessary and good to drive the nation’s finest minds to suicide with constant verbal and physical abuse.

  • khepri@lemmy.world
    21 hours ago

    Not after 1952 he wasn’t. { Incredibles ‘those who know’ meme goes here }

    • YiddishMcSquidish@lemmy.today
      19 hours ago

      Thanks for helping us crack one of the biggest cryptographic problems and giving us a strategic edge in this WORLD FUCKING WAR. Now we’re gonna chemically castrate you. K bye

    • vodka@feddit.org
      1 day ago

      Even the first version of ChatGPT passed Turing tests.

      It takes surprisingly little for an LLM to produce natural-language responses that are indistinguishable from a human’s, especially since factual accuracy was never part of the test.