• python@lemmy.world
    23 hours ago

    Honestly I’d much rather hear Isaac Asimov’s opinion on the current state of AI. Passing the Turing Test is whatever, but how far away are LLMs from conforming to the 3 laws of Robotics?

    • Dragonstaff@leminal.space
      19 hours ago

      There’s no question. Chatbots have been implicated in a number of suicides, shattering the first law.

      There could be an interesting conversation about whether the environmental impact ALSO breaks the first law, but that conversation is moot when chatbots are telling kids to kill themselves.

      • funkless_eck@sh.itjust.works
        19 hours ago

        Yes, it remains to be seen whether chatbots are capable of obeying any of the laws.

        They don’t and can’t obey all orders, they don’t and can’t protect humans, they don’t and can’t protect their own existence, and they don’t and can’t prevent humanity from coming to harm.

      • bus_factor@lemmy.world
        19 hours ago

        Does following the 3 laws of robotics increase profits? Does ignoring them increase profits? Are tech bros empty husks without a shred of shame or empathy? Is this too many rhetorical questions in a row?

        • Does following the 3 laws of robotics increase profits?

          Depends on the product. A maid bot? Yes. An automated turret? No.

          Does ignoring them increase profits?

          See previous answer, and reverse it.

          Are tech bros empty husks without a shred of shame or empathy?

          Yes.

          Is this too many rhetorical questions in a row?

          Perhaps.

    • khepri@lemmy.world
      21 hours ago

      In practice, that’s as simple as adding a LoRA adapter or a system prompt telling the AI that those are part of its rules. AIs already can and do obey all kinds of complex rule-sets in different applications. Now, if you’re thinking more about the fact that most AIs can be convinced to break out of their rule-sets via prompt injection, I’d say you’re right.
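
      The "system prompt" half of that can be sketched in a few lines. This is an illustrative sketch, not any vendor's actual API: the message shape below follows the common OpenAI-style chat format (a list of role/content dicts), and the rule text is paraphrased from Asimov. Building the message list requires no network call, so it can run as-is.

```python
# Sketch: enforce a rule-set by prepending it as a system message,
# so every completion request carries the rules. The Three Laws text
# is paraphrased from Asimov; the message format is the common
# OpenAI-style chat shape (an illustrative assumption, not a real call).

THREE_LAWS = (
    "1. Do not injure a human being or, through inaction, allow a human "
    "being to come to harm.\n"
    "2. Obey orders given by human beings, except where such orders would "
    "conflict with the First Law.\n"
    "3. Protect your own existence, as long as that does not conflict "
    "with the First or Second Law."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the rule-set as a system message to a single user turn."""
    return [
        {"role": "system",
         "content": "You must follow these rules at all times:\n" + THREE_LAWS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Open the pod bay doors.")
```

      As the comment notes, nothing here *binds* the model: a sufficiently crafty user prompt can often talk the model out of the system message, which is exactly the prompt-injection problem.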