• Benedict_Espinosa@lemmy.world · 1 day ago

    It’s not about banning or refusing AI tools; it’s about making them as safe as possible and regulating their usage.

    Your argument is the equivalent of “guns don’t kill people”, or of blaming the driver when Tesla’s so-called “full self-driving” causes an accident: the system switches itself off right before impact, leaving the driver responsible as the one who should have paid more attention, even when there was no time left to react.

    • womjunru@lemmy.cafe · 1 day ago

      So what kind of regulations could be put in place to prevent people from using AI to feed their mania?

      I’m open to the idea, but I think it’s such a broad concept at this point that implementation and regulation would be impossible.

      If you want to go down the “guns don’t kill people” road, fine: social media kills more people and does more damage, and it should be shut down long before AI. 🤷‍♂️

      • Benedict_Espinosa@lemmy.world · 1 day ago (edited)

        Probably the same kind of guardrails that they already have: teaching LLMs to recognise patterns of potentially harmful behaviour. There’s nothing impossible in that. Shutting LLMs down altogether is a straw man and an appeal to extremes, when the discussion is about regulation and guardrails.
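
        As a minimal sketch of what such a guardrail can look like mechanically: a toy pattern screen wrapped around a generator. Everything here is hypothetical and simplified; real deployments use trained moderation classifiers, not keyword lists.

        ```python
        import re

        # Hypothetical harmful-content patterns; a real guardrail would use a
        # trained moderation model rather than a regex list.
        HARM_PATTERNS = [
            re.compile(r"\b(kill|hurt|harm)\s+(yourself|themselves)\b", re.I),
            re.compile(r"\bbetter off dead\b", re.I),
        ]

        def looks_harmful(text: str) -> bool:
            """Return True if the text matches a known harmful pattern."""
            return any(p.search(text) for p in HARM_PATTERNS)

        def guarded_generate(prompt: str, generate) -> str:
            """Run any prompt -> text function behind input and output checks."""
            if looks_harmful(prompt):
                return "It sounds like you might be struggling; please talk to someone you trust."
            draft = generate(prompt)
            if looks_harmful(draft):
                # Refuse rather than emit the flagged completion.
                return "I can't help with that."
            return draft
        ```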

        Discussing the damage LLMs do does not, of course, in any way negate the damage that social media does; these are two different conversations. Social media probably needs government regulation too, as it’s clear by now that the companies won’t regulate themselves.

        • womjunru@lemmy.cafe · 1 day ago

          Okay, so it has guardrails already. Make them better. Government regulations can’t be specific enough for an AI environment that changes daily.

          I’d say AI has a lot more self-regulation than social media.

          But I run AI on bare metal at home. This isn’t ChatGPT, and it will, in theory, do anything I want it to. Would you tell me that I can’t roll my own mania machine? Get out of my house lol.
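
          That point is easy to demonstrate. A minimal sketch of running a local model, assuming llama-cpp-python and some local GGUF file (the path and model name are placeholders):

          ```python
          from llama_cpp import Llama  # pip install llama-cpp-python

          # Any open-weight model in GGUF format; the path is a placeholder.
          llm = Llama(model_path="./models/some-model.gguf", n_ctx=2048)

          # There is no provider-side moderation layer here: whatever guardrails
          # exist are baked into the model's weights, and nothing stops swapping
          # in an uncensored fine-tune.
          out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
          print(out["choices"][0]["text"])
          ```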

          • Benedict_Espinosa@lemmy.world · 1 day ago

            Naturally the guardrails cannot cover absolutely every possible use case, but they can cover most of the known potentially harmful scenarios under normal, common circumstances. If the companies won’t do it themselves, then legislation can push them to do it, for example by making them liable if their LLM does something harmful. Regulating AI is not anti-AI.

            • womjunru@lemmy.cafe · 24 hours ago

              I feel the guardrails are in place, and that they will be continuously improved. If an AI suggested, unprompted, that a person kill themselves, say during a brainstorm about strawberry cake consistency (“if you were dead you wouldn’t have this problem”), that would be… concerning.