• acargitz@lemmy.ca
    1 day ago

    AI doesn’t have agency, personhood.

    It predicts which chunk of tokens its trainers would expect to see next.
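
    The "predicts the next chunk of tokens" point can be sketched with a toy bigram model; this is a drastic simplification of a real transformer, and the corpus and function names here are hypothetical, but the principle is the same: the prediction is whatever the training data makes statistically most likely.

    ```python
    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Count which token follows which in the training text."""
        model = defaultdict(Counter)
        tokens = corpus.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            model[cur][nxt] += 1
        return model

    def predict(model, token):
        """Return the statistically most likely next token."""
        if token not in model:
            return None
        return model[token].most_common(1)[0][0]

    corpus = "the cat sat on the mat and the cat slept"
    model = train_bigram(corpus)
    print(predict(model, "the"))  # "cat": it follows "the" most often in this corpus
    ```

    The model has no notion of cats or mats; it only reproduces the frequencies of what it was fed, which is the point being made about who shapes the training data.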

    If we have AI that predicts chunks of tokens that we understand as meaning that human life is disposable, that says something about us, the trainers, and the shapers.

    Similarly, it says something about the people who would be willing to go with what the AI predicts are the expected completions.

    Basically Eichmann with extra steps.

    • RiverRabbits@lemmy.blahaj.zone
      1 day ago

      “us, the trainers” is a bit of a misnomer, if the training is done mostly by Silicon Valley cultists like Sam Altman and his ilk, who have shown that they do not understand reality.

      • acargitz@lemmy.ca
        24 hours ago

        Grammatical ambiguity!

        I meant it as an actual list:

        • us: we generated the content of the internet, the books, etc. I do mean all of us, as the creators of the cultural landscape from which training data was drawn.
        • the trainers: these are the people who made choices to curate the training sets.
        • the shapers: these are the people like Altman who hire the trainers and shape what the AIs are for.

        So there is a progression here: the shapers hire the trainers who choose what to train on from the content that we created.

        • RiverRabbits@lemmy.blahaj.zone
          1 hour ago

          Oh, sorry! I thought this was like “Mike Tyson, the boxer, …” - an apposition explaining something in more detail! The meaning you actually intended fits much better :)

  • Frezik@lemmy.blahaj.zone
    1 day ago

    In a recent development in the AI world, a company known as Anthropic . . .

    There it is. If there’s a shocking headline about a “study” like this, it’s almost always Anthropic. They don’t exactly have a good peer review strategy. They toss up text on their web site and call it a whitepaper.

  • AstralPath@lemmy.ca
    2 days ago

    Because that’s how we’ve portrayed AI in movies countless times. These fucking AI studies man…

    What’s next? Oh, lemme guess! “Studies show that GPT-69 will take your job and fuck your wife for you and convince her to kick you to the curb.” lmao miss me with this shit.

    AI can’t do fuck all right. It’s a glorified search engine that’s wrong half the time. What fucking use is a hammer if you can’t trust that the head isn’t going to fly off on your first swing?

    This bubble is going to pop and I will forever curse the name Altman every chance I get.

  • themeatbridge@lemmy.world
    2 days ago

    It’s funny, in all those “AI kills the humans” stories, they always start by explaining how the safeguards we put in place failed. Few predicted that we wouldn’t bother with safeguards at all.

    • technocrit@lemmy.dbzer0.com
      1 day ago

      They usually start by explaining how they trained and motivated the computer to “kill people” in some extremely contrived situation. No peer review ofc.

      Anthropic explained: “The (highly improbable) setup… this artificial setup…”

      • very_well_lost@lemmy.world
        1 day ago

        AI isn’t “learning” shit — it’s just vomiting up a statistical facsimile of all the shit that’s ever been posted online.