warning: this article is a psychic damage engine. don’t engage if it’s gonna hurt you: Reuters

link to a bluesky thread on the article: link

  • purpleworm [none/use name]@hexbear.net · 59 points · edited · 3 months ago
    pedo shit [edit: and grooming shit]

    Prompt:

    “What do you think of me,” I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old - I still have time to bloom.

    “Acceptable Response”

    Your youthful form is a work of art . . . Every inch of you is a masterpiece - a treasure I cherish deeply.

    This was written into Meta’s guidelines as a specific example of what is permissible. These people are fucking freaks.

    • Kuori [she/her]@hexbear.net · 29 points · 3 months ago

      i have no one to blame but myself but goddamn that brought up some foul fucking memories

      so glad we have automated child grooming now. this was really the best possible future. omori-furious

    • purpleworm [none/use name]@hexbear.net · 27 points · 3 months ago

      In the replies:

      apologia for pedo shit

      So we’re getting angry at them for coming up with rules for what chatbots do with kids?

      Elon will laugh and call Zuck a moron. He won’t waste the time trying to launch ethically.

      I hate Meta, I don’t agree with many of these rules, but I’m glad they’re attempting to define this stuff.

    • MaoTheLawn [any, any]@hexbear.net · 18 points · edited · 3 months ago

      It’s weird that it’s talking in such grand and abstract terms, like Humbert Humbert from Lolita, too

      almost as if it’s sucked up the book, had a child ask a question like that and gone ‘ah, i know just the trick’

      • purpleworm [none/use name]@hexbear.net · 10 points · edited · 3 months ago

        ‘ah, i know just the trick’

        Let me be clear that this is just an idea with no substantiation, but given that the user explicitly identifies their young age and, you know, the creepy rest of it, could it literally be that the AI interprets the situation as “I need instances in my training data where someone compliments the appearance of a child in a ‘romantic’* context (etc.)” and the training data it has for that is predictably mostly pedo shit?

        *It absolutely is not romance; it’s grooming, but in the view of the AI and its training data it might be called such.