• eatCasserole@lemmy.world · 1 day ago

    I saw a study recently that found, when using “AI”, people are more likely to lie/cheat/steal.

  • Cethin@lemmy.zip · 1 day ago

      I wonder if that study accounted for a self-selection bias. Could it just be that people who use AI were already people who lie/cheat/steal more often?

    • eatCasserole@lemmy.world · 1 day ago

        I had the same thought, but no, it was a controlled experiment: participants were given tasks that might or might not involve an AI tool, and the ones involving AI came back with less honest answers.

      • shawn1122@sh.itjust.works · 22 hours ago

          What was the speculated rationale in the discussion? Was it that humans feel less accountable if the work is done by AI?

        • eatCasserole@lemmy.world · 12 hours ago

            Basically, yeah. I found the article I read:

            “Using AI creates a convenient moral distance between people and their actions — it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans,”

      • k0e3@lemmy.ca · 21 hours ago

          And by lie, is that with intent? Or spreading misinformation without knowing?

        • eatCasserole@lemmy.world · 12 hours ago

            With intent. Here’s an example:

            In one experiment, participants would roll dice and report the number that turned up, either honestly or dishonestly, and then get paid based on the number they reported, with bigger numbers meaning a higher payout. Some participants were given the option of telling the number to an AI model (again, either honestly or dishonestly), which would then report the dice outcome to the researchers.