Pro@programming.dev to Technology@lemmy.world · English · 3 days ago
Do chatbots have a moral compass? Researchers turn to Reddit to find out. (news.berkeley.edu)
cross-posted to: Technology@programming.dev
Marshezezz@lemmy.blahaj.zone · 3 days ago
No, they do what they’ve been programmed to do because they’re inanimate
Electricblush@lemmy.world · 3 days ago
A better headline would be that they analyzed the embedded morals in the training data… but that would be far less clickbait…
Marshezezz@lemmy.blahaj.zone · 3 days ago
They’ve created a dilemma for themselves cos I won’t click on anything with a clickbait title
Cousin Mose@lemmy.hogru.ch · 2 days ago
Right? Why the hell would anyone think this? There are a lot of articles lately like “is AI alive?” Please, it’s 2025 and it can hardly do autocomplete correctly.