• laranis@lemmy.zip
    2 hours ago

People are hating on this overly agreeable stance from LLMs, but in reality the executives pouring money into this stuff want “yes men” from their subordinates, including their AI subordinates.

    It is absolutely a feature.

  • tuckerm@feddit.onlineOP
    20 hours ago

    I’m posting this because it’s a great example of how LLMs do not actually have their own thoughts, or any sort of awareness of what is actually happening in a conversation.

    It also shows how completely useless it is to have a “conversation” with someone who is just in agreeability mode the whole time (a.k.a. “maximizing engagement mode”) – offering up none of their own thoughts, but just continually prompting you to keep talking. And honestly, some people act that way too. And other kinds of people crave a conversation partner who acts that way, because it makes them the center of attention. It makes you feel interesting when the other person is endlessly saying, “You’re right! Go on.”

    • tuckerm@feddit.onlineOP
      20 hours ago

      Unfortunately, I’m not sure about that. Plenty of people who use ChatGPT end up thinking that it is sentient and has its own thoughts, but that’s because they don’t realize how much they are having to drive the conversation.