• 0 Posts
  • 52 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle





  • Eh, that’s not quite true. There is a general alignment tax, meaning aligning the LLM during RLHF lobotomizes it some, but we’re talking about usecase specific bots, e.g. for customer support for specific properties/brands/websites. In those cases, locking them down to specific conversations and topics still gives them a lot of leeway, and their understanding of what the user wants and the ways it can respond are still very good.


  • Depends on the model/provider. If you’re running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.

    With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.





  • I hear you, but you counter is a little disingenuous. A city isn’t your pack, your social circle is. And that’s probably smaller than 42. Just because you’re in a city of millions doesn’t mean you directly interact with them in a social way, we are still animals of small social groups, even if we live in a global space of billions. The alpha male stuff is still bunk science trash of course.










  • danielbln@lemmy.worldtoMicroblog Memes@lemmy.worldI know what you're doing!
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    3
    ·
    7 months ago

    I don’t know what animals you’ve had, but my cat cleans herself (including paws) pretty much right after leaving the litter box, and continues to clean herself all throughout the day.

    Wait until I’ll tell you what human children do with their hands before proceeding to put them all over the place.

    I bet your place looks like a Dexter kill room.