• 17 Posts
  • 833 Comments
Joined 2 years ago
Cake day: August 15th, 2023

  • These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns

    lulzwut? LLMs aren’t internalizing jack shit. If they exhibit a bias, it’s because of how they were trained. A quick theory: the interwebs is packed to the brim with stories of “all in” behaviors intermixed with real strategy, fiction or otherwise. I’d speculate there are more forum stories of people winning by doing stupid shit than there are of people losing because of stupid shit — survivorship bias baked right into the training data.

    They exhibit human bias because they were trained on human data. If I told the LLM to make only strict probability-based decisions favoring safety (and it didn’t “forget” context, and it ignored any kind of “reasoning”), the odds might be in its favor.

    Sorry, I will not read the study because of that one sentence in its summary.
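    To make the “probability-based decisions favoring safety” point concrete, here’s a toy simulation of my own (not from the study, and the 60% win rate is just an assumed number): with even-money bets that are actually favorable, going all in every round still almost guarantees ruin, while wagering a fixed fraction of the bankroll survives.

```python
import random

def simulate(fraction, rounds=100, trials=10_000, win_prob=0.6):
    """Repeated even-money bets with a win_prob chance of winning each round.

    fraction: share of the current bankroll wagered per round (1.0 = all in).
    Returns the share of trials that never go broke.
    """
    survived = 0
    for _ in range(trials):
        bankroll = 100.0
        for _ in range(rounds):
            stake = bankroll * fraction
            if random.random() < win_prob:
                bankroll += stake
            else:
                bankroll -= stake
            if bankroll < 1e-9:  # busted: one lost all-in bet ends the run
                break
        if bankroll >= 1e-9:
            survived += 1
    return survived / trials

random.seed(0)
print(simulate(1.0))  # all in: survival needs 100 straight wins, ~0.0
print(simulate(0.2))  # fractional bets never hit zero, so 1.0
```

    Even with the odds in your favor every single round, the all-in strategy loses everything almost surely, which is roughly what “favoring safety” buys you.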


  • I’ll never drink again, but there are still some days when I wish my mind could be as numb as it was while I was a raging alcoholic. That thought is usually replaced by remembering how shitty I always felt and how I didn’t give a fuck about anything. Life was a blur.

    A mostly clear mind and a recovering body are very good things. Daily stress is easily managed with regular exercise, and chronic anxiety and depression are only a tiny fraction of what they once were. It’s a good life now.

    I believe the lifestyle changes not only lengthened my life but also stretched out my perceived time.




  • I would tweak that a hair and tell people just to make an account somewhere and observe for a bit. Lemmy can have some very distinct groups that reside on very specific instances. Or not. It’s a “choose your own adventure” kind of scenario, IMHO.

    It took about six months or so for me to settle into .ca after bouncing around a bit. It’s not really a pain to switch instances, but I personally like my chat history in one spot and I like the concept of a ‘home instance’.

    Depending on your client and your settings, your feed could have a bias that leans toward posts from your home instance, so that’s worth noting. Not saying that’s bad or good, it just is what it is.



  • When I use it, I use it to create single functions that have known inputs and outputs.

    If absolutely needed, I use it to refactor old shitty scripts that need to look better before someone else has to use them.

    I always do a line-by-line analysis of what the AI is suggesting.

    Any time I have leveraged AI to build out a full script with all desired functions all at once, I end up deleting most of the generated code. Context and “reasoning” can actually ruin the result I am trying to achieve. (Some models just love to add command-line switch handling for no reason. That can fundamentally change how an app is structured, which is not always desired.)
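    For example, the kind of single function I mean looks like this (parse_size is just a made-up illustration, not from any real project of mine); the known inputs and outputs double as the line-by-line review checklist:

```python
def parse_size(text: str) -> int:
    """Convert a human-readable size like '4K' or '1.5M' to bytes."""
    units = {"B": 1, "K": 1024, "M": 1024**2, "G": 1024**3}
    text = text.strip().upper()
    if text[-1] in units:
        # split off the unit suffix, scale the numeric part
        return int(float(text[:-1]) * units[text[-1]])
    return int(text)  # bare number means bytes

# Known inputs and outputs make the suggestion easy to verify:
assert parse_size("4K") == 4096
assert parse_size("1.5M") == 1572864
assert parse_size("100") == 100
```

    If the generated function can’t pass a handful of asserts I wrote before prompting, it gets deleted, same as the full-script output.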