On a deeper level than small talk, of course.

  • LangleyDominos [none/use name]@hexbear.net
    2 days ago

    Given the way LLMs function, they will have a hard time with therapy. ChatGPT’s context window is 128k tokens. As you chat, your prompts and its replies accumulate and fill the context window; the model also has to re-read its own responses for context, which fills the window further. LLMs perform poorly with nearly empty context windows and with nearly full ones. As the window approaches full, the model starts hallucinating and giving degraded responses. Eventually it can only attend to parts of your conversation, because you’ve blown past the 128k-token mark.
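    The sliding-window effect described above can be sketched in a few lines. This is an illustrative model, not ChatGPT's actual behavior: the 128k limit comes from the comment, while the ~4-characters-per-token estimate, the message sizes, and the newest-first truncation rule are all assumptions for the sake of the demo.

    ```python
    # Rough sketch of how a long chat transcript overflows a fixed context
    # window. All numbers besides the 128k limit are illustrative assumptions.

    CONTEXT_WINDOW = 128_000  # tokens (figure from the comment)

    def estimate_tokens(text: str) -> int:
        """Crude token estimate: ~4 characters per token (assumption)."""
        return max(1, len(text) // 4)

    def visible_history(messages: list[str], limit: int = CONTEXT_WINDOW) -> list[str]:
        """Return the newest messages that still fit in the window.

        Once the transcript exceeds the limit, the oldest turns fall out,
        so the model can no longer 'see' the start of the conversation.
        """
        kept, used = [], 0
        for msg in reversed(messages):  # walk newest-first
            cost = estimate_tokens(msg)
            if used + cost > limit:
                break                   # everything older is dropped
            kept.append(msg)
            used += cost
        return list(reversed(kept))

    # Simulate a long back-and-forth: 2,000 turns of roughly 400 tokens each.
    transcript = [f"turn {i}: " + "x" * 1_600 for i in range(2_000)]
    seen = visible_history(transcript)
    print(f"total turns: {len(transcript)}, turns still in window: {len(seen)}")
    print("earliest visible turn:", seen[0][:12])
    ```

    Run it and only the last few hundred turns survive; everything earlier (the start of the "therapy") is simply gone from the model's view.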

    The ways to mitigate this problem have to be carried out by the user, and they disrupt therapy.