Talk nerdy to me :D

  • AdrianTheFrog@lemmy.world · 2 days ago

    ollama is the usual one; I think they have install instructions on their GitHub, plus a model repository, etc.
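
    Once the server is running, it listens on localhost:11434 by default, so you can hit the HTTP API from any language. A minimal sketch in Python (the model tag here is just an example, use whatever you've actually pulled):

    ```python
    import requests

    # Ask the local ollama server (default port 11434) for a one-shot completion.
    # "llama3.2" is just an example tag; substitute whatever model you've pulled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": False},
    )
    print(resp.json()["response"])
    ```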

    You can run something on your CPU if you don’t care about speed, or on your GPU, although you can’t run the more capable models without a decent amount of VRAM.
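
    For a rough sense of how much VRAM that means (my own back-of-the-envelope, not an official figure): a 4-bit quantized model takes about half a byte per parameter, plus a GB or two of overhead for the context:

    ```python
    # Very rough VRAM estimate for a 4-bit (Q4) quantized model:
    # ~0.5 bytes per parameter, plus overhead for the KV cache / context.
    def rough_vram_gb(params_billions: float,
                      bytes_per_param: float = 0.5,
                      overhead_gb: float = 1.5) -> float:
        return params_billions * bytes_per_param + overhead_gb

    for size in (7, 14, 32):
        print(f"~{size}B model: roughly {rough_vram_gb(size):.1f} GB")
    ```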

    For models to use, I recommend checking out the Qwen-distilled versions of DeepSeek R1.
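
    A sketch of chatting with one of those through the local API (I believe the Qwen-distilled 14B is tagged `deepseek-r1:14b` in ollama's library, but check the model page; pull it first with `ollama pull deepseek-r1:14b`):

    ```python
    import requests

    # Assumes the model was already pulled, e.g. `ollama pull deepseek-r1:14b`
    # (tag assumed from ollama's library naming; check the model page).
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "deepseek-r1:14b",
            "messages": [{"role": "user", "content": "Talk nerdy to me :D"}],
            "stream": False,
        },
    )
    print(resp.json()["message"]["content"])
    ```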