• IngeniousRocks (They/She) @lemmy.dbzer0.com
    4 days ago

    A relatively recent gaming-type setup running local-ai or llama.cpp is what I’d recommend.

    I do most of my AI stuff on an RTX 3070, but I also have a Ryzen 7 3800X with 64 GB of RAM for heavy models, where I don’t care so much how long it takes but need the high parameter count for whatever reason, for example MoE models or agentic workloads.
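
    If you go the llama.cpp route, the GPU/CPU split is basically one knob. Here’s a rough sketch using the llama-cpp-python bindings; the model path and layer counts are just placeholders, so swap in whatever GGUF model you actually download:

    ```python
    # Rough sketch, assuming llama-cpp-python is installed and you have a GGUF file locally.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/example-model.Q4_K_M.gguf",  # hypothetical filename
        n_gpu_layers=20,  # offload what fits in ~8 GB VRAM; 0 = CPU-only, -1 = offload everything
        n_ctx=4096,       # context window
    )

    out = llm("Q: Why run models locally?\nA:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```

    On the 3070 I offload as many layers as fit in VRAM; for the big MoE stuff I set the offload to 0 and just let it chew through system RAM.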