• IngeniousRocks (They/She) @lemmy.dbzer0.com
      4 days ago

      A relatively recent gaming-type setup running LocalAI or llama.cpp is what I’d recommend.

      I do most of my AI stuff on an RTX 3070, but I also have a Ryzen 7 3800X with 64 GB of RAM for heavy models, where I don’t care so much how long generation takes but need the higher parameter count for whatever reason, for example MoE or agentic workloads. A minimal sketch of that kind of CPU/GPU split is below.
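      One way to do that split, as a sketch: assuming the llama-cpp-python bindings (the llama.cpp CLI has an equivalent --n-gpu-layers flag), you offload only as many layers as fit in VRAM and let the rest run from system RAM. The model path and layer count here are made-up examples, not a recommendation:

      ```python
      # Partial GPU offload: put as many layers as fit in the GPU's VRAM,
      # spill the rest into system RAM (slower, but the model still runs).
      from llama_cpp import Llama

      llm = Llama(
          model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical local file
          n_gpu_layers=12,  # layers offloaded to the GPU; tune to your VRAM
          n_ctx=4096,       # context window
          n_threads=8,      # e.g. the 3800X's 8 physical cores
      )

      out = llm("Q: What is mixture-of-experts? A:", max_tokens=64)
      print(out["choices"][0]["text"])
      ```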

    • GeneralDingus@lemmy.cafe
      4 days ago

      I’m not sure what you mean by ideal. Like, able to run any model you ever wanted? Probably the latest NVIDIA AI chips.

      But you can get away with a lot less for smaller models. I have a mid-range AMD card from 4 years ago (I forget the model off the top of my head) and can run 8B-parameter text models without issue. The back-of-the-envelope math below shows why.
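      Rough sizing, as a sketch: a quantized model needs roughly its bits-per-weight divided by 8 bytes per parameter, plus some overhead for the KV cache and compute buffers. The bits-per-weight and overhead figures below are ballpark assumptions for common GGUF quant types, not exact numbers:

      ```python
      # Back-of-the-envelope VRAM estimate for a quantized 8B model.
      # Bits-per-weight values are rough averages for common GGUF quants.
      BITS_PER_WEIGHT = {"Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

      def approx_vram_gb(params_billion: float, quant: str, overhead_gb: float = 1.5) -> float:
          """Weights plus an assumed ~1.5 GB for KV cache and buffers."""
          weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
          return weights_gb + overhead_gb

      for quant in BITS_PER_WEIGHT:
          print(f"8B @ {quant}: ~{approx_vram_gb(8, quant):.1f} GB")
      # ~10.0 / ~7.2 / ~6.3 GB -> all fit on a 12 GB mid-range card
      ```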