• Chloë (she/her)@lemmy.blahaj.zone
    6 hours ago

    Self-host your LLMs. Qwen3:14b is fast, open source, and answers code questions with very good accuracy.

    You only need Ollama and a Podman container (for Open WebUI).
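
    A minimal sketch of that setup, assuming Ollama is installed on the host and using Open WebUI's published container image (the port mapping and volume name here are just examples):

    ```shell
    # Pull the model locally with Ollama (model tag as named in the comment)
    ollama pull qwen3:14b

    # Run Open WebUI in a Podman container, pointed at the host's Ollama API.
    # host.containers.internal resolves to the host from inside the container
    # on recent Podman versions; adjust if yours differs.
    podman run -d -p 3000:8080 \
      -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
      -v open-webui:/app/backend/data \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main
    ```

    Open WebUI should then be reachable at http://localhost:3000 and will list any models Ollama has already pulled.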

    • dyc3@lemmy.world
      5 hours ago

      Frankly, I don’t think you seriously tested anything that you’ve mentioned here.

      Nobody’s using Qwen because it doesn’t do tool calls. Nobody really uses Ollama for serious workloads, because most people don’t own hardware powerful enough to make it good.

      That’s not to say that I don’t want self-hosted models to be good. I absolutely do. But let’s be realistic here.