• anamethatisnt@lemmy.world · 11 months ago

    Thanks for the insight. Kinda sad how self-hosted LLM or ML means Nvidia is a must-have for the best experience.

    • brucethemoose@lemmy.world · 11 months ago

      Only because AMD/Intel aren’t pricing competitively. I define “best experience” as the largest LLM/context I can fit on my GPU, and right now that’s essentially dictated by VRAM.
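      To make that concrete, here's a rough back-of-envelope sketch in plain Python of why VRAM is the ceiling: the weights take a fixed chunk and the KV cache grows with context length. The specific numbers (a 70B-class model with grouped-query attention, ~4.5 bits per weight after quantization, fp16 cache, 32k context) are illustrative assumptions, not measurements.

      ```python
      # Back-of-envelope VRAM estimate for local LLM inference:
      # total ~= quantized weights + KV cache (activations/overhead ignored).

      def weight_bytes(params_billion: float, bits_per_weight: float) -> float:
          """Approximate size of the model weights in bytes."""
          return params_billion * 1e9 * bits_per_weight / 8

      def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                         context_tokens: int, bytes_per_value: int = 2) -> float:
          """KV cache: keys + values per layer, linear in context length."""
          return 2 * layers * kv_heads * head_dim * context_tokens * bytes_per_value

      GiB = 1024 ** 3

      # Hypothetical 70B-class model (80 layers, 8 KV heads, head dim 128),
      # ~4.5 bits/weight quantization, fp16 KV cache, 32k-token context.
      weights = weight_bytes(params_billion=70, bits_per_weight=4.5)
      kv = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, context_tokens=32_768)

      print(f"weights:  {weights / GiB:.1f} GiB")          # ~36.7 GiB
      print(f"kv cache: {kv / GiB:.1f} GiB")               # ~10.0 GiB at 32k context
      print(f"total:    {(weights + kv) / GiB:.1f} GiB")   # ~47 GiB -> needs a 48 GiB-class card
      ```

      Same math on a 24 GiB card forces either a much smaller model or a much shorter context, which is the sense in which VRAM dictates the "best experience".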

      That being said, I get how most wouldn’t want to go through the fuss of setting up Intel/AMD inference.