• Alphane Moon@lemmy.world (OP, mod) · 1 day ago

    From my limited consumer-level perspective, Intel/AMD platforms aren’t mature enough. Look into any open-source or commercial ML software aimed at consumers: Nvidia support is guaranteed and first class.

    The situation is arguably different in gaming.

    • brucethemoose@lemmy.world · edited · 1 day ago

      Intel is not as bad in LLM land as you’d think; llama.cpp’s Intel support gets better every day.
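
      For the skeptical, here’s a minimal sketch of what that looks like from Python. It assumes a llama-cpp-python build compiled against llama.cpp’s SYCL backend (the GGML_SYCL=ON cmake option) so layer offload lands on an Intel Arc GPU; the model path is a placeholder, not a real file.

      ```python
      # Sketch, not a recipe: assumes llama-cpp-python was compiled with
      # llama.cpp's SYCL backend (GGML_SYCL=ON) so offload hits an Intel GPU.
      from llama_cpp import Llama

      llm = Llama(
          model_path="model.gguf",  # placeholder: any GGUF quant you have
          n_gpu_layers=-1,          # -1 = offload every layer to the GPU
          n_ctx=8192,               # context length; VRAM use grows with it
      )

      out = llm("Q: Name one llama.cpp GPU backend. A:", max_tokens=16)
      print(out["choices"][0]["text"])
      ```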

      Nvidia may be first class, but in this case, it doesn’t matter if the model you want doesn’t fit in VRAM. I’d trade my 3090 for a 48GB Arc card without even blinking, even if the setup is an absolute pain.
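
      The back-of-envelope behind that trade, as a rough sketch (weight count × bits per weight, ignoring KV cache and runtime overhead, so the numbers are estimates, not measurements):

      ```python
      def weights_gib(params_b: float, bits_per_weight: float) -> float:
          """Rough quantized-weight footprint in GiB (no KV cache/overhead)."""
          return params_b * 1e9 * bits_per_weight / 8 / 2**30

      # A 70B model at ~4.5 bits/weight (roughly Q4_K_M territory):
      print(round(weights_gib(70, 4.5), 1))  # ~36.7 GiB: too big for a
                                             # 24 GB 3090, fine in 48 GB
      print(round(weights_gib(70, 5.5), 1))  # ~44.8 GiB: still under 48 GB
      ```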

      • brucethemoose@lemmy.world · 1 day ago

        Only because AMD/Intel aren’t pricing competitively. I define “best experience” as the largest LLM/context I can fit on my GPU, and right now that’s essentially dictated by VRAM.
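
        And context isn’t free either: for a plain transformer the KV cache is roughly 2 × layers × KV heads × head dim × context length × bytes per element. A sketch with assumed Llama-2-70B-like shape numbers (not measured figures):

        ```python
        def kv_cache_gib(layers, kv_heads, head_dim, ctx, bytes_per_el=2):
            """Approximate fp16 KV-cache size in GiB for a plain transformer."""
            # two cached tensors per layer (K and V), one vector per KV head/token
            return 2 * layers * kv_heads * head_dim * ctx * bytes_per_el / 2**30

        # Assumed 70B-class shape: 80 layers, 8 KV heads (GQA), head_dim 128
        print(kv_cache_gib(80, 8, 128, 32768))  # 10.0 GiB at 32k context,
                                                # on top of the weights
        ```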

        That being said, I get why most people wouldn’t want to go through the fuss of setting up Intel/AMD inference.