• anamethatisnt@lemmy.world · 1 day ago

    Had a discussion with @brucethemoose@lemmy.world touching on this over at !technology@lemmy.world yesterday (https://lemmy.world/post/23245782).
    Well, to be exact, inexperienced me asked bruce questions.
    The most interesting part for me would be how the rumored clamshell Arc GPUs could upset the balance if the price is right.
    If a 24GB B580 or a 32GB B770 becomes available at a much lower price than Nvidia/AMD offerings, how would that affect market share and software development in the field?

    • brucethemoose@lemmy.world · 1 day ago

      An important thing to note is that Intel no longer has an enterprise-class GPU, and appears to have abandoned most plans for one.

      So while consumer GPU inference is great, and would seed support for their future laptop/desktop IGPs, Intel is not in the same boat as AMD, whose consumer efforts would seed support for the enterprise MI300X.

      I see basically zero chance of Intel supplanting CUDA for this reason, especially if they don’t foster cooperation with anyone else.

    • Alphane Moon@lemmy.world (OP/mod) · 1 day ago

      I can’t speak to the nitty-gritty details or enterprise-scale technology, but from a consumer perspective (local ML upscaling and LLM use, with both proprietary and free tools), Nvidia clearly has the upper hand in software support.

      • anamethatisnt@lemmy.world · 1 day ago

        How cheap would rival high-VRAM offerings have to be to upset the balance and move devs towards Intel/AMD?
        Do you think their current platform offerings are mature enough to grab market share with “more for less” hardware, or is the software-support advantage just too large?

        • vzq@lemmy.world · 1 day ago

          They need to be substantially cheaper and (more importantly) they need loads more memory.

          The problem is that everyone (chiefly Nvidia, but not only them) is afraid of hurting their professional offerings by introducing consumer-grade ML cards. They are not afraid of Joe having to use a smaller model for AI on his security cameras; they are afraid of large companies ditching all their A100 cards for consumer equipment.

          So they try to segment the market any way they can think of, and Joe gets screwed.

          It’s a classic market failure, really.

          • brucethemoose@lemmy.world · 1 day ago

            The bizarre thing about this is that AMD’s workstation card volume is comically small, and Intel’s is probably nonexistent.

            On the high end… Intel literally discontinued their HPC GPUs. The AMD MI300X is doing OK, but clearly suffering from a lack of grassroots software support.

            WTF are they afraid of losing?

        • Alphane Moon@lemmy.world (OP/mod) · 1 day ago

          From my limited consumer-level perspective, Intel/AMD platforms aren’t mature enough. Look into any open-source or commercial ML software aimed at consumers: Nvidia support is guaranteed and first class.

          The situation is arguably different in gaming.

          • brucethemoose@lemmy.world · 1 day ago

            Intel is not as bad in LLM land as you’d think. Llama.cpp support gets better every day.
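
            For anyone curious what that looks like in practice, here’s a minimal sketch using the llama-cpp-python bindings (the model path and prompt are placeholders; on an Arc card the package has to be built against llama.cpp’s SYCL or Vulkan backend instead of CUDA):

            ```python
            # Minimal llama.cpp inference through the llama-cpp-python bindings.
            # On Intel Arc only the build step differs (SYCL/Vulkan instead of
            # CUDA); the Python API below is identical either way.
            from llama_cpp import Llama

            llm = Llama(
                model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
                n_gpu_layers=-1,  # offload every layer to the GPU
                n_ctx=8192,       # context window; VRAM use grows with this
            )

            out = llm("Q: Why does VRAM matter for local LLMs? A:", max_tokens=128)
            print(out["choices"][0]["text"])
            ```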

            Nvidia may be first class, but in this case, it doesn’t matter if the model you want doesn’t fit in VRAM. I’d trade my 3090 for a 48GB Arc card without even blinking, even if the setup is an absolute pain.

            • brucethemoose@lemmy.world · 1 day ago

              Only because AMD/Intel aren’t pricing competitively. I define “best experience” as the largest LLM/context I can fit on my GPU, and right now that’s essentially dictated by VRAM.

              That being said, I get how most wouldn’t want to go through the fuss of setting up Intel/AMD inference.
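
              To put a number on that, here’s a back-of-envelope sketch (the model figures are illustrative assumptions, roughly 70B-class with GQA, not measurements) of how weights plus KV cache eat VRAM:

              ```python
              # Rough VRAM estimate for a transformer LLM: quantized weights
              # plus an fp16 KV cache. All model numbers are illustrative.
              def vram_gb(params_b, weight_bits, n_layers, n_kv_heads, head_dim, ctx, kv_bits=16):
                  weights = params_b * 1e9 * weight_bits / 8                     # bytes of weights
                  kv = 2 * n_layers * n_kv_heads * head_dim * ctx * kv_bits / 8  # K and V caches
                  return (weights + kv) / 1e9

              # 70B parameters at 4-bit quantization, 8k context, 8 KV heads:
              print(f"{vram_gb(70, 4, 80, 8, 128, 8192):.1f} GB")  # ~37.7 GB: fits 48 GB, not 24
              ```

              Even before runtime overhead, that’s out of reach for any 24GB card, hence the appeal of cheap high-VRAM Arc hardware.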