Only because AMD/Intel aren’t pricing competitively. I define “best experience” as the largest LLM/context I can fit on my GPU, and right now that’s essentially dictated by VRAM.
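To make that concrete, here's a rough sketch of the arithmetic: weights plus KV cache is what has to fit in VRAM. The model shape numbers below (70B params, 80 layers, 8 KV heads via GQA, 128 head dim) are illustrative assumptions, roughly Llama-70B-like, not exact figures for any specific model:

```python
# Back-of-envelope VRAM estimate: quantized weights + KV cache.
# All model-shape numbers here are illustrative assumptions.

def vram_gb(params_b, bits_per_weight, n_layers, n_kv_heads, head_dim,
            ctx_len, kv_bits=16):
    # Weight memory: parameter count times bits per weight.
    weights = params_b * 1e9 * bits_per_weight / 8
    # KV cache: 2 (K and V) per layer, per KV head, per token.
    kv = 2 * n_layers * n_kv_heads * head_dim * ctx_len * kv_bits / 8
    return (weights + kv) / 1e9

# Hypothetical 70B model at 4-bit quant with a 32k context:
print(round(vram_gb(70, 4, 80, 8, 128, 32_768), 1))  # → 45.7
```

So even heavily quantized, a model that size with long context needs ~46 GB — more than any single consumer NVIDIA card offers, which is exactly where AMD/Intel could compete on VRAM per dollar.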
That being said, I get how most wouldn’t want to go through the fuss of setting up Intel/AMD inference.
An important thing to note is that Intel no longer has an enterprise-class GPU, and appears to have abandoned most plans for one.
So while consumer GPU inference is great, and would seed support for their future laptop/desktop iGPUs, Intel is no longer in the same boat as AMD, whose consumer efforts seed support for the enterprise MI300X.
I see basically zero chance of Intel supplanting CUDA for this reason, especially if they don’t foster cooperation with anyone else.