Back at the end of July, we had a discussion with CPU maker AMD and the topic of conversation was hybrid cloud. In the course of that conversation, one of
Really, I had two issues with the interview.
First, about half of it is spent talking about AI garbage that’s irrelevant to pretty much everything. Their argument is essentially “the current off-the-shelf AI setups are built with ARM chips as the general-purpose compute tying together the specialized accelerators doing the actual work,” which might be true but doesn’t explain why that should continue to be the case. It’s a correlation-does-not-equal-causation sort of thing.
Second, for something like 99% of the companies out there doing cloud deployments this is all utterly irrelevant. Most businesses aren’t so hyper-focused on shaving clock cycles that they obsess over how microarchitecture decisions impact performance. The reality is that for 99% of services I/O is going to be the bottleneck, and no amount of twiddling with the CPU architecture will improve that in any meaningful fashion. Sure, your Amazons and Googles and maybe the fintech sector might care, but for your Walmarts and Bass Pro Shops it’s utterly irrelevant, except maybe as a way to shave a little cost off an AWS deployment.
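To make the I/O-bound point concrete, here’s a minimal sketch (the URL and the handler are stand-ins I made up, not anything from the interview): it times a single downstream call and reports how little of the wall-clock time was actually spent executing on the CPU. For a service shaped like this, swapping in a different CPU microarchitecture moves almost nothing.

```python
import time
import urllib.request


def handle_request(url: str) -> float:
    """Time one request and report CPU time vs. wall-clock time.

    Stand-in for a typical request handler whose "work" is a database
    query or downstream API call. (URL and structure are illustrative.)
    """
    wall_start = time.perf_counter()   # elapsed real time
    cpu_start = time.process_time()    # time actually spent on the CPU

    # The "work": one downstream network call.
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()

    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    print(f"wall: {wall * 1000:.1f} ms, cpu: {cpu * 1000:.1f} ms "
          f"({cpu / wall:.1%} of the request was actual compute)")
    return cpu / wall


if __name__ == "__main__":
    # Hypothetical endpoint; point it at any real service you run.
    handle_request("https://example.com/")
```

On an I/O-bound handler like this, the CPU fraction typically comes out in the low single digits, which is the whole argument: the waiting dominates, not the instructions.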
As for the consumer market, this is even more irrelevant. If you’re not currently in the market for an EPYC server, none of this matters to you, which is a shame, because Apple’s success with its ARM CPUs provides an opening for a genuinely interesting discussion about the relative technical merits of x86 vs. ARM, and maybe even RISC-V. The interview doesn’t really touch on technical merits either; it’s almost entirely a market-focused piece with very little in the way of concrete “ARM beats x86 in this way,” beyond a vague, hand-wavy “it has a more consistent microarchitecture.”
Agreed. I think Nvidia’s use of ARM in its GPU compute systems is more of a cost consideration than anything else.