LLM inference requires very specific servers that aren't good for much else (in terms of what companies usually do), though, and they go 'obsolete' even more quickly.
I guess what I’m saying is the premise would be pretty flimsy for a more general upgrade.