

There’s always dookie in the banana stand?


Oh, we all know what’s in the Russian kompromat, Donnie, and what you really really want is the Nobel Piss Prize.
The only reason I won’t piss on your grave once you’re gone is because you’d no doubt enjoy it.


No worries mate, we can’t all be experts in every field and every topic!
Besides, there are other AI models that are relatively small and depend on processing power more than RAM. For example there’s a bunch of audio analysis tools that don’t just transcribe speech but also diarise it (split it up by speaker), extract emotional metadata (e.g. certain models can detect sarcasm quite well, others spot general emotions like happiness, sadness or anger), and so on; rough sketch of that below. Image categorisation models are also super tiny, though usually you’d want to load them into the DSP-connected NPU of appropriate hardware (e.g. a newer-model “smart” CCTV camera would be using a SoC that has an NPU to load detection models into, and do the processing for detecting people, cars, animals, etc. onboard instead of on your NVR).
Also by my count, even somewhat larger training setups, such as micro wakeword training, would fit into the 192MB of V-Cache.
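Something like this on the audio side. Assumes faster-whisper and pyannote.audio are installed; the checkpoint names are just the publicly documented ones, and “meeting.wav” is a made-up input:

```python
# Hedged sketch: transcribe + diarise a clip with small local models.
# Checkpoint names are the publicly documented defaults; swap in whatever
# you actually use. "meeting.wav" is a hypothetical input file.
from faster_whisper import WhisperModel
from pyannote.audio import Pipeline

AUDIO = "meeting.wav"

# Small Whisper variant, int8-quantised, happily runs on CPU
stt = WhisperModel("base", device="cpu", compute_type="int8")
segments, _info = stt.transcribe(AUDIO)

# Speaker diarisation; add use_auth_token="..." for the gated checkpoint
diar = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
annotation = diar(AUDIO)

# Naive merge: tag each transcript segment with whoever is speaking
# at its midpoint
for seg in segments:
    mid = (seg.start + seg.end) / 2
    speaker = next(
        (label for turn, _, label in annotation.itertracks(yield_label=True)
         if turn.start <= mid <= turn.end),
        "unknown",
    )
    print(f"[{speaker}] {seg.text.strip()}")
```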


AI workflows aren’t limited to LLMs, you know.
For example, TTS and STT models are usually small enough (15-30MB) to be loaded directly into V-Cache. I was thinking of such small-scale local models, especially when you consider AMD’s recent forays into providing a mixed-environment runtime for their hardware (the GAIA framework, which can dynamically run your ML models on CPU, NPU and GPU, all automagically).
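I haven’t played with GAIA itself yet, but the underlying idea is roughly what ONNX Runtime’s execution providers already give you, minus the “automagic”. A minimal sketch, assuming you have onnxruntime installed and some small STT model exported to ONNX (“stt_small.onnx” is a placeholder); the VitisAI provider is what AMD’s Ryzen AI stack exposes for the NPU:

```python
# Hedged sketch: prefer NPU, then GPU, then CPU for a small ONNX model.
# onnxruntime tries the providers in list order and falls back down it.
import onnxruntime as ort

preferred = [
    "VitisAIExecutionProvider",  # AMD NPU (Ryzen AI), if that package is installed
    "DmlExecutionProvider",      # DirectML-capable GPU on Windows
    "CPUExecutionProvider",      # always-available fallback
]
# Only request providers this particular onnxruntime build actually ships
available = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("stt_small.onnx", providers=available)
print("Running on:", session.get_providers()[0])
```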


Disappointing but not unexpected. Most Chinese companies still operate under the “absolute secrecy, because competitors might steal our tech” ideology. Which hinders a lot of things…


What, you don’t have a few spare photonic vacuums in your parts drawer?


Well, yeah, when management is made up of dumbasses, you get this. And I’d argue some 90% of all management is absolute waffle when it comes to making good decisions.
AI can and does accelerate workloads if used right. It’s a tool, not a replacement for a person. You still need someone who can utilise the right models, research the right approaches and so on.
What companies need to realise is that AI accelerating things doesn’t mean you can cut your workforce by 70-90% and still keep the same deadlines; it means that with the same workforce you can deliver things 3-4 times faster. And faster delivery means new products (be it a new feature or a truly brand-new standalone product) have a lower cost basis even though the same number of people worked on them, and the quicker cadence means a quicker idea-to-profit timeline.


It actually makes some sense.
On my 7950X3D setup the main issue was always making sure to pin games to a specific CCD, and AMD’s tooling is… quite crap at that. Identifying the right CCD was always problematic for me.
Eliminating this by adding V-Cache to both CCDs, so it doesn’t matter which one you pin to, is a good workaround. And IIRC V-Cache helps certain (local) AI workflows as well, meaning running a game next to such a model won’t cause issues, as each gets its own CCD to run on.
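For reference, this is the kind of manual pinning I mean. A sketch with psutil, assuming the V-Cache CCD is CCD0 = logical CPUs 0-15 (with SMT on), which you’d want to verify against your own chip’s topology first:

```python
# Hedged sketch: pin an already-running game to the V-Cache CCD by PID.
# On a 7950X3D the V-Cache CCD is usually CCD0 = logical CPUs 0-15 with
# SMT enabled, but check your own topology before trusting this.
import psutil

VCACHE_CPUS = list(range(16))  # assumption: CCD0 = logical CPUs 0-15

def pin_to_vcache_ccd(pid: int) -> None:
    proc = psutil.Process(pid)
    proc.cpu_affinity(VCACHE_CPUS)  # restrict scheduling to those CPUs
    print(f"{proc.name()} (pid {pid}) pinned to CPUs {VCACHE_CPUS}")

# e.g. pin_to_vcache_ccd(12345)  # hypothetical game PID
```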


Well SOMEONE keeps voting for them enough to win. And Trump did win the popular vote this time around.