Oh and I typically get 16-20 tok/s running a 32b model on Ollama using Open WebUI. Also I have experienced issues with 4-bit quantization for the K/V cache on some models myself so just FYI
It really depends on how you quantize the model and the K/V cache as well. This is a useful calculator: https://smcleod.net/vram-estimator/ I can comfortably fit most 32b models quantized to 4-bit (usually Q4_K_M or IQ4_XS) on my 3090’s 24 GB of VRAM with a reasonable context size. If you’re going to need a much larger context window to feed in large documents etc., then you’d want to go smaller on model size (14b, 27b, etc.), get a multi-GPU setup, or use something with unified memory and a lot of RAM (like the Mac Minis others are mentioning).
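If you just want a rough sanity check without the calculator, the back-of-envelope math is: weights ≈ params × (bits / 8), and K/V cache ≈ 2 × layers × context length × KV heads × head dim × (bits / 8). A minimal sketch of that (the layer/head/dim numbers below are assumptions for a typical 32b model — check your model’s actual config):

```python
GIB = 1024 ** 3

def model_vram_gib(params_b: float, bits: float) -> float:
    """Approximate weight memory: params * bits/8 bytes (ignores quant overhead)."""
    return params_b * 1e9 * (bits / 8) / GIB

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx: int, bits: float) -> float:
    """Approximate K/V cache: 2 tensors (K and V) per layer,
    each ctx * kv_heads * head_dim elements."""
    return 2 * layers * ctx * kv_heads * head_dim * (bits / 8) / GIB

# Assumed shape for a 32b model (e.g. Qwen2.5-32B-ish): 64 layers,
# 8 KV heads (GQA), head_dim 128 -- verify against your model card.
weights = model_vram_gib(32, 4)                      # ~14.9 GiB at 4-bit
kv = kv_cache_gib(64, 8, 128, ctx=8192, bits=16)     # ~2.0 GiB at fp16
print(f"weights ~{weights:.1f} GiB, KV ~{kv:.1f} GiB, total ~{weights + kv:.1f} GiB")
```

So ~17 GiB total at 8k context, which is why it fits in 24 GB with room to spare; quantizing the cache to 8-bit halves the KV number, but as mentioned above, 4-bit K/V has caused me quality issues on some models.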
This would be a great UX improvement for streaming sports feeds for sure - not having to navigate web pages and start/manage streams manually. Does anyone know if this is possible for sites serving these streams, like FirstRowSports or StreamEast?
Yeah, I use Voyager pretty much exclusively on my iPhone, so maybe I should request a feature like that there? Seems like something many people would appreciate. Not sure why I end up seeing posts with -10, -15 votes… those are generally trash haha
Looks like it now has Docling Content Extraction Support for RAG. Has anyone used Docling much?