• Alphane Moon@lemmy.world (OP, mod)
      19 hours ago

      Thanks! Will give it a go.

      I initially went with the local approach, but I sort of gave up on it due to subpar results and because I have no plans to upgrade my 3080 10GB in the next ~2 years.

      • brucethemoose@lemmy.world
        18 hours ago

        Oh a 3080, perfect.

        The way it works with ik_llama.cpp is that the ‘always run’ parts of the LLM live and run on your 3080, while the sparse, compute-light parts (the FFN MoE expert weights) stay in system RAM and run on the CPU. For preprocessing big prompts it utilizes your 3080 fully even though the model is like 45-55GB, and for the actual token generation the two ‘alternate’: the 3080 and CPU bounce between being loaded. Hence having a compute-heavy 3080 is good, and 10GB is the perfect size to hold the dense parts and the KV cache for GLM 4.5 Air.

        ik_llama.cpp also uses a special quantization format that’s better compressed than your typical GGUFs.
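
        To make that concrete, a launch command looks roughly like this. I’m going from memory, so treat the model filename as a placeholder and double-check the flag spellings against the fork’s --help, since they drift between versions:

            ./llama-server -m GLM-4.5-Air-IQ4_KSS.gguf \
                -ngl 99 \
                -ot exps=CPU \
                -c 16384 -fa

        The idea is that -ngl 99 nominally offloads every layer to the GPU, and the -ot exps=CPU override then pins the MoE expert (FFN) tensors back to system RAM, which gives you exactly the dense-parts-on-GPU, experts-on-CPU split I described.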

        What I’m getting at is that running a new 110-billion-parameter model vs. an older 8B one is like night and day, but you really do need a specialized software framework to pull it off.


        Also, you can test GLM 4.5 Air here for free to kinda assess what you’re shooting for: https://chat.z.ai/

        The quantized version you run will be slightly dumber, but not by much.

        • Alphane Moon@lemmy.world (OP, mod)
          18 hours ago

          Pretty sure this sort of hybrid approach wasn’t widely available the last time I was testing local LLMs (8-12 months ago).

          Will need to do a test run. Cheers!

          • brucethemoose@lemmy.world
            18 hours ago

            It was, it’s just improved massively, and this specific library/fork is very obscure, heh. Good luck!

            It’s also very poorly documented, so feel free to poke me if you run into something. I can even make and upload a more optimal quantization for you since I’ve already set that up for myself, anyway.