

I like NC and use it primarily for file sync. I think this would create a fragile maintenance nightmare for the sake of saving a few MB of storage and memory.


You can use syncoid (sanoid’s replication companion) to sync to another zpool or dataset on the same machine or on a remote host; they behave the same. It replicates the dataset to the other machine, then sends the snapshots taken after that point over via zfs send. You can instruct sanoid to prune those snapshots after the send and start new ones for the next send, or just accumulate them so you have points in time to revert to.
IIRC you can send a ZFS snapshot to a plain file by redirecting zfs send, but AFAIK you can’t send it directly to a file-based service like OneDrive. You can use a service like zfs.rent: send them a hard drive with your base sync on it (encrypt it), and once they’ve brought it online you can sync to that. Best to test out your methods with the drive hooked up locally first.
I know it’s anathema to Lemmy, but the best help you’ll get is Claude where you can paste the errors in and have it sort it out for you as you troubleshoot. It’s pretty good at shit like that.
If you’re already running ZFS, sanoid would be an option.
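The workflow above, as a rough sketch — the pool, dataset, snapshot, and host names here are hypothetical placeholders, not anything specific from this thread:

```shell
# Assumes a dataset tank/data that sanoid is already snapshotting.
# List the snapshots sanoid has taken so far
zfs list -t snapshot -o name tank/data

# Replicate to a remote host with syncoid (sanoid's companion tool);
# it picks the right incremental send between snapshots automatically
syncoid tank/data root@backuphost:backup/data

# Sending a snapshot to a plain file is just a redirect of zfs send...
zfs send tank/data@autosnap_2025-01-01_00:00:01_daily > /mnt/usb/data.zfs

# ...and it restores with zfs receive
zfs receive tank/restored < /mnt/usb/data.zfs
```

Incremental sends to a file work the same way with `zfs send -i old-snapshot new-snapshot`; either way, test a full receive before trusting the copy.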


I find some of its workflows a bit strange. There’s no Add button on the list of host proxies; it’s a separate menu item on the left, which is weird. And requesting an SSL cert means hitting OK and then getting a popup asking if you want a cert, and you’d better have already set your options for how you want the cert, because if you create a host without one you have to go through all the options again and re-check them — it doesn’t remember your preference.
IDK, in any case it fixed a bunch of problems I was having with NPM so it has that going for it, which is nice.


Take a look at Zoraxy or NPM.


Oh, people will keep using it no matter how much you warn them.
Proxmox-helper-scripts is a perfect example. They’ll agree with you until that site comes up, and then it’s “it’ll never, ever get hacked and subverted, nope, can’t happen, impossible”.
Wankers.


Don’t look too deep then.


Well, they killed the last one. Seems like a credible threat.


Good, maybe the Vatican can pick up the US tab for outstanding payments to the UN. They sure as fuck can afford it.
I run dual-Xeon R410s with 128GB of RAM (2013?). Got them for free on Kijiji. Both run Proxmox and a pile of VMs. Dual gigabit NICs, 6 SAS bays, HBA in IT mode for ZFS. Has iDRAC (Dell’s equivalent of iLO) for OOB management.
I mean, it’s not fast, but each server has 24 cores and I can chunk PDF files fairly quickly for RAG on 10 cores and have plenty for mail server, Nextcloud, K8S running some side hustle apps, etc, etc. Kind of a noisy prick when it winds up though.


Thanks for the feedback. That was precisely my worry about outlaying that money and not being happy with the result.


I’ve really been mulling one of those over with 128GB. I’m on Claude Max and Cerebras $50 so I’m using a good amount of $200/mo for coding and Openclaw. Is it worth it for light coding, or are you only doing SD with it?


This was the channel I was going to suggest. A lot of what he shows is pretty pricey, but some would make sense if you weren’t too concerned about speed.


Framework desktop?


I would use their LXC install; it’s much more flexible. It doesn’t need to be local, but being local does simplify things like email. I had to put a bit of effort into getting it to connect to IMAP mailboxes for processing, but it wasn’t any more than asking it to fetch the necessary libraries, etc. Things like that are why running it as an LXC is the better choice. It might manage the same as a Docker container, but there are potential problems with network connectivity and Docker-in-Docker issues.
You can also firewall that LXC off without having to mess up your own workstation, as well as snapshot it and back it up.
And the first thing I would do is have it keep token budgets when you build tasks, and report its token use to you every hour or two. It takes some time to learn how to structure reminders and task processing so they don’t create loops that eat up scads of tokens. Don’t ask me how I know.
But holy hell, can it be useful.
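For the snapshot/backup/firewall part, a rough sketch of the Proxmox side — the VMID (120) and the IMAP server address are made-up placeholders, so adjust for your own setup:

```shell
# Snapshot the container before letting the agent loose, so you can revert
pct snapshot 120 pre-agent

# Roll back if it makes a mess
pct rollback 120 pre-agent

# Back the container up with vzdump
vzdump 120 --storage local --mode snapshot

# Firewall it by editing /etc/pve/firewall/120.fw, e.g. default-deny
# outbound with an allowance for just the IMAP server:
#   [OPTIONS]
#   enable: 1
#   policy_in: DROP
#   policy_out: DROP
#   [RULES]
#   OUT ACCEPT -dest 192.0.2.10 -p tcp -dport 993
```

The nice part is all of this stays on the host, so the agent itself never has to be trusted with any of it.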


zfs.rent


I set up Pulse recently, and the ease of setup and the great UI/UX are impressive. Really liking it.
Of course, there’s some AI bullshit if you want to opt in, but it’s not enabled by default.


I tried it a couple years ago and it wasn’t very successful. But maybe that’s changed.


Mark of The Deal