With the recent Proxmox 9 release, many of us have the upgrade ahead of us or already behind us. What about you, and how do you generally approach updating your services? Which other updates are you looking forward to, or is it all just an annoying chore?
Also the usual: let us know what you are currently working on, what problems you are encountering and what you are excited about.
As for updates, I update my machines semi-regularly with Ansible. The Proxmox 9 update was unspectacular (a good thing!), I just had to change some things in my Proxmox post-install automation (nag bar removal and package sources). I still plan to set up a merge-request-based update process for my containers as mentioned here, but I’m just not there yet. That guide was also posted on reddit recently and got some traction.
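For anyone rolling their own post-install automation, the package-sources part boils down to a couple of tasks along these lines. This is a rough sketch from memory with hypothetical file names; double-check the exact repo contents against the Proxmox 9 documentation:

```yaml
# Hypothetical excerpt from a pve-post-install role; verify paths and repo
# contents against the current Proxmox 9 documentation.
- name: Disable the enterprise repository
  ansible.builtin.file:
    path: /etc/apt/sources.list.d/pve-enterprise.sources
    state: absent

- name: Add the no-subscription repository (deb822 format used on PVE 9)
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/proxmox.sources
    content: |
      Types: deb
      URIs: http://download.proxmox.com/debian/pve
      Suites: trixie
      Components: pve-no-subscription
      Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
```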
I also spent some time yesterday organizing my nginx logs: they basically all got their own folder in /var/log/nginx, each with its own access log file, by adding
access_log /var/log/nginx/$server_name/access.log vhost_combined;
to each config. Error log file paths can’t contain variables, so I’ve kept those in the default file so far.
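In case it helps anyone copying this: vhost_combined isn’t a built-in nginx format, so it needs a log_format definition in the http block, and nginx won’t create the per-vhost directories for you. A minimal sketch (the exact format string is my assumption of what a “vhost combined” looks like):

```nginx
# http{} context: define the custom format once (assumed variant: combined
# prefixed with the vhost name)
log_format vhost_combined '$server_name $remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent"';

# per server{} block: one log folder per vhost -- the directory has to exist
# already, and variable paths mean the file is reopened for each request
# unless you also configure open_log_file_cache
server {
    listen 80;
    server_name example.internal;
    access_log /var/log/nginx/$server_name/access.log vhost_combined;
    root /var/www/example;
}
```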
Recently enabled WireGuard (easy setting in my FritzBox router) and stopped exposing some of my services to the internet. That process isn’t finished yet though, as I’ll need to switch to wildcard certificates in order to keep valid SSL for the now local-only services.
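Worth noting for anyone following along: wildcard certs only work with the DNS-01 challenge, so nothing has to stay exposed for renewals at all. A rough certbot sketch, with Cloudflare purely as a placeholder for whichever DNS provider plugin applies:

```sh
# Needs the matching DNS plugin installed (e.g. python3-certbot-dns-cloudflare);
# provider, domain and credentials path are placeholders.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d 'example.com' -d '*.example.com'
```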
I use Traefik as a reverse proxy for local-only services with Let’s Encrypt certificates. I just needed to a) register the subdomains and b) expose port 80 for the challenges, without anything being served on that port.
WireGuard into my network and local DNS via Pi-hole to ensure proper local IPs. Works like a charm.
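For reference, the Traefik static config for that kind of setup looks roughly like this. Names and paths are placeholders; this is the HTTP-01 variant, and a DNS-01 resolver would drop the port-80 requirement entirely:

```yaml
# Hypothetical traefik.yml excerpt -- entrypoint/resolver names are placeholders.
entryPoints:
  web:
    address: ":80"        # exposed only so Let's Encrypt can reach the challenge
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web
```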
I need to check what exactly I need to expose. I had 80 and 443 exposed but limited access to local IPs in nginx like this:
allow 192.168.x.0/24; # Allow FritzBox subnet
allow 10.0.0.0/24;    # Allow OpnSense subnet
deny all;             # Deny all other IPs
I still have some services I want to expose so generally I’ll keep the ports open.
Finally got a drive to replace a dead one in my zpool. RAID10 ftw.
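For anyone who hasn’t done it before, the swap itself is just a couple of commands (pool and device names here are made up):

```sh
# See which disk is FAULTED/UNAVAIL first
zpool status tank
# Swap the dead disk for the new one; resilvering starts automatically
zpool replace tank /dev/disk/by-id/ata-OLD_DEAD_DISK /dev/disk/by-id/ata-NEW_DISK
# Watch the resilver progress
zpool status -v tank
```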
I have never understood the hype surrounding proxmox. What makes proxmox so irreplaceable?
In the virtualisation world you have the expensive big boy that everyone now hates, ESXi by Broadcom (formerly VMware); the expensive wannabe big boy that everyone hates, Hyper-V by Microsoft; and a gazillion others that use QEMU or Xen as a base and put a shiny coat of UI over it.
Proxmox is in that last category. A pretty interface over an open source underlay at a decent price (if you want to pay the subscription).
Super reliable virtualization and management features. Snapshots, auto backups, live migrations across physical hosts, high availability are what I like the most.
I’ve tried it a few times, but it never stuck. I guess it’s just convenience: it is a well-integrated piece of software, especially if you use both LXC and VMs. Personally I keep using virt-manager and Cockpit.
I find VMs to be unbearably slow compared to a container. They just feel so heavy. I get the extra security layer, but is that really why people are doing it, or is there some other reason?
The easy UI is good for those who aren’t living in the terminal all the time.
I used proxmox for nearly 8 years before switching to only containers. It was fine.
Extra security and full isolation with its own kernel, so you can load kernel modules and such.
You can also run Windows in a VM when needed, or macOS.
VMs are basically just as fast as containers, and the RAM overhead from a lightweight Linux VM is very small.
Being able to choose the OS and kernel is also important. I would not want my hypervisor machine to load GPU kernel modules, especially not on an older LTS kernel (which often doesn’t support the latest hardware). Passing the GPU to a VM ensures stability of the host machine, with the flexibility to choose whatever kernel I need for specific hardware. This, alongside running entirely different OSes (like *BSD, Windows :(, etc.), is pretty useful for some services.
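For anyone on Proxmox, the passthrough half of that is basically a one-liner once IOMMU is enabled in the BIOS and on the kernel cmdline (VMID and PCI address below are made up):

```sh
# Find the GPU's PCI address
lspci -nn | grep -i vga
# Hand it to VM 100 as a PCIe device (hypothetical IDs; pcie=1 needs the q35 machine type)
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```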
Portability, isolation, the ability to run pretty much anything inside. They do consume more resources, but if they’re that much slower then there’s probably something wrong in your setup.
Not everything runs in a container.
Same here, though more out of lack of control over the host. Libvirt works on basically any distro, and you can easily configure whatever Linux distro you like best for running it. I can’t configure my boot process the way I want on Proxmox (at least not without learning/sidestepping its “convenience” tools/setup).
I moved to proxmox earlier this year and it quickly became a huge deal for me.
One nice thing is that I can easily create lxc containers for each service that has exactly what that service needs. Each service lives in a container that acts a lot like bare metal.
A second nice thing is it’s really easy to administer everything remotely. All your machines end up accessible through the proxmox interface, and you can hop into virtual machines or lxc containers via the web.
A third thing is you can easily handle hot standby and backups through an easy UI.
Totally changed the game for me.
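To make the per-service container point concrete: spinning one up from the CLI looks roughly like this (template name, VMID, storage and sizes are all hypothetical):

```sh
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname myservice --unprivileged 1 \
  --cores 2 --memory 2048 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 110
# or do the same thing through the "Create CT" wizard in the web UI
```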
If you know Linux or are willing to learn, it is very easy to use. If not, it’s going to be a bit of a chore. Some things are just easier to do via CLI.
I finally got Caddy’s TLS working with a custom module to handle DNS challenges. Turns out all I had to do was wait 10-15 minutes and everything would sort itself out.
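For anyone attempting the same, the Caddyfile side of a DNS challenge ends up being just a few lines once the provider module is compiled in. Cloudflare here is only an example; swap in whichever plugin you actually built with:

```
# Hypothetical Caddyfile -- domain and provider are placeholders.
example.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    respond "It works!"
}
```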
Now on to the next puzzle. I started with Caddy in a Docker container and it’s working as intended. Now I want to replicate that in Rootful Podman Compose but I’m running into an issue. With the exact same setup (docker-compose.yml, Dockerfile and Caddyfile) I can get my TLS cert without issue but I can’t seem to connect to my website from any external browser. Not through my domain name or even through my home’s local network.
Once I figure out how I can access my website, I’ll be one step closer to where I want to be. Next will be to get Rootless Podman working, then I can finally set up the file server and kiwix instance instead of the test page I am currently using.
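One heads-up for the rootless step: unprivileged users can’t bind ports below 1024 by default, so Caddy on 80/443 will need either high host ports or a sysctl tweak, roughly:

```sh
# Let unprivileged processes (rootless podman) bind ports >= 80
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Persist it across reboots
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-rootless-ports.conf
```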
After that, I can finally spend time doing what I want to do and focus my time on looking into the Gemini protocol.
Down the road I’ll look into hosting an IRC server and Snikket instant messenger but that’s super low priority. I like tinkering with my Raspberry Pi and my constant backup/restores wouldn’t be good for reliability for such services.
I’m too lazy to spin up docker containers and config for stuff that would make my life a bit better, but not enough to warrant the hassle… Like for example a finance management software that can hook into my bank. Or document management with automatic email imports etc.
Like for example a finance management software that can hook into my bank
What software would that be? I’ve been looking for a viable self-hosted alternative to Mint (and now Monarch Money) since forever.
Firefly III is the one I had on my radar
I’m also interested. I migrated from mint to Credit Karma… what a complete shit show. I really miss ooold mint.
Upgraded to Debian Trixie two days ago. Runs flawlessly
sops-nix + rootless podman turns out to be much trickier than I imagined. I spent like 2 days on this shit just to get it into the central config, when I could have just manually loaded the config files and changed the permissions… I eventually solved it by running rootlesskit in the activation script to copy the decrypted file into a temporary folder and change the permissions to the correct sub-user. Not worth the time though.
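For anyone hitting the same wall outside of Nix: the manual version of “fix the permissions for the sub-user” is usually just podman unshare, which runs a command inside the rootless user namespace so the chown can target the container’s UID directly (path and UID below are hypothetical):

```sh
# As the rootless user that runs the containers: inside the user namespace,
# "1000" means the container's UID 1000 (on the host it shows up as one of
# that user's sub-UIDs)
podman unshare chown 1000:1000 ~/containers/myapp/secret.env
# To see how UIDs are mapped in that namespace:
podman unshare cat /proc/self/uid_map
```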
I am currently in the final phase of building my first self-built NAS. (I have an oooooold Intel NAS that I don’t really use anymore…)
I still need to populate the case with storage drives, add an Intel GPU and a 10Gbit NIC, and possibly add an HBA for two SSDs for VM storage.
Currently I have a:
- Jonsbo N4 case
- Asrock B550m Pro4
- AMD Ryzen 4600G
- 32GB RAM
- Kingston boot SSD
- Corsair SF750 PSU
I am running TrueNAS on it; it was just installed to make sure everything is working, but I am planning on running it going forward, as I am mostly looking to use the server as a file server.
I’ve just noticed that proxmox 9 is already available. I will check the procedure before upgrading my machine. Any suggestions regarding that?
I just followed their instructions and on 2 of the nodes in my cluster, I migrated all VMs/LXCs off and then did the upgrade and they went through without a hitch. For the last one, I just YOLO’d it and powered off the VMs/LXCs and upgraded it and it also went through without a hitch.
One thing I did find interesting: the systemd-boot packages needed to be removed, but only on 2 of the 3 machines. I basically keep their configs intentionally as close to identical as possible, so I have no clue why it was only needed on two of them.
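For anyone else about to do it, the rough shape of the procedure from the wiki (from memory, so follow the official guide for the exact repo changes):

```sh
apt update && apt dist-upgrade        # get the node onto the latest 8.4 first
pve8to9 --full                        # checklist script shipped with 8.x
# point the Debian and Proxmox repos at trixie -- the wiki has the exact
# (deb822) lines for the PVE/Ceph repos
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
apt update && apt dist-upgrade
reboot
```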
Just that, they have a detailed description of the upgrade routine. Make backups :)
I just upgraded my Proxmox to 9 last night, too!
…from 7, 'cause that’s how long I’d been neglecting it. 😅
I’ve also been trying to get my old dual-Opteron server working again, after having abandoned it a couple of years ago due to what I thought was a bad motherboard (IIRC, it wasn’t turning on at all). I was gonna buy a new motherboard since I happened to run across a cheap Ebay listing, but I decided to double-check the existing one first, and lo and behold, it booted!
Then I tried to update the ancient Proxmox on it from 6 to 7, and now it still turns on but doesn’t successfully boot.
Also, I can’t get it to boot from a flash drive for some reason, so I think I might have to take out the SSD, reinstall Proxmox on it from a different system, and then put it back in.
The DC my server is at is shutting down, so I have to bring everything home. Conveniently, I just got hooked up with symmetric 1G fiber, so thankfully that’s not too much of a problem now.
Currently exploring Docker Swarm as a way of using one of my external VPSs to route all external traffic through it to my hardware at home over my tailnet.
Swarm isn’t required for this but figured I’d play around with it.
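For what it’s worth, the non-Swarm version of that is basically just a TCP proxy on the VPS pointed at the home box’s tailnet address. A minimal nginx sketch (the stream{} block goes at the top level of nginx.conf, and the tailnet IP is a placeholder):

```nginx
stream {
    server {
        listen 443;
        proxy_pass 100.64.0.10:443;   # placeholder tailnet IP of the home server
    }
}
```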