Anyone else just sick of trying to follow guides that cover 95% of the process, or quietly skip a step, and then spending hours troubleshooting the setup just to get it to work?
I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I only lightly document cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn’t seem worth it.
I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all of that than to read a bunch of commands.
Idk, do you get lab burnout? Maybe cuz I do IT for work too, it just feels never-ending…


I’m really confused here: you don’t like how everything is containerized, but your preferred method is to run Proxmox and containerize everything, just in an ecosystem with less portability and tooling?
I don’t like how everything is Docker-containerized.
I already run Proxmox, which containerizes things by design with its CTs and VMs.
Running a Docker image on top of that just wastes system resources (while also complicating the troubleshooting process). It doesn’t make sense to run a CT or VM for a container, just to put Docker on it and run another container via that. It also completely bypasses everything that Proxmox provides for snapshotting and backup, because Proxmox’s system works on the entire container: if all services are running in the same container, all services get snapshotted together.
My current system allows me to have per-service snapshots (and backups), all within the Proxmox web UI, all containerized, and all restricted to their own resources. Docker is just not needed at this point.
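For anyone who wants the concrete version: a per-service backup on Proxmox is a single vzdump call per CT. A minimal sketch; the CT ID 105 and the storage name are placeholders:

```sh
# Snapshot-mode backup of one CT to a storage target; ID and names are examples
vzdump 105 --storage local --mode snapshot --compress zstd
```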
A Docker setup just adds extra overhead that isn’t needed. So yes, just give me a standard installer.
Nothing is “Docker containerized”. Docker is just a daemon and a set of tools for managing OCI-compliant containers.
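To make that concrete: the same OCI image runs under any compliant runtime, and Docker is just one client for it. A quick sketch, assuming both docker and podman happen to be installed:

```sh
# Identical image, two different OCI tools
docker run --rm alpine:3.20 echo "hello from docker"
podman run --rm alpine:3.20 echo "hello from podman"
```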
No? If you spun up one VM in Proxmox, installed Docker, and used it to run 10 containers, that would use fewer system resources than running 10 LXC containers directly on Proxmox.
Like… you don’t like that the industry has adopted this efficient, portable, interchangeable, flexible, lightweight, mature technology, because you prefer the heavier, less flexible, less portable, non-OCI-compliant alternative?
Are you saying that running Docker in a container setup (which at this point would be two layers deep) uses fewer resources than 10 single-layer-deep containers?
I can agree with the statement that a single VM running Docker with 10 containers uses less than 10 CTs, each with Docker installed and running their own containers (but that’s not what I do, or what I am asking for).
I currently do use one CT that has Docker installed for all my Docker images (which I wouldn’t do if I had the ability not to, but some apps require Docker), but this removes most of the benefits you get from using Proxmox in the first place.
One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without needing to actually enter the machine (for example, if I’m screwing with a server, I can just snapshot the current setup and then roll back if it isn’t good). Throwing everything into a VM with Docker bypasses that while adding overhead to the system: I would need to back up the compose file (or however you are composing it) and the container, and then make my changes. My current system is one click to make my changes, and if it goes bad, one click to revert.
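For the curious, that one-click flow maps to two pct commands on the Proxmox host (the web UI buttons do the same thing under the hood; CT ID 105 is a placeholder):

```sh
pct snapshot 105 pre-change   # snapshot the CT before screwing with it
# ...make changes, test...
pct rollback 105 pre-change   # one step back to the snapshot if it isn't good
```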
As for the resource explanation: installing Docker into a VM on Proxmox and then running every container in that does waste resources. You have the resources that Docker itself requires to function (currently 4 gigs of RAM per their website, though in testing I’ve seen as little as 1 gig work fine, plus CPU, and about half a gig of storage), inside a VM (which also uses more processing and RAM than CTs do, since it no longer shares resources with the host). Compared to 10 CTs that are fine-tuned to their specific apps, you will get better performance from the CTs than from a VM running everything, while keeping your ability to snapshot and removing the extra layer and the ephemeral design that Docker has (that can be a good and a bad thing, but when troubleshooting I lean towards good).
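If you’d rather measure than argue from spec sheets, both sides are easy to check; a rough sketch (CT ID 105 is a placeholder):

```sh
# Per-container memory/CPU inside the Docker host
docker stats --no-stream
# Idle memory of a single CT, queried from the Proxmox host
pct exec 105 -- free -m
```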
edit: clarification and paragraph breaks so it wasn’t all bunched together.
If those 10 single-layer-deep containers are Proxmox’s LXC containers, then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no OS and no init system. They’re very lightweight with very little overhead; they’re “containerized services”. LXC containers, on the other hand, are very heavy “system containers” that have a full OS and user space, an init system, file systems, etc. They are one step removed from being full-size VMs, short of the fact that they share the host’s kernel and don’t need to virtualize. In short, your single LXC running Docker and a bunch of containers inside it is far more resource-efficient than running a bunch of separate LXC containers.
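A quick way to see that difference for yourself is to compare the process trees (the container name and CT ID below are just examples):

```sh
# OCI container: the service itself is effectively the whole process tree
docker run -d --name web nginx:1.27
docker top web                # nginx master/workers, no init system underneath
# LXC system container: a full userspace booted by an init system
pct exec 105 -- ps -e         # systemd, journald, etc. alongside the actual service
```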
I mean, that’s exactly what Docker containers do, but more efficiently.
I mean, that’s sort of the entire idea behind Docker containers as well. It can even be automated for zero-downtime updates and deployments, as well as rollbacks.
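As a rough sketch of that flow with plain compose (stock compose briefly restarts a container when recreating it; true zero-downtime needs rolling deploys layered on top):

```sh
docker compose pull       # fetch newer images
docker compose up -d      # recreate only the services whose images changed
# rollback: pin the previous image tag in compose.yaml and run `up -d` again
```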
That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.
Option A (Docker + containers): one OS, one init system, one full set of Linux libraries.
Option B (10 LXC containers): ten operating systems, ten separate init systems, ten separate sets of full Linux libraries.
Option A is far more lightweight, and becomes a more attractive option the more services you add.
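One concrete place that difference shows up is on disk; a sketch assuming Proxmox’s default “local” directory storage (exact paths and sizes vary):

```sh
# Ten OCI containers started from one image share its layers
docker image ls alpine            # one small image backs every instance
# Ten CTs each carry a full root filesystem of their own
ls -lh /var/lib/vz/images/        # on dir storage, one full disk image per CT
```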
And not only that: as you found out, you don’t need to run a full VM for your Docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox host’s kernel.
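For completeness, running Docker inside an LXC only requires enabling nesting on the CT (ID 105 is a placeholder; note that the Proxmox docs themselves recommend a VM for Docker workloads):

```sh
pct set 105 --features nesting=1,keyctl=1   # keyctl is typically needed for unprivileged CTs
pct reboot 105                              # restart the CT so the features apply
```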
Like, LXCs do have a use case, but it sounds like you’re using them as an alternative to regular service containers, and that’s not really what they’re for.