Hello! 😀
I want to share my thoughts on Docker and maybe discuss it a bit!
A few months ago I started my homelab, and like any good “homelabbing guy” I absolutely loved using Docker: simple to deploy and everything. Sadly, these days my mind is changing… I recently switched to LXC containers to make backups easier, and the experience is pretty great. The only downside is that not every piece of software is available natively outside of Docker 🙃
I also switched to have more control, since Docker can make it difficult to set up things the devs didn’t really plan for.
So here are my thoughts: slowly, I’m going to leave Docker for a more old-school way of hosting services. Don’t get me wrong, Docker is awesome in some use cases; the main ones are that it’s really portable and simple to deploy, without hundreds of dependencies, etc. And through this I think I’ve really figured out where Docker is useful: not in every single homelab setup, and mine is one where it isn’t.

Maybe I’m doing something wrong, but I’ll let you discuss it in the comments. Thanks!

  • Auli@lemmy.ca · 5 days ago

    And I’ve done the exact opposite: moved everything off of LXC to Docker containers. So much easier and nicer, fewer machines to maintain.

  • Mac@federation.red · 5 days ago

    Docker Compose with external volume mounts, or the docker volume + tar backup method, is superior.
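The tar-based volume backup mentioned here is usually done with a throwaway container; a minimal sketch, where the volume name "appdata" and the file paths are just examples:

```shell
# Back up the named volume "appdata" to a tarball in the current directory:
docker run --rm \
  -v appdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/appdata.tar.gz -C /data .

# Restore it into a (fresh) volume later:
docker run --rm \
  -v appdata:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/appdata.tar.gz -C /data
```

The helper container mounts the volume read-only plus a host directory, so the tarball lands on the host where normal backup tooling can pick it up.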

    • foremanguy@lemmy.ml (OP) · 2 days ago

      It can be, but I don’t have enough free time, and this way I run LXC containers directly on Proxmox.

      • Mac@federation.red · 2 days ago

        You’re basically adding a ton of overhead to your services for no reason, though.

        Realistically you should be running Docker inside LXC for a best-of-both-worlds approach.

        • foremanguy@lemmy.ml (OP) · 2 days ago

          I accept either way of doing it, Docker or LXC, but Docker inside an LXC is not suitable for me; I already tried it and got terrible performance.

  • SpazOut@lemmy.world · 5 days ago

    For me the power of Docker is its inherent immutability. I want to be able to move a service around without having to manually tinker, install packages, change permissions, etc. It’s repeatable and reliable. However, getting to the point of understanding it well enough to do this reliably can be a huge investment of time. As a daily user of Docker (and k8s) I would use it every day over a VM. I’ve lost count of the number of VMs I’ve set up following installation guides and missed a single step, so machines that should be identical aren’t. I do understand the frustration with it when you first start, but IMO stick with it, as the benefits are huge.

    • foremanguy@lemmy.ml (OP) · 2 days ago

      Yeah, Docker is great for this, and it’s really a pleasure to deploy apps so quickly, but the problems come later: if you want to really customize the service, you can’t, short of building your own image…

      • SpazOut@lemmy.world · 2 days ago

        In most cases you can get away with mounting configuration files over the ones inside the container. In extreme cases you can build your own image, but the steps for that are just the changes you would have applied manually on a VM. At least that image is repeatable, and you can bring it up somewhere else without having to reapply all those changes in a panic.
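Over-mounting a config file is just a bind mount on top of the image’s own copy; a hypothetical nginx example (container name, port, and config path are illustrative):

```shell
# The host-side nginx.conf shadows the copy baked into the image:
docker run -d --name web \
  -v "$(pwd)/nginx.conf":/etc/nginx/nginx.conf:ro \
  -p 8080:80 \
  nginx:stable
```

Mounting it read-only (`:ro`) keeps the container from modifying the host-side file.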

  • Decq@lemmy.world · 5 days ago

    I’ve never really liked the convoluted Docker tooling. And I’ve been hit a few times with a Docker image update just breaking everything (looking at you, Nginx Proxy Manager…). Now I’ve converted everything to NixOS services/containers, and I couldn’t be happier with the ease of configuration and control. Backup is just a matter of pushing my flake to GitHub and I’m done.

  • SanndyTheManndy@lemmy.world · 5 days ago

    I used Docker for my home server for several years, but managing everything with a single Docker Compose file that I edit over SSH became too tiring, so I moved to Kubernetes using k3s. Painless setup, and far easier to control and monitor remotely. The learning curve is there, but I already use Kubernetes at work. For starters, it’s way easier to set up routing and storage with k3s than juggling volumes was with Docker.

      • SanndyTheManndy@lemmy.world · 2 days ago

        Both are ways to manage containers, and both can use the same container runtime, IIRC. They differ in how they manage the containers: Docker/Docker Compose is suited to development or one-off services, while Kubernetes is more suitable for running and managing a bunch of containers in production, across machines, etc. Think of Kubernetes as the Pokémon evolution of Docker.

    • FantasticDonkey@reddthat.com · 5 days ago

      Isn’t it more effort to set up Kubernetes? At work I also use k8s with Helm, Traefik, and Ingress, but we have an infra team that handles the details, and I’m kind of afraid of having to handle the networking etc. myself. Docker Compose feels easier to me somehow.

      • SanndyTheManndy@lemmy.world · 4 days ago

        Setting up k8s with k3s is barely two commands. It works out of the box without any further config. Heck, even a multi-node cluster is pretty straightforward to set up. That’s what we’re using at work.
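For reference, the k3s quick start really is about two commands (the server hostname and token value below are placeholders; the token path is the documented default):

```shell
# First machine: install a single-node k3s server.
curl -sfL https://get.k3s.io | sh -

# Additional machines: join them as agents using the server's token,
# found at /var/lib/rancher/k3s/server/node-token on the server.
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```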

      • SanndyTheManndy@lemmy.world · 4 days ago

        I did come across it before, but it feels like just another layer of abstraction over k8s, and with a smaller ecosystem. Also, I prefer terminal to web UI.

      • SanndyTheManndy@lemmy.world · 5 days ago

        Several services are interlinked, and I want to share configs across services. Docker doesn’t provide a clean interface for separating and bundling network interfaces, storage, and containers like k8s.

  • markc@lemmy.world · 6 days ago

    Docker is a convoluted mess of overlays and truly weird network settings. I found that I have no interest in application containers and would much prefer to set up multiple services in a system container (or VM) as if it was a bare-metal server. I deploy a small Proxmox cluster with Proxmox Backup Server in a CT on each node and often use scripts from https://community-scripts.github.io/ProxmoxVE/. Everything is automatically backed up (and remote sync’d twice) with a deduplication factor of 10. A Dockerless Homelab FTW!

  • CameronDev@programming.dev · 7 days ago

    Are you using Docker Compose files? Backup should be easy: you have your compose files to configure the containers, and those files can easily be committed somewhere or backed up.

    Data should be volume mounted into the container, and then the host disk can be backed up.
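That split (config in version control, data on the host) can be sketched in a few commands; the directory layout and names here are illustrative:

```shell
# The compose file is the config: keep it in version control.
cd /srv/myapp                    # example dir containing docker-compose.yml
git init -q
git add docker-compose.yml
git commit -q -m "myapp stack config"

# Data is bind-mounted from ./data on the host (per the compose file),
# so an ordinary host-side backup covers it:
tar czf /backup/myapp-data.tar.gz -C /srv/myapp data
```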

    The only app that I’ve had to fight docker on is Seafile, and even that works quite well now.

    • foremanguy@lemmy.ml (OP) · 7 days ago

      Using Docker Compose, yeah. I find it hard to tweak the network and the apps’ settings; it’s like putting obstacles in my road.

      • oshu@lemmy.world · 7 days ago

        Docker as a technology is a misguided mess but it is an effective tool.

        Podman is a much better design that solves the same problem.

        Containers can be used well or very poorly.

        Docker makes it easy to ship something without knowing anything about System Engineering which some see as an advantage, but I don’t.

        At my shop, we use almost no public container images because they tend to be a security nightmare.

        We build our own images in-house with strict rules about what can go inside. Otherwise it would be absolute chaos.

  • PerogiBoi@lemmy.ca · 7 days ago

    I don’t like Docker. It’s hard to update containers, hard to modify specific settings, hard to configure network settings; overall I’ve just had a bad experience with it. It’s fantastic for quickly spinning things up, but for long-term use and customizing it to work well with all my services, I find it lacking.

    I just create Debian containers or VMs for my different services using Proxmox. That gives me full control over all the settings I didn’t have in Docker.

      • MaggiWuerze@feddit.org · 7 days ago

        For real. Map persistent data out and then just docker compose pull && up. There’s nothing to it. Regular backups make reverting to previous container versions a breeze.
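Spelled out, the update routine being described is just (assuming a directory holding the compose file):

```shell
cd /srv/myapp          # example project directory with docker-compose.yml
docker compose pull    # fetch newer images for all services
docker compose up -d   # recreate only containers whose image or config changed
```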

        • non_burglar@lemmy.world · 5 days ago

          For one, if the compose file syntax, structure, or options change (like they did recently for Immich), you have to dig through GitHub issues to find that out and re-create the compose file with little guidance.

          Not Docker’s fault specifically, but it’s becoming an issue as more and more software is shipped as a Docker image. Docker democratizes software, but we pay the price in losing perspective on what good dev practice is.

          • MaggiWuerze@feddit.org · 5 days ago

            Since when is checking for breaking changes a problem? You should do that every time you want to update. The Immich devs do a really good job of informing about those, and Immich in general is a bad example since it is still in early and very active development.

            And if updating the compose file once in a blue moon is a hassle for you, I don’t want to know how you react when you have to update things in more hidden or complicated configs after an update.

            • non_burglar@lemmy.world · 5 days ago

              I’m trying to indicate that docker has its own kinds of problems that don’t really occur for software that isn’t containerized.

              I used the Immich issue because it was actually NOT flagged as a breaking change by the devs, and the few of us who had migrated the same compose YAML from older versions and had a problem were met with “oh, that is a very old config, you should be using the modern one”.

              Docker is great, but it comes with some specific understanding that isn’t necessarily obvious.

  • huskypenguin@sh.itjust.works · 7 days ago

    I love Docker, and backups are a breeze if you’re using ZFS or BTRFS with snapshot sending. That is the bummer about Docker: it relies on you to back it up instead of having a native backup system.
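The ZFS side of that backup flow looks roughly like this (the pool, dataset, and host names are made up for the example):

```shell
# Snapshot the dataset backing the Docker volumes:
zfs snapshot tank/docker@nightly

# Replicate the snapshot to another machine over SSH:
zfs send tank/docker@nightly | ssh backuphost zfs recv backup/docker
```

Because snapshots are atomic at the dataset level, this captures all volumes in one consistent state without Docker’s involvement.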

    • foremanguy@lemmy.ml (OP) · 7 days ago

      What are you hosting on Docker? Are you configuring your apps afterwards? Did you use prebuilt images or build them yourself?

      • huskypenguin@sh.itjust.works · 7 days ago

        I should also say I use Portainer for some graphical hand-holding. And I run Watchtower for updates (although Portainer can monitor a GitHub repo and run updates based on monitored merges).

        For simplicity I create all my volumes in the Portainer GUI, then specify the mount points in the Docker Compose file (Portainer calls this a stack for some reason).

        The volumes are looped into the base OS’s (TrueNAS SCALE) ZFS snapshots. Any restoration is dead simple. It keeps 1x yearly, 3x monthly, 4x weekly, and 1x daily snapshots.

        All media etc… is mounted via NFS shares (for applications like immich or plex).

        Restoration to a new machine should be as simple as pasting the compose file and restoring the Portainer volumes.

        • foremanguy@lemmy.ml (OP) · 6 days ago

          I don’t really like Portainer; for one, their business model is not that good, and they also do strange things with the compose files.

          • IrateAnteater@sh.itjust.works · 6 days ago

            I’m learning to hate it right now too. For some reason, it’s refusing to upload a local image from my laptop, and the alert that comes up tells me exactly nothing useful.

      • huskypenguin@sh.itjust.works · 7 days ago

        I use the *arr suite, a Project Zomboid server, a Foundry VTT server, Invoice Ninja, Immich, Nextcloud, qBittorrent, and Caddy.

        I pretty much only use prebuilt images; I run them like appliances. Anything custom I’d run in a VM with snapshots, as my Docker skills do not run that deep.

        • foremanguy@lemmy.ml (OP) · 6 days ago

          This is why I don’t get anything from using Docker: I want to tweak my configuration, and Docker adds an extra level of complexity.

          • Auli@lemmy.ca · 5 days ago

            Tweak what? Compiling with the right build flags? Been there, done that; not worth the time.

            • foremanguy@lemmy.ml (OP) · 2 days ago

              With a normal install, if I really want to dive into the config files and see how the thing works, I easily can; with Docker it’s something else.

  • PhilipTheBucket@ponder.cat · 7 days ago

    It’s hard for me to tell if I’m just set in my ways according to the way I used to do it, but I feel exactly the same.

    I think Docker started as “we’re doing things at massive scale, and we need to have a way to spin up new installations automatically and reliably.” That was good.

    It’s now become “if I automate the installation of my software, it doesn’t matter that the whole thing is a teetering mess of dependencies and scripted hacks, because it’ll all be hidden inside the container, and also people with no real understanding can just push the button and deploy it.”

    I forced myself to learn how to use Docker for installing a few things, found it incredibly hard to do anything of consequence to the software inside the container, and for my use case it added extra complexity for no reason, and I mostly abandoned it.

    • Croquette@sh.itjust.works · 7 days ago

      I hate how docker made it so that a lot of projects only have docker as the official way to install the software.

      This is my tinfoil-hat opinion, but to me, Docker seems to enable the “phone-ification” (for lack of a better term) of software. The upside is that it is more accessible to spin up services on a home server. The downside is that we are losing the knowledge of how the different parts of the software work together.

      I really like the Turnkey Linux projects. It’s like the best of both worlds: you deploy a container and a script sets up the container for you, but after that you have full control over the software, like when you install the binaries yourself.