Yo,

Wondering what the limit is when it comes to how many containers I can run. Currently I’m running around 15 containers. What happens if that’s increased to, say, 40? Also, can Docker containers go “idle” when not being used, to save system resources?

I’m running an Intel i7-6700K CPU. It doesn’t seem to be struggling at all with my current setup, except maybe when transcoding for Jellyfin.

  • atzanteol@sh.itjust.works · 1 hour ago

    Wondering what the limit is when it comes to how many containers I can run.

    Basically the same as the number of processes you can run.

    Use “docker stats” to see what resources each container is using.
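
    For instance (a minimal sketch; the format string is just one I’d start with):

        # One-shot snapshot instead of the live-updating view
        docker stats --no-stream

        # Trim the output to the columns you care about
        docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"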

  • Justin@lemmy.jlh.name · 54 minutes ago

    I have gone up to 300-400 or so. Currently running about 5 machines averaging about 100 containers each.

  • Docker containers aren’t virtual machines, despite acting like them. They don’t need compute resources sitting around doing nothing like a traditional VM does, because they don’t run a kernel of their own at all: a container is essentially just a set of processes running directly on the host’s kernel, isolated from the rest of the system.

    If the container isn’t doing anything, then it isn’t consuming resources.
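
    You can see this for yourself; here’s a rough sketch (assuming a container named jellyfin — substitute whatever you’re running):

        # Ask Docker for the container's main process ID
        docker inspect --format '{{.State.Pid}}' jellyfin

        # That PID is an ordinary entry in the host's process table
        ps -p "$(docker inspect --format '{{.State.Pid}}' jellyfin)"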

  • Onno (VK6FLAB)@lemmy.radio · 4 hours ago

    A Docker container is essentially a process running on your machine, just like any other process. It can be idle, stopped or hogging the CPU. You can use Docker’s resource constraints to limit usage if you want to: memory, CPU and network, to name a few.
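
    As a rough sketch of what the CPU and memory constraints look like (the image, name and numbers are just placeholders):

        # Cap the container at 512 MB of RAM and 1.5 CPU cores
        docker run -d --name myapp --memory=512m --cpus=1.5 nginx:alpine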

    So, can you run 40 processes?

    Very likely. Probably 400 or 4000, depending on CPU usage and memory.

    I ran that particular CPU with 64 GB of RAM and used it to run multiple virtual machines, my main Debian desktop and a VM specifically as a Docker host, running dozens of instances of Google Chrome without ever noticing it slowing down.

    Then the power cable shorted out and life was never the same. That was six months ago; the machine was a late 2015 iMac running macOS and VMware Fusion.

    • Voroxpete@sh.itjust.works · 2 hours ago

      I’ll add here that the “docker top” command lets you easily see the processes each of your containers is running.
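
      For example (container name is a placeholder):

          # Show the processes running inside a container, ps-style
          docker top jellyfin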

      If you prefer a UI, Dozzle runs as a container, is super lightweight, requires basically no setup, and makes it very easy to see your docker resource usage.
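
      If you want to try it, the usual invocation is a one-liner along these lines (this is from memory; check Dozzle’s README for the current image and ports):

          docker run -d --name dozzle \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            -p 8888:8080 amir20/dozzle:latest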

  • ShortN0te@lemmy.ml · 2 hours ago

    As was already said, Docker is not virtualization. The number of containers you can run depends on the containers and what applications are packaged in them. I am pretty sure you can max out any host with a single container when it runs computationally heavy software, and I am also pretty sure you can run thousands of containers on any given host when they are just serving a simple static website.

    • Voroxpete@sh.itjust.works · 2 hours ago

      Correct on both counts, although it is possible to set limits that will prevent a single container from using all your system’s resources.
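
      In a Compose file that could look something like this (a sketch; the service name and numbers are arbitrary):

          services:
            myapp:
              image: nginx:alpine
              deploy:
                resources:
                  limits:
                    cpus: "1.5"
                    memory: 512M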

  • Zak@lemmy.world · 4 hours ago

    Zero. It seems like software increasingly expects to be deployed in a container though, so that probably won’t last forever.

    • Voroxpete@sh.itjust.works · 1 hour ago

      While I understand the frustration of feeling like you’re being forced to adopt a particular process rather than being allowed to control your setup the way you see fit, containers proliferated so rapidly because they really do offer astonishing advantages over traditional methods of software development.

  • grimer@lemmy.world · 3 hours ago

    Right now I have 32 active stacks running, and a good number of them create at least one extra container, like a database. So I’m running around 60+ separate containers. The machine has an i5-6500 or so in it with 32 GB of RAM. I use Unraid as the NAS platform, but I do all the Docker stuff manually. It’s plenty fast for what I need so far… :)

  • Daniel Quinn@lemmy.ca · 4 hours ago

    You can’t really make them go idle, save by restarting them with a do-nothing command like tail -f /dev/null. What you probably want to do is scale a service down to 0. This leaves the declaration that you want to have an image deployed as a container, “but for right now, don’t stand any containers up”.

    If you’re running a Kubernetes cluster, then this is pretty straightforward: just edit the deployment config for the service in question to set scale: 0. If you’re using Docker Compose, I believe the value to set is called replicas and the default is 1.
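
    Both are one-liners (the names here are placeholders):

        # Kubernetes: scale an existing deployment down to zero pods
        kubectl scale deployment/myapp --replicas=0

        # Docker Compose: override the replica count at the command line
        docker compose up -d --scale myapp=0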

    As for a limit on the number of running containers, I don’t think one exists unless you’re running an orchestrator like AWS EKS that sets an artificial limit of… 15 per node? I think? Generally you’re limited only by the resources available, which means it’s a good idea to set limits on the amount of RAM/CPU a container can use.