I’ve been running a few different ways of backing up files to my NAS (which then backs up to other locations…) - mostly Syncthing for photos and large collections of files, though I tend to use rsync to push out config backups to the NAS once something’s working.

But, the NAS is only powered up a few times a day (to save on electricity costs), which is fine for manual pushes, but makes scheduling backups a bit tricky.

It dawned on me that it might be better for the NAS to pull the files via rsync instead of pushing them.

Anyone tried this route and have any advice?

  • notabot@piefed.social · 21 hours ago

    The big difference between pull and push is which system has keys to access the other, and what an attacker could do with them. With your home network you might ultimately decide this isn’t too important, but it’s worth at least thinking about anyway.

    In a push setup, each machine has some way (likely an SSH key) to authenticate to the NAS and push backup files to it. Each server has a different key to access a different path on the NAS, so if a server is compromised the attacker only gets access to that part of the NAS data, and if the NAS gets compromised, the attacker can’t connect to anything but has access to the encrypted backups (you do encrypt the backups you care about, right?). This limits how much extra data the attacker can read, but has the downside you mentioned.
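    A common way to enforce that per-key, per-path restriction is a forced command in the NAS’s authorized_keys - for example with rrsync, the restricted-rsync wrapper that ships with rsync. The key, user and paths below are illustrative, not anything from a real setup:

    ```
    # On the NAS, in the backup user’s ~/.ssh/authorized_keys (one line per server);
    # rrsync -wo confines this key to writing under the given directory only:
    command="rrsync -wo /srv/backups/server1",restrict ssh-ed25519 AAAA... server1@backup

    # On server1, remote paths are then relative to the confined directory:
    rsync -a /etc/myapp/ backup@nas:myapp/
    ```

    With that in place, even a compromised server1 can only write into its own corner of the NAS.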

    In a pull setup, the NAS has to have a way to connect to each server, typically as root for file access permissions. This means that if a server is compromised the attacker doesn’t gain a way to access even a limited portion of the NAS, but if the NAS is compromised they gain access to keys to root access on every server, which is likely catastrophic.

    A compromise solution can work. Have each server back up to a local file, then give the NAS permission to retrieve only that file, rather than root access. Whilst rsync isn’t going to work for creating the single file backup, something like borg or restic would. This does mean you need more disk space on each server, but it also means that the server doesn’t need direct access to the NAS, and the NAS only needs unprivileged access to each server, mitigating the risk of a compromise.
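    As a sketch of that split (repo paths and hostnames here are made up): each server backs up into a local borg repository on its own schedule, and the NAS, whenever it’s powered up, pulls the repository directory as an unprivileged user:

    ```
    # On each server, from cron (no NAS access needed):
    # one-time setup: borg init --encryption=repokey /var/backups/borg
    borg create /var/backups/borg::'{hostname}-{now}' /etc /home

    # On the NAS, pulling as an unprivileged, read-only account on the server:
    rsync -a backup@server1:/var/backups/borg/ /srv/backups/server1/
    ```

    The borg repo is encrypted at rest, so the NAS never needs to see the plaintext data at all.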

    • dengtav@lemmy.ml · 20 hours ago

      I also thought about this, but instead of letting the NAS pull the backups, just let the NAS ping the local machine whenever it gets powered on.

      This way, the local machine would know, when it’s time to push.
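      A minimal version of that signalling (hostnames and paths are assumptions, not a tested setup): the NAS touches a flag file on each machine at boot, and a frequent cron job on the machine pushes when the flag appears:

      ```
      # On the NAS, run once at boot:
      ssh backup@server1 touch /tmp/nas-online

      # On each machine, from cron every few minutes:
      if [ -f /tmp/nas-online ]; then
          rm -f /tmp/nas-online
          rsync -a /etc/myapp/ backup@nas:/srv/backups/server1/
      fi
      ```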

      • notabot@piefed.social · 20 hours ago

        That’s certainly an option, but depending on how paranoid you are that still typically means that a compromised server can overwrite all of its backup images on the NAS, which could leave you in trouble. If you can configure your NAS to only allow creation of new backups but not allow changing old ones, you might be ok.
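        borg can do exactly that on the NAS side: a forced command in authorized_keys pins a pushing server to append-only mode, so a compromised client can add new backups but can’t destroy old ones (repo path and key are illustrative):

        ```
        # authorized_keys entry on the NAS for server1’s push key:
        command="borg serve --append-only --restrict-to-repository /srv/backups/server1",restrict ssh-ed25519 AAAA... server1@backup
        ```

        One caveat: in append-only mode client deletes and prunes are only recorded in the transaction log, not applied, so you have to prune from the NAS side (with the flag lifted) yourself.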

    • SayCyberOnceMore@feddit.uk (OP) · 21 hours ago

      Hey, some good points there that I hadn’t considered at the time - I was only thinking of the data… good point about the SSH keys, which is exactly what I would’ve done.

      So, yeah, local backups on each device (kinda a good idea anyway) and then restricted pull from the NAS… nice…

  • lepinkainen@lemmy.world · 19 hours ago

    The one that’s offline more often manages the schedule.

    In my case the NAS is on 24/7 so the other machines push backups whenever they’re on.

    In your situation the NAS is on at unpredictable times, so the NAS pulling backups will most likely work better.
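    If the NAS runs cron, that pull can be tied straight to power-on (user and paths here are placeholders):

    ```
    # NAS crontab: pull a couple of minutes after each boot,
    # once the network is likely up
    @reboot sleep 120 && rsync -a backup@server1:/var/backups/ /srv/backups/server1/
    ```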

    • SayCyberOnceMore@feddit.uk (OP) · 14 hours ago

      Yes, that was my thinking - or just run hourly backups where some will work and some won’t…

      Bit messy, but simple.

  • nesc@lemmy.cafe · 22 hours ago

    Depends, in your case pull works. There is no universal answer here.

  • palordrolap@fedia.io · 18 hours ago

    I still back up my files the most basic way, that is, create an archive locally, connect external storage and copy it there. Then disconnect external storage. The archive is made onto a separate internal drive and I keep the most recent one there, so I don’t even need the external one for minor accidents.

    I think only once in the last decade or so have I wanted (but never needed) to pull something back from external, but it’s nice to know it’s there.

    The main downside to this method is that it doesn’t de-duplicate, so keeping several archives takes a lot more space than it would otherwise.
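    That basic approach is easy to script; a minimal sketch (paths in the usage example are placeholders), using dated archive names so a few generations can coexist:

    ```shell
    #!/bin/sh
    set -eu

    # Create a dated .tar.gz of directory $1 inside directory $2,
    # keeping only the three most recent archives.
    make_backup() {
        src=$1
        dest=$2
        mkdir -p "$dest"
        tar -czf "$dest/backup-$(date +%Y-%m-%d).tar.gz" \
            -C "$(dirname "$src")" "$(basename "$src")"
        # Prune: list newest first, delete everything after the third
        ls -1t "$dest"/backup-*.tar.gz | tail -n +4 | xargs -r rm --
    }
    ```

    Then something like `make_backup "$HOME/documents" /mnt/backupdrive` before disconnecting the external drive.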

  • drkt@scribe.disroot.org · 22 hours ago

    I have an LXC that pulls in files as backups using rsync, and pushes backups to a remote location using borg.

    Neither pushing nor pulling has any effect on the integrity of the backup, so just do which makes your life easier. I’m doing both because managing all of my backups from a single location is just easier.

    • SayCyberOnceMore@feddit.uk (OP) · 21 hours ago

      So you’re effectively using the LXC as just a backup traffic coordinator?

      Or, is that on a NAS also keeping a local copy?

      • drkt@scribe.disroot.org · 21 hours ago

        The LXC also has storage attached and houses backups, but it’s not served in any accessible way. I just use SFTP if I need to pull some files. The off-site is for if I somehow destroy both my live copy and the backup copy.