• 1 Post
  • 22 Comments
Joined 1 month ago
Cake day: January 28th, 2025

  • Krik@lemmy.dbzer0.com to Selfhosted@lemmy.world · Proxmox setup - help needed
    edited 8 hours ago

    Do I use BTRFS or ZFS? I tend to use ZFS because of its advantages when making backups. What would you do?

    VMs are usually I/O-starved, so I would go as lightweight as possible and choose Ext4 or XFS (depending on what the VM is used for). The VMs can be backed up whole by Proxmox. You have more than enough space to do that and it’s considerably easier to set up. And honestly, how big could the containers and VMs be? I’d guess 50-200 MB per container and a few GB per VM. That’s almost nothing.
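    Whole-VM/container backups can be done from the Proxmox GUI or with its `vzdump` tool on the CLI; a minimal sketch (the VMID 100 and the storage name `local` are placeholder examples, not from the original post):

```shell
# Snapshot-mode backup of guest 100 to the storage named "local",
# compressed with zstd (VMID and storage name are examples).
vzdump 100 --storage local --mode snapshot --compress zstd

# Or back up every guest on the node in one go:
vzdump --all --storage local --mode snapshot --compress zstd
```

    In practice you would set this up as a scheduled backup job in the GUI rather than run it by hand.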

    Do I use QEMU/KVM virtual machines or LXC/LXD containers? Performance-wise QEMU emulating the host architecture should be the way to go, right?

    LXC containers are way more lightweight than VMs. It depends on what you want to do. Docker and a file server have worked better in a VM for me so far, but Pi-hole and Jellyfin run perfectly in a container.

    I shy away from running all services as Docker on the same machine for backup/restore purposes and rather have VMs per service. Is there anything wrong with this approach?

    I would go for LXC first. If that isn’t possible or too cumbersome, I would try Docker (in a VM) next, and one VM per service last, as that needs the most resources.
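    Creating an LXC container on Proxmox can be sketched on the CLI with `pct` (the VMID, template version and resource sizes below are assumptions — pick whatever `pveam available` actually lists):

```shell
# Refresh the template list and download a Debian template.
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# Create an unprivileged container from it and start it.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pihole --memory 512 --cores 1 --unprivileged 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```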

    I’d love to keep NextcloudPi (because it’d make it easy to migrate settings and files) and there’s an LXD container for it. Would you recommend doing a switch to Nextcloud AIO instead?

    Sorry, no idea.

    I’ve equipped the Deskmeet X300 with a WiFi card and antennas. AFAIU, trying to use WLAN instead of LAN will create some trouble. Has anyone successfully run Proxmox on a machine with WLAN instead of LAN access?

    I would always try to connect it to LAN.

    I’m aware that Proxmox comes with a firewall, but I don’t feel very comfortable using a software firewall running on the same machine that hosts the virtual machines. Is this just me being paranoid, or would you recommend putting a hardware firewall between the internet access and the Proxmox server?

    No idea. I wouldn’t mind a firewall container. If something breaks through, you are fucked one way or the other. The firewall in your router isn’t much different from any other.
    You should always use Wireguard or another VPN to access your network from the outside.
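    A WireGuard setup boils down to one small config per end; a sketch of a server-side `/etc/wireguard/wg0.conf` (keys, addresses and the port are placeholders):

```ini
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one block per client device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

    Bring it up with `wg-quick up wg0` and forward UDP port 51820 on the router; the only thing exposed to the internet is a port that doesn’t even answer unauthenticated packets.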

    What else should I think of, but haven’t talked about/asked yet?

    Helper scripts for beginners: https://community-scripts.github.io/ProxmoxVE/
    Just give them a look.

    And it seems you are ignoring Proxmox’s LXC containers. They are one of the main reasons to pick that software.

    Edit: As an additional note: I ran about 6 or 7 VMs on a mini PC (Intel N100) with 16 GB RAM. The RAM was almost used up and the CPU was at ~15 %.
    I then switched mostly to LXC with only one remaining VM. The CPU is now at ~1 % and RAM usage went down to 3 GB, while still providing the same services as before.
    The power of containers, baby! :D







  • It seems a move intended not only to help Ukraine repel Russia, but also to end defense dependence on the US, on the premise that that partner is no longer reliable.

    Which is a good thing IMO. From this side of the large pond the USA looks more and more like a bully on par with Russia and China.

    The east of Ukraine is fucked for decades to come, but Russia is fucked too. Both countries have burnt through a lot of their Soviet-era arsenal and struggle to keep their troops supplied. Ukraine has it easier because the European countries help with weapons, munitions and other equipment.

    Unfortunately Russia can sustain the current attrition rate for another 5-10 years before the situation becomes truly unbearable for them. My fear is that by then Ukraine might have collapsed.

    The best-case scenario is that Putin dies soon and his successor ends this stupid war.




  • Is SSD really necessary? Everything I search up says SSDs have worse retention than HDD in cold storage. A couple TB of HDD is pretty cheap these days, and seems like a better cold storage option.

    SSDs are by design more robust: no moving parts, and they can work in much harsher conditions than HDDs will ever be able to. The JEDEC standard requires every consumer SSD to retain data for 1 year while powered off at 30 °C (I think). That’s the minimum it has to achieve, but usually they do better than that. Do not buy the cheapest thumb drives, because they contain all the flash that wasn’t good enough to put into SSDs.
    Btw, you need to fire HDDs up regularly too, or the motor gets stuck. I think the recommendation was every 3-6 months.

    Yes, so now I’m thinking a rotation cycle. About every 5 years replace the drives with new ones, copy over all data.

    Don’t make it a flat 5 years. Let software monitor the SMART values of the drives and send a notification when the values indicate an increased chance of a dying HDD/SSD.
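    smartmontools can handle that monitoring: its `smartd` daemon watches SMART values and mails you on changes. A sketch of `/etc/smartd.conf` (device name and mail address are placeholders):

```
# Monitor /dev/sda: track/report all SMART attributes (-a),
# run a long self-test every Sunday at 03:00 (-s), mail on problems (-m).
/dev/sda -a -s L/../../7/03 -m admin@example.com

# Or instead, simply monitor every device smartd can find:
DEVICESCAN -a -m admin@example.com
```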

    Does this matter if I have a SATA->USB cable stored with it?

    Those cables are the first thing to fail, followed by the USB controller chip in the tray. Keep it as simple as possible. Removable trays are probably the best way, but I’m not sure how much wear they can take.

    Do not buy 2.5" drives. That class will die out soon™. No new 2.5" HDDs have been introduced in years, and 2.5" SSDs are being replaced by M.2 ones because of the faster connection.


  • Printing the photos won’t help much. After 20 or so years they are all discolored. You can’t prevent that.

    I think SSDs might be the best storage medium for you. Consumer-grade SSDs have a 1-year data retention when powered off. That means at least once per year you have to power the drive on and copy the data around once to refresh the cells. This way it’ll probably last several hundred years.
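    The yearly refresh can be scripted: read every file, copy it out, verify the checksum, and write it back so each flash cell is rewritten with fresh data. A minimal sketch in Python (the paths and the copy-via-scratch-dir approach are my assumptions, not a standard tool):

```python
import hashlib
import shutil
from pathlib import Path

def refresh(src: Path, scratch: Path) -> None:
    """Copy every file out to a scratch dir and back again, verifying
    the checksum, so the SSD's cells are rewritten with fresh data."""
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        original = hashlib.sha256(f.read_bytes()).hexdigest()
        tmp = scratch / f.name
        shutil.copy2(f, tmp)      # read the cells, write a copy elsewhere
        if hashlib.sha256(tmp.read_bytes()).hexdigest() != original:
            raise IOError(f"checksum mismatch while refreshing {f}")
        shutil.copy2(tmp, f)      # write the verified data back to the SSD
        tmp.unlink()
```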

    You can’t exactly make it fool-proof. Outside people will never know what you did to create your backup and what to do to access it. Who knows if the drive’s file system or file types will still be readable after 20 years? Who knows if SATA and USB connectors will still be around after that time?
    For example, it is very likely that SATA will disappear within the next 10-15 years, as HDDs become more and more an enterprise thing and consumers switch to M.2 SSDs.


  • Btrfs and ZFS are self-healing.

    You can write a script that does the error checking and autocorrection yourself, but that needs at least a second HDD. Both drives hold the same data, plus there is a file or database with the checksums of that data. The script computes the actual checksums of the two copies and compares them with the DB entry. If all three match: perfect. If they don’t, the copy for which two of the three checksums agree is the good one; it either replaces the faulty copy or corrects the DB entry, whichever is defective. That’s it. It doesn’t have to be more complicated.
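    A minimal sketch of that scheme in Python — two copies plus a checksum database, here just a JSON file (the layout and names are illustrative assumptions):

```python
import hashlib
import json
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_and_repair(copy_a: Path, copy_b: Path, db_file: Path) -> None:
    """Compare both copies of each file against the stored checksum;
    whichever two of the three agree win, and the odd one out is fixed."""
    db = json.loads(db_file.read_text())
    for name, stored in db.items():
        a, b = copy_a / name, copy_b / name
        ha, hb = sha256(a), sha256(b)
        if ha == hb == stored:
            continue                   # all three match: perfect
        if ha == stored:               # copy B is the bad one
            shutil.copy2(a, b)
        elif hb == stored:             # copy A is the bad one
            shutil.copy2(b, a)
        elif ha == hb:                 # both copies agree: DB entry is stale
            db[name] = ha
        else:
            raise RuntimeError(f"{name}: no two checksums agree")
    db_file.write_text(json.dumps(db))
```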




  • For local backups it depends on what you want to have:

    • The cheapest option is a USB thumb drive or external drive. But you have to plug it in regularly and copy your backup onto it.
    • The lazy option is to buy a NAS and configure a backup job that runs regularly. Versioned, incremental, differential and full backups are all possible, as is WORM storage for a bit of extra security. You can configure a NAS to turn on only at specified times, do a backup and then turn off again, which improves protection against encrypting malware; WORM helps there too.
      Or just let it run 24/7, create backups every hour and install extra services on it, like AI-powered image analysis that identifies people and objects and automatically tags your photos. Cool stuff! Check out QNAP and Synology, or build a NAS yourself.
      A NAS can also announce its shares on the LAN by itself; any computer can connect to them automatically if access isn’t secured by user/password or certificate.

    I recommend buying a NAS.
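    Whether the target is a NAS share or a plain disk, a versioned incremental backup job can be sketched with `rsync --link-dest`: unchanged files are hard-linked to the previous snapshot, so every snapshot looks complete but only changed files use new space (paths are placeholders):

```shell
# Nightly snapshot job, e.g. run from cron. Paths are examples.
SRC=/home/user/data
DEST=/mnt/nas/backups
NEW="$DEST/$(date +%F_%H%M)"

rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$NEW/"
ln -sfn "$NEW" "$DEST/latest"   # "latest" always points at the newest snapshot
```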