

ZFS uses RAM intensively for caching (the ARC) — way more than traditional filesystems. The commonly recommended cache size is 2 GB plus 1 GB per terabyte of pool capacity. For my server, that works out to roughly three quarters of the RAM dedicated entirely to the filesystem.
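The ARC does shrink under memory pressure, but it can also be capped explicitly. A minimal sketch, assuming a Debian-based Proxmox host with OpenZFS; the 8 GiB limit is purely an illustrative number:

```sh
# /etc/modprobe.d/zfs.conf — cap the ARC at 8 GiB (value is in bytes)
options zfs zfs_arc_max=8589934592

# Or apply it to the running system without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```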

Right at this moment, I’m rebuilding my homelab after a double HDD failure earlier this year.
The previous build had a RAID 5 array of three 1TB Seagate Barracudas that I picked out of the scrap pile at work. I knew what I was getting into and only kept replaceable files on it. When one of the drives started doing the death rattle, I decided to yank some harder-to-acquire files to my 3TB desktop HDD before trying to resilver the entire array. Guess which device was the next to fail. I could mount and read it, but every operation took 2-5 minutes. SMART showed a reallocation count in the thousands. That drive contained some important files that I couldn’t replace, which were backed up to the (now dead) server. Fortunately
ddrescue managed to recover damn near everything and I only lost 80 kilobytes out of the entire disk. That was a very expensive lesson that I’ve learned very cheaply.

The new setup has a RAIDz1 pool of 3x 4TB IronWolf disks (constrained by the available SATA ports on the motherboard), plus a new SSD for the OS and 16GB of RAM (upgraded from literally the first SSD I ever bought and 10GB of mismatched DDR3).
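Proxmox can build the pool from the web UI (Disks → ZFS), but for reference the command-line equivalent looks roughly like this; the pool name and device IDs are made up for illustration:

```sh
# Create a RAIDz1 pool across the three 4TB disks (ashift=12 for 4K sectors)
zpool create -o ashift=12 tank raidz1 \
  /dev/disk/by-id/ata-IRONWOLF_4TB_AAAA \
  /dev/disk/by-id/ata-IRONWOLF_4TB_BBBB \
  /dev/disk/by-id/ata-IRONWOLF_4TB_CCCC

# Sanity check: one raidz1 vdev, three disks, all ONLINE
zpool status tank
```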
Mounting it was a bit of a dilemma. The previous array was simply mounted on the host via fstab and bind-mounted into the containers. This time I definitely wanted the storage to be managed from Proxmox’s web UI and to be able to create VDs and LXC volumes on it. Some community members helped me choose ZFS over LVM-on-RAID5. Setting up the correct permissions wasn’t as much of a headache as last time. I’ve just managed to get a Samba+NFS+HTTP file server and Jellyfin running and talking to each other. Forgejo and Nextcloud will be next.
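For contrast, here’s roughly what the two approaches look like side by side; the storage ID, pool name, paths, and container ID are all hypothetical:

```sh
# Old approach: host mount (from fstab) bind-mounted into an LXC container
pct set 101 -mp0 /mnt/storage/media,mp=/mnt/media

# New approach: register the ZFS pool as Proxmox storage, then allocate
# container volumes and VM disks on it from the web UI or the CLI
pvesm add zfspool tank-zfs --pool tank --content images,rootdir
pct set 101 -mp1 tank-zfs:32,mp=/srv/data   # new 32 GiB volume on the pool
```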