

You can Secure Boot most distros these days. It’s not new either. Depends on who or what their trust anchor is, and whether it’s more limited than just having Secure Boot active.


Compressing it with HandBrake will probably not make it look worse. MPEG-2, as used on DVDs, is notoriously inefficient by today’s standards. Depending on the codec selected, you’ll get a fraction of the size with no visible difference.
Unless you mean to keep the DVD structure and playability in DVD players (including menus and everything), but I don’t think HandBrake can do that.


If you just want file sync, the obvious option is Syncthing. It’s established and highly regarded.


That is very unlikely to change by 2027 though.
I think it’s about printers being required by law to (covertly) watermark copies as such, making them somewhat traceable. This is supposedly to prevent duplication of protected works (books?), but also to prevent someone just using a printer to print money (badly, probably).
To my knowledge, all major brands incorporate something like this.


The one point that has basically been solved is NAT traversal, thanks to WireGuard, Tailscale and the like. The relevant parts are open source and can basically be used as a library.
I wish there was one. Thunderbird has given me nothing but issues. KMail is lacking basic features, as is Evolution. I obviously haven’t tried them all, but this already took long enough and I’m tired of it.


First, my context: I’m also running multiple Proxmox hosts (personal and professional), and have a paperless-ngx instance (personal/family). I tried Firefly, but the effort required to get it to a point where it would be of use to me was too high, so I dropped it. Haven’t used n8n.
For the setup I’d just use the Proxmox community scripts, in case you haven’t heard of them. They make updates trivial and lower the bar for just trying something to basically zero.
Paperless-ngx I actually use, because it means I can find things when I need them. Everything is automatically OCR’d and all you have to do is categorize documents; with time, it’ll learn and do this for you. You can (manually) set up your scanner to upload files directly into the “consume” folder and it just works. PC/server power is nearly irrelevant, it just means OCR takes slightly longer; otherwise it’s a web server. You can run this just fine on a Raspberry Pi.
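For reference, the consume-folder setup is just a bind mount in a typical compose file (paths here are examples; the in-container consume path is the image default as far as I know):

```yaml
services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx
    volumes:
      - ./consume:/usr/src/paperless/consume   # point your scanner's SMB/FTP upload here
      - ./data:/usr/src/paperless/data
      - ./media:/usr/src/paperless/media
    ports:
      - "8000:8000"
```

Anything dropped into ./consume on the host gets picked up, OCR’d and imported automatically.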
I don’t have any real automation set up, so I can’t really comment on that. My advice is to just install it, see what it does and how it feels. Try to anticipate if and how much automation you need. Many aspects of all this are of the “set up once” variety, where once it’s working, you don’t have to touch it again. Try to gauge whether the one-time effort is worth it for you, then go from there. As I said, it was worth it for paperless for me, but not for Firefly (but I might need to revisit this).
On Linux, running Jellyfin through Docker with GPU acceleration works fine, yes. But you need some options/flags to pass GPU access into the container. Guides and/or Docker tutorials exist and should cover that, as that’s basically the default setup these days.
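As a rough sketch of what that looks like in a compose file (assuming an Intel/AMD GPU using VAAPI; the device path and volume paths are examples, not a drop-in config):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    # Hand the render node to the container so the transcoder inside can use VAAPI.
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    volumes:
      - ./config:/config
      - ./cache:/cache
      - ./media:/media:ro
    ports:
      - "8096:8096"
```

On some distros you additionally have to sort out the render group ID so the container user is allowed to open the device; check the Jellyfin hardware acceleration docs for your GPU vendor.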
As for Bazzite and Docker (I just checked), no, it isn’t part of the base image and you can’t easily install it. That’s the downside of an immutable distro. I think Podman is available, which is compatible and FOSS, but there may be caveats to using that. There is a Bazzite version called bazzite-dx intended for developers, so that one would probably work fine for you out of the box. There shouldn’t be any real downside to using that compared to the mainline image, apart from being slightly larger because all the dev tools are installed, but do check that. My practical experience with Bazzite is limited.
My real recommendation is: just try it. Slap in a small/cheap SSD (~20 bucks) instead of whatever you’ve got in there now, install CachyOS and try it out. Then install Bazzite and try it out. By “try it out” I do mean setting up a copy or a test install of your required services (arr stack, Jellyfin, …) to see if everything is as you’d expect. Possibly install more distros to try them out, then make up your mind and actually fully migrate, or, if it doesn’t work out, go back to your currently installed drive. Installing a Linux distro takes like 10 minutes these days; then play around for however long you need. Since you already have it narrowed down to only two options anyway, that is most likely the best approach.
There’s a lot of well-meaning but not too well-informed advice in here. Since one of your goals is gaming, stay away from Mint. It can be made to work (well), but you have to get there first. It’s basically the recommendation people gave for decades, but there have been massive improvements across many distros while Mint just kind of stood still. There are still some things they do rather well, though.
CachyOS will do what you want it to, and it’s what I switched to about 8 months ago. It isn’t maintenance-heavy at all if you don’t want it to be. I think I’ve had to intervene once since I started using it, and that intervention was necessary or it wouldn’t have booted after updates. The official updater will tell you when that’s the case, as it lists critical news like that. Otherwise it just works, and it’s pre-configured and optimized for gaming. Under the hood it’s basically Arch, just without the fiddling to get it to a usable state. Because of that, there is also an enormous amount of information out there (Arch wiki) on how to do stuff.
Bazzite is a stark contrast in many ways, as it’s an immutable distro, but it’s also pre-configured and optimized (maybe not quite as much as CachyOS). It will also do what you want just fine. It is relatively “safe” due to the immutability, and updates are much rarer (and by definition always whole-system updates). I don’t know exactly how you’d run your services, but assuming they are dockerized or similar, that should be just fine. Please do some searching beforehand to confirm the base image contains what you need (presumably Docker and Docker Compose).


I’ll probably give this a try, thanks!
But I’m confused about your explanation: you say you didn’t want to contribute to the existing project as you didn’t know Dart/Flutter. Then you end up creating your project from scratch, using Dart/Flutter, to learn Dart/Flutter. Why not just contribute to the existing project, or fork it, instead of reinventing the (same) wheel?


DuckDNS was unreliable when I used it, but it’s been a while. I swapped over to desec.io, but their signups aren’t always open. Can highly recommend them though, and they offer many ways to update the IP, including the DynDNS(2) protocol or just ddclient.
Also works with certbot for Let’s Encrypt certificates using the DNS challenge.
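For the ddclient route, the config boils down to a few lines. This is only a sketch assuming deSEC’s dyndns2-compatible endpoint; the hostname and token are placeholders, so verify the exact server and IP-detection URLs against their docs:

```
# /etc/ddclient.conf (sketch; check deSEC's documentation for exact endpoints)
protocol=dyndns2
ssl=yes
use=web, web=https://checkipv4.dedyn.io/    # IP detection URL; placeholder
server=update.dedyn.io
login=yourdomain.dedyn.io                   # your full hostname
password='your-desec-api-token'             # API token, not your account password
yourdomain.dedyn.io
```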


Never run something like Vaultwarden over unencrypted traffic. Throwing in a self-signed cert is basically free insurance. You never know when something starts listening in, even on your “trusted” network. Why risk it?
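Generating one is a single openssl command (the hostname “vault.local” and the filenames are just examples):

```shell
# Self-signed cert + key, valid for one year, for a hypothetical host vault.local
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout vaultwarden.key -out vaultwarden.crt \
  -subj "/CN=vault.local" \
  -addext "subjectAltName=DNS:vault.local"
```

Point Vaultwarden (or the reverse proxy in front of it) at the resulting pair, and import the cert on your clients so they stop warning about it.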


Yes, but it isn’t available (yet). The Pebble 2 Duo does not, but it has already shipped. I don’t know how many are still available and/or will be made.
Currently the app also has zero support for anything health-related, including sleep. Whether that will be fixed by the time the PT2 ships, who knows. This is probably not a huge problem for OP, as he’s explicitly searching for a watch without smartphone reliance.
Even in the old app and on the old Pebble watches, anything health-related was an afterthought at best, and it isn’t officially a focus either. The new ones are using the same OS, so they’re incredibly similar. That is generally a good thing, but it also includes the lack of features related to anything “health”.


The modern Pebble has no heart rate sensor, and generally no useful exercise monitoring.


SSH over the internet is fine as long as it’s properly set up (no password auth, root login disabled, etc.). Obviously a VPN is even better.
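The usual hardening amounts to a handful of sshd_config lines (a sketch, adjust to your own policy; keep a working session open while testing so you don’t lock yourself out):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```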


You can try to refund anyway and explain the reason in the text box. That has worked for me in the past; there are actually people reading these, as far as I can tell. If it doesn’t work, all it cost was like 3 minutes.


Or if you have separated your devices into subnets/VLANs, which becomes more important as you get more hardware that you don’t really trust.


Multiple times a day and many times a day aren’t necessarily the same thing. Also, just having a 1-2 hour timeout might still be a viable option: it prevents repeated spin-ups but still allows spin-down during longer idle periods.
While probably not worth it for your particular case, it might well be for others reading this. Ideally, one could observe the access patterns for a while and find a suitable timeout setting.
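For reference, on drives that support it the spin-down timeout can be set with hdparm; values 241-251 encode (n-240) x 30 minutes, so 242 means one hour (/dev/sdX is a placeholder for your drive):

```
# Set a 1-hour idle spin-down timeout
hdparm -S 242 /dev/sdX
# Check the drive's current power state without waking it
hdparm -C /dev/sdX
```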
Maybe look into ownCloud. That’s the project Nextcloud was forked from many years ago. It’s very much still around and has a very different philosophy: a much more minimalistic approach with a focus on stability. That’s actually the reason the people behind Nextcloud had to fork it, because all their additional features (bloat) weren’t accepted upstream.