

My gut reaction too. But their readme/faq makes a lot of sound points. Also Nextcloud is one of the main contributors, so you know it’s serious. Also Proton and Ionos (which I admit I’d never heard of, but they seem big)


Tl;dr: don’t let perfect be the enemy of good.
I don’t know about Codeberg’s policy on the matter, but morally I think there’s nothing wrong with putting open-source mods for closed-source games on Codeberg.
I always use FOSS software whenever possible, even if they’re lacking in some aspects compared to closed-source alternatives, but have no problems with closed source games.
Games are entertainment, not utility. Games have a short lifecycle compared to utility software. Games are often a one-time experience, and when you’ve finished a game it’s done. (Nobody ever “finished” their use of Notepad). Meanwhile developers gotta eat.
There’s also some precedent for open-source projects that can only be fully accessed with closed-source software, like open hardware using Eagle for PCB and schematic design (before KiCad truly took off), or Fusion 360 for CAD (FreeCAD development is accelerating, though).
Good catch, that licence does not look very Libre =\


Well put, thanks for sharing. I submitted many of the same arguments as you, but not as eloquently or thoroughly :)


“Europe is the American tech sector’s biggest market after the United States itself. It all depends on trust. Trust requires dialogue,” Smith said.
Trust has been destroyed from the top. Trust is easy to lose and hard to gain.


Yikes, are those required? Looks very rug-shaped, perfect for pulling things. Or not. Who knows?


Yeah it’s a normal model, but BitWarden is a bit special in that their original server-side implementation was enough of a pain to self-host on a small scale that an alternative implementation, Vaultwarden, was created. And Vaultwarden became very popular in self-hosted circles. Now, many years later, BitWarden offers a Lite server which scales down. I think it’s a good thing, just a bit unusual. I’m struggling to think of similar examples.
I’m sure Vaultwarden still funnels plenty of enterprise use toward BitWarden, since Vaultwarden users still use the official BitWarden clients.


Forward-thinking, venture-capital-funded companies are getting rarer, hence the question about motivation. Especially over the last few years, many VC-backed FOSS companies have squeezed in the other direction (Gitea, Terraform, Docker). So all kudos to BitWarden for launching Lite.
What you say about brand dominance and brand protection makes a lot of sense. It’s not a good look for them that a large number of people choose an unofficial implementation over theirs. And should there ever be a catastrophic security issue in Vaultwarden, it would still reflect badly on BitWarden, since that kind of nuance (“unofficial server-side implementation”) tends to get lost in reporting. Having more IT workers self-host the official version probably also helps pave the way for bringing enterprise BitWarden into companies.
Valve are a bit of a unicorn though, because they are privately owned. There are no investors demanding ROI next quarter, which gives them the freedom to think long term.
When Microsoft launched Windows 8 and the Microsoft Store, Valve took that as an existential threat to their whole business model (the Steam store). Valve feared that Microsoft was trying to position itself like Apple on iOS and Google on Android, where there is only one platform store, all apps are purchased through it, and the platform store takes that sweet, sweet 30% cut. So Valve pivoted to ensure the Steam store would not become obsolete, and to give customers a reason to keep using it.
And what they achieved is awesome, for Linux, for Valve and for gamers. But it took nearly a decade, which is a level of patience few companies have.


Wonder what’s the reasoning behind offering this Lite version. I don’t imagine competing with Vaultwarden is very lucrative financially.


To be honest I don’t remember why I set up Gitea with MySQL instead of SQLite (or MariaDB); it was quite a few years ago. SQLite would probably be fine for my single-user instance.


I just did it not long ago. Gitea -> Forgejo 10 -> Forgejo 11 LTS, in Docker. Surprisingly quick, painless and smooth.
(My only issue was not Forgejo but MySQL. The hardware is ancient, and Docker Compose pulled down a new version of mysql8 at the same time as pulling Forgejo. The new version of mysql8 didn’t support my CPU architecture. The easy fix was to change the tag to mysql8oraclelinux7 in Docker Compose and pull that image. There is an issue with solutions in the MySQL Docker GitHub repo.)
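If it helps anyone, pinning an explicit image tag in the compose file avoids that kind of surprise pull. A minimal, hypothetical fragment (the service name and tag here are just examples, not the ones from my setup):

```yaml
services:
  db:
    # Pin an explicit tag instead of a floating one like mysql:8, so that
    # "docker compose pull" can't silently move you to a build the CPU
    # doesn't support. Tag and service name are illustrative only.
    image: mysql:8.0-oracle
```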


Can attest that Folder Sync is excellent. I use it all day (in the background) for two-way sync (notes) and backup of photos, videos, etc.
Though a small PSA on setting up:
I once set up a new share on a new phone with two-way sync, and the app decided to sync the (newer) empty directory to the server (i.e. delete everything) instead of pulling the files from the server to the phone.
Easy fix: Restore notes from backup (step 0: have backups in the first place), then do an initial 1-way sync from server to phone, then change the sync job to two-way.


For JPGs, no, they will not get smaller. They may even be a smidge bigger if you zip them, though usually not by enough to make a practical difference.
Zip does generic lossless compression, meaning the archive can be extracted into a bit-perfect copy of the original. Very simplified, it works by finding repeating patterns, replacing each long pattern with a short key, and storing an index used to restore the original patterns on extraction.
JPGs use lossy compression, meaning some detail is lost and can never be reproduced. JPG is highly optimized to drop only the details that matter least for human perception of the image.
Since a JPG is already compressed, there are few repeating patterns (duplicate information) left for the zip algorithm to find.
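You can see this for yourself with gzip (same DEFLATE algorithm family as zip). Here /dev/urandom stands in for JPEG data, since both look like noise to a compressor:

```shell
# Repetitive text compresses dramatically; already-compressed or random data doesn't.
yes "the same line, over and over" | head -c 100000 > repetitive.txt
head -c 100000 /dev/urandom > jpeg-like.bin   # stand-in for a JPG's data

gzip -k repetitive.txt jpeg-like.bin          # -k keeps the originals around

wc -c repetitive.txt repetitive.txt.gz jpeg-like.bin jpeg-like.bin.gz
```

The repetitive file shrinks to a few hundred bytes, while the random file comes out slightly *larger* than it went in, because of gzip’s fixed header overhead.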


Do you version your compose files in git? If so, how does that work with the Dockge workflow?


I highly recommend you use Proxmox as the base OS. Proxmox makes it easy to spin up virtual machines, and easy to back up and revert to backups. So you’re free to play around and try stupid stuff. If you break something in your VM, just restore a backup.
In addition to virtual machines, Proxmox also does “LXC containers”, which are system-level containers. They are basically a very lightweight virtual machine, with some caveats, like running the same kernel as the host.
Most self-hosting software is released as a Docker image. Docker provides application-level containers, meaning only the bare minimum to run the application is included. You don’t enter a Docker container to update packages; instead you pull down a new version of the image from the author.
There are 3 ways to run Docker on Proxmox: directly on the Proxmox host (generally discouraged), inside an LXC container, or inside a full VM (the commonly recommended option).
The “overhead” of running docker inside a VM on the host is so negligible, you don’t need to worry about it.


I had never heard of dockge before, but this sounds like the killer feature for me:
File based structure - Dockge won’t kidnap your compose files, they are stored on your drive as usual. You can interact with them using normal docker compose commands
Does that mean I can just point it at my existing docker compose files?
My current layout is a folder for each service/stack, containing docker-compose.yaml plus the data folders etc. for that service. docker-compose.yaml and related config files are versioned in git.
I have Portainer, but rarely use it, and I won’t let it manage the configuration, because that interfered with versioning the config in git.
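For what it’s worth, that layout can be sketched like this (the service name, image, and paths are made-up examples, assuming data directories are kept out of git):

```shell
# One folder per stack: compose file and data dirs live side by side.
mkdir -p stacks/uptime-kuma/data

cat > stacks/uptime-kuma/docker-compose.yaml <<'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - ./data:/app/data
    ports:
      - "3001:3001"
EOF

# Version the config, not the runtime data.
printf 'data/\n' > stacks/uptime-kuma/.gitignore
command -v git >/dev/null && git -C stacks/uptime-kuma init -q || true
```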


The article introduction is gold:
In the unlikely case that you have very little RAM and a surplus of video RAM, you can use the latter as swap.


Thanks for sharing! TIL about autofs. Now I’m curious to try NFS again.
What’s the failure mode if the NFS share happens to be offline when PBS initiates a backup? Does PBS try to back up anyway? What if the NFS share is offline while PBS boots?
EDIT: What was the reason for bind mounting the NFS share via the host to the container, and NFS mounting from NAS to host?
I did the NFS mount directly in PBS. (But I am running my PBS as a VM, so I had to do it that way.)
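For anyone else reading, an autofs direct map for a share like this could look roughly as follows (hostname, export path, and mount point are made up); autofs mounts the share on first access and unmounts it after the idle timeout, which sidesteps the boot-order question:

```
# /etc/auto.master.d/pbs.autofs
/-  /etc/auto.pbs  --timeout=300

# /etc/auto.pbs
/mnt/pbs-datastore  -fstype=nfs4,rw  nas.lan:/export/pbs
```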


I run PBS as a virtual machine on Proxmox, with a dedicated physical harddrive passed through to PBS for the data.
While this protects from software failures of my VMs, it does not protect from catastrophic hardware failure. In theory I should be able to take the dedicated harddrive out and put it in any other system running a fresh PBS, but I have not tested this.
I tried running the same PBS with an external NFS share, but had speed and stability issues, mainly due to the hardware of the NFS host. And I wasn’t aware of autofs at the time, so the NFS share would stay disconnected.
Yay, v15 is LTS