Mine is using a network share to transfer files faster than any USB device we have at home.
I take my shitposts very seriously.


To delegate the responsibility of securing login data to a company better equipped to deal with it (in theory at least). You can also use an external OIDC provider.


Tailscale. Create an account, put the client on the LAN device, put the client on the remote device, log in on both, you’re done. It bypasses NAT, CGNAT, and the firewall through some UDP black magic fuckery. As long as the router allows outgoing connections, it will work.
If the factory resets cause the router to lose connection to the ISP, though, then nothing will work.
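For reference, the whole setup on a typical Linux box is a couple of commands (the install script URL is from Tailscale’s own docs; most distros also package the client):

```shell
# Install the Tailscale client (official convenience script;
# check your distro's repos for a native "tailscale" package first).
curl -fsSL https://tailscale.com/install.sh | sh

# Bring the interface up and authenticate in the browser.
# Repeat both steps on the remote device, under the same account.
sudo tailscale up

# Verify: list the machines in your tailnet and their tailnet addresses.
tailscale status
```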
Tailscale Funnel will let you expose a host to everyone on the internet. You’ll need the Tailscale client running on either the Jellyfin host or a reverse proxy pointing to it. Tailscale itself will act as a reverse proxy with TLS encryption, plus a DNS server.
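As a sketch, assuming Jellyfin on its default HTTP port 8096 (the serve/funnel CLI syntax has changed across client versions, so check `tailscale serve --help` on yours):

```shell
# Share Jellyfin inside the tailnet only, over HTTPS with a
# Tailscale-issued certificate:
tailscale serve --bg 8096

# Or expose it to the whole internet via Funnel:
tailscale funnel --bg 8096

# See what is currently being served/funneled:
tailscale serve status
```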
Exposing a service to the internet always carries some risk. You should definitely run your LXCs as unprivileged unless you specifically need otherwise, to limit the damage if an attacker escapes the container, or put the services in full virtual machines.
external access
Do you want the Jellyfin server to be accessible from only within your tailnet, or anywhere from the internet?


If you have IPv4 addresses, I guarantee you’re behind at least one NAT gateway. What you need is a Tailscale subnet router, or something equivalent from another service.
In the most basic configuration, the Tailscale client facilitates communication (through some UDP black magic fuckery) between two hosts it is running on, both connected to the same tailnet (the virtual network between Tailscale hosts). For this purpose, it uses addresses from the 100.64.0.0/10 “shared address space” subnet. These addresses are only reachable from within your tailnet.
If you want an entire subnet (e.g. your LAN) to be accessible within your tailnet, you need to set up a subnet router. This involves configuring the Tailscale client on a device within the target subnet to advertise routes (tailscale set --advertise-routes=192.168.1.0/24), allowing the host to advertise routes in the admin page (Machines -> … -> Edit routes), and configuring the Tailscale client on external hosts to accept advertised routes (tailscale set --accept-routes).
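Putting the steps above together on the subnet-router machine (swap in your own LAN subnet; note that route advertisement also needs IP forwarding enabled, per Tailscale’s subnet router docs):

```shell
# On the LAN device acting as the subnet router:
# 1. Enable IP forwarding so it can route packets for other hosts.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# 2. Advertise the LAN subnet to the tailnet.
sudo tailscale set --advertise-routes=192.168.1.0/24

# 3. Approve the route in the admin console (Machines -> ... -> Edit routes).

# On each external host that should reach the LAN:
sudo tailscale set --accept-routes
```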
If you want your servers to be accessible from anywhere on the internet, you’ll need Tailscale Funnel. I don’t use it personally, but it seems to work. Make sure you understand the risks and challenges involved with exposing a service to the public if you want to choose this route.


Rolling release doesn’t mean that no testing is done. All updated packages are tested by maintainers before being released into the official repository. A rolling release simply means that there are no individually marked OS versions and you always get the latest packages.
In contrast, take Debian for example. It uses a point release system with major named versions (e.g. Debian 13 “Trixie”), minor point releases (e.g. 13.1), and security and bugfix patches in between. New feature updates land only with point releases, and breaking changes only with major versions. This lets the maintainers take greater care in testing that the packages work well together, but it also means that new features are always held back to some extent. A rolling release holds nothing back: all upstream changes are pulled, tested, and released, regardless of whether they introduce breaking changes.
By its nature, a rolling release distribution will require a greater amount of maintenance. If a package update requires manual intervention, it will be published on archlinux.org. For as long as I’ve been a Linux user, I’ve only seen one package update that made systems temporarily unbootable, and I was saved from that by being a Manjaro user at the time.
But, to answer the question, I usually update my home and work PCs (both Arch) about once every week or two, or as required by a new software or important security update.
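The routine amounts to little more than this (`checkupdates` comes from the pacman-contrib package; skimming the news feed first is the part people skip):

```shell
# Peek at pending updates without touching the system
# (checkupdates uses a separate sync DB, so it's safe to run anytime).
checkupdates

# Skim https://archlinux.org/news/ for manual-intervention notices,
# then do a full system upgrade. Never run -Sy and then install
# individual packages; partial upgrades are unsupported on Arch.
sudo pacman -Syu
```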


It’s less about the concept of a game-centric headset and more about the brands that sell themselves as “We Are Gamers” with angular shapes and RGB out the ass. Steelseries, Razer, Alienware, Aorus, ROG… I’ve had many bad experiences both personally and professionally. The only one I didn’t end up regretting was Logitech G. The G502 mouse is a beast.


I used to own a HyperX Cloud Flight. It’s the best wireless headset I’ve ever tried. It comes with a USB dongle, no Bluetooth. Worked out of the box on Arch. I bought mine before HP infested HyperX, but my sister uses a post-buyout one and she says it’s perfect.
Pros:
Cons:
In general, avoid anything “Gamer”. You’re paying for the brand, not the quality. Even the cheapest “audiophile” headphones are better.
Wireless headsets will always be limited by their internal DAC. Another option is to get a decent wired headset and a dedicated wireless DAC. I currently use a modded Beyerdynamic DT770 and an AKG K-240, and if I need them to be wireless, I clip a Fiio BTR5 to the headstrap and connect it with a short cable.
I’d love to know what an actual moderator would think if you imposed your idea on them.
report bad faith posts
You’re supposed to report posts that break instance or community rules, not whatever you happen to consider “bad faith”. You can’t moderate based on intent, only on actions; otherwise you’re asking for thought police, where only the popular opinion is permitted to exist.
Besides, even if your instance has disabled downvotes, other instances can still see them.
Depending on your sorting method, downvoted posts will be featured less favorably in list views. You will immediately know that a heavily downvoted post is not worth your attention. Some clients might let you filter displayed posts based on vote counts or up/down ratio.
Downvote and move on. Mute accounts and communities you don’t want to see. Curate your own feed. Simple as.


The issue was ARP-related after all. Since all computers were cloned from the same image, the VMs ended up having the same MAC address, which caused collisions.


I think you need four distinct MAC addresses for this setup. Are they all different?
We have a winner!
The classroom computers were mass-deployed using Clonezilla, from a disk image that already had the VM pre-configured. As a result, every VM had the same MAC address. Bridged networking put both hosts and both VMs in the same broadcast domain, which caused collisions in the ARP tables. I randomized the MAC address of one VM and everything suddenly started working.
It’s never been an issue since we’ve never needed to use anything other than the default NAT adapter, so I’ve never even questioned it. I found the solution after plugging the computers directly into an access switch without success, and cross-checking show mac address-table with the MAC reported by the VMs revealed that they were identical.
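For anyone hitting the same thing, the check and the fix are both one-liners. I’m assuming VirtualBox here (the “default NAT adapter” wording); other hypervisors have equivalents, and the VM name and interface name below are examples:

```shell
# Inside each VM: print the MAC of the first Ethernet interface.
# Cloned images will show identical addresses here.
ip link show eth0 | awk '/link\/ether/ {print $2}'

# On the host: regenerate the MAC of the VM's first adapter.
VBoxManage modifyvm "classroom-vm" --macaddress1 auto
```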


I checked ip neighbour (it also shows the ARP table, so I assume they’re identical), and it showed REACHABLE and STALE for addresses I could ping, but FAILED for the remote VM’s address. I will check arp -a when I get the chance, though.


I’ll give it a try tomorrow, thanks.
Although I’d still prefer to know why the VMs won’t talk over simple Ethernet.
That’s why you shouldn’t drive a 1969 Mustang project car immediately after getting your licence. You figure it out on a 2003 Honda Civic, then move on to bigger things when you have both the basic knowledge and the willingness and ability to advance your knowledge.
You claim that installing with btrfs failed. Did you look into what the error messages meant? You claim to not know what Flatpak is. Did you look it up?
RTFM is not just a thought-terminating cliché used by elitist wankers. It’s a philosophy you have to live by if you want to play with powerful toys. Look at manuals, the Arch Wiki, Stack Overflow, or ask a clanker. If that’s beyond your abilities at this time, you’ll either have to improve yourself, or surrender for the time being and try a more beginner-friendly OS.


Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.
Three important factors: