

Ordered two drives from them, came in very well packaged and even included the PWDIS adapter. Very good deals. Could throw the box across the yard and the drives would probably survive.
As a starting point, are there any hardware recommendations for a toy home server?
Whatever you already have. Old desktop, even an old laptop (those come with a built-in battery backup!). Failing that, Raspberry Pis are popular, cheap and low-power, which makes them great if you’re not sure how much you want to spend.
Otherwise, ideally enough to run everything you need based on rough napkin math. Literally the only requirement is that the stuff you intend to run fits on it. For reference, my primary server which hosts my Lemmy instance (and emails and NextCloud and IRC and Matrix and Minecraft) is an old Xeon processor close to a third gen Intel i7 with 32GB of DDR3 memory, there’s 5 virtual machines on it (one of which is the Lemmy one), and it feels perfectly sufficient for my needs. I could make it work with half of that no problem. My home lab machine is my wife’s old Dell OptiPlex.
Speaking of virtual machines, you can test the waters on your regular PC by loading whatever OS you choose into a virtual machine (libvirt if you’re on Linux, VirtualBox or VMware otherwise). Then play with it. When it works, take a snapshot. Keep playing with it, break it, revert to the last good snapshot. A real home server will basically be the same, just as a real machine that’s on 24/7. It’s also useful as a practice run for things before putting them on your real server machine. It’ll also give you a rough idea of how many resources everything uses, and you can always grow the VM until it all fits so you know how much you need for the real thing.
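The snapshot loop with libvirt is only a couple of commands; a sketch assuming a VM named testbox (a made-up name), guarded so it does nothing on machines without libvirt installed:

```shell
# Hypothetical VM name "testbox"; substitute your own.
if command -v virsh >/dev/null 2>&1; then
  # Take a snapshot of the current, known-good state
  virsh snapshot-create-as testbox known-good --description "fresh install, works"
  # ...play around, break things inside the VM, then roll back:
  virsh snapshot-revert testbox known-good
else
  echo "libvirt not installed; commands shown for reference"
fi
```

VirtualBox has the same workflow under Machine > Take Snapshot if you’d rather click through a GUI.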
Don’t worry too much about getting it right (except the backups, get those right, and verify and test them regularly). You will get it wrong, eventually tear it down, and rebuild it better with what you learn (or want to learn). Once you gain more experience it’ll start looking more and more like a real server setup, out of your own desire and needs.
I feel like a lot of the answers in this thread are throwing out a lot of things with a lot of moving parts: Unraid, Docker, YunoHost, all that stuff. Those all still require generally knowing what the hell a Docker container is and how to use one.
I wouldn’t worry about any of that and start much simpler than that: just grab any old computer you want to be your home server or rent a VPS and start messing with it. Just pick something you think would be cool to run at home. Anything you run on your personal computer you wish was up 24/7? Start with that.
Ultimately there’s no right or wrong way to do things. It’s all about that learning experience and building up that experience over time. You get good by trying things out, failing and learning. Don’t want to learn Linux? Put Windows on it. You’ll maybe get a lot of flak for it, but at the very least, over time you’ll probably learn why people generally don’t use Windows for server stuff. Or maybe you’ll like it, that happens too.
Just pick a project and see it through to completion. Although if you start with NextCloud and expose it publicly, maybe wait until you’re more comfortable with the security aspect before you start putting copies of your taxes and personal documents on it, just in case.
What would you like to self host to get started?
No, because it’s a balancing act. There’s fraud everywhere, it’s just how things are. It’s not worth spending more than it gets back in the name of moral purity.
The allegations of widespread fraud usually have an ulterior motive other than cutting down fraud. It’s usually about demonizing the whole group of people needing the service with fraud allegations in order to cut important social services. Nobody ever talks about banking fraud or stock fraud, even when done by the literal president. It’s always poor people on welfare programs, food stamps and healthcare that are somehow “the problem”.
I couldn’t care less about poor people not declaring the 10h of work they managed to find; it’s literally impossible to survive on food stamps and welfare without doing undeclared work, and if you do declare it you just get penalized more than you earned. It’s a system designed to keep you from escaping it.
It doesn’t have to be Mastodon or a social platform. It could just be a news/blog kinda deal that happens to support ActivityPub and people can subscribe to it to get it on their feeds.
Felix doesn’t have a history of behaviour resembling Nazi ideals, certainly not enough to say that he partakes in those ideologies. Thankfully his “dark humour” phase ended years ago and he isn’t doing these things anymore, so completely estranging him from everything over it is quite extreme, especially when I have seen some of this sentiment on Lemmy myself. Nor do I think he’s a horrible person for edgy comments and actions that most of us have definitely done one way or another on the Internet.
That. He would have started YouTube at 20 and the guy is now 35. That would have happened when he was 28.
People change, people learn. That one in particular hit him hard and probably led to a lot of self reflection and all that stuff.
We have actual Nazis to deal with who actually think it’s a good idea. There’s a huge difference between a bad dark joke and actually supporting fascism. How one responds after such an incident matters a lot.
Meanwhile Elon did a literal Nazi salute and isn’t even denying it or apologizing; he’s doubling down on it.
I’ve had my share of Hitler jokes, but they were told in a context where it was seen as poking fun at a solved issue of the past, in a very progressive area, when nobody thought we’d be dumb enough to witness fascism ever again. Context and meaning are both very important before labeling someone for life.
It does, I wrote it in corrupted text for a reason. But if you want something functional you can use it, then see how it set things up for you, and still go set up the rest of the services yourself.
When I switched to Arch, it used the Arch Install Framework, which predates even pacstrap, and I still learned a fair bit. That said, the now-normal pacstrap really doesn’t hide how the bootstrapping works, which is really nice, especially for learning.
Point is, mostly, if OP is too terrified they can test the waters with archinstall (ideally in a VM).
I DONT want to build a system from the ground up, which I expect to be a common suggestion.
Arch kind of is building from the ground up, but without all the compiling and stuff. It’s really not as hard as it sounds especially if you use a̶r̴c̷h̴i̵n̵s̴t̷a̶l̷l̵ and you do get the experience of learning how it all fits together through the great ArchWiki.
That said, one can learn a lot even on Debian/Ubuntu/Pop_OS. I graduated to Arch after I felt like apt was more in my way than convenient and kept breaking on me, so I was itching for a more reliable distro. But stuff like managing systemd services and messing with Wayland is definitely doable on a Debian/Ubuntu/Pop distro. Just use the terminal more, really, and it’ll come slowly through exposure.
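Managing a systemd service, for instance, mostly comes down to a small unit file; a minimal sketch, where myapp and its binary path are hypothetical placeholders:

```ini
# /etc/systemd/system/myapp.service  (hypothetical name and binary path)
[Unit]
Description=Example self-hosted app
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Then systemctl daemon-reload, systemctl enable --now myapp to start it on boot, and journalctl -u myapp to read its logs; the same commands work on Debian, Ubuntu, Pop and Arch.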
It works so well that if you stretch a window across more than one monitor with different refresh rates, it’ll vsync to all of them at once. I’m not sure if it’ll VRR across multiple monitors at once, but it’s definitely possible. Fullscreen on a single monitor definitely VRRs properly.
With my 60+144+60 setup and glxgears stretched across all of them, the framerate locks to somewhere between 215-235 as the monitors go in and out of sync with each other, and none of them have any skips or tears. Some games get a little confused if their timing logic is tied to frame rate, but triple-monitor Minecraft works great apart from the lack of FOV correction for the side monitors.
This is compositor dependent but I think most of the big compositors these days have it figured out. I’m on the latest KDE release with KWin.
It works perfectly; I have a 60Hz, a 144Hz VRR HDR, and another 60Hz monitor.
This is one of the use cases where Wayland shines compared to Xorg.
but I’m curious if it’s hitting the server, then going to the router, only to be routed back to the same machine again. 10.0.0.3 is the same machine as 192.168.1.14
No, when you talk to yourself the traffic doesn’t go out over the network. But you can always check using utilities like tracepath, traceroute and mtr. They’ll show you the exact path taken.
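You can also ask the kernel directly how it would route a packet to one of your own addresses; a sketch using 127.0.0.1 so it works anywhere (substitute your LAN IP, e.g. 192.168.1.14):

```shell
# "local ... dev lo" in the output means the packet never leaves the host
ip route get 127.0.0.1

# tracepath/traceroute/mtr show the hop-by-hop path; a local target is one hop
if command -v tracepath >/dev/null 2>&1; then
  tracepath -n 127.0.0.1
fi
```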
Technically you could make the 172.18.0.0/16 subnet accessible directly to the VPS over WireGuard and skip the double DNAT on the game server’s side but that’s about it. The extra DNAT really won’t matter at that scale though.
It’s possible to do without any connection tracking or NAT, but at the expense of significantly more complicated routing for the containers. I would only do that on a busy 10Gbit router, or if I somehow really needed the public IP of the connecting client to not get mangled. The biggest downside of your setup is that the game server will see every player as coming from 192.168.1.14 or 172.18.0.1. With the subnet routed over WireGuard, players would instead appear to come from the VPN IP of the VPS (guessing 10.0.0.2). It’s possible to get the real IP forwarded, but then the routing needs to be adjusted so that replies don’t go Client -> VPS -> VPN -> Game Server -> Home router -> Client.
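For reference, routing the container subnet over the tunnel is mostly a matter of AllowedIPs; a sketch of the VPS side, assuming the addresses from this thread (10.0.0.3 for the home peer, 172.18.0.0/16 for the Docker network):

```ini
# /etc/wireguard/wg0.conf on the VPS (addresses assumed from this thread)
[Peer]
PublicKey = <home server public key>
# cryptokey routing: packets for these ranges go to this peer
AllowedIPs = 10.0.0.3/32, 172.18.0.0/16
```

The home side then needs IP forwarding enabled, plus the containers’ return traffic routed back through the tunnel rather than the default gateway.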
And then he’s gonna whine about a “refugee problem”
You absolutely can if you want to. Xen has been around for decades, and most people that do GPU passthrough also kind of technically do this with pure Linux. Xen is the closest to what Microsoft does: technically you run Hyper-V and then Windows on top, which is similar to Xen and its special dom0.
But fundamentally the hard part is that the freedom of Linux brings an infinite combination of possible distros, kernels, modules and software. Each module is compiled for the exact version of the kernel you run. The module must be signed by the same key as the kernel, and each distro has its own set of kernels and modules. Those keys need to be trusted by the bootloader. So when you go try to download the new NVIDIA driver directly from their site, you run into problems. And somehow this entire mess needs to link back to one source of trust at the root of the chain.
Microsoft on the other hand controls the entire OS experience, so who signs what is pretty straightforward. Windows drivers are also very portable: one driver can work from Windows Vista through 11, so it’s easy to evaluate a developer once and sign their drivers. That’s just one signature. And the Microsoft root cert is preloaded on every motherboard, so it just works.
So Linux distros that do support secure boot properly often have to prompt the user to install their own keys (which is a UX nightmare of its own), because FOSS likes to do things right by giving full control to the user. Ideally you manage your own keys, so even a developer from a distro can’t build a signed kernel/module to exploit you: you are the root of trust. That’s also a UX nightmare, because average users are good at losing keys and locking themselves out.
It’s kind of a huge mess in the end, to solve problems very few users have or care about. On Linux it’s not routine to install kernel-mode malware like Vanguard or EAC. We use sandboxing a lot via Flatpak and Docker and the likes. You often get your apps from your distro which you trust, or from Flathub which you also trust. The kernel is very rarely compromised, and it’s pretty easy to clean up afterwards too. It’s just not been a problem. Users running malware on Linux is already very rare, so protecting against rogue kernel modules and the likes just isn’t needed enough for anyone to be interested in spending the time to implement it.
But as a user armed with a lot of patience, you can make it all work, and you’ll be the only one in the world that can get in. Secure boot with systemd-cryptenroll using the TPM is a fairly common setup. If you’re a corporate IT person you can lock down Linux a lot with secure boot, module signing, SELinux policies and restricted executables. The tools are all there for you to do it as a user, and you get to custom-tailor it specifically for your environment too! You can remove every single driver and feature you don’t need from the kernel, sign that, and have a massively reduced attack surface. Don’t need modules? Disable runtime module loading entirely. Mount /home noexec. If you really care about security you can make it way, way stronger than Windows with everything enabled, and you don’t even need a hypervisor to do that.
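Two of those knobs are one-liners; a sketch, with the filesystem type and UUID as placeholders to adjust for your system:

```ini
# /etc/fstab: mount /home nodev,nosuid,noexec so user-writable files
# can't be executed directly
UUID=<home-partition-uuid>  /home  ext4  defaults,nodev,nosuid,noexec  0  2

# /etc/sysctl.d/99-hardening.conf: one-way switch, forbids loading any
# further kernel module until the next reboot
kernel.modules_disabled = 1
```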
Paste the URL in the search bar, it’ll fetch it locally on your instance and get you there. No need for link guesswork to find it on a particular instance.
If it’s spam it should have been deleted anyway by the admins as well, but sometimes this doesn’t federate correctly.
With great power comes great responsibility.
Nobody’s ever gonna trust the US ever again after Trump. Guy just thinks he can bully entire countries to his will.
It’s a showcase of everything that’s wrong with American exceptionalism…
I found the setting to turn that off in the settings so that checks out with what you said. Good to know!
Why is this always the argument that comes up? It’s like if foreigners came by the thousands to post about the 9/11 attacks on American media to test free speech. Most posts would get taken down, some might stay up, but it’s ultimately still very disrespectful and upsetting for a lot of people.
You can enjoy a heavily moderated platform for what it’s good at. I use rednote for my cat, food and art content and enjoy the cultural exchange. There are better suited apps in general for free speech and political debate. I’m tired of politics invading every platform, so it’s been rather nice in that aspect. For what I want to use that app for, I’m perfectly fine with the CCP’s rules, even if I disagree with some aspects of the CCP.
Free speech is important, but we don’t need it literally everywhere.
It’s not impossible; I’ve been running my own email server for about 10 years and I land in the inbox pretty much everywhere. I even emailed my work address and it went straight to the inbox. I do have the full SPF, DKIM and DMARC stuff set up, for which I get notices from several email providers of failed spoof attempts.
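For reference, the SPF/DKIM/DMARC side boils down to three DNS TXT records; a sketch for a hypothetical example.com (the selector name and policies here are choices, not requirements):

```ini
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<DKIM public key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The rua address is where providers send their aggregate reports, which is where those notices of failed spoof attempts come from.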
Takes a while and some effort to gain that reputation, but it’s doable. And OVH’s IPs don’t exactly have a great reputation either. Once you’re delisted from most spam databases and the old spam reputation has expired, it’s not that bad.
Although I do agree it’s possibly one of the hardest services to self-host. The software to run email servers is ancient and weird, and takes a lot to set up right. If you get it wrong, you relay spam and have to start over; it’s rough.