• 0 Posts
  • 125 Comments
Joined 23 days ago
Cake day: February 5th, 2025


  • Prometheus and Grafana.

    Your lemmy instance should have an endpoint for prometheus metrics: https://domain.com/metrics.

    Install prometheus (in a docker container) and set up lemmy as a target. From within docker, create a new network called prometheus and add your prometheus container and lemmy to it. Then, using the data that prometheus pulls, you can set up a nice-looking dashboard in Grafana like this: https://i.xno.dev/4nLNK.png

    The dashboard above is for my private DNS DoH stub resolver. You’ll have to make the dashboard yourself because there are no public dashboards for lemmy.

    This option is preferable because it all works with docker: https://i.xno.dev/cWjtR.png

    And this is what the prometheus config should look like:

    [xanza@dev prometheus]$ cat *.yml
    #prometheus.yml
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'docker'
        file_sd_configs:
          - files:
            - targets.yml
    #targets.yml
    - targets: ['blocky:4000']
      labels:
        job: blocky
        __metrics_path__: /metrics
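
    If you also want lemmy itself in there, a hypothetical extra targets.yml entry would look something like this (the container name and port are assumptions; check where your instance actually exposes its metrics):

    #targets.yml (hypothetical lemmy entry)
    - targets: ['lemmy:8536']
      labels:
        job: lemmy
        __metrics_path__: /metrics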
    

  • Xanza@lemm.ee to Selfhosted@lemmy.world: Best Reverse Proxy for Cloudflare (1 day ago)

    Caddy. Hands down. No question.

    Everything else works fine. Caddy works fine as well, but it’s also super easy.

    I heard it’s insecure to self host sites without Cloudflare because you’re exposing your ip address and leaving yourself vulnerable

    There’s a lot more to it, and this is only a small part of it, but yes. This is technically true.

    but is it really bad to self host without Cloudflare?

    Cloudflare is nice to have, but it honestly sucks. I run a private dns stub resolver with my own blocklists (because I don’t trust anyone else to do it) and I have Google DNS, Cloudflare DNS, and a few other DoH resolvers as the upstream source. My stub resolver is set to send requests to all the upstreams at once, and to take the results of the one that responds first. Tracking through prometheus shows that Cloudflare has not once (!) had its results chosen because its average RTT is 700ms. Everyone else is in the sub 100ms range.
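
    To illustrate the “race the upstreams” behavior, here’s roughly what that part of a blocky config looks like (blocky being the resolver from the prometheus job above). Treat it as a sketch; the exact key names vary between blocky versions:

    # parallel upstream racing: query every upstream, keep whichever answers first
    upstreams:
      strategy: parallel_best
      groups:
        default:
          - https://dns.google/dns-query
          - https://cloudflare-dns.com/dns-query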

    Cloudflare was cool until it got popular.



  • I’ve read a lot about using a VPS with reverse proxy but I’m kind of a noob in that area. How exactly does that protect my machine?

    So you’re not letting people directly connect to your server via ports. Instead, you’re sending the data through your reverse proxy. So let’s say you have a server and you want to serve something off port :9000. Normally you would connect from domain.com:9000. With a reverse proxy you would set it up to use a subdomain, like service.domain.com. If you choose caddy as your reverse proxy (which I highly recommend that you do), everything is served from port :443 on your proxy, which as you might know is the default SSL port.
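
    As a rough illustration (the names and port are just examples), the Caddyfile side of that is about this much work:

    service.domain.com {
        # caddy terminates TLS on :443 and forwards traffic to the app on :9000
        reverse_proxy localhost:9000
    }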

    And do I understand correctly that since we’re using the reverse proxy the possible attack surface just from finding the domain would be limited to the web interface of e.g. Jellyfin?

    I wouldn’t say that it decreases your attack surface, but it does put an additional server between end-users and your server, which is nice. It acts like a firewall. If you wanted to take security to the nth degree, you could run a connection whitelist on your home server to only allow local connections and connections from your rproxy (assuming it’s a dedicated IP). Doing that significantly increases your security and drastically narrows your attack surface, because even if an attacker is able to determine the port, and even your home IP, they can’t connect because the connection isn’t originating from your rproxy.
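
    As an example of that whitelist with ufw on the home server (the addresses are placeholders, and 8096 is just Jellyfin’s default port):

    # drop inbound by default, keep LAN access, then allow only the rproxy's IP to the service
    ufw default deny incoming
    ufw allow from 192.168.1.0/24
    ufw allow from 203.0.113.10 to any port 8096 proto tcp   # 203.0.113.10 = your VPS/rproxy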

    Sorry for the chaotic & potentially stupid questions, I’m just really a confused beginner in this area.

    You’re good. Most of this shit is honestly hard.




  • Love Lowend. Just grabbed this deal from massiveGRID. Never heard of them, but I took a chance:

    4 shared Intel Xeon vCPU cores
    8 GB DDR4 ECC registered RAM
    256 GB high-availability SSD storage (primary)
    20 TB guaranteed internet traffic
    1 IP address
    

    I paid $141.28 for 3 years, replied to their LowEnd forum post, and they added 1 extra year of service for free and activated lifetime pricing. So it works out to about $2.95/mo, which is a damn great price. The only real drawbacks are that the network is 1 Gbps shared** and there’s no IPv6 (looks like they’re adding it over the next several weeks).

    **speedtest:

    [root@dev ~]$ speedtest --secure
    Retrieving speedtest.net configuration...
    Testing from Massivegrid (xx.xx.xx.xx)...
    Retrieving speedtest.net server list...
    Selecting best server based on ping...
    Hosted by Wnet (New York, NY) [0.09 km]: 2.429 ms
    Testing download speed................................................................................
    Download: 1028.91 Mbit/s
    Testing upload speed......................................................................................................
    Upload: 997.58 Mbit/s
    

    So not absolutely mindblowing, but you seem to get the full 1 Gbps, which is great. I contacted support and they’ll be offering VDS plans soon with access to higher than 1 Gbps speeds. Super happy so far.



  • So, docker is a viable solution, but since you’re a fullstack dev and will likely add more shit than you can imagine in the future, you might as well set up a proper solution.

    Check out Proxmox. It’s a management platform that allows you to run containers and just about everything else you need for self-hosting. In addition to that, I recommend getting a very small VPS with a domain to reverse proxy your services if you want. I highly recommend caddy2 for this as it handles rproxy and even SSL seamlessly.
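
    For a sense of scale, spinning up a service container in Proxmox is basically one command (the ID, template name, and resources below are placeholders; pveam lists the templates available to you):

    # hypothetical: create and start an unprivileged Debian LXC for a service
    pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname jellyfin --cores 2 --memory 2048 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
    pct start 101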

    I’m on a shitty 5G internet at home, so VPS seems like the way to go but with who?

    Considering you have a poor internet connection, you’d want to keep as much local as possible. You’re not going to be able to stream HD movies over shitty internet if you host your media on a remote server, but if you rely on a local wifi network, it’s fine. You won’t have remote access to your movies (I mean you can, but like you said, shitty internet), so it’s not going to be awesome. Other services like your matrix server would be fine, but since you’re self-hosting, might as well host them at home, too. Matrix isn’t exactly resource heavy and doesn’t require a shit ton of upload to be usable.

    If I’m torrenting, do I need to be careful which hosts I choose so I don’t get copyright pinged?

    If you’re on 5G and you torrent, you’ll be found out almost immediately, even with a VPN. I highly recommend a seedbox. Download to the seedbox, then use rclone or something to grab the files to your local NAS cluster (in proxmox), then stream the videos locally.
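
    The seedbox-to-local hop is a single rclone command you can throw on a timer (remote name and paths are placeholders):

    # pull finished torrents from the seedbox down to local storage
    rclone copy seedbox:downloads/complete /mnt/media/incoming --progress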

    Is there a good guide for securing and hardening my server?

    I always recommend 2 things when dealing with *nix servers:

    1. Run SSH from a non-standard port and drop connections on port 22.
    2. Only open ports you’re using.

    IMO this is really the only hardening you need, especially if you’re working with rproxy and the ports only have to be opened locally or tunneled.
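
    Concretely, those two things amount to a few lines (the port is just an example; pick your own):

    # /etc/ssh/sshd_config: run sshd on a non-standard port
    Port 2222

    # firewall: allow the new port, drop 22, and only open ports you actually use
    ufw allow 2222/tcp
    ufw deny 22/tcp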




  • my current ISP refuses to provide me a static IP

    So then use dynamic dns? Hurricane Electric offers DynDNS now and it’s great. You can update it with curl if you want. I have it mapped to a cli function:

    ~\downloads
    ❯ ddns
    HTTP/1.1 200 OK
    Cache-Control: no-cache, must-revalidate
    Content-Length: 18
    Content-Type: text/html
    Date: Tue, 25 Feb 2025 09:24:18 GMT
    Email: DNS Administrator <dnsadmin@he.net>
    Expires: Wed, 25 Feb 2026 09:24:18 GMT
    Server: dns.he.net v0.0.1
    
    nochg {ip}
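
    The function itself is just curl hitting HE’s dyndns update endpoint; a rough sketch, with the hostname and key swapped for placeholders:

    # a ddns-style shell function (hostname and ddns key are placeholders)
    ddns() {
        curl -si -4 "https://dyn.dns.he.net/nic/update" \
            -d "hostname=home.example.com" \
            -d "password=YOUR_DDNS_KEY"
    }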
    

  • However, I also read about unbound in the Pi-Hole guides. I was curious if this was to prefer over cloudflared?

    Many people advocate for Cloudflared as a tunneling solution, but it’s not a one-size-fits-all tool. Personally, I avoid it. Your VPS already functions as a firewall for your connection. Tailscale also keeps things self-hosted and avoids reliance on third-party services like Cloudflare while maintaining security and the same functionality.

    For DNS privacy, I prefer odoh-proxy, which enables your VPS to act as an oDoH (Oblivious DNS over HTTPS) proxy for the Cloudflare network. While oDoH introduces a slight latency increase, it significantly enhances privacy by decoupling query origins from query content, making it a more secure option for DNS resolution. So you would be able to set your DoH resolver to your domain (https://dns.whatever.com/dns-query) and it would forward the request to Cloudflare for resolution and relay the response back.
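
    If you front the proxy with caddy like everything else, the vhost is only a couple of lines; a sketch, where the backend port is a placeholder for wherever your odoh proxy actually listens:

    dns.whatever.com {
        # relay oDoH queries to the local proxy, which forwards them on to Cloudflare's target
        reverse_proxy /dns-query* localhost:8080
    }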

    As for Pi-Hole, its utility has diminished with modern alternatives like serverless-dns, which lets you deploy RethinkDNS resolver servers on free platforms and handles 99% of security concerns out-of-the-box. The trade-off is a loss of full custody over your DNS infrastructure, which may matter to some users but is less critical for general use cases.

    Lastly, using consumer VPNs like Mullvad to proxy connections often introduces unnecessary complexity without meaningful security gains. While VPNs have their place, they can really overcomplicate setups like this and rarely provide substantial privacy benefits for services like DNS.


  • So that would be a limitation of whichever filesystem you use. I’ve not personally done it, but this reddit user uses a CEPH cluster to be able to hotplug storage into a volume. Doing just that gives you no redundancy, though, so you’d have to do a little research into setting it up in whichever way is best for you, but it looks like a CEPH cluster is what you’re looking for.



  • The easiest solution would be to wipe Windows and replace it with Proxmox, which is an actual server solution. Then you can set up your other services from within Proxmox using docker or LXC containers.

    There’s no real need to get crazy with it. From there everything is controlled by Proxmox via containers. You can easily set up Jellyfin/Plex, the *arr stack, HomeAssistant, Frigate, and even your NAS. You can then import your configurations, and for your NAS (using TrueNAS or whichever you’d like; Proxmox comes with its own NAS solutions) you’d be able to expose your existing shares. It comes with the advantage of a forward-moving server setup that’s not “future proof” but future-resistant. Proxmox is an actively developed and excellent server architecture. Although not officially supported, it can even expose your GPU to your containers, so transcoding with Jellyfin/Plex should work just fine.
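
    The GPU bit is usually just a couple of lines in the container’s config; a sketch for an Intel/AMD iGPU (the container ID and device numbers may differ on your hardware):

    # /etc/pve/lxc/101.conf: bind the host's /dev/dri render devices into the container
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir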




  • Crazy how that doesn’t at all even address the problem of subtitle sync!

    As I said, this isn’t even an issue with Jellyfin. It’s an issue with the device that’s playing the media: your television (or chromecast). This workaround makes an exact copy of the internal subs and dumps them to an SRT, which allows your television (or chromecast) to play the internal subtitles as external subtitles…
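
    If you wanted to do that extraction by hand, one way is a single ffmpeg call (file names are examples, and it assumes the embedded track is text-based):

    # copy the first embedded subtitle stream out to an external .srt, timing untouched
    ffmpeg -i movie.mkv -map 0:s:0 -c:s srt movie.srt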

    It has nothing to do with subsync; it’s not syncing subs. There are no “mistakes” because you’re pulling the internal subs exactly as they are internally, externally…