The AIO mastercontainer seems to do fine on Apache, and back when I dockerized it myself I used nginx, which was also fine. I really think the main point is using Postgres and Redis. MySQL isn’t great, and SQLite is terribad in the stack.
You cover a lot of topics in each episode. Maybe cut them down to get a shorter episode, and budget the time to expand a couple of the more interesting ones. Use the more in-depth topics to drive a Premium, no-ads channel.
I look at Linux Unplugged as way too long, but really they don’t cover very much in an episode. They spend more time reading their boosts and usually I just skip out at that point. But I guess that’s where they get paid from, so I get it.
I’m not sure the Linux landscape is a place where a podcast is ever going to pay for the time it takes to run, but as long as you enjoy helping people by bringing them information and pointing them at new things, at least you’ll be getting that satisfaction.
Do you ever send mail to Gmail and Office365?
All the time, never had an issue. I get DMARC reports constantly since I set my DMARC to notify on everything, not just failures, but I’ve never seen PTR checked by Microsoft or Google. My mail passes SPF and DKIM (and presumably the spam checks, though you don’t get a report for those) and they let it through. I used to think it was because I’ve had most of my domains for a long time, but the couple of times I’ve brought a new domain online, they seemed fine with those too.
Now, maybe the new ones get a pass by association, because my old domains have never had an issue and they all come from the same IP?
My ISP would let me set a PTR if I wanted but I haven’t bothered because it doesn’t seem to be an issue.
I’ve self-hosted several domains for over 25 years, from home, on a dynamic IP (though it hasn’t changed in a long time) and with no PTR records, and I have literally had zero problems with blacklisting or dropped connections. I must live a charmed life, or have set up my DKIM/SPF/DMARC records correctly.
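For anyone setting this up, those records are just DNS TXT entries, roughly along these lines (the domain, IP, selector, and key are all placeholders, not my real records):

```
; SPF: only this IP is allowed to send mail for the domain
example.com.                 IN TXT "v=spf1 ip4:203.0.113.10 -all"

; DKIM: the public half of the key your mailserver signs with
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

; DMARC: rua= is where the aggregate reports get sent
_dmarc.example.com.          IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
```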
Currently using mailcow-dockerized and it’s lovely.
I’ve listened to a few episodes over the last few months and enjoyed some of the topics, especially the interview with that Nextcloud fellow.
Except for the interview, I do find an hour is more than I can take at once, though. I lean towards Joe Ressington’s “make them want more” half-hour podcasts every week. Just my 2 cents.
They’re determined to Streisand Effect this into the history books, huh?
Oh, and we’re showing all your friends what you watch without you asking for it. And by friends, we mean everyone we leaked your account and payment details to. Twice.
Why the literal fuck anyone has anything to do with Plex at this point is beyond me. They don’t supply anything unique and they abuse you to do it.
I run a Proxmox VM with docker services, with ZFS snapshots and backups via PBS.
Claire would be pushing people into the showers and complaining that they looked at her funny before she closed the airtight door.
Because apparently there are some really, really dumb fuckers out there, and they make decisions for the rest of us.
I use Pinchflat, and I’ll take a podcast’s YouTube channel feed over its regular feed so Pinchflat can employ SponsorBlock and cut the commercials, especially for podcasters that use IHR. It then exports an RSS feed for AntennaPod to monitor, but I imagine you could just write the episodes to a Navidrome-accessible folder.
Sounds like they give you a bunch of preconfigured Grafana dashboards, which is fine. Makes customizing them easy.
So if I want a new container stack, I make a new Proxmox “disk” in the ZFS filesystem under the Hardware tab of the VM. That attaches the “disk” to the VM on the next reboot (there are ways of rescanning block devices online, but rebooting is easier). I find the new block device, mount it in the VM at a subfolder of /stacks, which becomes the new container stack’s location, and add the mount point to fstab.
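In the VM, that part looks something like this (device name and paths are illustrative; check lsblk to see what Proxmox actually handed you):

```
# find the freshly attached block device (e.g. /dev/sdb)
lsblk

# format it and mount it where the new stack will live
mkfs.ext4 /dev/sdb
mkdir -p /stacks/myapp
mount /dev/sdb /stacks/myapp

# persist across reboots; the UUID is safer than the device name
echo "UUID=$(blkid -s UUID -o value /dev/sdb) /stacks/myapp ext4 defaults 0 2" >> /etc/fstab
```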
So now I have a mounted volume at /stacks/container-name. I put a docker-compose.yml in there, and all the data the stack uses lives in subfolders of that folder via bind mounts in the compose file. When I back up, the ZFS dataset that contains everything in that compose stack is snapshotted and backed up as a point in time. If the stack has a Postgres database, it and all the data it references are internally consistent, because they were snapshotted together before backup. If I restore the entire folder from backup, the stack just thinks it had a power outage, replays its journals in the database, and all’s well.
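As a sketch, a stack folder ends up looking like this (image names and paths are just examples), with every bit of state living under the one dataset:

```yaml
# /stacks/myapp/docker-compose.yml
services:
  app:
    image: myapp:latest              # placeholder image
    volumes:
      - ./data:/var/lib/myapp        # bind mount keeps app state inside the dataset
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme    # placeholder
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # database files snapshot with the rest
```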
So when you have a backup in PBS, you can access the backups from your Proxmox node via the PBS storage in the tree on the left.
When you go to that backup, you can choose to do a File Restore instead of restoring the entire VM. From there I can walk the storage for my Nextcloud data within a backup, and I can do the same for every discrete backup.
If I want to restore just one container stack, I download that “partition” and transfer it to the docker VM. I down the container stack in question, blow out everything in that folder, and restore the contents of the download into it. Then I start the docker stack back up and it’s back to where it was. Alternatively, I could restore individual files if I wanted.
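In shell terms the restore is roughly this (paths are hypothetical, and the transfer step depends on how you pulled the download out of PBS):

```
cd /stacks/myapp
docker compose down

# blow out the current contents and drop in the point-in-time copy
rm -rf ./*
cp -a /tmp/restored-myapp/. .

# on first start the database replays its journal, same as after a power cut
docker compose up -d
```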
Yes. So my Debian docker host has some datasets attached, mounted via fstab:
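Something along these lines (the UUID and mountpoint are placeholders, not my actual entry):

```
# /etc/fstab on the docker VM — one entry per Proxmox-managed disk
# (get the real UUID with blkid)
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /mnt/ncdata  ext4  defaults  0  2
```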
and I specify that path as the datadir for NCAIO:
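That’s just an environment variable on the mastercontainer; the compose below is a trimmed sketch of what the AIO docs give you, with the datadir pointed at the mount above:

```yaml
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer   # AIO requires this exact name
    environment:
      NEXTCLOUD_DATADIR: /mnt/ncdata                # host path from the fstab entry above
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080       # AIO admin UI; other ports depend on your proxy setup

volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer             # AIO requires this exact name
```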
Then when PBS calls a backup of that VM, all the datasets that Proxmox is managing for that backup take a snapshot, and that’s what’s backed up to PBS. Since it’s a snapshot, I can backup hourly if I want, and PBS dedups so the backups aren’t using a lot of space.
Other docker containers might have a mount that’s used as a bind mount inside the compose.yml to supply data storage.
Also, I have more than one backup job running on PBS so I have multiple backups, including on removable USB drives that I swap out (I restart the PBS server to change drives so it automounts the ZFS volumes on those removable drives and is ready for the next backup).
You could attach ZFS-backed disks you create in Proxmox to a file-sharing VM and export them over SMB, and backups would handle them the same way.
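e.g., a share definition in the sharing VM’s smb.conf, with the dataset-backed disk mounted via fstab the same way (the share name, path, and user are placeholders):

```
[shared]
   path = /mnt/shared
   browseable = yes
   read only = no
   valid users = someuser
```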
As for documentation, I’ve never really seen this setup written up anywhere, but it seems to work. I’ve restored entire container stacks this way, and walked the backups in PBS to restore individual files.
If you try it and have any questions, ping me.
I run a docker host in Proxmox using ZFS datasets for the VM storage for things like my mailserver and Nextcloud AIO. When I back up the docker VM, it snapshots the VM at a point in time and backs up the snapshot to PBS. I’ve restored from that backup and, as far as the data is concerned, it’s like the machine had just shut down: it journals itself back to a consistent state with no data loss.
I wouldn’t run TrueNAS at all because I have no idea how it’s managing its storage and wouldn’t trust the result.
Say it, NBC, you fucking cowards. Say the forbidden word.
Britain has been using the Troubles for a long time to justify all manner of perverting freedoms and rights. They’re not far behind China in surveillance and suppression.
I do this with a calibre/Calibre-Web docker stack, and FBReader on my tablet/phone. Unfortunately you need to use Google Drive for progress sync, but that’s not a huge roadblock.
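The stack itself is nothing fancy; something like this, using the linuxserver.io images (paths and host ports are placeholders):

```yaml
services:
  calibre:
    image: lscr.io/linuxserver/calibre:latest
    volumes:
      - ./config/calibre:/config
      - ./library:/library        # point calibre’s library here on first run
    ports:
      - 8082:8080                 # calibre desktop GUI in a browser
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    volumes:
      - ./config/calibre-web:/config
      - ./library:/books          # same library, served by Calibre-Web
    ports:
      - 8083:8083
```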
X 100