

His instance and mine, “sh.itjust.works”, were federated with lemm.ee.
Mine (Thunder) doesn’t recognize tagging a code block with a specific syntax; it just shows it as a preformatted block, with no highlighting.


If you’ve ever been through the audit process CAs are subjected to, you know that staying inside the compliance controls and keeping everything audit-ready takes a massive chunk of your attention for a lot of the year.
Can I ask what client you’re using?
You are right to be afraid. I had a similar story, and am still recovering and sorting out what data is recoverable. Nearly lost the media from ages 0.5–1.5 of my daughter’s life this way.
As others have said, don’t replicate your existing backup. Do two backups, preferably on different media, e.g. spinning disk and SSD.
If one backup is corrupted or something nasty is introduced, you will lose both. This is one of the times it is appropriate to do the work twice.
I’ve built two backup mini PCs, and I replicate to them pretty continuously. Otherwise, look at something like BorgBase or its alternatives.
Remember, 3-2-1 and restore testing. It’s not a backup unless you can restore it.
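As a sketch of what restore testing can look like, assuming a Borg repo (the repo path and archive name here are placeholders, not my actual setup):
borg list /mnt/backup/repo                                 # find an archive name
borg extract --dry-run /mnt/backup/repo::my-archive        # confirm it’s readable end to end
borg extract --stdout /mnt/backup/repo::my-archive home/me/important-file | sha256sum   # spot-check a real file
borg check --verify-data /mnt/backup/repo                  # full data verification (slow)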


I have never understood this fork argument. All it takes to make it work is a clear division for the project.
If you want to make something, and it requires modification of the source for a GPL project you want to include, why not contribute that back to the source? Then keep anything that isn’t a modification of that piece of your project separately, and license it appropriately. It’s practically as simple as maintaining a submodule.
I’d like to believe this is purely a communication issue, but I suspect it’s more likely conflated with being a unique selling point (USP) and argued as a potential liability.
These wasteful ‘re-write rather than clone’ practices are facilitated by a total lack of accountability for security on closed-source commercialised projects. I know I wouldn’t be maintaining an analogue of a project if security updates were available from upstream.


Everything’s a trade-off, as you already know. I still use Let’s Encrypt, despite knowing that attackers watch CT logs and will know as soon as I mint a cert.


Also, according to the propaganda model, in developed democratic societies, the propaganda is assumed to be true, and if you’re not on board with that, you’re not part of the debate.


Fair enough, I did assume the target audience was selfhosters based on the question.
As for provider backups - well, you’d hope. But M$ doesn’t offer user-accessible backups, so I’d be surprised if the average SaaS provider bundled them.


And if you don’t know what database you’re running, how are you backing it up?
And if you don’t know, are you bothering to do a full shutdown (or a proper dump) before backups so the files on disk are consistent? Are you doing backups at all…
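For example (a hedged sketch; the container name and paths are made up), a Postgres instance can be dumped live, and anything else stopped for a cold copy:
docker exec app-db pg_dumpall -U postgres > /backups/app-db.sql   # consistent live dump
# or, if there’s no dump tool you trust:
docker stop app-db && cp -a /srv/app-db/data /backups/app-db-cold && docker start app-db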
I haven’t tested the spouse approval factor, but once Radicale is set up, you don’t have to do anything other than create new calendars through a CalDAV app, or through the web front end.
Android can use DAVx⁵ to sync if you’re into FOSS stuff.
I pretty much only use it for tasks and a maintenance calendar, but I’ve had zero problems with it so far.
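For anyone curious, a minimal Radicale config sketch (the paths and auth choices here are illustrative, not my exact setup):
[server]
hosts = 0.0.0.0:5232

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt

[storage]
filesystem_folder = /var/lib/radicale/collections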


All I need is for them to fix the public collection RSS feed bug where they embed “https,http” in the feed XML if you’re behind a reverse proxy - which breaks parsing.


Muppet Treasure Island, and “Spider-Man and his Amazing Friends Ep.16 - A Fire Star is Born!”
On VHS, of course!
and has integrations for Oxidized, Smokeping, Graylog and more


Yes. But also, despite having done it literally thousands of times, I still can’t tell you which way round to put the target and the link name for a softlink on the first go.
My first guess is always
ln -s $NAME $TARGET
No amount of repetition will fix this.
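For the record, the correct order is target first, link name second (it helps me to remember it has the same shape as cp source destination):
ln -s /path/to/real/file /path/to/new/link   # target first, link name second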
Sounds like you have reason to bump it up the list now - two birds with one stone.
I need to do this too. I know I have stuff deployed that has plaintext secrets in .env or even the compose. I’ll never get time to audit everything, so the more I can make the baseline deployment safe by default, the better.
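One way to improve that baseline, as a sketch (the service and secret names are made up): Compose file-based secrets instead of plaintext environment variables.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password   # read from a file, not the env
    secrets:
      - db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep out of git, chmod 600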


That’s fair, there are other angles of observation available already.
Seeing as you like speculating about cyberpunk, how about if observation is just the initial way to sell the drone cloud? Depending on how cheap you can make them, there’s an argument to be made for reducing time-to-intercept for low-speed aerial objects.
If you’ve got a bunch of drones overhead already, you could run one into the path of a kamikaze drone, or if your swarm is even lightly armed, you can extend engagement range and reduce required accuracy, shooting an offending drone down with a single buckshot shell.
If you’re content to prioritize executive safety over public safety, there’s a lot that can be done.
Drone displays terrify me.


Not to mention, the minute it happens, the government will carpet the skies with observation drones in the name of safety.
I was trying to finalize a backup device to gift to my dad over Christmas. We’re planning to use each other for offsite backup, and save on the cloud costs, while providing a bridge to each other’s networks to get access to services we don’t want to advertise publicly.
It is a Beelink ME Mini running Arch, with Btrfs on LUKS for the OS on the eMMC storage, and the fTPM handling the decryption automatically.
I have built a few similar boxes since and migrated the build over to Ansible, but this one was the proving ground and template for them. It was missing some of the other improvements I had built into the deployed boxes, most notably (as it turned out) the UKI setup.
I don’t know what possessed me, but I decided the question marks and tasks in my original build documentation should be investigated as I went. I was hoping to export some more specific configuration to Ansible for the other boxes once done, and I was going to migrate this one manually to learn some lessons.
I wasn’t sure about bothering with UKI. I wanted ZFS running, and that meant moving to the linux-lts kernel package on Arch.
Given systemd-boot’s (currently) superior support for owner keys, boot-time unlocking and direct EFI boot, I’ve been using that. However, it works differently with plain kernels than with UKI. Plain kernels use a loader entry file to point to the correct locations for the initramfs and the kernel, which is what existed on this box.
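For reference, a plain-kernel loader entry looks something like this (the paths and options here are illustrative):
# /boot/loader/entries/arch-lts.conf
title   Arch Linux (LTS)
linux   /vmlinuz-linux-lts
initrd  /initramfs-linux-lts.img
options root=UUID=xxxx-xxxx rw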
I installed the linux-lts package, all good. I removed the linux kernel package, and something in the pacman hooks failed. The auto-signing process for the secure boot setup couldn’t find the old kernel files when it regenerated my initramfs, but happily signed the new LTS ones. Cool, I thought, I’ll remove the old ones from the database, and re-enroll my OS drive with systemd-cryptenroll after booting on the new kernel (the PCRs I’m using would be different on a new kernel, so auto-decrypt wouldn’t work anyway).
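The re-enroll itself is something like this (the device path and PCR selection are illustrative, not my exact values):
systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p2                    # drop the stale TPM2 binding
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2    # re-bind against the new PCR state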
So, just to be sure, I regenerated my initramfs and kernel image with mkinitcpio -p linux-lts, everything worked fine, and I rebooted. I was greeted with:
“Reboot to firmware settings” as my only boot option. Sigh.
Still, I was determined to learn something from this. After a good long while of reading the Arch wiki and mucking about with bootctl (a PITA in a system booted from a live CD), I thought about checking my other machines. I was hoping to find a bootctl loader entry matching the lts kernel on another machine, and copy it to this one to at least prove to myself that I had sussed the problem.
After checking, I realised no other, newer machine had a loader configuration actually specifying where the kernel and initramfs were. I was so lost. How the fuck is any of this working?
Well, it turns out that if you have UKI set up, as described, it bundles all the major bits (kernel, microcode, initramfs and boot config options) into one directly EFI-bootable file, which is automatically detected by bootctl when installed correctly. All my other machines had UKI set up and I’d forgotten. That was how it was working. Unfortunately, I had used archinstall to set up UKI, and I had no idea how it was doing it. There was a line in my docs literally telling me to go check this out before it bit me in the ass…
…
…
So, after that sidetrack, I did actually prove that the kernel could be described in that bootctl loader entry. Then I was able to figure out how I’d done the UKI piece on the other machines, applied it to this one so it matched, and updated my docs…
…
UKI configuration is in the default mkinitcpio preset files, but needs changing to actually produce a UKI.
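Concretely, it’s something like this (the ESP mount point is illustrative): comment out the image line, uncomment the UKI line, then re-run mkinitcpio -p linux-lts.
# /etc/mkinitcpio.d/linux-lts.preset (excerpt)
#default_image="/boot/initramfs-linux-lts.img"
default_uki="/efi/EFI/Linux/arch-linux-lts.efi"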
…
Turns out my Christmas wish came true: I learned I need to keep better notes.