

it just means they’ll be a passive node, but still able to seed if they connect to the other node. It’s the setup I have, and I manage to keep an overall ratio >1, especially if the torrent is popular.


if you use this often, you can add a keyword search (Firefox-based browsers) or a custom site search (Chromium-based) with this URL
https://icon-sets.iconify.design/?query=%s
(that’s a literal %s after the equals sign; some lemmy front-ends render it as %25s)
and a shortcut, e.g. icon
so every time you enter e.g. icon person in a new tab, it’ll run the search for you
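just to illustrate what the %s placeholder does: the browser drops your search terms into it, the same way shell printf does here (the query term is made up):

```shell
# keyword search is plain string substitution: your terms replace %s
printf 'https://icon-sets.iconify.design/?query=%s\n' 'person'
# → https://icon-sets.iconify.design/?query=person
```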
you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that’s “too similar” to their proprietary code.
Doubt it. The incentives don’t align: they benefit from open source much more than they are threatened by it. Even the “embrace, extend, extinguish” idea comes from a different time, and it’s likely less profitable than vendor lock-in and the other practices actually in place today. Even the copyright argument could easily backfire if they threw it into a case, given all the questionable AI training.


yes, the system will likely use some swap if available even when there’s plenty of free RAM left:
The casual reader may think that with a sufficient amount of memory, swap is unnecessary but this brings us to the second reason. A significant number of the pages referenced by a process early in its life may only be used for initialisation and then never used again. It is better to swap out those pages and create more disk buffers than leave them resident and unused.
Src: https://www.kernel.org/doc/gorman/html/understand/understand014.html
In my recently booted system with 32 GB, half of that free (not even just “available”), I can already see tens of MB of swap used.
As a rule of thumb, it’s only a concern, or an indication that the system is/was starved of memory, if a significant share of swap is in use. But even then, it might just be some cached pages hanging around because the kernel decided to keep them instead of evicting them.
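you can see this on any Linux box by reading /proc/meminfo directly (the counters are in kB); nothing here is specific to my setup:

```shell
# plenty of available RAM and some swap in use can coexist just fine
grep MemAvailable /proc/meminfo
# swap in use = SwapTotal - SwapFree
awk '/^SwapTotal/ {t=$2} /^SwapFree/ {f=$2} END {printf "swap in use: %d MiB\n", (t-f)/1024}' /proc/meminfo
```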


if my system touches SWAP at all, it’s run out of memory
That’s a swap myth. Swap is not emergency memory; it’s about creating a memory reclamation space on disk for anonymous pages (pages that are not file-backed) so that the OS can use main memory more efficiently.
The swapping algorithm does take into account the higher cost of putting pages in swap. Touching swap may just mean that a lot of system files are being cached; that’s reclaimable space, and it doesn’t mean the system is running out of memory.
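how eagerly the kernel swaps is tunable via vm.swappiness; a quick way to inspect it (Linux only):

```shell
# vm.swappiness biases reclaim between anonymous pages (swap) and the
# file-backed page cache; it's a tuning knob, not an emergency threshold
cat /proc/sys/vm/swappiness
```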


potentially relevant: paperless recently merged some opt-in LLM features, like chatting with documents and automated title generation based on the extracted OCR text.


no problem. I can see that; at the same time, the directory at my workplace has 20x that number and finding someone is never an issue, so I don’t bother cleaning up my local list either.


of course most are not used, and that’s fine; I don’t understand why anyone would bother deleting “unused” contacts


I have over 800 and I’m not even a salesperson or anything like that; that’s mostly from exchanged emails over the years


along with the compose.yaml file, unless I need it on a different drive for some reason


btw, the prices of managed runners are going down, not increasing
https://docs.github.com/en/billing/reference/actions-runner-pricing#standard-github-hosted-runners
still good to have a self-hosted alternative though


ah right, my bad


fwiw, you can self-host a GitHub Actions runner


I’m the only user of my setup, but I configure docker compose stacks, use configs as bind mounts, and track everything in a git repo synchronized every now and then.
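a minimal sketch of that kind of layout; the stack name, image, and paths are made up for illustration, not taken from my actual setup:

```shell
# one directory per stack; configs sit next to compose.yaml as bind mounts,
# and the whole tree is a git repo that can be synced anywhere
mkdir -p "$HOME/stacks/whoami/config"
cd "$HOME/stacks/whoami"
cat > compose.yaml <<'EOF'
services:
  whoami:
    image: traefik/whoami   # hypothetical example service
    volumes:
      - ./config:/config    # bind mount: config is versioned with the stack
EOF
touch config/app.env        # example config file tracked alongside the stack
git init -q
git add .
# commit and push/sync every now and then
```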


about the same specs as my TV, but 10" smaller and 4x the price


def possible, Cloudflare DDoSed their own dashboard a few months ago with some React code
https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-12-dashboard-and-api-outage/


That’s the neat part: you don’t. If their idea of anti-cheat means taking over my machine to scan everything that runs on it, it’s a lost battle. Either find a way to do it server side based on behavioral heuristics, or don’t bother.


I don’t quite get what this is supposed to do. Is it basically software to let jellyfin/plex users request media without needing a radarr/sonarr account?