My least favorite fun fact is that Reddit forced r/KotakuInAction (KiA) to reopen after one of its mods took it private, calling the subreddit a "cancer".
I was a mod at the time, and Reddit always told us we had an extreme degree of editorial independence (hence the justification for allowing r/jailbait, r/greatawakening, r/coontown, etc.), but that event made me consider for the first time that exposing normies to propaganda might not just be a side effect but a core function of the company.
The most difficult part of moderating on Reddit isn't the trolls or spammers or even the rule-breakers; it's identifying the accounts that intentionally walk the line of what's appropriate.
IMO only a human moderator can recognize when someone is being a complete asshole but "doing it politely", or pushing an agenda, or generally behaving inauthentically, because human moderators are (in theory) members of the community themselves and have an interest in keeping that community enjoyable to be part of.
Humans are messy, and finding the right amount of mess to keep things interesting without overwhelming newcomers is a fine balance to strike, one I just don't believe an AI can manage on its own.