

That would be true, except almost every country on the planet has made it effectively illegal to admit to having the thoughts, so they can’t get help. They can’t help having the thoughts, and then they can’t get help to fix it. We (seemingly all of humanity) have made their existence illegal and then refuse the social accountability of helping them correct things. I think it was an Australian psychiatrist who set up an anonymous hotline for people who wanted help. That’s the only time and place I’ve heard of people being able to get help in this way.
And, to add insult to injury, it really seems to be almost entirely created by trauma. So the people who would victimize are victims themselves, and our universal stance is “sorry, you can’t exist.”




I can’t speak for everyone with my comment here. I really, really hate the side effects of the AI bubble, where they force it into every fucking element of life. Honestly, I hate even that it’s being called AI. I hate that it’s made traditional algorithmic services like Google Now worse. And I hate the lack of privacy it’s brought.
That said, I have no problem with someone using it to complete a project. I do think it could raise a device security concern in a case like this, and I probably wouldn’t use the app because of that. But I use my own self-hosted LLMs all the time. They help me format things and save me time searching and researching when I want an answer, not knowledge.
There are many rational reasons to dislike and avoid “AI.” There are also many irrational people. And there are some good uses for the technology.