Reddit’s conversational AI product, Reddit Answers, suggested that users interested in pain management try heroin and kratom, yet another extreme example of a chatbot giving dangerous advice, even one trained on Reddit’s highly coveted trove of user-generated data.

https://en.wikipedia.org/wiki/Bromism

However, in 2025 a man was poisoned after ChatGPT suggested he replace the sodium chloride in his diet with sodium bromide; sodium bromide is a safe replacement only for non-nutritional purposes, e.g., cleaning.[3][4][5]

  • ToastedPlanet@lemmy.blahaj.zoneOP · 1 point · 1 day ago

    This is why I’m glad I’m on lemmy and not reddit. And why we should not allow AI-generated messages on this website in lieu of comments and posts from other people.

  • foggy@lemmy.world · 31 points · 2 days ago (edited)

    Y’all ever read that thread about the guy getting addicted to heroin? Truly surreal.

    Just a bored guy who decides to get something new from his dealer and posts about it on reddit. The next two years of comments are a cautionary tale.

    /u/SpontaneousH, for anyone morbidly curious.

    • brbposting@sh.itjust.works · 12 points · 3 days ago

      Alternate link to somewhat prevent Google from interlinking us with you quite so tightly

      Original reddit link:

      https://old.reddit.com/r/BORUpdates/comments/16223aj/updatesaga_the_emotional_saga_of_spontaneoush_the/
      


  • RightHandOfIkaros@lemmy.world · 6 points · 3 days ago

    Is this “How do I remove a small cylinder from another small cylinder? It is imperative that the small cylinder remain unharmed.” but for AI?

      • RightHandOfIkaros@lemmy.world · 2 points · 1 day ago

        To be fair, actual people suggest harmful stuff to other people online probably way more often than LLMs do. The AI had to learn it from somewhere; it didn’t create that behavior on its own.

        • ToastedPlanet@lemmy.blahaj.zoneOP · 2 up / 1 down · 1 day ago

          If the AI were a person, the mods could have banned them. Instead, the developers had to patch how the AI responded to stimuli to prevent this behavior.

          The problem isn’t only the bad behavior. It’s the automation of that bad behavior, which lets systems, and essentially tool-assisted people, mass-produce it in a way that can’t be managed without aggressive moderation.
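
          To make the scale point concrete, here’s a toy sketch (hypothetical names and thresholds, not any real moderation tool) of the kind of sliding-window rate check mods lean on to separate a person posting from a system mass-producing posts:

              # Toy sliding-window rate check: flag accounts that post faster
              # than a human plausibly could. All names/thresholds are made up.
              from collections import deque
              import time

              WINDOW_SECONDS = 60
              MAX_POSTS_PER_WINDOW = 5  # assumed human-plausible ceiling

              post_times = {}  # account -> deque of recent post timestamps

              def record_post(account, now=None):
                  """Record a post; return True if the account looks automated."""
                  now = time.time() if now is None else now
                  times = post_times.setdefault(account, deque())
                  times.append(now)
                  # Drop timestamps that have fallen out of the sliding window.
                  while times and now - times[0] > WINDOW_SECONDS:
                      times.popleft()
                  return len(times) > MAX_POSTS_PER_WINDOW

              # A human pace stays under the threshold; an automated flood trips it.
              for i in range(3):
                  assert not record_post("human", now=i * 20.0)
              for i in range(10):
                  flagged = record_post("bot", now=float(i))
              print("bot flagged:", flagged)  # True

          And that’s the catch: a heuristic like this catches crude flooding, but an AI posting at a human pace sails right under it, which is why the moderation has to get aggressive.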

          Also, it sucks that the filter got applied to the article. It wasn’t there when I read it initially.