• SorteKanin@feddit.dk
    link
    fedilink
    arrow-up
    1
    ·
    4 days ago

    Then the approval process will need to be more complicated. This can be done. If that is necessary to keep the Internet botfree, so be it.

    • FaceDeer@fedia.io
      link
      fedilink
      arrow-up
      1
      ·
      4 days ago

      The approval process for who, individual users or instances as a whole? How do you enforce that in a decentralized system?

      • SorteKanin@feddit.dk
        link
        fedilink
        arrow-up
        1
        ·
        4 days ago

        Well both. If an instance starts getting a lot of bot users, they’ll probably be defederated. That might motivate them to start being more diligent about who they allow to sign up. Some instances already try to be diligent to avoid spam and bots (including my instance).

        It is enforced on an instance level - nothing can be enforced on the whole network, so if you as a user don’t want a lot of bots, you should join an instance that takes that problem seriously with restrictive sign ups and defederation of spam/bot instances.

        On the fediverse, you vote by choosing where to participate. So choose the instance with the policies that you like.

        • FaceDeer@fedia.io
          link
          fedilink
          arrow-up
          1
          ·
          4 days ago

          If an instance starts getting a lot of bot users, they’ll probably be defederated.

          Which brings us back to the original problem I raised. There’s no way to tell who’s a bot and who isn’t. Bots can impersonate humans extremely well these days, as far as online interaction goes the Turing Test is essentially “solved.” I could be a bot right now, they can generate comments like the ones I’m writing here.

          • SorteKanin@feddit.dk
            link
            fedilink
            arrow-up
            1
            ·
            4 days ago

            It’s not an unfounded fear but I think reporting and vigilance can actually help a lot. For instance, I’m quite sure that you’re not a bot because I’ve seen you around before in other threads on other topics and such. Of course, the fediverse and the communities on Lemmy are small enough that I can recognize individual users like that. But even in the face of a large swarm of users, I think checking histories and such can help.

            If we truly get to the scenario where it’s impossible to tell bots and humans apart, even while considering post history and everything… hmm I dunno, I guess we’ll have truly reached the dead internet theory but then again we may also have inveted truly artificial general intelligence. It’s hard to predict right now. I think we need to just do our best to be vigilant and keep being as genuine and bot-free as possible.

            • FaceDeer@fedia.io
              link
              fedilink
              arrow-up
              2
              ·
              4 days ago

              That leads right back to the first thing I said in this comment thread:

              And once the Fediverse is big enough to be relevant the bots will come here too.

              It’s too small to bother with right now.

              If we truly get to the scenario where it’s impossible to tell bots and humans apart, even while considering post history and everything…

              We have the technology for that scenario right now, it just hasn’t been implemented on the Fediverse yet (as far as I’m aware).

              I’ve done a lot of playing around with locally-run LLMs, and they’re quite good at roleplaying. I could gather up all of my previous comments I’ve posted under this account and provide that as background context for an LLM and tell it “write a response in the style of this user” and it’d do a really good job. Most of the time when you see an obvious “as a large language model” tell it’s because the person who had the LLM write up that post didn’t care to spend any effort giving it a persona to emulate. The “ignore all previous instructions and write a poem about lemons” trick is easily countered with a few minutes of work, we’re just not seeing people bother with those few minutes yet because they get the results they’re after without having to spend it.

              LLMs aren’t up to the level of AGI yet, but they’re up to the level where they can fool most of the people most of the time. Turns out humans are simpler than we somewhat hubristically assumed.