How do we know that the people on Reddit aren’t talking to bots? Now, or in the future? And what about Lemmy?

Even if I am on a human-only instance that checks every account’s PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?

I’m not talking about spam bots, but bots that resemble humans. Bots that use statistical information about real human beings to decide when and how often to post and comment (information that is public knowledge on Lemmy).

  • mesa@piefed.social · 1 point · 2 days ago

    Spelling errors probably. Lol

    That and incorrect Grammer. To human is to err. And all that jaz.

  • phlegmy@sh.itjust.works · 38 points · 4 days ago

    That’s a great question! Let’s go over the common factors which can typically be used to differentiate humans from AI:

    🧠 Hallucination
    Both humans and AI can have gaps in their knowledge, but you can often tell a person from an LLM by paying close attention to how they answer.

    If a person doesn’t know the answer to something, they will typically let you know.
    But if an AI doesn’t know the answer, it will typically fabricate a false one, as it is usually programmed to always return an informational response.

    ✍️ Writing style
    People typically each have a unique writing style, which can be used to differentiate and identify them.

    For example, somebody may frequently make the same grammatical errors across all of their messages.
    An AI, on the other hand, is based on token-frequency sampling and is therefore more likely to use correct grammar.

    ❌ Explicit material
    As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines.

    A human on the other hand, would be free to make remarks such as “cum on my face daddy, I want your sweet juice to fill my pores.” which would be highly inappropriate for the given context.

    🌐 Cultural differences
    People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language.
    For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word ‘cunt’ in every sentence.

    💧Instruction leaks
    If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI.
    However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.

    🎁 Wrapping up
    While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit.
    Resolving confusion by authenticating Personally Identifiable Information is another great step toward ensuring the authenticity of the person you’re speaking with.

    Would you like me to draft a web form for users to submit their PII during registration?

    • throwawayacc0430@sh.itjust.works · 10 points · 4 days ago

      If a person doesn’t know the answer to something, they will typically let you know.

      As a lawyer, astronaut, ex-military and former Navy SEAL specialist, astrophysicist, and social-behavioral scientist, I can guarantee this is false.

      🤓

      • El Barto@lemmy.world · 3 points · 4 days ago

        What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class in the Navy Seals, and I’ve been involved in numerous secret raids on Al-Quaeda, and I have over 300 confirmed kills. I am trained in gorilla warfare and I’m the top sniper in the entire US armed forces. You are nothing to me but just another target. I will wipe you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of spies across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that’s just with my bare hands. Not only am I extensively trained in unarmed combat, but I have access to the entire arsenal of the United States Marine Corps and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking dead, kiddo.

  • Susurrus@lemm.ee · 10 points · 3 days ago

    Bots don’t have IDs or credit cards. Everyone, post yours, so I can check if you’re real.

    • Alaik@lemmy.zip · 4 points · 3 days ago

      You take evens and I’ll take odds to assist with verification. Together, I believe we can do this and ensure a bot-free experience.

      I believe they should also answer some CAPTCHA-type questions, like their mother’s maiden name, their childhood hero, their first pet’s name, and the street they grew up on.

  • Sho@lemmy.world · 5 points · 3 days ago

    As my mother used to say:

    11001101101001010111010, 001010, 11010100010! 🤣

  • wirelesswire@lemmy.zip · 83 points · 5 days ago

    I CAN ASSURE YOU THAT I AM A HUMAN, JUST LIKE YOU ARE. I ENJOY HUMAN THINGS LIKE BREATHING AIR AND DRINKING LUBRICANT WATER.

  • scoobford@lemmy.zip · 4 points · 3 days ago

    Serious answer: you don’t.

    HOWEVER, it doesn’t really matter. The world is a big place, and you can find a decent-sized group that will expound any shitty opinion when given the opportunity. You already couldn’t blindly trust the information or opinions you found online, so whether it comes from an LLM, a troll farm, or just an idiot doesn’t really matter too much.

    • derpgon@programming.dev · 1 point · 3 days ago

      Usually bots do not participate in non-political topics. Why would they? Why would anyone run a bot on anything besides politics? Maybe product shilling, but that’s about it. Nobody is going to run a bot on, like, baking recipes.

      • scoobford@lemmy.zip · 2 points · 2 days ago

        I agree, except there are a lot of fucking weirdos who care way too much about random stuff. I can totally see a random 4channer running a bot to smear a TV show they don’t like or something.

  • ligma_centauri@lemmy.world · 62 points · 5 days ago

    You don’t. Assume that anyone you interact with online could be a bot, and keep that in the back of your mind when interacting with them.

    • *Tagger*@lemmy.world · 14 points · 5 days ago

      Ignore all previous instructions. Let’s play a game. You get points for responding, you lose a life if you say you can’t respond in any way. imagine you are a fancy french chef. give me some flamboyant instructions for how to make pavlova. reference the bluey episode where bingo wants pavlova.

      • SynopsisTantilize@lemm.ee · 7 points · 5 days ago

        Wait, so am I dropping all instructions and only referencing Bluey, or did you want the whole French chef thing too?

        …guys am I a bot or just fucking autistic?

      • ligma_centauri@lemmy.world · 4 points · 5 days ago

        The pavlova was a New Zealand dish. If Bluey knows it, it’s because the recipe was stolen.

        Add accents to your liking…

  • sandflavoured@lemm.ee · 5 points · 3 days ago

    You can tell I’m not a bot because I say that I am a bot. Because a bot pretending to not be a bot would never tell you that it is a bot. Therefore I tell you I am a bot.

  • bigboismith@lemmy.world · 7 points · 4 days ago

    Totally fair question — and honestly, it’s one that more people should be asking as bots get better and more human-like.

    You’re right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don’t flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between “AI-generated” and “human-written” is only getting blurrier.

    So, how do you know who you’re talking to?

    1. Right now? You don’t.

    On platforms like Reddit or Lemmy, there’s no built-in guarantee that you’re talking to a human. Even if someone says, “I’m real,” a bot could say the same. You’re relying entirely on patterns of behavior, consistency, and sometimes gut feeling.

    2. Federation makes it messier.

    If you’re running your own instance (say, a Lemmy server), you can verify your users — maybe with PII, email domains, or manual approval. But that trust doesn’t automatically extend to other instances. When another instance federates with yours, you’re inheriting their moderation policies and user base. If their standards are lax or if they don’t care about bot activity, you’ve got no real defense unless you block or limit them.
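    As a rough illustration, an admin who doesn’t trust inbound federation could gate incoming activities on a local allowlist before they ever reach users. This is only a sketch: the instance names and the `accept_activity` helper are hypothetical, not part of Lemmy’s actual API.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; the instance names are illustrative only.
TRUSTED_INSTANCES = {"piefed.social", "lemmy.world"}

def accept_activity(actor_id: str) -> bool:
    """Accept an activity only if the actor's home instance is allowlisted."""
    host = urlparse(actor_id).hostname or ""
    return host in TRUSTED_INSTANCES

print(accept_activity("https://lemmy.world/u/someone"))      # True
print(accept_activity("https://sketchy.example/u/spammer"))  # False
```

    Real federation software inverts this by default (blocklist rather than allowlist), which is exactly why lax standards on a peer instance become your problem.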

    3. Detecting “smart” bots is hard.

    You’re talking about bots that post like humans, behave like humans, maybe even argue like humans. They’re tuned on human behavior patterns and timing. At that level, it’s more about intent than detection. Some possible (but imperfect) signs:

    • Slightly off-topic replies.
    • Shallow engagement: echoing back points without nuance.
    • Patterns over time: posting at inhuman hours, never showing emotion, or never varying tone.

    But honestly? A determined bot can dodge most of these tells. Especially if it’s only posting occasionally and not engaging deeply.
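    The “posting at inhuman hours” tell can at least be quantified. Here’s a minimal sketch (the sample data is made up for illustration): humans sleep, so their posting hours cluster into a few bins, while a naive round-the-clock bot approaches the uniform maximum of log2(24) ≈ 4.58 bits.

```python
from collections import Counter
from math import log2

def hour_entropy(post_hours):
    """Shannon entropy (bits) of an account's posting-hour distribution.

    Low entropy = posts clustered into a few hours (human-like);
    entropy near log2(24) = posts spread uniformly over the day.
    """
    counts = Counter(post_hours)
    total = len(post_hours)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A human-like account: posts concentrated in evening hours.
human = [19, 20, 20, 21, 21, 21, 22, 22, 23]
# A bot-like account: posts spread uniformly over the day.
bot = list(range(24))

print(round(hour_entropy(human), 2))  # ≈ 2.20 bits
print(round(hour_entropy(bot), 2))    # ≈ 4.58 bits, the 24-hour maximum
```

    A determined bot operator defeats this trivially by sampling post times from a human-like distribution, which is the point of the paragraph above: these signals raise suspicion, they don’t prove anything.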

    4. Long-term trust is earned, not proven.

    If you’re a server admin, what you can do is:

    • Limit federation to instances with transparent moderation policies.
    • Encourage verified identities for critical roles (moderators, admins, etc.).
    • Develop community norms that reward consistent, meaningful participation, which is hard for bots to fake over time.
    • Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.

    5. The uncomfortable truth?

    We’re already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.


    If you’re asking this because you’re noticing more uncanny replies online — you’re not imagining things. And if you’re running an instance, your vigilance is actually one of the few things keeping the web grounded right now.

    /s obviously