Reddit's CEO says facial verification may be introduced, ostensibly to prevent bots.

We all know how dangerous this can be, but most likely Reddit users will just accept it.

Yet they have a great free alternative right under their noses: Lemmy, which is many times better than its competitor.

I wish more people would discover Lemmy, but that’s unlikely.

  • Alaknár@sopuli.xyz · 4 days ago

    The human is smarter

    So, you want to hire hundreds of thousands of moderators? The human is smarter, yeah, but not the bot doing the detection.

    If they tune them, you use the methods and knowledge learned and adapt

    You say it like “tuning them” is a magic trick, where they wave their hands a couple of times, and now the detection algorithms are smarter than the bots writing the comments. SOMEONE has to go in, and figure out the maths to make the detection algorithms smarter and better at detecting. That takes time and resources.

    You’re also forgetting that “tuning them” works both ways. The people writing the shit-post bots also work on improving their tools, to make them indistinguishable from human posts.

    Also: how can you tell whether “lol, kys noob” was written by a human or by a bot? The vast majority of comments online are these short shit-comments.

    I’m just saddened by the state of things and how much better everybody else is at things I always thought the left was good at

    1. 4chan is not “magically” “good” at “OSINT”. They fuck up a lot of things too. It just so happens that what they’re most famous for required one dude who wrote a script and a bunch of kids with bandwidth to spare.
    2. Their OSINT is super iffy, hit-and-miss. Much like Reddit’s. Or any other large enough community’s.
    3. What @AnotherUsername@lemmy.ml said.
    • Melvin_Ferd@lemmy.world · edited · 3 days ago

      So, you want to hire hundreds of thousands of moderators? The human is smarter, yeah, but not the bot doing the detection.

      I don’t know where this is coming from. Nobody is being hired. If anything, I’m becoming more anti-mod lately. I feel like they put boxes on things and rapidly suck the oxygen out of the room. But that’s a different discussion.

      Maybe I’m reading this wrong, but to clarify: I am not saying we need to build our own bot detection, though it would be a nice-to-have eventually. I am saying we should be crowdsourcing our collective anger and ADHD or autism or whatever drives us to post bean moth Lemmy slop, and instead focus on collecting the worst bot infestations. There are patterns. Bots are not random enough that they can’t be identified with large crowdsourced efforts. They’re also in their infancy, which means it will only get harder going forward.

      You or I aren’t able to accurately tell right now. Have you ever seen the Sinclair news video? The one where every news station repeats the same dialogue. Can you or I flip on the news any day of the week and call that out? Unlikely. But we can logically understand it is something that happens. It becomes obvious there is a script only when you collect the data and begin to analyze it. That is what I’m saying we need to figure out and gamify.

      Name generation, text, patterns. At the start it won’t be accurate. But as more data is collected it’ll become obvious. If the bots were that good, these websites would have left their APIs open. But they closed them so we can’t collect this data. I’m the type of person who, when powerful people do something like that, wants to know why and work around it. It’s not a coincidence that they locked their sites down when people were given tools where anyone could collect data and feed it into AI for analysis.
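      To make “name generation, text, patterns” concrete, here is a minimal sketch of what a crowdsourced pattern heuristic could look like. Everything in it is a made-up assumption for illustration: the username template, the score weights, and the report threshold are hypothetical, not a tested detector.

```python
import math
import re
from collections import Counter

# Hypothetical heuristic: auto-generated account names often follow a
# Word_Word1234 template (two capitalized words joined by an underscore,
# then a short run of digits). The regex below encodes that assumption.
DEFAULT_NAME = re.compile(r"^[A-Z][a-z]+_[A-Z][a-z]+\d{2,4}$")

def name_score(username: str) -> float:
    """Crude 0-1 'bot-likeness' score based on the name alone."""
    score = 0.0
    if DEFAULT_NAME.match(username):
        score += 0.6  # matches the auto-generated template (assumed weight)
    if sum(c.isdigit() for c in username) >= 4:
        score += 0.2  # long digit runs are another weak signal
    return min(score, 1.0)

def crowd_flags(reports: list[str], threshold: int = 3) -> set[str]:
    """The crowdsourced part: an account only counts as flagged once
    at least `threshold` independent users have reported it."""
    counts = Counter(reports)
    return {name for name, n in counts.items() if n >= threshold}
```

      A single weak signal like this is easy for bot authors to route around, which is why the aggregation step matters more than any one heuristic: the per-name score narrows the search, the report threshold keeps one angry user from banning anyone.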

      Our failure to do anything when the greatest opportunities are right in front of us but slipping away is a tragedy of this generation.

      • Alaknár@sopuli.xyz · 3 days ago

        I am saying we should be crowd sourcing our collective anger and ADHD or Autism or whatever (…) and instead focus on collection of the worst bot infestations.

        That’s what “being a moderator” is, mate. You want hundreds of thousands of moderators.

        There are patterns. Bots are not random enough that they can’t be identified with large crowd sourced efforts

        You’re wrong.

        It becomes obvious there is a script only when you collect the data and begin to analyze it.

        You just said:

        I am not saying we need to build our own bot detection

        So, which is it?

        It becomes obvious there is a script only when you collect the data and begin to analyze it.

        There’s a massive difference between local news stations receiving a script to read out, and a bot farm having a “be negative, unfriendly, sow chaos” instruction.

        At the start it won’t be accurate

        So, it just won’t work? Got it.

        But as more data is collected it’ll become obvious

        I don’t think you understand what you’re talking about. Don’t get me wrong, I’m not trying to be contrarian here, I just honestly think that your idea of “AI bots” is kind of like “we have prepared one million sentences, and now our bots will be picking between them to generate whole posts on social networks”.

        I mean, sure, there can be patterns - like the whole “LinkedIn post” style, where most of the time it’s fairly obvious that you’re reading AI-generated slop… But that’s not what state entities or even just hackers use. They have access to much more sophisticated content.

        If the bots were that good, these websites would have left their APIs open.

        Reddit’s API is no longer open. Didn’t do a thing to stop bots.

        But they closed them so we can’t collect this data

        You don’t need however many API keys to collect that kind of data. At least not from Reddit.

        Our failure to do anything when the greatest opportunities are right in front of us but slipping away is a tragedy of this generation.

        Your proposed action is the equivalent of Sisyphus and his stone. Because you really seem to be forgetting that the AI tech is getting better all the time. And that any AI-detection actions you take feed that process. “Oh, they’ve detected these posts? OK, let’s tweak the algo until we get through and then flood them with our content”.

        Let’s even assume that you somehow pull it off and get a 100% detection rate as of right now. Six months down the line that will go down to 20%. Etc. etc. And you’ll be catching thousands of legitimate users in the crossfire.

        An anonymous “proof of humanity” token solves all AI issues without anyone having to spend billions on research and manpower.
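        For the curious: the “anonymous proof of humanity token” idea roughly maps onto Chaum-style blind signatures. An issuer verifies you are human once and signs a token it never actually sees, so later verification cannot be linked back to the identity check. Below is a toy sketch of that flow with deliberately tiny, insecure RSA numbers; it is an illustration of the math only, not production cryptography, and all the parameter choices are assumptions.

```python
import hashlib
import math
import secrets

# Issuer's RSA key. Tiny primes for the demo -- real keys are 2048+ bits.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

def digest(msg: bytes) -> int:
    """Hash the token into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. User side: pick a random one-time token and blind it with factor r,
#    so the issuer learns nothing about the token itself.
token = secrets.token_bytes(16)
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (digest(token) * pow(r, e, n)) % n

# 2. Issuer side: verify the user is human ONCE (captcha, ID, whatever),
#    then sign the blinded value.
blind_sig = pow(blinded, d, n)

# 3. User side: unblind. 'sig' is now an ordinary RSA signature on the
#    token, but the issuer never saw either value.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any site can verify with the public key alone; the token is
#    unlinkable to the humanity check, hence anonymous.
assert pow(sig, e, n) == digest(token)
```

        The unblinding works because `(m * r^e)^d = m^d * r (mod n)`, so multiplying by `r^-1` leaves a plain signature `m^d`. Whether that actually “solves all AI issues” is a separate argument, but the crypto side of an anonymous token is a solved problem.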