Please. Captcha by default. Email domain filters. Auto-block federation from servers that don’t comply. By default. Urgent.

The meme isn’t so funny anymore.

And yes, to refute some comments: this post is being upvoted by bots. It only took a single computer, not “thousands of dollars”.

  • HTTP_404_NotFound@lemmyonline.com

    Sigh…

    All of those ideas are bad.

    1. Captchas are already pretty weak at combating bots. It’s why reCAPTCHA and others were invented. The people who run bots spend lots of money for their bots to… bot. They have access to quite advanced modules for decoding captchas. They also pay kids in India and Africa pennies to just create accounts on websites.

    I am not saying captchas are completely useless; they do block the lowest-hanging fruit currently, that being most of the script kiddies.

    2. Email domain filters.

    Issue number one has already been covered below/above by others: you can use a single Gmail account to register a basically unlimited number of accounts (see the sketch after this list).

    Issue number two: spammers LOVE to use Office 365 for spamming. Most of the spam I find actually comes from *.onmicrosoft.com inboxes. It’s quick for them to spin one up on a trial, and by the time the trial is over, they have moved on to another inbox.

    3. Auto-blocking federation for servers that don’t follow the above two broken rules

    This is how you destroy the platform. When you block legitimate users, they will think the platform is broken: none of their comments go through, and they can’t see posts properly.

    They don’t know this is due to admins defederating servers. All they see is broken content.
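
    To illustrate issue number one from the list above, here is a minimal sketch (plain Python; the account name is made up) of how one Gmail inbox yields an effectively unlimited supply of unique-looking registration addresses, since Gmail delivers user+anything@gmail.com and dotted variants to the same mailbox:

    ```python
    # Minimal sketch: one Gmail inbox, unlimited unique-looking signup addresses.
    # "someuser" is a made-up account name; Gmail routes user+tag@gmail.com
    # (and dotted variants) to the same mailbox, so per-address checks pass.
    base = "someuser"
    aliases = [f"{base}+signup{i}@gmail.com" for i in range(1000)]
    print(len(set(aliases)), aliases[:2])
    # 1000 ['someuser+signup0@gmail.com', 'someuser+signup1@gmail.com']
    ```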

    At this time, your best option is admin approval of registrations, combined with keeping tabs on users.

    If you notice an instance is hosting spammers (let’s use my instance as an example), I have my contact information right in the sidebar. If you notice spam, WORK WITH US, and we will help resolve the issue.

    I review my reports. I review spam on my instance. None of us are going to be perfect.

    There are very intelligent people who make lots of money creating “bots” and “spam”. NOBODY is going to stop all of it.

    The only way to resolve this is to work together, identify problems, and take action.

    Nuking every server that doesn’t have captcha enabled is just going to piss off users and ruin this movement.

    One possible thing that might help: an easy listing of registered users on a server. I noticed that doesn’t actually appear to be easily accessible without hitting REST APIs or querying the database.
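
    A rough sketch of that listing, pulled straight from the instance database (the connection string, and the table and column names person, local_user and published, are assumptions based on a 0.18-era Lemmy schema and may differ between versions):

    ```python
    # Hedged sketch: list locally registered users by querying the instance's
    # Postgres database directly. Table/column names are assumptions based on
    # a 0.18-era Lemmy schema and may not match every version.
    import psycopg2

    conn = psycopg2.connect("dbname=lemmy user=lemmy")  # hypothetical connection string
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT p.name, p.published          -- username and sign-up time
            FROM person AS p
            JOIN local_user AS lu ON lu.person_id = p.id
            ORDER BY p.published DESC
            """
        )
        for name, published in cur.fetchall():
            print(name, published)
    ```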

    • Dessalines@lemmy.ml

      This is all 100% correct. People have already written captcha-bypassing bots for Lemmy; we know from experience.

      The only way to stop bots is the way that has worked for forums for years: registration applications. At lemmy.ml we have historically blocked any server that doesn’t have them turned on, because of the likelihood of bot infiltration from them.

      Registration applications have 100% stopped bots here.

      • eyy@lemm.ee

        You’re right that captchas can be bypassed, but I disagree that they’re useless.

        Do you lock your house? Are you aware that most locks can be picked and windows can be smashed?

        Captchas can be defeated, but that doesn’t mean they’re useless: they increase the level of friction required to automate malicious activity. Maybe not a lot, but along with other measures, it may make things tricky enough to circumvent that it discourages a good percentage of bot spammers. It’s the “Swiss cheese” model of security.

        Registration applications stop bots, but they also stop legitimate users. I almost didn’t get onto the fediverse because of registration applications. I filled out applications at lemmy.ml and beehaw.org, and then forgot about them. Two days later, I got reminded of the fediverse, and luckily I found this instance, which didn’t require some sort of application to join.

        • HTTP_404_NotFound@lemmyonline.com

          Don’t read just the first sentence and then gloss over the rest.

          I am not saying captchas are completely useless; they do block the lowest-hanging fruit currently, that being most of the script kiddies.

    • eyy@lemm.ee

      Haven’t you heard of the “Swiss cheese” model of security?

      The best way to ensure your server is protected is to unplug it from the Internet and put it in an EMF-shielded Faraday cage.

      There’s always a tradeoff between security, usability and cost.

      Captchas can be defeated, but that doesn’t mean they’re useless: they increase the level of friction required to automate malicious activity. Maybe not a lot, but along with other measures, it may make things tricky enough to circumvent that it discourages a good percentage of bot spammers.

    • sugar_in_your_tea@sh.itjust.works

      I disagree. I think the solution is moderation. Basically, have a set of tools that identify likely bots, and let human moderators make the call.

      If you require admins to manually approve accounts, admins will either automate approvals or stop approving. That’s just how people tend to operate imo. And the more steps you put between people wanting to sign up and actually getting an account, the fewer people you’ll get to actually go through with it.

      So I’m against applications. What we need is better moderation tools. My ideal would be a web of trust: you get more privileges the more trusted people vouch for you. I think that trust should start from the admins, then flow to the mods, and then to regular users.

      But lemmy isn’t that sophisticated. Maybe it will be some day, IDK, but it’s the direction I’d like to see things go.
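
      A toy sketch of that web-of-trust idea, not anything Lemmy implements today: trust radiates out from the admins, and privileges could be tied to how far a user sits from that root. The names and graph below are made up for illustration.

      ```python
      # Toy web-of-trust sketch: trust radiates out from the admin account, and a
      # user's privilege level could be tied to how far they are from that root.
      # The graph data and names below are entirely hypothetical.
      from collections import deque

      vouches = {
          "admin": ["mod_a", "mod_b"],
          "mod_a": ["alice", "bob"],
          "mod_b": ["alice"],
          "alice": ["carol"],
      }

      def trust_distance(root: str) -> dict[str, int]:
          """Breadth-first distance from the root; smaller means more trusted."""
          dist, queue = {root: 0}, deque([root])
          while queue:
              current = queue.popleft()
              for person in vouches.get(current, []):
                  if person not in dist:
                      dist[person] = dist[current] + 1
                      queue.append(person)
          return dist

      print(trust_distance("admin"))
      # {'admin': 0, 'mod_a': 1, 'mod_b': 1, 'alice': 2, 'bob': 2, 'carol': 3}
      ```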

  • BornVolcano@lemmy.world

    Image Transcription: Meme


    [‘Man vs. Giant’ - Dramatic artwork depicting ‘Yhorm the Giant’ from ‘Dark Souls 3’ towering over the protagonist from ‘Dark Souls 3’, ‘Ashen One’. The giant figure holds a massive sword that is planted in the ground with both hands, while the comparatively tiny ‘Ashen One’ holds a regular-sized sword in his right hand and adopts a fighting stance. Text placed over the stomach of the giant character, and over the smaller protagonist figure, reads as follows]

    BOTS

    LEMMY


    ^I’m a human volunteer transcribing posts in a format compatible with screen readers, for blind and visually impaired users!^

  • fubo@lemmy.world

    Look up the origins of IRC’s EFNet, which was created specifically to exclude a server that allowed too-easy federation and thus became an abuse magnet.

  • Juliie@lemmy.world

    We need a distributed, decentralized, curated whitelist. New servers would apply to be on it and hopefully get a response within a week at most, after some kind of precisely defined anti-spam/bot audit. Existing servers would then get periodic checks.

    Like crypto has a confirmed transaction ledger, this would be some kind of not-a-bot confirmation ledger chain.

    The weak side: if bot servers somehow get on the whitelist in enough numbers, they can poison it.

    Mind you, this whitelist chain has nothing to do with the content itself, just whether it is AI/spam/bots or human.

    • Shinhoshi@infosec.pub

      BTW, it might be more inclusive language to use “allow list” and “block list”

      • TrueDahn@lemmy.ml

        I can’t imagine being so obsessed with race politics as to think that purely technical terms like “white list” and “black list”, which have never had any connection to race relations whatsoever, are somehow non-inclusive.

  • archchan@lemmy.ml

    I’m against email domain whitelists and captchas (at the very least Google’s captchas).

      • sugar_in_your_tea@sh.itjust.works

        It largely just trains their AI, and a lot of people don’t want to do that.

        Also, a lot of captcha implementations have issues with content blockers and whatnot.

      • arwag0l0n@lemmy.world

        @archchan@lemmy.ml literally has “arch” in their name. Do you really have to ask why a fan of Arch Linux is against anything that Google has even touched?

  • draughtcyclist@programming.dev

    Everyone is talking about how these things won’t work. And they’re right, they won’t work 100% of the time.

    However, they work 80-90% of the time and help keep the numbers under control. Most importantly, they’re available now. This keeps Lemmy from being a known easy target. It gives us some time to come up with a better solution.

    This will take some time to sort out. Take care of the low hanging fruit first.

  • tyfi@wirebase.org

    Mine got blown up a day or two ago, before I had enabled captchas. About 100 accounts were created before I started getting rate-limited (or similar) by Google.

    Better admin tools are definitely needed to handle the scale. We need a single pane of glass to see signups and other user details. Hopefully it’s in the works.

  • Aux@lemmy.world

    Lemmy is just getting started and way too many people are talking about defederation for any reason possible. What is even the point of a federated platform if everyone’s trying to defederate? If you don’t like federation so much, go use Facebook or something.

    • Nerd02@forum.basedcount.com

      This. Defed is not the magic weapon that will solve all your problems. Captcha and email filters should be on by default though.

      • Aux@lemmy.world

        Just to add to that, imagine if people started defederating email. Like, WTF is that even? Defederation should not even be an option.

        • AgreeableLandscape@lemmy.ml

          imagine if people started defederating email

          There are literally globally maintained blacklists of spam email sources. When people lease a static IP address, the first thing they do is check it against the major email blacklists.
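
          Concretely, most of those blacklists are DNS-based blocklists (DNSBLs): you reverse the IP’s octets, append the list’s zone, and any DNS answer means the address is listed. A minimal sketch, using zen.spamhaus.org as one widely used zone (queries routed through some large public resolvers may be refused):

          ```python
          # Minimal DNSBL check: reverse the IPv4 octets, append the blocklist zone,
          # and resolve. An answer means "listed"; NXDOMAIN means "not listed".
          import socket

          def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
              query = ".".join(reversed(ip.split("."))) + "." + zone
              try:
                  socket.gethostbyname(query)
                  return True
              except socket.gaierror:
                  return False

          print(is_listed("127.0.0.2"))  # Spamhaus's documented test address, should print True
          ```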

        • Ludo@lemmy.world

          Certainly it should. If you federate with a server that breaks all your core rules, you shouldn’t force mods to deal with a constant stream of garbage. Just cut off the source.

            • Ludo@lemmy.world

              Lol, no. Defederation is a tool. Sometimes it is the right call to use it. Go use Gab or something if you want a Voat-like hellhole filled with neo-Nazis and Q-tard conspiracy nonsense. I don’t want to be part of a community that allows that shit, though.

              • Aux@lemmy.world

                Again, go use Facebook or Reddit. They will suit your needs and wishes.

                • Ludo@lemmy.world

                  Again, go use Gab or Daily Stormer. They protect the “freeze peach” (aka right wing hate speech) you are so concerned about.

    • Greenskye@lemmy.world

      My understanding from the beehaw defederation is that more surgical moderation tools just don’t exist right now (and likely won’t for a while unless the two Lemmy devs get some major help). Admins really only have a singular nuclear option to deal with other instances that aren’t able to tackle the bot problem.

      Personally, I don’t see defederating as a bad thing. People and instances are working through who they want in their social network. The well-managed servers will eventually rise to the top, with the bot-infested and draconian ones falling into irrelevance.

      As a user, this will result in some growing pains, since Lemmy currently doesn’t offer a way to migrate your account. Personally, I already have 3 Lemmy accounts. A good app front end that minimizes the friction of account switching would greatly ease these growing pains.

  • Saik0@lemmy.saik0.com

    Please stop trying to tell me how to run my instance. If I wanted your input or your rules I would have joined your instance.

    If you have a problem with my instance, you’re within your rights to defederate or block me. But I do not care about your plea to enable shit that I don’t want to enable.

    What I will say is: congrats! You’ve shown that you’re willing to use bots to manipulate your post. That earns you a ban from my instance! That’s the glory of the Fediverse.

  • Cyclohexane@lemmy.ml

    Auto-block federation from servers that don’t comply.

    NO! Do NOT defederate based on how an instance chooses to operate internally. It is not your concern. You should only defederate if that instance causes you repeated offenses. Do not issue pre-emptive blanket blocks.

    • anteaters@feddit.de

      If they choose not to take measures against bots, defederation is the only way to keep that wave out of your own instance.

      • Cyclohexane@lemmy.ml

        Do not make assumptions about how other instances are operating. You don’t know what measures they’re taking. If they haven’t caused you trouble yet, don’t try to predict it by making generalizations. It creates an echo-chamber Internet.

    • o_o@programming.dev

      Agree! Defederation is a nuclear option. The more we do it, the more we reduce the value of “the fediverse”, and the more likely we are to kill this whole project.

      I think defederation should only be a consideration if an instance is consistently and frequently a problem for your instance over a long period of time. It’s not a pre-emptive action.

  • jollyroger@lemmy.dbzer0.com

    The admin https://lemmy.dbzer0.com/u/db0 from the lemmy.dbzer0.com instance may have a solution: a chain-of-trust system in which instances whitelist each other and build larger whitelists to contain the spam/bot problem, instead of constantly blacklisting. Admins and mods, maybe take a look at their blog post explaining it in more detail: https://dbzer0.com/blog/overseer-a-fediverse-chain-of-trust/

    • star_boar@lemmy.ml

      db0 probably knows what they’re talking about, but the idea that there would be an “Overseer Control Plane” managed by one single person sounds like a recipe for disaster

    • mlaga97@lemmy.mlaga97.space

      Obviously biased, but I’m really concerned this will lead to it becoming infeasible to self-host with working federation and result in further centralization of the network.

      Mastodon has a ton more users, and I’m not aware of it having to resort to IRC-style federation whitelists.

      I’m wondering if this is just another instance of kbin/lemmy moderation tools being insufficient for the task and if that needs to be fixed before considering breaking federation for small/individual instances.

      • Raiden11X@programming.dev

        He explained it already. It looks at the ratio of users to posts. If your “small” instance has 5000 users and 2 posts, it would probably assume a lot of those users are spam bots. If your instance has 2 users and 3 posts, it would assume your users are real. There’s a ratio, and the admin of each server that uses it can control the threshold at which it assumes a server is overrun by spam accounts.
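
        For illustration only (this is not db0’s actual Overseer code), that heuristic boils down to something like the sketch below, with the threshold left to each admin:

        ```python
        # Illustrative sketch of the user-to-activity ratio heuristic described above.
        # The threshold value is arbitrary; the real tool lets each admin set it.
        def looks_suspicious(users: int, posts: int, max_users_per_post: float = 20.0) -> bool:
            activity = max(posts, 1)            # avoid dividing by zero
            return users / activity > max_users_per_post

        print(looks_suspicious(5000, 2))   # True: 5000 users, 2 posts -> probably bots
        print(looks_suspicious(2, 3))      # False: small but active -> probably real
        ```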