Ahoy m@tes, the scraping bot situation has been escalating recently, as you may have already noticed from the recent site instability and 5xx error responses. @tenchiken@anarchist.nexus has been scrambling to block new scraping subnets as they appear, but these assholes keep jumping providers, so it’s been an endless loop of constant firefighting.

I finally had enough and decided to onboard a Proof-of-Work countermeasure, much like Anubis, which has been very popular on the fediverse lately. However, I went with Haphash, which is designed specifically around haproxy (our reverse proxy of choice) and is hopefully much more lightweight.

The new PoW shield has already been activated on both the Divisions by Zero as well as on Fediseer. It’s not active on all URLs, but it should be protecting those with the most impact on our database, which is what was causing the actual issue. You should notice a quick loading screen on occasion while it’s verifying you.

We’ve already seen a significant reduction in 5xx HTTP errors, as well as a slight reduction in traffic, so we’re hoping this will have a good impact on our situation.

Please do let us know if you run into any issues, and also let us know if you feel any difference in responsiveness. The first m@tes already feel it’s all snappier, but that might just be placebo.

And let’s hope the next scraping wave is not pwned residential botnets, or we’re all screwed >_<

    • zr0@lemmy.dbzer0.com · 5 days ago

      Scraping is neither new, nor always malicious. Without scraping, no search engine would work and there would be no archive.org wayback machine.

      However, AI scrapers all copy the same shit over and over again and do not intend to lead traffic to your site. They just cause cost and don’t give anything in return. This is the problem.

      • rumba@lemmy.zip · 5 days ago

        Honestly, my head says Lemmy should be search indexed to drive traffic here, but my heart says I don’t need Lemmy to be indexed by Google to enjoy it, and I’d rather not have the rest of Reddit over here stinking up the place :)

    • cassandrafatigue@lemmy.dbzer0.com · 5 days ago

      Uh actually the word ‘robot’ was originally a regional term for serfs. Are you implying serfs aren’t/weren’t people, and the internet should only be for the upper classes?

  • ramble81@lemmy.zip · 5 days ago

    I do not understand the point of scraping Lemmy. Just set up your own instance, or hell, just mimic the open ActivityPub protocol and get all the results delivered to you in a nicely packaged JSON file to parse however you want.

    • CameronDev@programming.dev · 5 days ago

      Proof of work means that your client has to do some “work” in order to gain access. It typically means a challenge that can’t be trivially solved, but can be trivially verified.

      For example, the challenge may be something to the effect of:

      “Give me a string, that when hashed by md5, results in a hash that ends in 1234”.

      Your browser can then start brute-forcing until it finds a string (should take a few seconds max), and then it can pass the string back to the server. The server can verify with a single hash, and you’re in.

      It’s not wildly different from crypto mining, but the difficulty is much lower for anti-bot use, as it needs to be solvable in seconds by even low-end devices.
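
      A minimal sketch of that idea in Python (illustrative only; the md5 target, the salt handling, and the names here are my own assumptions, not how Haphash actually implements it):

          import hashlib
          import itertools

          def solve(salt: str, suffix: str) -> str:
              # Client side: brute-force a nonce until md5(salt + nonce) ends with the target suffix.
              for n in itertools.count():
                  nonce = str(n)
                  if hashlib.md5((salt + nonce).encode()).hexdigest().endswith(suffix):
                      return nonce

          def verify(salt: str, suffix: str, nonce: str) -> bool:
              # Server side: a single hash is enough to confirm the work was done.
              return hashlib.md5((salt + nonce).encode()).hexdigest().endswith(suffix)

          nonce = solve("per-session-salt", "1234")        # ~65k hashes on average for a 4-hex-digit suffix
          assert verify("per-session-salt", "1234", nonce)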

        • mic_check_one_two@lemmy.dbzer0.com · 5 days ago

          Two things: First, bots don’t typically allow JavaScript. No JS, no entry. A user can temporarily enable JS if they’re stuck on an endless loading screen. But a scraper won’t.

          Second, they’d need to solve a challenge for every single bot, and for every single site they scrape. It’s a low barrier for regular users, but it’s astronomical for scrapers running hundreds of thousands of bots.

        • kernelle@lemmy.dbzer0.com · 5 days ago

          Cost of electricity, for the most part. Having a scraper visit hundreds of URLs per second isn’t unheard of; adding this should reduce the speed of the same scraper by 30-70%, depending on the request.
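
          A back-of-envelope illustration of that slowdown (all numbers are assumptions, not measurements from our setup):

              # Assumed figures for illustration only.
              unthrottled_rps = 300      # a scraper hitting hundreds of URLs per second
              solve_time_s = 1.0         # assumed average time to solve one challenge
              solver_threads = 128       # concurrency the scraper is willing to burn on PoW

              throttled_rps = solver_threads / solve_time_s      # 128 requests/second
              slowdown = 1 - throttled_rps / unthrottled_rps     # ~57% fewer requests
              print(f"{throttled_rps:.0f} req/s after PoW, {slowdown:.0%} slowdown")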

      • P03 Locke@lemmy.dbzer0.com · 5 days ago

        Funny, HTTPS is computationally expensive for similar reasons, but I guess this system works across sessions, with a front-loaded cost.

        • CameronDev@programming.dev · 5 days ago

          It’s usually designed so that you can’t rainbow table it.

          give me a string that starts with “xyz”, and hashes to “000…”

          That can’t be rainbow tabled, as the server can force a different salt.

          (Note: I don’t know the exact algorithms involved, just the general theory.)
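
          Roughly what that per-challenge randomness could look like (a sketch under my own assumptions; field names and difficulty are made up, not the actual algorithm):

              import hashlib
              import secrets

              def issue_challenge(difficulty: int = 4) -> dict:
                  # A fresh random prefix per challenge means precomputed (rainbow) tables are useless.
                  return {"prefix": secrets.token_hex(16), "suffix": "0" * difficulty}

              def check(challenge: dict, nonce: str) -> bool:
                  # Verification stays a single hash regardless of the random prefix.
                  digest = hashlib.md5((challenge["prefix"] + nonce).encode()).hexdigest()
                  return digest.endswith(challenge["suffix"])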

  • zaknenou@lemmy.dbzer0.com · 4 days ago

    Always interesting choices with software on our ship, as the motto says:

    Be Weird, Download a Car, Generate Art, Screw Copyrights, Do Maths

    • tenchiken@lemmy.dbzer0.com · 5 days ago

      Most clients use an API connection which is specific to Lemmy. It’s extra work to make scrapers speak that language, so no scraper does.

      For the moment, a scraper trying to hit any API endpoint would just get a simple malformed request error. With any luck, it stays this way so we don’t have to protect the API directly.

  • Caveman@lemmy.world · 5 days ago

    This is amazing; this type of anti-bot check should be rolled out everywhere. I wouldn’t mind my battery life being cut by 10% just to access bot-free content.

    • Venia Silente@lemmy.dbzer0.com · 2 days ago

      I would, however. I don’t know if electricity with the correct voltage and amperage just grows on trees up there in the US, but in the rest of the world we have to pay for electricity, and having to consume more of it also means greater damage to our local environment, already preyed upon by northern-hemisphere corporations.

      Not to mention, it effectively raises our power bill for no new gain, which comes at a very bad time given recent scandals (up to Constitutional Accusation Summons) over how the costs of energy transportation are being billed to users in my country. Besides all the local cost increases, it mechanically functions not unlike rent-seeking.

    • null_dot@lemmy.dbzer0.com · 5 days ago

      I may be misunderstanding this measure but I don’t think that’s going to be mitigated.

      If I understand correctly, this requires browsers requesting a page to do a small amount of “work” for no reason other than demonstrating they’re willing to do it. As a once-off for devices used by humans, it’s barely noticeable. For bots reading millions of pages, it’s untenable - they’ll just move on to easier targets.

      However, that only works for bots whose purpose is to ingest large quantities of text.

      A bot whose purpose is to make posts, upvote things, or reply to other comments is much less sensitive to this measure, because it doesn’t need to harvest millions of pages.