• palordrolap · +237/−0 · 8 months ago

    Put something in robots.txt that isn’t supposed to be hit and is hard for non-robots to hit. Log and ban all IPs that hit it.

    Imperfect, but can’t think of a better solution.
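
    Something like this, for instance - a minimal robots.txt where the trap path is a made-up placeholder (anything no legitimate page links to works):

        User-agent: *
        Disallow: /secret-trap-do-not-visit/

    Since nothing on the site links to that path, any request for it almost certainly came from a crawler that read robots.txt and ignored it.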

    • Lvxferre · +127/−0 · 8 months ago (edited)

      Good old honeytrap. I’m not sure, but I think that it’s doable.

      Have a honeytrap page somewhere in your website. Make sure that legit users won’t access it. Disallow crawling the honeytrap page through robots.txt.

      Then if some crawler still accesses it, you could record+ban it as you said - or you could be even nastier and let it keep going. Fill the honeytrap page with poison: nonsensical text that looks like something a human would write.
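
      A hedged sketch of the record+ban half, using fail2ban to watch the access log for hits on a trap path like the one in the robots.txt example above (the file names, log path, and path itself are all assumptions):

          # /etc/fail2ban/filter.d/honeytrap.conf
          [Definition]
          failregex = ^<HOST> .*"GET /secret-trap-do-not-visit/

          # /etc/fail2ban/jail.d/honeytrap.local
          # maxretry = 1 bans on the first hit; bantime = -1 makes the ban permanent
          [honeytrap]
          enabled  = true
          port     = http,https
          filter   = honeytrap
          logpath  = /var/log/nginx/access.log
          maxretry = 1
          bantime  = -1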

      • CosmicTurtle · +59/−0 · 8 months ago

        I think I used to do something similar with email spam traps. Not sure if it’s still around, but basically you could help build NaCL lists by posting an email address on your website that was visible in the source code but not to normal users, like in a div positioned way off the left side of the screen.

        Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
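
        Something like this, roughly (the address is a placeholder; the off-screen offset is the classic trick):

            <!-- visible to scrapers parsing the HTML, invisible to humans -->
            <div style="position: absolute; left: -9999px;" aria-hidden="true">
              spamtrap@example.com
            </div>

        The aria-hidden attribute keeps screen readers from announcing it, so users on assistive tech don’t stumble into the trap either.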

        I’d love to see something similar with robots.

        • Lvxferre · +32/−0 · 8 months ago (edited)

          Yup, it’s the same approach as email spam traps - minus the naughty list. But holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web-crawling businesses.

          • Nighed · +12/−0 · 8 months ago

            But with all of the cloud resources now, you can cycle through IP addresses without any trouble. Hell, you could just browse by IPv6 and not even worry, given how cheap those are!

            • Lvxferre · +12/−0 · 8 months ago

              Yeah, that throws a monkey wrench into the idea. That’s a shame, because “either respect robots.txt or you’re denied access to a lot of websites!” is appealing.

              • Nighed · +1/−5 · 8 months ago

                That’s when Google’s browser DRM thing starts sounding like a good idea 😭

      • thefactremains · +11/−0 · 8 months ago

        Even better. Build a WordPress plugin to do this.

      • KairuByte · +9/−0 · 8 months ago

        I’m the idiot human that digs through robots.txt and the site map to see things that aren’t normally accessible by an end user.

      • lol · +5/−0 · 8 months ago (edited)

        deleted by creator

        • Lvxferre · +6/−0 · 8 months ago

          For banning: I’m not sure, but I don’t think so. It seems to me that prefetching behaviour is dictated by the page doing the linking, so to avoid any issue, all the site owner needs to do is not prefetch links to the honeytrap.

          For poisoning: I’m fairly certain that it doesn’t. At most you’d prefetch a page full of rubbish.
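
          On the banning side, concretely: prefetching is opt-in per link on the referring page, so it’s enough to never emit a hint for the trap. A sketch with made-up paths:

              <!-- eligible for prefetch: the page explicitly asks for it -->
              <link rel="prefetch" href="/real-article.html">

              <!-- a plain link carries no prefetch hint, so browsers won’t fetch it ahead of time -->
              <a href="/secret-trap-do-not-visit/">trap</a>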

    • Blackmist · +22/−1 · 8 months ago

      “Help, my website no longer shows up in Google!”

    • PM_Your_Nudes_Please · +16/−0 · 8 months ago

      Yeah, this is a pretty classic honeypot method. Basically, make something available but inaccessible through normal use. Then you know anyone who accesses it is not a normal user.

      I’ve even seen this done with Steam achievements before: there was a hidden game achievement which was only obtainable via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.

      • Link · +13/−0 · 8 months ago

        That’s a bit annoying, as it means you can’t 100% the game: there will always be one achievement you can’t get.

        • Omniraptor · +8/−5 · 8 months ago

          Perhaps not every game is meant to be 100% completed.

      • CileTheSane · +4/−0 · 8 months ago

        There are tools that just flag you as having gotten an achievement on Steam, you don’t even have to have the game open to do it. I’d hardly call that ‘hacking’.

    • Ultraviolet · +6/−0 · 8 months ago (edited)

      Better yet, point the crawler to a massive text file of almost-but-not-quite grammatically correct garbage to poison the model - something it will recognize as language and internalize, but that will severely degrade the quality of its output.
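
      A rough sketch of one cheap way to produce that kind of garbage: swap a few adjacent words per sentence, so the text stays language-shaped while the meaning decays (the seed text is a placeholder):

          import random

          def poison(text: str) -> str:
              """Swap adjacent words per sentence: still looks like language, reads like nonsense."""
              out = []
              for sentence in text.split(". "):
                  words = sentence.split()
                  if len(words) < 2:
                      out.append(sentence)
                      continue
                  for _ in range(max(1, len(words) // 5)):
                      i = random.randrange(len(words) - 1)
                      words[i], words[i + 1] = words[i + 1], words[i]
                  out.append(" ".join(words))
              return ". ".join(out)

          print(poison("The quick brown fox jumps over the lazy dog. It was a bright cold day in April."))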

      • odelik · +3/−0 · 8 months ago

        Maybe one of the lorem ipsum generators could help.

    • Aatube · +10/−46 · 8 months ago

      robots.txt is purely textual; you can’t run JavaScript or log anything from it. Besides, a crawler that doesn’t intend to follow robots.txt wouldn’t query it in the first place.

      • BrianTheeBiscuiteer · +55/−0 · 8 months ago

        If it doesn’t get queried, that’s the fault of the webscraper. You don’t need JS built into the robots.txt file either. Just add a disallow line like:

            User-agent: *
            Disallow: /here-there-be-dragons.html

        Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
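
        A sketch of the nonsense half as an nginx location block (the paths and log names are made up):

            # log trap hits to their own file so a ban script can key off it
            location = /here-there-be-dragons.html {
                access_log /var/log/nginx/honeytrap.log;
                default_type text/plain;
                return 200 "Colorless green ideas sleep furiously. ";
            }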

          • PlexSheep · +16/−0 · 8 months ago

            Nice idea! Better to use /dev/urandom though, as that is non-blocking.

            • Aniki 🌱🌿 · +1/−1 · 8 months ago

              That was really interesting. I always used urandom out of habit and wondered what the difference was.

          • Aniki 🌱🌿 · +3/−1 · 8 months ago (edited)

            I wonder if Nginx would just keep loading random data into memory until the kernel OOM-kills it.

        • gravitas_deficiency · +11/−3 · 8 months ago

          I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.

      • ShitpostCentral · +16/−0 · 8 months ago

        Your second point is a good one, but you absolutely can log the IP which requested robots.txt. That’s just a standard part of any HTTP server ever; no JavaScript needed.
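
        With a stock combined-format access log, pulling the requesters out is a few lines of Python (the log path is an assumption):

            # sketch: unique client IPs that fetched robots.txt
            seen = set()
            with open("/var/log/nginx/access.log") as log:
                for line in log:
                    if '"GET /robots.txt' in line:
                        seen.add(line.split()[0])  # first field is the client IP
            print("\n".join(sorted(seen)))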

        • GenderNeutralBro · +11/−0 · 8 months ago

          You’d probably have to go out of your way to avoid logging this. I’ve always seen such logs enabled by default when setting up web servers.

      • ricecake · +12/−0 · 8 months ago

        People not intending to follow it is the real reason not to bother, but it’s trivial to track who downloaded the file and then hit something they were asked not to touch.

        Like, 10 minutes’ work to do right. You don’t need JS to do it at all.
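
        A sketch of that cross-reference, reusing the trap path from the example upthread (the log path and trap path are assumptions):

            # flag IPs that fetched robots.txt and then hit the disallowed path anyway
            fetched, violated = set(), set()
            with open("/var/log/nginx/access.log") as log:
                for line in log:
                    ip = line.split()[0]
                    if '"GET /robots.txt' in line:
                        fetched.add(ip)
                    elif "/here-there-be-dragons.html" in line:
                        violated.add(ip)
            print("Read the rules, then broke them:", fetched & violated)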