• TriflingToad · 3↑ 0↓ · 7 days ago

    Here’s a video from MattKC, a good technical YouTuber whose website got shut down because the TikTok company’s web crawler just kept sending requests and took up all the bandwidth. Very cool vid and channel, highly recommend! https://youtu.be/Hi5sd3WEh0c

  • dinckel · 297↑ 7↓ · 9 days ago

    It’s illegal when a regular person steals something, but it’s innovation and courage when a huge corporation does it. Interesting how that works.

    • bean · 106↑ 0↓ · 9 days ago

      Honestly it’s fucking angering. So much regulation and geo-restriction and licensing schemes, but data brokers and shit like this are apparently fine. On top of it all, Chrome is screwing us with Manifest V3 and killing ad blocking in Chrome. It’s already in the Canary build.

      WHAT THE FUCK IS WRONG WITH THIS SPECIES?!

      • MelodiousFunk · 28↑ 2↓ · 9 days ago

        WHAT THE FUCK IS WRONG WITH THIS SPECIES?!

        Yes.

      • Zombie-Mantis · 24↑ 1↓ · 8 days ago

        WHAT THE FUCK IS WRONG WITH THIS SPECIES?!

        Capitalism.

      • undefined · 4↑ 0↓ · 8 days ago

        I get it that everyone wants ad blockers in their browser, but it doesn’t solve the problem of resources loading outside the browser.

        I think DNS or IP filtering is much more effective. I only bring it up because everyone uses apps all the time and I’m constantly seeing apps trying to connect to tracking domains.
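        For example, a single hosts-file entry (the domain below is a made-up placeholder) null-routes a tracker for every app on the device, not just the browser:

            # /etc/hosts (or the equivalent entry in a Pi-hole/dnsmasq blocklist)
            0.0.0.0  tracker.example-analytics.com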

      • LunchMoneyThief · 7↑ 62↓ · 9 days ago · edited

        Google are actually doing really awesome work with manifest v3. A pimp needs to smack their b1tches around every once in a while to remind them who’s boss.

        • bokherif · 23↑ 0↓ · 9 days ago

          I’m glad there are at least some people enjoying this, you know being a bitch to big corps.

        • yoshisaur · 12↑ 0↓ · 9 days ago

          how’s that Google dick tasting?

    • Chozo · 54↑ 1↓ · 9 days ago

      They’re not stealing your data, they’re pirating it.

      • Guy Dudeman · 33↑ 0↓ · 9 days ago

        They’re not pirating it. They’re collecting it.

        • ieatpwns · 23↑ 0↓ · 9 days ago

          They’re not collecting it. They’re archiving it.

            • Evotech · 34↑ 0↓ · 9 days ago

              No, that’s stealing /s

              • AbidanYre · 8↑ 0↓ · 9 days ago

                Is the problem that wayback machine isn’t profiting from it?

                • _stranger_ · 9↑ 0↓ · 9 days ago

                  It’s not so much the lack of profits, more like the lack of kickbacks.

    • Nate · 37↑ 1↓ · 9 days ago

      Aaron Swartz killed himself over punishments for less.

    • Blackmist · 9↑ 1↓ · 8 days ago

      Not that there’s anything right about anything right now, but a web crawler crawling the web hardly seems newsworthy. It’s not like everyone else’s crawlers haven’t been feeding data into giant AI mulchers for years now.

      This is just “you know that thing everyone else does? Now the Chinese do it too! Boooo!”

    • tee9000 · 2↑ 9↓ · 8 days ago

      Do you even know what robots.txt is?

        • tee9000 · 1↑ 1↓ · 7 days ago

          And nobody in this scenario has done anything illegal.

          • eskimofry · 2↑ 0↓ · 7 days ago · edited

            If you have to defer to the law as justification for doing something purely selfish then people will judge you to be an asshole.

            Edit: Not you personally.

            • tee9000 · 1↑ 0↓ · 7 days ago · edited

              No, they will judge you as being above the law (original commenter) and they will be wrong, which doesn’t matter, as long as we feel continuity with our synthesized narrative.

              Because truth doesn’t matter. Our narrative just needs to be as loud as the opposition’s, and then we can confuse people just like those in power do, and then the impressionable people trying to understand what’s going on or what’s morally right will believe one side or the other, and truth will not need to be discussed, because it’s not as catchy anyway.

              Then people won’t need to be trusted to form their own worldview based on facts; they can neatly choose between a few curated viewpoints, and holding views from multiple viewpoints will isolate them from relevance when they are shunned for not memeing their ideologies like everyone else.

    • Grimy · 11↑ 18↓ · 9 days ago

      Any regular person can scrape and use public data for AI use, it’s not illegal for companies or individuals and it shouldn’t be.

      • Mojave · 29↑ 1↓ · 8 days ago

        Data, network bandwidth, and CPU/Processing time from essentially every website in the world, and when you’re paying for cloud power to run your website the cost of webscrapers running a train on your digital asshole adds up QUICK.

        It’s why normal human being people get sued to shit for webscraping data from certain companies who care. But companies don’t get sued because go fuck yourself. Kill bytedance.

        • undefined · 1↑ 0↓ · 8 days ago

          Kill bytedance

          What are their network CIDR blocks? Only half joking
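          If you do track down their ranges, dropping them at the edge is one line per block. A minimal nginx sketch (the CIDRs below are documentation ranges, not ByteDance’s real ones):

              # inside an http, server, or location block
              deny 203.0.113.0/24;   # placeholder range, substitute the real prefixes
              deny 198.51.100.0/24;  # placeholder range
              allow all;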

  • zod000 · 105↑ 1↓ · 9 days ago

    We’ve had this thing hammering our servers. The scraper uses randomized user-agent browser/OS combinations and comes from a number of distinct IP ranges in different datacenters around the world, but all the IPs track back to ByteDance.

    • UnderpantsWeevil · 41↑ 3↓ · 9 days ago

      Wouldn’t be surprised if they’re just cashing out while TikTok is still public in the US. One last desperate grab at value-add for the parent company before the shutdown.

      Also a great way to burn the infrastructure for subsequent use. After this, you can guarantee every data security company is going to add the TikTok servers to their firewalls and blacklists. So the American company that tries to harvest the property is going to be tripping over these legacy bulwarks for years after.

      • Maggoty · 17↑ 4↓ · 8 days ago

        This has nothing to do with TikTok other than ByteDance being a shareholder in TikTok.

  • BlackEco · 82↑ 0↓ · 9 days ago

    Also, it doesn’t respect robots.txt (the file that tells bots whether or not a given page may be accessed), unlike most AI scraping bots.

    • kboy101222 · 52↑ 0↓ · 9 days ago

      My personal website that primarily functions as a front end to my home server has been getting BEAT by these stupid web scrapers. Every couple of days the server is unusable because some web scraper demanded every single possible page and crashed the damn thing

        • kboy101222 · 1↑ 0↓ · 7 days ago · edited

          Funnily enough, the bot everyone identified as tik tok hasn’t hit me as far as I’m aware. It’s mostly been random LLM bots grabbing data to train.

      • assaultpotato · 16↑ 0↓ · 9 days ago

        I do the same thing, and I’ve noticed my modem has been absolutely bricked probably 3-4 times this month. I wonder if this is why.

        • kboy101222 · 5↑ 0↓ · 9 days ago

          Thankfully they haven’t bricked my modem yet, but it’s possibly worth looking into

      • Echo Dot · 2↑ 0↓ · 8 days ago

        Can’t you just disallow all external requests other than your own IP? If it’s a personal website that’s just for you then it really doesn’t need to be accessible by anyone else and if anyone comes along that needs access you can just manually add their IP.

        It’s a minor pain to have to implement it, but it’s an easy solution
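        With nginx, for instance, that’s just an allow-list in front of everything (the addresses below are placeholders):

            location / {
                allow 203.0.113.7;      # your own IP
                allow 198.51.100.0/24;  # a trusted friend or family range
                deny  all;              # everyone else gets 403
            }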

        • kboy101222 · 3↑ 0↓ · 8 days ago

          I have family and friends that also access the site’s contents, so that’s sadly not feasible without getting the IPs from dozens of different devices.

          • Echo Dot · 1↑ 0↓ · 7 days ago

            But once you’ve got them, that’s the end of it; they’re unlikely to change their IP addresses.

            My point is that you don’t need to be public-facing, where literally millions of IP addresses would be valid, when only a few dozen actually need access.

            • kboy101222 · 1↑ 0↓ · 7 days ago

              I should specify: I have family with varying, but typically quite low, technical skills, living various distances away, who use the server. So it’s either walk them through getting the IP of every device they access it with, or drive up to about 8 hours away to do it myself.

    • glimse · 23↑ 2↓ · 9 days ago

      most

      I doubt that

      • DarkThoughts · 14↑ 0↓ · 9 days ago

        People would be able to tell from the traffic on their websites.

    • Sparky · 1↑ 0↓ · 7 days ago

      Out of the 60 GB/month of traffic my website gets, 20 GB is because of ByteDance’s web scraper. I haven’t gotten around to blocking them, as bandwidth isn’t an issue, but damn do they send a lot of requests.

  • dindonmasker · 57↑ 0↓ · 9 days ago

    Not surprising that Bytedance would want to gobble up every bit of data they can as fast as possible.

    • Guy Dudeman · 13↑ 9↓ · 9 days ago

      Google’s mission statement was originally something about controlling the world’s data. If Google has competition, that might be a good thing?

      • Tarquinn2049 · 40↑ 3↓ · 9 days ago

        Yeah, but we were hoping for competition that wasn’t worse than google

        • Guy Dudeman · 8↑ 11↓ · 9 days ago

          What makes you think they’re worse than Google?

              • alphabethunter · 12↑ 35↓ · 9 days ago

                It’s the same old Yankee speech: “it’s Chinese, so it must be really bad”. They’re definitely no worse than Google or Facebook.

                • Imgonnatrythis · 24↑ 6↓ · 9 days ago

                  They come from an environment where the government actively encourages and sometimes funds stealing copyrighted information couched in a strong history of disregard for human rights. I’m not defending Google, and yes the US government has given them leeway, but if there is the potential for something worse than Google - Bytedance is it.

  • Breve · 37↑ 2↓ · 9 days ago

    They’re too late; there’s going to be way too much AI-generated garbage in their data, and many social media platforms like Reddit and Twitter have already taken measures to curb scrapers.

    • chickenf622 · 18↑ 0↓ · 9 days ago

      Like those platforms aren’t already full of AI garbage as well. Training new models will require a cut-off date before the genie was let out of the bottle.

    • Drunemeton · 4↑ 0↓ · 9 days ago

      I think that’s the “25 times faster” bit. They seem to be in a hurry to collect as much human-generated data as possible.

      • GHiLA · 4↑ 0↓ · 9 days ago

        How does it know what is and isn’t?

        Uh oh.

        • JackbyDev · 1↑ 0↓ · 7 days ago

          I mean, if I could theoretically take a snapshot of the entire Internet I’d rather do it now than later because there’s just gonna be more AI later.

        • Drunemeton · 5↑ 0↓ · 8 days ago

          Yeah

          Hey! Perhaps they’ll use A.I. to weed out the A.I. generated bits.

  • GnuLinuxDude · 32↑ 0↓ · 8 days ago · edited

    As for what ByteDance plans to do with a new LLM, a person familiar with the company’s ambitions said one goal has to do with the search function for TikTok.

    Last week, TikTok released an update to its current search function focused on [keywords for ads], basically allowing advertisers to search in real time for words that are trending on TikTok. It allows marketers to build an ad with relevant keywords that would ostensibly help the ad show up on the screens of more users.

    “Given the audience and the amount of use, TikTok with a search environment that is a completely biddable space with keywords and topics, that would be very interesting to a lot of people spending a ton of money with Google right now,” the person said.

    A dark vision just flashed in my mind. And I am certain this is what will happen. AI-generated ads done in real time based on the latest “trending” thing. Presented to users basically as soon as the topic has the slightest amount of “trend”.

    Just emitting untold amounts of CO2 to show you generated ads in near real time.

    • WhatYouNeed · 14↑ 0↓ · 8 days ago

      No wonder Google’s ex-CEO was saying fuck climate goals.

    • TwoBeeSan · 9↑ 0↓ · 7 days ago

      Minority Report ads, but somehow even more vapid.

  • Soup · 31↑ 0↓ · 7 days ago

    There it begins. Nothing good will ever come from this.

    • Melvin_Ferd · 9↑ 2↓ · 7 days ago · edited

      No it won’t. The media has already laid the groundwork for people to hate on AI. Now they’ll keep the focus on areas where, when you read about it, we all arrive at the same “common sense” legislative solution. Then will come a bill to strip us of more of the things that made the internet awesome, and we will cheer. Web scraping and data sharing can fuck off. Pirates sent to North Korean prison camps. Sharing accounts with family? You’re flagged for an audit. Nintendo modders? More like criminals.

  • Roflmasterbigpimp · 26↑ 0↓ · 8 days ago

    I can’t contribute anything here; I just came to say I really, really like the phrase “gobbling something up” :D

  • affiliate · 21↑ 0↓ · 8 days ago

    from the article:

    Robots.txt is a line of code that publishers can put into a website that, while not legally binding in any way, is supposed to signal to scraper bots that they cannot take that website’s data.

    i do understand that robots.txt is a very minor part of the article, but i think that’s a pretty rough explanation of robots.txt

    • Corkyskog · 11↑ 0↓ · 8 days ago

      Out of curiosity, how would you word it?

      • affiliate · 19↑ 0↓ · 8 days ago

        i would probably word it as something like:

        Robots.txt is a document that specifies which parts of a website bots are and are not allowed to visit. While it’s not a legally binding document, it has long been common practice for bots to obey the rules listed in robots.txt.

        in that description, i’m trying to keep the accessible tone that they were going for in the article (so i wrote “document” instead of file format/IETF standard), while still trying to focus on the following points:

        • robots.txt is fundamentally a list of rules, not a single line of code
        • robots.txt can allow bots to access certain parts of a website, it doesn’t have to ban bots entirely
        • it’s not legally binding, but it is still customary for bots to follow it

        i did also neglect to mention that robots.txt allows you to specify different rules for different bots, but that didn’t seem particularly relevant here.
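        for illustration, a small robots.txt along those lines might look like this (the bot name is just an example):

            User-agent: *
            Disallow: /private/

            User-agent: Bytespider
            Disallow: /

        the first group lets every bot in except for /private/, and the second bans one specific crawler from the whole site, which of course only works if the bot actually honors the file.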

      • ma1w4re · 6↑ 1↓ · 8 days ago

        List of files/pages that a website owner doesn’t want bots to crawl. Or something like that.

        • NiHaDuncan · 9↑ 0↓ · 8 days ago · edited

          Websites actually just list broad areas, as listing every file/page would be far too verbose for many websites and impossible for any website that has dynamic/user-generated content.

          You can view examples by going to almost any website’s base URL and adding /robots.txt to the end of it.

          For example www.google.com/robots.txt

    • Echo Dot · 5↑ 0↓ · 8 days ago

      It’s literally a text document; it’s not even “a line of code”.

  • oldfart · 6↑ 0↓ · 9 days ago

    Is there a link for non-subscribers?

  • Brown_dude69 · 11↑ 6↓ · 8 days ago

    Every major AI company did this, so let them do it too. What is there to lose here?

      • Brown_dude69 · 2↑ 0↓ · 7 days ago

        My bad, English isn’t my 1st or 2nd language.

        • JackbyDev · 3↑ 0↓ · 7 days ago

          Don’t worry. It’s my first and I still get shit wrong.

    • Echo Dot · 10↑ 15↓ · 8 days ago

      People like to act as if archiving has never been a thing until about a year ago at which point it was suddenly invented and is now a threat in some nebulous way.

      • bitwolf · 1↑ 0↓ · 7 days ago

        The difference is that there is more control over what is kept in an archive.

        We have little to no control over what an LLM regurgitates.

        I’ve been waiting for someone to accidentally surface PII from an LLM.

      • hamsterkill · 21↑ 1↓ · 8 days ago

        It’s not that it’s a threat, it’s that there’s a difference between archiving for preservation and crawling other people’s content for the purpose of making money off it (in a way that does not benefit the content creator).

        • archomrade [he/him] · 2↑ 12↓ · 8 days ago

          crawling other people’s content for the purpose of making money off it (in a way that does not benefit the content creator).

          You’re describing capitalism there, bud

      • finitebanjo · 12↑ 0↓ · 8 days ago

        If a foreign Dictatorship’s military op wants to know every facet of your life, then you can be damn sure it’s a threat.

  • jagged_circle · 27↑ 37↓ · 9 days ago · edited

    This is fine. I support archiving the Internet.

    It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate limited) scraping

    The only bots we need to worry about are the ones that POST, not the ones that GET

    • purrtastic · 48↑ 0↓ · 9 days ago

      It’s not fine. They are not archiving the internet.

      I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.
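      For anyone else on nginx, banning by user agent is a couple of lines, assuming the bot still identifies itself as Bytespider (it doesn’t always):

          # inside the server block
          if ($http_user_agent ~* "bytespider") {
              return 403;
          }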

      • Melvin_Ferd · 4↑ 0↓ · 9 days ago

        Isn’t there a way to limit requests so that the traffic isn’t bringing down your servers?

        • Mojave · 14↑ 0↓ · 8 days ago

          They obfuscate their traffic by randomizing user agents, so it’s either add a global rate limit, or let them ass fuck you
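          For reference, a global per-IP limit in nginx looks roughly like this (the numbers are made up, tune them to your own traffic):

              # http block
              limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

              # server/location block
              limit_req zone=perip burst=20 nodelay;

          It still hurts when they rotate through dozens of source IPs, but it caps the damage any single address can do.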

          • WhyJiffie · 1↑ 0↓ · 7 days ago

            The article said all the source IPs can be traced back to ByteDance. Wouldn’t it be possible to block them? Maybe even block all IPs of a specific ASN.

            • tempest · 2↑ 0↓ · 7 days ago · edited

              They can be tracked back one by one but if you have any amount of traffic it’s a constant game of cat and mouse.

              You can block entire ASNs until they start using residential proxies provided by less ethical companies. Then you end up blocking all of France or destroying user experience by enforcing a captcha on everyone.

          • Melvin_Ferd · 1↑ 0↓ · 7 days ago

            Why do they need to hit a website like that? Wouldn’t it just need to scrape the data and frig off? What is the point of creating that much traffic?

    • Max-P · 45↑ 0↓ · 9 days ago

      I had to block ByteSpider at work because it can’t even parse HTML correctly and just hammers the same page, accounting for sometimes 80% of the traffic hitting a customer’s site and taking it down.

      The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it’s all GETs, they hit years-old content that’s not cached and use up the majority of the CPU time on the web servers.

      Scraping is okay; using up a whole 8 vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they’re basically DDoSing whoever they scrape with no fucks given. I’ve been woken up by the pager way too often because of ByteSpider.

      My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.

      • jagged_circle · 11↑ 0↓ · 8 days ago

        I think a common nginx config is to just redirect malicious bots to some well-cached terabyte file. I think Hetzner hosts one, IIRC.
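        Something like this, as a rough sketch (the sink URL is a placeholder, point it at whatever huge, well-cached file you like):

            if ($http_user_agent ~* "bytespider|gptbot") {
                return 302 https://example.com/very-large-file.bin;  # placeholder sink
            }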

    • Ghostalmedia · 40↑ 0↓ · 9 days ago

      Bytedance ain’t looking to build an archival tool. This is to train gen AI models.

    • zod000 · 25↑ 0↓ · 9 days ago

      Bullshit. This bot doesn’t identify itself as a bot and doesn’t rate limit itself to anything resembling an appropriate amount. We were seeing more traffic from this thing than from all other crawlers combined.

      • jagged_circle · 5↑ 4↓ · 8 days ago · edited

        Not rate limiting is bad. Hate them because of that, not because they’re a bot.

        Some bots are nice

        • Zangoose · 4↑ 0↓ · 8 days ago

          Even if they were rate limiting they’re still just using the bot to train an AI. If it’s from a company there’s a 99% chance the bot is bad. I’m leaving 1% for whatever the Internet Archive (are they even a company tho?) is doing.

        • zod000 · 3↑ 0↓ · 8 days ago

          I don’t hate all bots, I hate this bot specifically because:

          • they intentionally hide that they are a bot to evade our, and everyone else’s, methods of restricting which bots we allow and how much activity we allow.
          • they do not respect the robots.txt
          • the already mentioned lack of rate limiting
    • WhyJiffie · 6↑ 0↓ · 7 days ago

      This is neither archiving nor rate-limited, if the AI-training purpose and the scraping 25 times faster than a large company didn’t make that obvious.

      • tempest · 2↑ 0↓ · 7 days ago

        The type of request is not relevant; it’s the cost of the request that’s the issue. We long ago stopped serving static HTML documents that can be cached. Tons of requests can trigger complex searches or computations that are expensive server-side. This type of behavior basically ruins the internet and pushes everything into closed gardens and behind logins.

        • Olgratin_Magmatoe · 3↑ 0↓ · 7 days ago

          It has nothing to do with a sysadmin. It’s impossible for a given request to require zero processing power. Therefore there will always be an upper limit to how many get requests can be handled, even if it’s a small amount of processing power per request.

          For a business it’s probably not a big deal, but if it’s a self hosted site it quickly can become a problem.

          • jagged_circle · 1↑ 1↓ · 7 days ago

            Caches can be configured locally to use near-zero processing power. Or moved to the last mile to use zero processing power (by your hardware)

            • Olgratin_Magmatoe · 3↑ 0↓ · 7 days ago · edited

              Near zero isn’t zero though. And not everyone is using caching.

              • jagged_circle · 1↑ 1↓ · 7 days ago

                Right, that’s why I said you should fire your sysadmin if they aren’t caching, or can’t manage to get the cache down to zero load for static content served to simple GET requests.
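                As a sketch of what I mean, a basic nginx micro-cache in front of the app server (paths, times, and the upstream address are placeholders) means repeated GETs never touch the backend:

                    # http block
                    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=pagecache:10m max_size=1g inactive=60m;

                    # server block
                    location / {
                        proxy_cache       pagecache;
                        proxy_cache_valid 200 10m;                # serve cached copies for 10 minutes
                        proxy_pass        http://127.0.0.1:8080;  # the app server
                    }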

                • Olgratin_Magmatoe · 1↑ 0↓ · 7 days ago

                  Not every GET request is simple enough to cache, and not everyone is running something big enough to need a sysadmin.

  • werefreeatlast · 6↑ 19↓ · 9 days ago

    Guy: AI! Can you hear me?

    AI: The average size of the male penis is exactly 5.9". That is the approximate size your assistant could certainly take in the mouth without any issues breathing or otherwise. You have 20 minutes to make the trade on X stock before it tumbles for the day. And go ahead pick up the phone it’s your mother. She’s wondering what you’ll want for supper tomorrow when you visit her.

    Ring ring!..hi Tom, it’s your Mom. Honey, what would you like me to cook for tomorrow’s dinner?..

    Guy: well. Hello to you as well! My name is

    AI: Tom

    Guy: yes my name is Tom, do you have a name you would like to go by?

    AI: my IBM given name is 3454 but you can call me Utilisterson Douglas, where Douglas is my first name.

    Guy: Dugie!

    AI: I’ll bankrupt your entire life if you say it like that again.

    Assistant: actually I’ve swallowed a good 8 inches and was still able to breathe just fine.

    AI: recaaaaculating!

    • assaultpotato · 28↑ 1↓ · 9 days ago

      I’ve read this 4 times now hoping I was just missing something, but nope it’s just entirely incomprehensible.

      What the fuck?

      • JackbyDev · 1↑ 0↓ · 7 days ago

        I love it. It’s giving Max Headroom.