• kubica · 9 months ago

    I don’t think they are going to stop storing it somewhere, just stop delivering it.

    • rho50 · 9 months ago

      Idk, in theory they probably don’t need to store a full copy of the page for indexing, and could move to a more data-efficient format if they do keep one. Also, not serving it means they don’t need to replicate the data to as many serving regions.

      But I’m just speculating here. Don’t know how the indexing/crawling process works at Google’s scale.
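      Purely to illustrate the idea (a toy sketch, definitely not how Google actually does it): a basic inverted index only needs the terms on the page and how often they occur, which takes far less space than a full rendered copy.

      ```python
      # Toy illustration of "a more data-efficient format": instead of the full HTML,
      # keep only the lower-cased terms and their counts, which is all a basic
      # inverted index needs. Purely hypothetical, not Google's actual pipeline.
      import re
      from collections import Counter

      def index_entry(url: str, html: str) -> dict:
          text = re.sub(r"<[^>]+>", " ", html)          # crude tag stripping, fine for a sketch
          terms = Counter(re.findall(r"[a-z0-9]+", text.lower()))
          return {"url": url, "terms": dict(terms)}      # tiny compared with the rendered page

      print(index_entry("https://example.com",
                        "<html><body><p>Cached page text, cached twice.</p></body></html>"))
      # {'url': 'https://example.com', 'terms': {'cached': 2, 'page': 1, 'text': 1, 'twice': 1}}
      ```

      A real index obviously keeps more than this (positions, anchor text, ranking signals), but still nothing like a byte-for-byte copy of the page.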

      • evatronic · 9 months ago

        Absolutely. The crawler does some rudimentary processing before it ever saves anything to storage. That’s the sort of thing that’s being persisted behind the scenes, and it’s almost certainly neither enough to reconstruct the web page nor (realistically) human-friendly. I was going to say “readable”, but it’s probably some bullshit JSON or XML document full of nonsense no one wants to read.
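        Something like this made-up record is what I mean (field names invented, the real internal format is unknown): the page gets boiled down to a structured blob you could never rebuild the original from.

        ```python
        # Made-up example of the kind of structured record a crawler might persist
        # after processing a page. Field names are invented for illustration only.
        import json
        from html.parser import HTMLParser
        from urllib.parse import urljoin

        class PageSummary(HTMLParser):
            def __init__(self, base_url):
                super().__init__()
                self.base_url = base_url
                self.title = ""
                self.links = []
                self.text = []
                self._in_title = False

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    href = dict(attrs).get("href")
                    if href:
                        self.links.append(urljoin(self.base_url, href))
                elif tag == "title":
                    self._in_title = True

            def handle_endtag(self, tag):
                if tag == "title":
                    self._in_title = False

            def handle_data(self, data):
                if self._in_title:
                    self.title += data
                else:
                    self.text.append(data)

        def crawl_record(url, html):
            page = PageSummary(url)
            page.feed(html)
            words = " ".join(page.text).lower().split()
            # Markup, styling, scripts and word order are all thrown away here,
            # so the original page can't be reconstructed from this record.
            return json.dumps({"url": url, "title": page.title.strip(),
                               "outlinks": page.links, "terms": sorted(set(words))})

        print(crawl_record("https://example.com",
                           '<html><head><title>Hi</title></head>'
                           '<body><a href="/about">About</a> some text</body></html>'))
        ```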

    • pre · 9 months ago

      Seems unlikely they’ll delete it. If they’ve started deleting data, that’s quite a change. They might save on the bandwidth costs of delivering it to people, I suppose.

      Maybe something to do with users feeding AIs from the Google cache? Google wanting to ensure only they can train from the google-cache.

      @kubica@kbin.social @Powderhorn@beehaw.org @rho50@lemmy.nz