• drwankingstein (English) · +4/−1 · 1 month ago

    I don’t even think this is the case; Google does a lot pretty much everywhere. One example: one of the things they’re pushing for is locally run AI (Gemini, Stable Diffusion, etc.) running on your GPU via WebGPU instead of needing cloud services, which is obviously privacy-friendly for a myriad of reasons. In fact, we now have multiple implementations of LLMs that run locally in the browser on WebGPU, and even a Stable Diffusion implementation (I never got it to work, though, since my beefiest GPU is an Arc A380 with 6 GB of RAM).

    They do other stuff too, but with the recent push for AI, I think this is probably the most relevant example.
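    As a minimal sketch of what “LLMs in the browser on WebGPU” rests on: in-browser runtimes feature-detect WebGPU through `navigator.gpu` and request an adapter/device before loading model weights. The function name and fallback messages below are illustrative, not from any particular runtime.

    ```javascript
    // Sketch: WebGPU feature detection + device request (assumes a
    // WebGPU-capable browser, e.g. recent Chrome; in-browser LLM
    // runtimes build on this API before uploading model weights).
    async function getWebGPUDevice() {
      if (typeof navigator === "undefined" || !navigator.gpu) {
        console.log("WebGPU not available");
        return null; // e.g. Firefox without the flag, or non-browser JS
      }
      const adapter = await navigator.gpu.requestAdapter();
      if (!adapter) {
        console.log("No suitable GPU adapter");
        return null; // browser supports the API but found no usable GPU
      }
      // A GPUDevice is what compute pipelines (matmuls etc.) run on.
      return adapter.requestDevice();
    }

    getWebGPUDevice().then((device) => {
      console.log(device ? "WebGPU device ready" : "Falling back to CPU/cloud");
    });
    ```

    A runtime would then compile WGSL compute shaders against that device; without it, the only options are a slower CPU path (e.g. WebAssembly) or a cloud API.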

    • Leaflet (OP, English) · +6/−0 · 1 month ago

      LLMs are expensive to run, so running them locally saves Google money.

      • drwankingstein (English) · +2/−0 · 1 month ago

        Ehh, not really; the amount of data you can get by snooping on LLM traffic is going to far outweigh the cost of running the LLMs.

        • Leaflet (OP, English) · +1/−0 · 1 month ago (edited)

          There’s nothing technical stopping Google from sending the prompt text (and maybe the generated results) back to their servers — only political/social backlash over worsened privacy.

        • elucubra · +1/−0 · 1 month ago

          I doubt that. I’m going to guess that Google is going towards a sort of “P2P AI