I was using my SO’s laptop. I had been talking (not searching or otherwise typing) about some VPN solutions for my homelab, and was curious enough to press the new big Copilot button and ask what it can do. The beginning of this context was actually me asking if it could turn off my computer for me (it cannot), and then I asked this.

Very unnerving. I hate to be so paranoid as to think it actually picked up on the context of me talking, but again: it was my SO’s laptop, so it had none of my technical search history to pull from.

  • SzethFriendOfNimi · 8 months ago (edited)

    There’s a real risk of survivorship bias here. Somebody asking about a car gets a car-related answer and thinks nothing of it. A privacy-minded person, however, would find it odd, and being the kind of person concerned about what could have been the cause, would consider the prior conversation.

    I’m not saying it’s an unreasonable concern, or that it’s technically infeasible. It’s just not how LLMs tend to work.

    I’d consider it more likely to be a bug, or general inquiries like you said, or that your SO had a bunch of documents locally that reference privacy or browsing history (anytime, really) that MS could have used as a kind of “here’s more about the person asking you a question.”

    • ipkpjersi · 8 months ago (edited)

      A privacy-minded person probably wouldn’t use these tools to begin with, tbh; they would likely run their own LLM instead.

      • BreakDecks · 8 months ago

        I guess that’s why OP brought up that they were using someone else’s computer.

        Also, a truly privacy-minded person wouldn’t necessarily refuse to use a hosted AI product at all. We generally just make ourselves aware that we don’t have privacy when using it, and never type anything sensitive into it. Besides, have you seen what it costs to run a capable LLM?

        • bbuez (OP) · 8 months ago

          Just don’t pull a Samsung (whose engineers pasted confidential code into ChatGPT).

          I’ve just started messing with GPT4All for CPU-based language models, which can run relatively well on older gaming hardware, and a Coral accelerator module for presence detection on my NVR with Frigate only cost $30.
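
          For anyone curious, a minimal sketch of what that looks like with the gpt4all Python bindings (the model filename is just an example from their catalog, and it downloads on first run):

          ```python
          # CPU-only inference with the gpt4all bindings; no GPU required.
          from gpt4all import GPT4All

          # Example model name; any GGUF model from the GPT4All catalog works.
          model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", device="cpu")

          with model.chat_session():
              print(model.generate("Explain WireGuard in one sentence.", max_tokens=100))
          ```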

      • SzethFriendOfNimi · 8 months ago

        That’s what I’ve been playing with. Cool stuff, even though it’s limited because of my 8GB NVIDIA card.
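
        The 8GB limit checks out if you do the napkin math on quantized weights (weights only; KV cache and runtime overhead come on top):

        ```python
        # Rough VRAM estimate: params (billions) x bits per weight / 8 = GB of weights.
        def weights_gb(params_billion: float, bits_per_weight: int) -> float:
            return params_billion * bits_per_weight / 8

        for params, bits in [(7, 4), (7, 8), (13, 4), (13, 8)]:
            print(f"{params}B @ {bits}-bit ~ {weights_gb(params, bits):.1f} GB")
        # 7B @ 4-bit  ~ 3.5 GB  -> fits on an 8GB card with room for context
        # 13B @ 8-bit ~ 13.0 GB -> not happening on 8GB
        ```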

        • ipkpjersi · 8 months ago (edited)

          It’ll be interesting to see how far the technology advances in even 2 or 3 years, even with just an 8GB card, thanks to optimizations etc.