Surprised pikachu face

  • T156 (+10/-1) · 1 month ago

    At the same time, the trouble with local LLMs is that they’re very resource-heavy. Your average household computer isn’t going to be able to run one at a usable speed.

    • floquant (+23/-1) · 1 month ago

      Which, you know, is fine. Maybe if people had an idea of how much power is required to run them, they would think twice before using a gigawatt to output a poem about farts, and perhaps even wonder how OpenAI can offer that for free. Btw, a 7B model should run okay on any PC with at least 16 GB of RAM and a modern processor/GPU.
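      A minimal sketch of what that looks like with llama-cpp-python and a 4-bit quantized 7B model (the GGUF filename and settings here are just examples, not a specific recommendation):

      ```python
      # pip install llama-cpp-python
      from llama_cpp import Llama

      # Example model: a 4-bit quantized 7B GGUF is roughly 4 GB on disk,
      # which fits comfortably in 16 GB of system RAM.
      llm = Llama(
          model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",
          n_ctx=2048,   # context window; larger values use more memory
          n_threads=8,  # roughly match your physical CPU core count
      )

      out = llm("Write a short poem about farts.", max_tokens=128)
      print(out["choices"][0]["text"])
      ```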

    • RmDebArc_5 (OP) (+3/-0) · 1 month ago

      Phi 3 can run on pretty low specs (it needs about 4 GB of RAM) and produces relatively good output.
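      If you have Ollama installed, a quick way to try it from Python (a sketch assuming the Ollama server is running and `ollama pull phi3` has been done; the prompt is just an example):

      ```python
      # pip install ollama
      import ollama

      # "phi3" is Phi-3 mini (~3.8B parameters); its 4-bit quantized
      # weights are only a couple of GB, which is how it squeezes into
      # machines with about 4 GB of RAM.
      response = ollama.chat(
          model="phi3",
          messages=[{"role": "user", "content": "Explain RAM in one short paragraph."}],
      )
      print(response["message"]["content"])
      ```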

    • TriflingToad (+1/-0) · 1 month ago

      It’s a lot slower than ChatGPT, but on my i7 laptop with integrated graphics it ran decently, definitely enough to be usable. There are also different models to play around with: some are faster but worse, and some are smarter but slower.