• ALostInquirerEnglish
    5 months ago

    perception

    This is the problem I have with this: there’s no perception in this software. It’s faulty, misapplied software when one tries to employ it to generate reliable, factual summaries and responses.

    • xthexderEnglish
      5 months ago

      I have adopted the philosophy that human brains might not be as special as we’ve thought, and that the untrained behavior emerging from LLMs and image generators is so similar to human behaviors that I can’t help but think of it as an underdeveloped and handicapped mind.

      I hypothesize that a human brain whose only perception of the world is the training data force-fed to it by a computer would have all the same problems the LLMs do right now.

      To put it another way: the line between what is sentient and what is not is getting blurrier and blurrier. LLMs surpassed the Turing test a few years ago. We’re simulating the intelligence level of a small animal today.