• abrinael

    What I don’t like about it is that it makes it sound more benign than it is, which also points to who decided to use that term: AI promoters/proponents.

    Edit: it’s like all of the bills/acts in Congress where they name them something like “The Protect Children Online Act” and you ask, “Well, what does it do?” And they say something like, “It lets local police read all of your messages so they can look for any dangers to children.”

    • zalgotext

      The term “hallucination” has been used for years in AI/ML academia. I was reading about AI hallucinations ten years ago when I was in college. The term was originally coined by researchers and mathematicians, not the snake oil salesmen pushing AI today.

      • abrinael

        I had no idea about this. I studied neural networks briefly over 10 years ago, but hadn’t heard the term until the last year or two.

        • KeenFlame

          We were talking about when it was coined, not when you first heard it.

    • Wirlocke

      In terms of LLM hallucination, it feels like the name very aptly describes the behavior and severity. It doesn’t downplay what’s happening because it’s generally accepted that having a source of information hallucinate is bad.

      I feel like the alternatives would downplay the problem. A “glitch” is generic and common, “lying” is just inaccurate since that implies intent to deceive, and just being “wrong” doesn’t get across how elaborately wrong an LLM can be.

      Hallucination fits pretty well and is also pretty evocative. I doubt that AI promoters want to effectively call their product schizophrenic, which is what most people think of when they hear “hallucination.”

      Ultimately, all the sciences are full of analogous names chosen to make conversations easier; it’s not always marketing. No different than when physicists say particles have “spin” or “color,” or that spacetime is a “fabric,” or [insert entirety of string theory].

      • abrinael

        After thinking about it more, I think the main issue I have with it is that it sort of anthropomorphises the AI, which is more of an issue in applications where you’re trying to convince the consumer that the product is actually intelligent. (Edit: in the human sense of intelligence rather than what we’ve seen associated with technology in the past.)

        You may be right that people could have a negative view of the word “hallucination”. I don’t personally think of schizophrenia, but I don’t know what the majority think of when they hear the word.

        • Knock_Knock_Lemmy_In

          You could invent a new word, but that doesn’t help people understand the problem.

          You are looking for an existing word that describes unintentionally producing incorrect thoughts but is totally unrelated to humans. I suspect that word doesn’t exist. Every word for thinking gets anthropomorphized.