• UnpluggedFridge · 5 months ago

    My thesis is that we are asserting that AIs lack human-like qualities we cannot even define or measure. Assertions should rest on data, not on the uneasy feeling that arises when an LLM falls into the uncanny valley.

    • mindlesscrollyparrot · 5 months ago

      But we do know how they operate. I saw a post a while back where somebody asked an LLM how it had calculated the (incorrect) date of Easter it gave. It answered with the formula for the date of Easter. The only problem is that this was a lie. It doesn’t calculate. You or I can perform long multiplication if asked to, but the LLM can’t (ironically, since the hardware it runs on is far better at multiplication than we are).
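
      To make "actually calculating" concrete: the date of Easter really is a mechanical procedure. A minimal sketch, assuming the formula the LLM recited was something like the Meeus/Jones/Butcher computus (the original post doesn't say which one it quoted):

      ```python
      def easter(year: int) -> tuple[int, int]:
          """Gregorian Easter via the Meeus/Jones/Butcher computus.

          Returns (month, day); month is 3 for March, 4 for April.
          """
          a = year % 19                       # position in the 19-year Metonic cycle
          b, c = divmod(year, 100)            # century and year within the century
          d, e = divmod(b, 4)                 # leap-century corrections
          f = (b + 8) // 25
          g = (b - f + 1) // 3
          h = (19 * a + b - d - g + 15) % 30  # roughly the epact (age of the moon)
          i, k = divmod(c, 4)
          l = (32 + 2 * e + 2 * i - h - k) % 7
          m = (a + 11 * h + 22 * l) // 451
          month = (h + l - 7 * m + 114) // 31
          day = (h + l - 7 * m + 114) % 31 + 1
          return month, day

      print(easter(2024))  # (3, 31): Easter 2024 fell on March 31
      ```

      Every step here is deterministic arithmetic. An LLM can reproduce a listing like this from its training data and still get the date wrong, because it predicts tokens rather than executing the steps.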

      • UnpluggedFridge · 5 months ago

        We do not know how LLMs operate. As with our own minds, we understand some primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.