Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • FooBarrington (+4/−7) · 8 months ago

    I don’t think it’s generally true, because current AI can solve some reasoning tasks very well. But reasoning is definitely an area where these models are lacking.

    • rambaroo (+7/−4) · 8 months ago (edited)

      It isn’t reasoning about anything. A human did the reasoning at some point, and the LLM’s dataset includes that original information. The LLM is simply matching your prompt to that training data. It’s not doing anything else. It’s not thinking about the question you asked it. It’s a glorified keyword search.

      It’s obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you’re an expert.

      • FooBarrington (+2/−9) · 8 months ago

        So if I find a single example of an AI doing a reasoning task that’s not in its training material, would you agree that you’re wrong and AI does reason?

        • rambaroo (+6/−3) · 8 months ago (edited)

          You won’t find one. LLMs are literally incapable of the kind of reasoning you’re talking about. All of their solutions are based on training data, no matter how “original” your problem might seem.

    • stoy (+1/−0) · 8 months ago

      That’s fair. I have seen AI reason at a low level, but it seems to me that it lacks higher-level reasoning and context.

      • FooBarrington (+3/−4) · 8 months ago

        It definitely is lacking for now, but the question is: are these differences of degree, or fundamental differences? So far I haven’t seen research suggesting it’s the latter.