Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • kromem · 8 months ago (edited)

    Lemmy hasn’t met a pitchfork it doesn’t pick up.

    You are correct. The most cited researcher in the space agrees with you. There have been a half-dozen papers over the past year replicating the finding that LLMs generate world models from the training data.

    But that doesn’t matter. People love their confirmation bias.

    Just look at how many people think it only predicts the next word, treating it like a Markov chain and completely unaware of how self-attention works in transformers.
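    The distinction is easy to see in code. A minimal sketch of scaled dot-product self-attention (the core transformer operation, here in plain NumPy with made-up random weights for illustration): each output position is a weighted mix of the *entire* context, whereas a Markov chain conditions only on the previous state.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        # numerically stable softmax
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model) token embeddings
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # every token scores its relevance against every other token
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = softmax(scores, axis=-1)   # each row sums to 1
        # each output row mixes information from the whole sequence,
        # not just the immediately preceding token
        return weights @ V

    rng = np.random.default_rng(0)
    d = 8
    X = rng.normal(size=(5, d))              # 5 tokens of dimension 8
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)
    print(out.shape)                         # (5, 8)
    ```

    (A real transformer adds multiple heads, causal masking, and learned weights, but the point stands: the attention weights span the full context window.)
    
    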

    The wisdom of the crowd is often idiocy.

    • FooBarrington · 8 months ago

      Thank you very much. The confirmation bias is crazy - one guy is literally trying to tell me that AI generators don’t have knowledge because, when you ask one for a picture of racially diverse Nazis, you get a picture of racially diverse Nazis. The facts don’t matter as long as you get to be angry about stupid AIs.

      It’s hard to tell a difference between these people and Trump supporters sometimes.

      • kromem · 8 months ago (edited)

        It’s hard to tell a difference between these people and Trump supporters sometimes.

        To me it feels a lot like when I was arguing against antivaxxers.

        The same pattern of linking and explaining research but having it dismissed because it doesn’t line up with their gut feelings and whatever they read when “doing their own research” guided by that very confirmation bias.

        The field is moving faster than any I’ve seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

        A lot of long-standing assumptions have been proven wrong.

        It’s a bit like the early 20th century in physics, when everyone assumed things that turned out wrong over a very short period in which it all turned upside down.

        • FooBarrington · 8 months ago

          Exactly. They have very strong feelings that they are right, and won’t be moved - not by arguments, research, evidence, or anything else.

          Just look at the guy telling me “they can’t reason!” I asked whether they’d accept they were wrong if I provided a counter-example, and they literally can’t say yes. Their worldview won’t allow it. If I were sure I was right that no counter-examples to my point exist, I’d gladly say “yes, a counter-example would sway me”.

          • GiveMemes · 8 months ago

            Y’all actually have any research to share, or are you just gonna talk about it?

        • GiveMemes · 8 months ago

          Y’all actually have any research to share, or are you just gonna talk about it?

            • GiveMemes · 8 months ago

              JSYK, I can’t see that comment from your link.

              • FooBarrington · 8 months ago

                Weird, works fine for me. It’s their response to the comment in this thread with this content:

                I think you might be confusing intelligence with memory. Memory is compressed knowledge, intelligence is the ability to decompress and interpret that knowledge.