It acknowledged ‘inaccuracies’ in historical prompts.

  • BagginsEnglish · 8 months ago

    Being dressed as a German-style soldier, not even SS or Gestapo, doesn't automatically make the images Nazi. Only one is in a German uniform.

    It just shows AI isn't as wonderful as some make out, but unfortunately some people will accept its output as factually correct. That's where the danger lies.

    • Lath · 8 months ago

      And the blame lies entirely with the companies themselves. It's false advertising to attract investors and placate shareholders, which means lying through their teeth to drum up a fuss and then backtracking through careful wording.
      They created the media landscape that exists today. The resulting mess is their responsibility.

    • Skull giver · 8 months ago

      To be fair, it tried very hard to generate Nazis decked out in swastikas and whatnot, but it failed at that too.

  • TimeSquirrel · 8 months ago

    Google has apologized for what it describes as “inaccuracies in some historical image generation depictions”

    Why the hell should anyone apologize for idiots taking AI-generated info as fact?

  • bedrooms · 8 months ago

    AIs are inaccurate. Conservatives are stupid.

  • admiralteal · 8 months ago

    There are two different fuckups happening here, in my opinion.

    1. Asking a generative model to do something factually accurate is an incorrect use of the tech. That's not what it's for. You cannot expect it to give you accurate images of historical figures unless you ALSO tell it accurately what that historical figure should look like.
    2. To wit: if you want it to only generate images of white people, tell it so (see the sketch below). This tech clearly has guardrails on it because the developers KNOW it has a highly biased training dataset that they are trying to counter. They are right to acknowledge that bias and try to balance it out. That isn't a scandal; it is exactly what they should be doing.
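
    A minimal sketch of that point. generate_image() here is a hypothetical stand-in, not any real API; the point is only the difference between an underspecified prompt and one that carries the historical specifics itself:

    ```python
    # Hypothetical stand-in for an image-generation call; generate_image()
    # is not a real library function, just an illustration of the prompts.
    def generate_image(prompt: str) -> None:
        print(f"[would render]: {prompt}")

    # Underspecified: the model (and any diversity guardrails layered on
    # top of its biased training data) fills in everything you left out.
    generate_image("a German soldier in 1943")

    # Explicit: the historical constraints ride along in the prompt, so
    # the model isn't left to guess what "accurate" means.
    generate_image(
        "a 1943 Wehrmacht infantryman, historically accurate uniform and "
        "insignia, period-appropriate black-and-white photograph"
    )
    ```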