• over_clox (3 up, 9 down) · 5 months ago

    What it’s able and intended to do is beside the point if it’s also capable of generating inappropriate material.

    Let me spell it out more clearly. AI wouldn’t know what a pussy looked like if it had never been exposed to that sort of data set. It wouldn’t know other inappropriate things if it hadn’t been exposed to that kind of data either.

    Do you see where I’m going with this? AI only knows what people allow it to learn.

    • FaceDeer (10 up, 1 down) · 5 months ago

      You realize that there are perfectly legal photographs of female genitals out there? I’ve heard it’s actually a rather popular photography subject on the Internet.

      “Do you see where I’m going with this? AI only knows what people allow it to learn”

      Yes, but the point here is that the AI doesn’t need to learn from any actually illegal images. You can train it on perfectly legal images of adults in pornographic situations, and also perfectly legal images of children in non-pornographic situations, and then when you ask it to generate child porn it has all the concepts it needs to generate novel images of child porn for you. The fact that it’s capable of that does not in any way imply that the trainers fed it child porn in the training set, or had any intention of it being used in that specific way.

      As others have analogized in this thread, if you murder someone with a hammer, that doesn’t make the people who manufactured the hammer guilty of anything. Hammers are perfectly legal. It’s how you use one that can be illegal.

      • over_clox (2 up, 9 down) · 5 months ago

        Yes, I get all that, duh. Did you read the original post title? CSAM?

        I thought you’d catch the clue when I said “inappropriate.”

        • FaceDeer (8 up, 1 down) · 5 months ago

          Yes. You’re saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That’s simply not the case.

          You also implied earlier on that these AIs “act or respond on their own”, which is also not true. They only generate images when prompted to by a user.

          The fact that an AI is able to generate inappropriate material just means it’s a versatile tool.

            • FaceDeer (5 up, 1 down) · 5 months ago

              3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or they would have been caught earlier. I doubt they had any significant impact on the model’s capabilities.
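
              For anyone checking that arithmetic, here’s a quick sketch in Python, using the 3,226 and 5.8 billion figures cited above (not independently verified here):

              ```python
              # Proportion of suspected images in the dataset, per the figures above.
              suspected = 3_226
              total = 5_800_000_000

              fraction = suspected / total
              print(f"{fraction:.2e}")           # ~5.56e-07
              print(f"{fraction * 100:.5f} %")   # ~0.00006 %
              ```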

          • over_clox (2 up, 5 down) · 5 months ago

            Alright, well let’s play out an innocent hypothetical here.

            Let’s pretend you only know some magic word model (which, by the way, doesn’t exist without thousands or millions of images behind it).

            But anyways, let’s say you’re the AI. Now, with no vision of the world, what would you say if I asked you how crescent wrenches and channel locks reproduce?

            Now try the same hypothetical question again. This time, you actually have a genuine set of images of clean new tools, plus the information that tools can’t reproduce.

            And now let’s go to the modern day, where AI has zillions of images of rusty redneck toolboxes and a bunch of janky dialogue.

            After all that, where do crowbars come from?

            AI is just as dumb as the people using it.