• Greg Clarke · 5 months ago

    This is tough; the goal should be to reduce child abuse. It’s unknown whether AI-generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don’t abuse children. Like everything else with AI, we won’t know the real impact for many years.

    • LadyAutumn · 5 months ago (edited)

      How do you think they train models to generate CSAM?

      Some of y’all need to look up what a LoRA is

        • LadyAutumn · 5 months ago

          It should be illegal either way, to be clear. But you think they’re not training models on CSAM? You’re trusting in the morality/ethics of people creating AI-generated child pornography?

          • Greg Clarke · 5 months ago

            The use of CSAM in training generative AI models is an issue no matter how these models are being used.

            • L_Acacia · 5 months ago

              The training doesn’t use CSAM; there’s a 0% chance big tech would use that in their datasets. The models are somewhat able to link concepts, like red and car, even if they have never seen a red car before.

              • AdrianTheFrog · 5 months ago

                Well, with models like SD at least, the datasets are large enough and the employees few enough that it’s impossible to have a human filter every image. They scrape images from the web and try to filter them with AI, but there’s still a chance of bad images getting through. That’s why most companies also install filters on the model’s output, in addition to filtering during training.

                • DarkThoughts · 5 months ago

                  You make it sound like such content is even easy to find on the web. The point is, the models don’t need to be trained on such material. They’re trained on regular kids, so they know their sizes, faces, etc. They’re trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don’t need to specifically train a model on nude children to generate nude children.

      • DarkThoughts · 5 months ago (edited)

        I suggest you actually download Stable Diffusion and try it for yourself, because it’s clear that you don’t have any clue what you’re talking about. You can already make tiny people, shaved genitals, flat chests, child-like faces, etc. It’s all already there. There’s literally no need for any LoRAs or very specifically trained models.