Surprised pikachu face

  • socialmediaEnglish (+33/−1) · 1 month ago

    Just want to point out that it absolutely is possible to train an AI that will keep track of its sources for inspiration and can attribute those when it makes a response.

    Meaning creators could be compensated for their parts of AI generated stuff, if anyone wanted to.
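As an aside, here is a minimal sketch of what "keeping track of sources" could look like in principle: a retrieval step that carries source IDs forward into the response. This is a toy illustration under stated assumptions (hand-rolled character-frequency "embeddings", hypothetical document names), not how any actual product or model is built.

```python
# Toy sketch of retrieval-with-attribution (illustrative only):
# every stored document keeps its source ID, and a response
# attributes whichever stored documents rank closest to the query.
import math

def embed(text):
    # Fake "embedding": normalized bag-of-letters vector (a stand-in
    # for a real learned embedding model).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Hypothetical corpus: source ID -> document text.
CORPUS = {
    "doc-linux-1": "how to mount an ext4 partition on linux",
    "doc-cook-1": "a recipe for sourdough bread starter",
}

def answer_with_attribution(query, top_k=1):
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(CORPUS[d])), reverse=True)
    return {
        "answer": f"(generated text about: {query})",
        "sources": ranked[:top_k],  # the IDs that would be attributed/compensated
    }
```

The key design point is just that source IDs never leave the pipeline, so whatever the generator produces can be tied back to the retrieved documents.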

    • blorpEnglish (+9/−0) · 1 month ago

      Doesn’t Phind do this already? I haven’t used it much, but I remember it showing its sources for answers to code-related questions.

      • bluewingEnglish (+6/−0) · 1 month ago

        I use Phind for solving computer problems. It does cite the sources it uses, at least for distro and general Linux issues. So far, it’s been a very good resource when I’ve needed it.

    • TrantariusEnglish (+6/−1) · 1 month ago

      Other than citing the entire training data set, how would this be possible?

      • UnderpantsWeevilEnglish (+5/−4) · 1 month ago

        The entire training set isn’t used in each permutation. Your keywords build the samples based on metadata tags tied back to the original images.

        If you ask for “Iron Man in a cowboy hat”, the toolset will reach for some catalog of Iron Man images and some catalog of cowboy hat images and some catalog of person-in-cowboy-hat images, when looking for a basis of comparison as it renders the image.

        These would be the images attributed to the output.

        • TrantariusEnglish (+2/−0) · 1 month ago

          Do you have a source for this? This sounds like fine-tuning a model, which doesn’t prevent data from the original training set from influencing the output. The method you described would only work if the AI were trained from scratch on nothing but images of Iron Man and cowboy hats, and I don’t think that’s how any of these models work.

    • mm_maybeEnglish (+1/−0) · 30 days ago

      I think that there are some people working on this, and a few groups that have claimed to do it, but I’m not aware of any that actually meet the description you gave. Can you cite a paper or give a link of some sort?