• echo64

    AI actually has huge problems with this. If you feed AI-generated data back into models, the new training falls apart extremely quickly. There does not appear to be any good solution for this; it’s the equivalent of AI inbreeding.

    This is the primary reason most AI models aren’t trained on data from after 2021. The internet is just too full of AI-generated content.
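
    A rough, hedged illustration of that “inbreeding” effect: the toy sketch below uses a 1-D Gaussian as a stand-in for a generative model and assumes each generation slightly under-samples the tails of what it learned. Every number in it is illustrative, not a measurement of any real training pipeline.

    ```python
    # Toy sketch of "AI inbreeding" (model collapse). Assumption: a 1-D Gaussian
    # stands in for a generative model, and generated output over-represents
    # "typical" content, so rare/tail data is progressively lost.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=5_000)           # generation 0: human-written data

    for gen in range(1, 11):
        mu, sigma = data.mean(), data.std()           # "train" on the current corpus
        samples = rng.normal(mu, sigma, size=20_000)  # the model generates new content
        # Keep only "typical" generations (within 2 sigma), then retrain on them.
        data = samples[np.abs(samples - mu) < 2 * sigma][:5_000]
        print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")

    # The fitted std shrinks by roughly 0.88x per generation here: diversity
    # collapses once the model trains mostly on its own output.
    ```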

    • givesomefucks

      There does not appear to be any good solution for this

      Pay intelligent humans to train AI.

      Like, have grad students talk to it in their area of expertise.

      But that’s expensive, so capitalist companies will always take the cheaper/shittier routes.

      So it’s not that there’s no solution, there’s just no profitable one. Which is why innovation should never be solely in the hands of people whose only concern is profit.
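
      Purely as a hedged sketch of what “have grad students talk to it” could look like as training data: the snippet below turns expert Q&A into a supervised fine-tuning file. The field names, file name, and JSONL layout are assumptions for illustration, not any vendor’s actual pipeline.

      ```python
      import json

      # One hypothetical session with a paid domain expert.
      expert_sessions = [
          {
              "domain": "organic chemistry",
              "question": "Why do SN1 reactions favour tertiary substrates?",
              "expert_answer": "Tertiary carbocations are stabilised by hyperconjugation and induction...",
          },
      ]

      # Write prompt/completion pairs, tagged with provenance so this
      # human-written data is never confused with scraped or generated text.
      with open("expert_sft.jsonl", "w") as f:
          for s in expert_sessions:
              record = {
                  "prompt": f"[{s['domain']}] {s['question']}",
                  "completion": s["expert_answer"],
                  "source": "paid_domain_expert",
              }
              f.write(json.dumps(record) + "\n")
      ```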

      • SinningStromgald

        OR they could just scrape info from the “aska____” subreddits and hope and pray it’s all good. Plus that is like 1/100th the work.

        The racism, homophobia and conspiracy levels of AI are going to rise significantly if it’s trained on scraped Reddit data.

        • givesomefucks

          Even that would be a huge improvement.

          Just have a human decide which subs it uses, but they’ll just turn it loose on the whole website.

          • Rentlar

            That reminds me: any AI trained exclusively on Reddit data is going to mix up lose and loose. I don’t know why, but I spotted that mistake constantly there.

    • T156

      And unlike images, where it might be possible to embed a watermark and filter them out, it’s much harder to pinpoint whether text is AI-generated, especially when you have bots masquerading as users.

    • Ultraviolet

      This is why LLMs have no future. No matter how much the technology improves, they can never have training data past 2021, which becomes more and more of a problem as time goes on.

      • TimeSquirrel

        You can have AIs that detect other AIs’ content and decide whether or not to incorporate it.

        • skillissuer

          Can you really trust them in this assessment?

          • TimeSquirrel

            Doesn’t look like we’ll have much of a choice. They’re not going back into the bag.
            We definitely need some good AI content filters. Fight fire with fire. They seem to be good at this kind of thing (pattern recognition), way better than any procedurally programmed system.
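
            A minimal sketch of that “fight fire with fire” filter, assuming some detector exists: detect_ai_probability() is a hypothetical stand-in, and, as the reply below points out, whether such a detector can be trusted is exactly the open question.

            ```python
            from typing import Callable, Iterable

            def filter_corpus(docs: Iterable[str],
                              detect_ai_probability: Callable[[str], float],
                              threshold: float = 0.5) -> list[str]:
                """Keep only documents the detector judges likely human-written."""
                return [d for d in docs if detect_ai_probability(d) < threshold]

            # Usage with a dummy detector that flags everything as 10% likely AI.
            kept = filter_corpus(["a scraped forum post", "another post"], lambda d: 0.1)
            print(len(kept))  # -> 2
            ```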

        • echo64

          Fun fact: you can’t. AIs are surprisingly bad at distinguishing AI-generated content from the real thing.