• Neshura · +31/−5 · 4 days ago (edited 4 days ago)

    Last I checked (which was a while ago), AI still can’t pass the most basic of tasks, such as “show me a blank image” / “show me a pure white image”. The LLM will output the most intense fever dream possible, but never a simple rectangle filled with #fff-coded pixels. I’m willing to debate the potentials of AI again once they manage to do that without those “benchmarks” getting special attention in the training data.
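
    For scale, the trivial baseline being asked for — a rectangle of #fff pixels — is a handful of lines in any language. A minimal sketch in Python, emitting the netpbm P6 (PPM) format so no image library is needed (the function name is my own, purely illustrative):

```python
def white_ppm(width: int, height: int) -> bytes:
    """Return a binary PPM (netpbm P6) image where every pixel is pure white (#ffffff)."""
    # P6 header: magic number, dimensions, max channel value, then raw RGB bytes.
    header = f"P6 {width} {height} 255\n".encode("ascii")
    return header + b"\xff\xff\xff" * (width * height)

# A 64x64 blank white image; any image viewer that handles .ppm can open it.
img = white_ppm(64, 64)
```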

      • Womble · +2/−0 · 2 days ago (edited 1 day ago)

        That’s actually quite interesting. You could argue that this is an image of “a pure white, completely flat object with zero content”: it has just taken your description of what you want the image to be and given you an image of an object that satisfies it.

    • Technus · +21/−2 · 4 days ago

      Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.

      That’s why you hear every day about yet more and more social media companies penning deals with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps, because those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.

      • rottingleaf · +6/−0 · 3 days ago (edited 3 days ago)

        Yep. In cryptography there was a moment when cryptographers realized that the key must be secret, the message should be secret, but the rest of the system must not be secret, for the social purpose of refining said system. EDIT: And that these must be separate entities.

        These guys basically use lots of data instead of algorithms. Like buying something with oil money instead of money made on construction.

        I just want to see the moment when it all bursts. I’ll be so gleeful. I’ll go buy an IPA and laugh everywhere on the Internet I see this discussed.

    • gr3q · +5/−0 · 3 days ago (edited 3 days ago)

      I tested ChatGPT; it needed some nagging, but it could do it. It needed the size, “blank”, and “white” keywords.

      Obviously a lot harder than it should be, but not impossible.

    • rottingleaf · +5/−2 · 3 days ago

      Because it’s not AI, it’s sophisticated pattern separation, recognition, lossy compression and extrapolation systems.

      Artificial intelligence, like any intelligence, has goals and priorities. It has positive and negative reinforcements from real inputs.

      Their AI will become possible when it is able to want something and decide something, with that decision based on entropy and not extrapolation.

      • InternetPerson · +2/−0 · 2 days ago

        Artificial intelligence, like any intelligence, has goals and priorities

        No. Intelligence does not necessitate goals. You are able to understand math, letters, words, and the meaning of those without pursuing a specific goal.

        Because it’s not AI, it’s sophisticated pattern separation, recognition, lossy compression and extrapolation systems.

        And our brains work in a similar way.

        • rottingleaf · +1/−1 · 2 days ago

          Our brains work in various ways. Somewhere in there, a system similar to those AIs exists, I agree. It’s just only one part. Artificial dicks are not the same thing as artificial humans.

    • InternetPerson · +2/−5 · 2 days ago

      I’m willing to debate the potentials of AI again once they manage to do that without those “benchmarks” getting special attention in the training data.

      You sound like those guys who doomed AI because a single neuron wasn’t able to solve the XOR problem. Guess what: build a network out of neurons and the problem is solved.

      What potentials are you talking about? The potential is tremendous. There are a plethora of algorithms, theoretical results, and practical applications where AI really shines and proves its potential. Just because LLMs currently lack several capabilities doesn’t mean future developments can’t improve on that, possibly with something other than a contemporary LLM. LLMs are just one thing in the wide field of AI. They can do really cool stuff, which points toward further potential in that area. And if it’s not LLMs, then possibly other types of AI architectures.
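
      The XOR point above is the classic result: a single linear-threshold neuron cannot compute XOR (it isn’t linearly separable), but a two-layer network does it immediately. A minimal sketch with hand-picked weights (the weights and names are illustrative, not learned):

```python
def step(x: float) -> int:
    """Heaviside threshold: the activation of a classic perceptron."""
    return 1 if x >= 0 else 0

def xor_net(x1: int, x2: int) -> int:
    """Two-layer threshold network computing XOR as OR(x1, x2) AND NOT AND(x1, x2)."""
    h_or = step(x1 + x2 - 0.5)    # hidden unit: fires on (0,1), (1,0), (1,1)
    h_and = step(x1 + x2 - 1.5)   # hidden unit: fires only on (1,1)
    return step(h_or - h_and - 0.5)  # output: OR minus AND, thresholded
```

No single call to `step` over a weighted sum of the raw inputs can produce this truth table, which is exactly why the hidden layer is needed.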