• CombatWombat1212 · 2 hours ago

    So do I, every time I ask it a slightly complicated programming question.

  • whotookkarl · 2 hours ago

    Here’s the cycle we’ve gone through multiple times and are currently in:

    AI winter (low research funding) -> incremental scientific advancement -> breakthrough capabilities emerging from multiple incremental advances to the scientific models building on each other over time (expert systems, neural networks, LLMs, etc.) -> engineering creates new tech products/frameworks/services based on the new science -> hype for the new tech drives sales, economic activity, research funding, subsidies, etc. -> people become familiar with the new tech’s capabilities and limitations through use (for LLMs, we’re here) -> the hype/spending bubble bursts when the overspend fails to deliver either “infinite money, line goes up” returns or new research breakthroughs -> AI winter -> etc.

  • anon_8675309 · 6 hours ago

    Did anyone believe they had the ability to reason?

    • Semperverus · 2 hours ago

      I still believe they can reason, to a very limited extent. Everyone says they’re just very sophisticated parrots, but there is something emergent going on. These models need some kind of internal world-model to be able to parrot things as correctly as they currently do (yes, including the hallucinations and the incorrect answers). Sure, they use tokens instead of real dictionary words, which leads to things like the strawberry problem, but just because they are not nearly as sophisticated as us doesn’t mean there is no reasoning happening.

      We are not special.

      • galanthus · 8 mins ago

        If the only thing you feed an AI is words, then how would it possibly understand what these words mean if it does not have access to the things the words are referring to?

        If it does not know the meaning of words, then what can it do but find patterns in the ways they are used?

        This is a shitpost.

        We are special, I am in any case.

      • trolololol · 2 hours ago

        What’s the strawberry problem? Does it think it’s a berry? I wonder why

        • xthexder · 2 hours ago

          I think the strawberry problem is asking it how many R’s are in “strawberry” (the answer is three). Current AI gets it wrong almost every time.
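
          For reference, the ground truth is a one-liner in ordinary code, which is what makes the failure feel so silly (a trivial Python check):

          ```python
          # Exact string manipulation: the thing token-based models never do directly.
          print("strawberry".count("r"))  # 3
          ```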

        • tempest · 2 hours ago

          Ask an LLM how many Rs there are in strawberry

  • N0body · 6 hours ago

    The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding “seemingly relevant but ultimately inconsequential statements” to the questions

    Good thing they’re being trained on random posts and comments on the internet, which are known for being succinct and accurate.
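
    For a sense of what those modifications look like: a toy sketch below, paraphrasing the paper’s kiwi example from memory, with a hypothetical query_llm call standing in for the model (this is my own illustration, not the researchers’ code):

    ```python
    # GSM-NoOp-style perturbation: append a true-but-irrelevant clause
    # and check whether the model's answer moves.
    base = ("Oliver picks 44 kiwis on Friday and 58 on Saturday. "
            "On Sunday he picks double the number he picked on Friday. "
            "How many kiwis does Oliver have?")
    red_herring = " Five of them were a bit smaller than average."

    for prompt in (base, base + red_herring):
        pass  # answer = query_llm(prompt)  # hypothetical LLM call

    # The correct answer (44 + 58 + 88 = 190) is unchanged by the extra
    # clause, yet the paper reports models subtracting the 5 smaller kiwis.
    ```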

    • blind3rdeye · 3 hours ago

      Yeah, especially given that so many popular vegetables are members of the Brassica genus.

      • MoogleMaestro · 2 hours ago

        Absolutely. It would be a shame if AI didn’t know that the common maple tree is actually placed in the family Cannabaceae.

  • WrenFeathers · 5 hours ago

    Someone needs to pull the plug on all of that stuff.

  • emerald · 8 hours ago

    statistical engine suggesting words that sound like they’d probably be correct is bad at reasoning

    How can this be??

    • Siegfried · 7 hours ago

      I would say that if anything, LLMs are showing cracks in our way of reasoning.

      • MoogleMaestro · 2 hours ago

        Or the problem is tech billionaires selling “magic solutions” to problems that don’t actually exist, and people on the modern internet being too gullible to recognize they’re being sold snake oil in the form of “technological advancement” when it’s actually just repackaged, plagiarized material.

      • aesthelete · 3 hours ago

        antianticipatable!

  • kingthrillgore · 9 hours ago

    I feel like a draft landed on Tim’s desk a few weeks ago; that would explain why they suddenly pulled back on OpenAI funding.

    People on the removed superfund birdsite are already saying Apple is missing out on the next revolution.

  • RaoulDook · 8 hours ago

    I hope this gets circulated enough to reduce the ridiculous amount of investment and energy waste that the ramping-up of AI services has brought. All the companies have just gone way too far off the deep end with this shit that most people don’t even want.

    • thanks_shakey_snake · 7 hours ago

      People working with these technologies have known this for quite a while. It’s nice of Apple’s researchers to formalize it, but nobody is really surprised, least of all the companies funnelling traincars of money into the LLM furnace.

      • zbyte64 · 51 mins ago

        If they know about this, then they aren’t thinking about the security implications.

  • sircac · 8 hours ago

    They predict, not reason.

      • jabathekek · 3 hours ago

        *starts sweating*

        Look at that subtle pixel count, the tasteful colouring… oh my god, it’s even transparent.

    • WhatAmLemmy · 15 hours ago

      The results of this new GSM-Symbolic paper aren’t completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don’t actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

      WTF kind of reporting is this, though? None of this is recent or new in the slightest. I am shit at math, but I have a high-level understanding of statistical modeling concepts, mostly as of a decade ago, and even I knew this. I recall a stats PhD describing models as “stochastic parrots”: nothing more than probabilistic mimicry. It was obviously no different the instant LLMs came on the scene. If only tech journalists bothered to do a superficial amount of research, instead of being spoon-fed spin from tech bros with a profit motive.
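
      To make “probabilistic mimicry” concrete, here’s a toy sketch of the idea taken to its extreme: a word-level Markov chain that produces fluent-looking text purely by sampling from co-occurrence counts. LLMs are enormously more sophisticated, but the objective is the same family: predict the next token.

      ```python
      import random
      from collections import Counter, defaultdict

      # Toy "stochastic parrot": predict the next word purely from
      # co-occurrence counts, with no model of meaning at all.
      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1

      word, out = "the", ["the"]
      for _ in range(8):
          options, weights = zip(*counts[word].items())
          word = random.choices(options, weights=weights)[0]  # sample by frequency
          out.append(word)

      print(" ".join(out))  # fluent-looking, zero understanding
      ```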

      • aesthelete · 3 hours ago

        If only tech journalists bothered to do a superficial amount of research, instead of being spoon-fed spin from tech bros with a profit motive.

        This is outrageous! I mean the pure gall of suggesting journalists should be something other than part of a human centipede!

      • jabathekek · 8 hours ago

        describing models as “stochastic parrots”

        That is SUCH a good description.

      • no banana · 15 hours ago

        It’s written as if they literally expected AI to reason on its own, rather than just mirror the bullshit that’s put into it.

        • Sterile_Technique · 13 hours ago

          Probably because that’s the common expectation, due to calling it “AI”. We’re well past the point of putting the lid back on that can of worms, but we really should have saved that label for, y’know, intelligence that’s artificial. People think we’ve made an early version of Halo’s Cortana or Star Trek’s Data, not just a spellchecker on steroids.

          The day we make actual AI is going to be a really confusing one for humanity.

            • Semperverus · 2 hours ago

              This problem comes from the fact that the AI isn’t using English words internally; it’s working with tokens. There are no Rs in {35006}.
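
              You can see the token boundaries directly. A minimal sketch using the tiktoken library (exact IDs and splits depend on the encoding, so treat the output as illustrative):

              ```python
              import tiktoken  # pip install tiktoken

              enc = tiktoken.get_encoding("cl100k_base")
              ids = enc.encode("strawberry")
              print(ids)                             # a short list of integer IDs
              print([enc.decode([i]) for i in ids])  # sub-word chunks, e.g. 'str'/'aw'/'berry'
              # The model manipulates the integer IDs; the letter 'r' is not
              # something it ever directly sees.
              ```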

          • Farid · 13 hours ago

            To say it’s not intelligence is incorrect. It’s still (an inferior kind of) intelligence; humans just put certain expectations into the word. An ant has intelligence. An NPC in a game has intelligence. They are just very basic kinds of intelligence, with very simple decision-making patterns.

            • aesthelete · 3 hours ago

              To follow rote instructions is not intelligence.

              If following a simple algorithm is intelligence, then the entire field of software engineering has been producing AI since its inception, rendering the term even more meaningless than it already is.

              • Semperverus · 2 hours ago

                It’s almost as if the word “intelligence” has been vague and semi-meaningless since its inception.

                Have we ever had a solid, technical definition of intelligence?

                • aesthelete · 1 hour ago

                  I’m pretty sure dictionaries have an entry for the word, and the basic sense of the term is not covered by writing up a couple of if statements or a loop.

              • Farid · 2 hours ago

                Opponent players in games have been labeled AI for decades, so yeah, software engineers have been producing AI for a while. If a computer can play a game of chess against you, it has intelligence, a very narrowly scoped intelligence, which is artificial, but intelligence nonetheless.

                • aesthelete · 1 hour ago

                  https://www.etymonline.com/word/intelligence

                  Simple algorithms are not intelligence. Some modern AI we have comes close to fitting some of these definitions, but simple algorithms do not.

                  We can call things whatever we want; that’s the gift (and the curse) of language. It’s imprecise and only has the meanings we ascribe to it. But you’re the one who started this thread by insisting that “to say it is not intelligence is incorrect”, and I have yet to find a reasonable argument for that claim in this entire thread. Instead, all you’ve done is try to redefine intelligence to cover nearly everything and then pretend that your (not authoritative) wavy-ass definition is the only correct one.

            • kryptonite · 5 hours ago

              humans just put certain expectations into the word.

              …which is entirely how words work to convey ideas. If a word is used to mean something other than what the audience understands it to mean, communication has failed.

              By the common definition, it’s not “intelligence”. If some specialized definition is being used, then that needs to be established and generally agreed upon.

              • Farid · 4 hours ago

                I would put it differently. Sometimes words have two meanings, for example a layman’s understanding of it and a specialist’s understanding of the same word, which might mean something adjacent, but still different. For instance, the word “theory” in everyday language often means a guess or speculation, while in science, a “theory” is a well-substantiated explanation based on evidence.

                Similarly, when a cognitive scientist talks about “intelligence”, they might be referring to something quite different from what a layperson understands by the term.

            • AwesomeLowlander · 11 hours ago

              An NPC in a game has intelligence

              By what definition of the word? Most dictionaries define it as some variant of “the ability to acquire and apply knowledge and skills”.

              • Farid · 10 hours ago

                Of course there are various kinds of NPCs: some stand and do nothing, while others are more complex and “adapt” to certain conditions. For example, an NPC following the player might “decide” to switch to running when the distance to the player crosses a certain threshold, or work out how to navigate around other dynamic/moving NPCs (sketched below). In this example, the NPC “acquires” knowledge by polling the distance to the player and “applies” that knowledge by using its internal model to decide whether to walk or run.

                “Acquiring knowledge” is pretty much as subjective a term as “intelligence”. An ant, for example, can’t really learn anything; at best it has a tiny short-term memory holding its most recent decisions, yet it surely gets things done, like building colonies.

                For both cases, it’s just a line in the sand.
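
                Concretely, the “decision” in that walk/run example is just a threshold check over polled state; a minimal sketch with hypothetical names:

                ```python
                import math

                RUN_DISTANCE = 8.0  # hypothetical threshold, in world units

                def update_follower(npc, player):
                    """Toy NPC 'decision making': poll the distance, pick a gait."""
                    dist = math.dist(npc.position, player.position)  # 'acquire' knowledge
                    npc.gait = "run" if dist > RUN_DISTANCE else "walk"  # 'apply' it
                ```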

                • Auli · 8 hours ago

                  NPCs do not have any form of intelligence and don’t decide anything. Or is Windows intelligent because I click an icon and it decides to do something?

      • fluxion · 14 hours ago

        Clearly this sort of reporting is not prevalent enough, given how many people think we’ve actually come up with something new these last few years, rather than just throwing shitloads of graphics cards and data at statistical models.

  • The Snark Urge · 18 hours ago

    One time I exposed deep cracks in my calculator’s ability to write words with upside down numbers. I only ever managed to write BOOBS and hELLhOLE.

    LLMs aren’t reasoning. They can do some stuff okay, but they aren’t thinking. Maybe if you had hundreds of them, each with unique training data, all voting on proposals, you could get something along the lines of a kind of cognition, but at that point you might as well just simulate cortical columns and try Jeff Hawkins’ idea.
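
    A toy sketch of that voting idea, assuming a hypothetical list of independently trained models that each expose a generate() method:

    ```python
    from collections import Counter

    def ensemble_answer(models, prompt):
        """Majority vote across independently trained models (toy sketch)."""
        votes = Counter(m.generate(prompt) for m in models)  # hypothetical API
        answer, count = votes.most_common(1)[0]
        # Only trust the result when an absolute majority agrees.
        return answer if count > len(models) // 2 else None
    ```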

  • Chickenstalker · 10 hours ago

    Are we not flawed too? Does that not make AI human?

    • ContrarianTrail · 9 hours ago

      How dare you imply that humans just make shit up when they don’t know the truth

      • WldFyre · 5 hours ago

        Did I misremember something, or is my memory easily influenced by external stimuli? No, the Mandela Effect must be real!

        /s

  • CosmoNova · 17 hours ago

    Are you telling me Apple hasn’t seen through the grift, and instead approached this with an open mind just to learn how full of bullshit most of the claims from the likes of Altman are? And now they’re sharing their gruesome discoveries with everyone as they unveil them?

    • WhatAmLemmy · 16 hours ago

      I would argue that Apple Intelligence™️ is evidence they never bought the grift. It’s very focused on tailored models scoped to the specific tasks AI does well: creative and non-critical tasks like assisting with text processing/transformation, image generation, and photo manipulation.

      The Siri integrations seem more like they’re using the LLM to stitch together the APIs that were already exposed between apps (used by Shortcuts, etc.), each having internal logic and validation that’s entirely programmed (and documented) by humans. They market it as a whole lot more, but then they market every new product as some significant milestone for mankind, even when it’s a feature other phones have had for years... but in an iPhone!
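
      Put differently, the LLM plausibly only does the routing while deterministic, human-written handlers do the actual work. A speculative sketch (all names hypothetical, nothing like Apple’s real API):

      ```python
      # Speculative sketch: the LLM picks an intent; validated app code executes it.
      def create_reminder(text: str) -> str:
          return f"reminder set: {text}"  # stand-in for a Reminders API

      def send_message(to: str, body: str) -> str:
          return f"sent to {to}: {body}"  # stand-in for a Messages API

      INTENTS = {"create_reminder": create_reminder, "send_message": send_message}

      def handle(utterance: str) -> str:
          # A real system would ask the LLM to choose the intent and arguments;
          # this keyword check stands in for that routing step.
          if "remind" in utterance.lower():
              return INTENTS["create_reminder"](text=utterance)
          return INTENTS["send_message"](to="contact", body=utterance)
      ```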

    • ContrarianTrail · 16 hours ago

      What’s an example of a claim Altman has made that you’d consider bullshit?

      • sinceasdf · 7 hours ago

        The entirety of “Open” AI is complete bullshit. They’re no longer even pretending to be a nonprofit, and there has been nothing “open” about them since like 2018.

  • Lvxferre · 17 hours ago

    The fun part isn’t even what Apple said (that the emperor is naked) but why it’s saying it. It’s a nice bullet against all four of its GAFAM competitors.

    • jherazob · 17 hours ago

      This right here. This isn’t conscientious analysis of tech or intellectual honesty or whatever; it’s a calculated shot at its competitors, who are desperately trying to keep the generative-AI house of cards from falling.

    • conciselyverbose · 16 hours ago

      They’re a publicly traded company.

      Their executives need something to point to when pushing back against pressure to jump on the trend.

    • misk (OP) · 13 hours ago

      Given the use cases they were benchmarking, I would be very surprised if they were any better.