• hokage · 1 year ago

    What a silly article. $700,000 per day is ~$256 million a year. That’s peanuts compared to the $10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it’s basically MS’s LLM play at this point.
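
    A quick sanity check of that arithmetic (a minimal Python sketch; the $700k/day and $10 billion figures are the ones quoted in the thread, everything else is just annotation):

    ```python
    # Annualize the reported ChatGPT serving cost and compare it to Microsoft's investment.
    daily_cost = 700_000            # USD per day, figure cited in the article
    ms_investment = 10_000_000_000  # USD, Microsoft's reported funding

    annual_cost = daily_cost * 365
    print(f"ChatGPT serving: ~${annual_cost / 1e6:.0f}M per year")  # ~$256M, as stated above
    # Serving is only part of the total burn (training, payroll, etc.), which is why the
    # runway is framed as "about a decade" rather than ms_investment / annual_cost.
    ```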

    • P03 Locke · 1 year ago

      When you get articles like this, the first thing you should ask is “Who the fuck is Firstpost?”

      • Altima NEO · 1 year ago

        Yeah, where the hell do these posters find these articles anyway? It’s always from blogs that repost stuff from somewhere else.

        • kakes · 1 year ago

          The difference is in who gets the ad money.

    • Wats0ns · 1 year ago

      OpenAI’s biggest expense is infrastructure, which is rented from Microsoft. Even if the company folds, they will have given most of the invested money back to Microsoft.

      • fidodo · 1 year ago

        MS is basically getting a ton of equity in exchange for cloud credits. That’s a ridiculously good deal for MS.

    • monobot · 1 year ago

      While the title is clickbait, they do say right at the beginning:

      *Right now, it is pulling through only because of Microsoft’s $10 billion funding*

      Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. The $700k is only for one model; there are others, plus making new ones and running the company. It is easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.
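
      Taking that rough estimate at face value, the runway does work out to roughly a decade (a tiny sketch; the ~$1B/year total burn is the comment’s guess, not a sourced figure):

      ```python
      # Runway if total spend (all models, training new ones, running the company)
      # is roughly $1B per year, as estimated above -- an assumption, not a sourced number.
      ms_funding = 10_000_000_000         # USD
      assumed_total_burn = 1_000_000_000  # USD per year, rough estimate from the thread

      print(f"Runway: ~{ms_funding / assumed_total_burn:.0f} years")  # ~10 years
      ```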

  • merthyr1831 · 1 year ago

    I mean, apart from the fact it’s not sourced or whatever, it’s standard practice for these tech companies to run at a massive loss for years while basically giving their product away for free (which is why you can use OpenAI with minimal, if any, costs, even at scale).

    Once everyone’s using your product over competitors who couldn’t afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you’re the biggest player in the market.

    It’s just Uber’s business model.

  • simple · 1 year ago

    There’s no way Microsoft is going to let it go bankrupt.

  • donuts · 1 year ago

    They’re gonna be in even bigger trouble when it’s determined that AI training, especially for content generation, is not fair use and they have to pay each and every person whose data they’ve used.

    • Meowoem · 1 year ago

      AI is too useful and too powerful for any of the major players in world politics to put serious restrictions on it. Do you really think they’re going to risk Chinese and Russian AI giving them the economic and scientific edge?

      Yes, selfish people want to stop progress that could give everyone in the world access to education, medical care, legal advice, social care, etc., because they think they’re owed twenty cents for the text they wrote. But thankfully society isn’t going to take them seriously; there are money-grubbers and antisocial people everywhere looking for any chance to ruin things that could help others, and we ignore those people.

  • Elderos · 1 year ago

    That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It has really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

    • glockenspiel · 1 year ago

      I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

      What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn’t supposed to do.

      I’ve noticed it has become worse at rubber-ducking non-trivial coding prompts. I’ve noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr. devs.

      A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

      • Windex007 · 1 year ago

        As a Sr. Dev, I’m always floored by stories of people trying to integrate chatGPT into their development workflow.

        It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.

        Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

        ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

        • JackbyDev · 1 year ago

          Search engines aren’t truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use, and blind trust in any of them is misplaced. I get your point, I really do, but it’s just as foolish as believing everyone using StackOverflow just copies and pastes the top-rated answer into their code, commits it without testing, and calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

          • Windex007 · 1 year ago

            I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I still place chatGPT in a league of its own.

            I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from stack overflow and it just worked” memes. Maybe I underestimate how many people in the industry are that fundamentally different from how we work.

            Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.

            If you have design or patterning questions, bring them to the team. We’ll run through it together with the benefit of contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.

            The additional time required to validate a chatGPT-generated piece of code could instead be invested in the dev just doing it right, and properly fitting within our context, the first time. The dev will be smarter for it, and that investment will pay out every moment going forward.

            • JackbyDev · 1 year ago

              I guess I see your point. I haven’t asked ChatGPT to generate code and tried to use it except for once, ages ago, but even then I didn’t really check it, and it was for a niche piece of software without many examples online.

        • flameguy21 · 1 year ago

          Honestly, once ChatGPT started giving answers that consistently didn’t work, I just started googling stuff again, because it was quicker and easier than getting the AI to regurgitate Stack Overflow answers.

        • ewe · 1 year ago

          > Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

          Not me, but my boss would… wait a minute.