• Olap · +12/−0 · 2 months ago

      Also, obviously no

  • hendrik · +28/−0 · 2 months ago (edited)

    Tl;Dr: Not anytime soon. It fails even at simple tasks.

    • Technus · +20/−0 · 2 months ago

      Even if it didn’t, any middle manager who decides to replace their dev team with AI is going to realize pretty quickly that actually writing code is only a small part of the job.

      Won’t stop 'em from trying, of course. But when the laid-off devs get frantic calls from management asking them to come back and fix everything, they’ll be in a good position to negotiate a raise.

      • hendrik · +2/−0 · 2 months ago

        If anything, AI could be used to replace managers 😆 I mean, lots of management seems to be just pushing paper to me. Ideal to be handled by AI. But I think we still need people to do the real work for quite some time to come. Especially software architecture and coding (complex) stuff ain’t easy. Neither is project management. So I guess even some managers can stay.

        • conciselyverbose · +6/−0 · 2 months ago

          Good management is almost all people skills. It needs to be influenced by domain knowledge for sure, but it’s almost all about people.

          You can probably match trash managers, but you won’t replace remotely competent ones.

          • hendrik · +2/−0 · 2 months ago

            I’m not even sure about the “people skills” of ChatGPT. Maybe it’s good at that. It always says you have to consider this side, but also the other side; this is like that, however it might also be otherwise. It can weasel itself out of situations (as it did in this video). It makes a big effort to keep a very friendly tone in all circumstances. I think OpenAI has put a lot of effort into ChatGPT having something that resembles a portion of people skills.

            I’ve used those capabilities to rephrase emails that needed to tell some uncomfortable truths but at the same time not scare someone away. And it did a halfway decent job. Better than I could do. And we already see those people skills in use by the companies that replace their first-level support with AI. I read somewhere it has a better customer satisfaction rate than a human-powered call center. It’s good at pacifying people, being nice to them, and answering the most common 90% of questions over and over again.

            So I’m not sure what to make of this. I think my point still remains valid: AI (at least ChatGPT) is orders of magnitude better at people skills than at programming. I’m not sure what kind of counterexamples we have. Sure, it can’t come to your desk, look you in the eyes and see if you’re happy or need something, because it doesn’t have any eyes. But at the same time, that’s a thing I rarely see with average human managers in big offices, either.

            • conciselyverbose · +3/−0 · 2 months ago

              Using flowery language isn’t “people skills”.

              People skills means handling conflict and competing objectives between people fairly and efficiently. It’s a trait based almost entirely on empathy, with a level of ingenuity mixed in, and GPT isn’t anywhere within many orders of magnitude of either. It will be well after it “can code” that it does anything remotely in the neighborhood of the soft skills of being a competent manager.

              • hendrik · +1/−0 · 2 months ago (edited)

                Yeah. I mean the fundamental issue is: ChatGPT isn’t human. It just mimics things. That’s the way it generates text, audio and images. And it’s also the way it handles “empathy”. It mimics what it’s learned from human interactions during training.

                But in the end: does it really matter where it comes from and why? I mean, the goal of a venture is to produce or achieve something. And that isn’t measured in where it comes from, but in actual output. I don’t want to speculate too much. But despite not having real empathy, it could theoretically achieve the same thing by faking it well enough. And that has been proven in some narrow tasks already. We have customer satisfaction rates. And quite a few people saying it helps them with different things. We need to measure that and do some more studies of what the actual outcome of replacing something with AI is. It could very well be that our perspective is wrong.

                And with that said: I tried roleplaying with AI. It seems to have some theory of mind. Not really, of course. But it gets what I’m hinting at. The desires and behaviour of characters. And so on. Lots of models are very agreeable. Some can role-play conflict. I think the current capabilities of these kinds of AI are enough to fake some things well enough to get somewhere and be actually useful. I won’t say it has or hasn’t people skills. I think it’s somewhere on the spectrum between the two. I can’t really tell where, because I haven’t yet read any research considering this context.

                And of course there is a big difference between everyday tasks and handling a situation that went completely haywire. We have to factor that in. But in reality there are ways to handle that. For example, AI and humans could split up the tasks between them, and things could get escalated so humans make the difficult decisions. But that could already mean 80% of the labor gets replaced.

                • conciselyverbose · +2/−0 · 2 months ago (edited)

                  The actual empathy (actually being able to understand people’s perspectives) is how you get to places everyone is OK with. Empathy isn’t language. It’s using the understanding of what people feel and want to find solutions that work well for everyone. Without understanding that perspective at a deep and intuitive level, you don’t solve actual problems. You don’t routinely preempt problems by seeing them before they have a chance of happening and working around them.

                  Actual leadership isn’t stepping in when people are almost at blows and parroting “conflict resolution” at them. It’s understanding who your people are and what they want and putting them in position to succeed.

        • Technus · +3/−0 · 2 months ago

          Don’t even need an AI. Just teach a parrot to say “let’s circle back on this” and “how many story points is that?”

  • Jestzer · +10/−0 · 2 months ago

    The rule of any article asking a question in its title is that the answer is always no.

  • flamingo_pinyata · +4/−0 · 2 months ago

    AI is actually great at typing the code quickly. Once you know exactly what you want. But it’s already the case that if your engineers spend most of their time typing code, you’re doing something wrong. AI or no AI.

    • hendrik · +5/−1 · 2 months ago (edited)

      I don’t think so. I’ve had success letting it write boilerplate code. And simple stuff that I could have copied from stack overflow. Or a beginners programming book. With every task from my real life it failed miserably. I’m not sure if I did anything wrong. And it’s been half a year since I last tried. Maybe things have changed substantially in the last few months. But I don’t think so.

      Last thing I tried was some hobby microcontroller code to do some robotics calculations. And ChatGPT didn’t really get what it was supposed to do. And additionally, instead of doing the maths, it would just invent some library functions, call them with some input values, and imagine the maths to be miraculously done in the background by that nonexistent library.
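      That failure mode can be made concrete. Below is a sketch (the function and values are hypothetical illustrations, not the actual hobby project) of what the correct approach looks like: the forward kinematics of a two-link planar arm written out with the standard math module, rather than delegated to an invented library.

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) position of a two-link planar arm.

    A hallucinating model might instead emit something like
    kinematics_lib.solve(theta1, theta2), leaving the actual maths
    to a library that does not exist. Here the trigonometry is
    written out explicitly.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended arm along the x axis:
print(forward_kinematics(1.0, 1.0, 0.0, 0.0))  # (2.0, 0.0)
```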

      • flamingo_pinyata · +4/−1 · 2 months ago (edited)

        Yes actually, I can imagine it getting microcontroller code wrong. My niche is general backend services. I’ve been using GitHub Copilot a lot and it has served me well for generating unit tests. Write a test description and it pops out the code with roughly 80% accuracy.
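        For readers unfamiliar with that workflow, it looks roughly like this (the function under test and the test name are made-up examples, not their actual codebase): you write a descriptive test name, and the assistant proposes the assertions, which you then review.

```python
# Hypothetical function under test; in the Copilot workflow you would
# type the test name/description and the assistant suggests the body.

def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_strips_symbol_and_commas():
    # The kind of assertions an assistant generates from the name above
    assert parse_price("$1,234.56") == 1234.56
    assert parse_price("$0.99") == 0.99

test_parse_price_strips_symbol_and_commas()  # passes silently
```

The remaining ~20% is why the review step matters: generated assertions can encode the wrong expected value just as confidently as the right one.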

        • hendrik · +4/−0 · 2 months ago (edited)

          Sure. There are lots of tedious tasks in a programmer’s life that don’t require a great amount of intelligence. I suppose writing some comments, docstrings, unit tests, “glue” and boilerplate code that connects things, and probably several other things that now escape my mind, are good tasks for an AI to assist a proper programmer with, making them more effective and getting things done faster.

          I just wouldn’t call that programming software. I think assisting with some narrow tasks is more exact.

          Maybe I should try doing some backend stuff. Or give it an API definition and see what it does 😅 Maybe I was a bit blinded by ChatGPT having read Wikipedia and claiming it understands robotics concepts. But it really doesn’t seem to have any proper knowledge. The same probably applies to engineering and other neighboring fields that might need software.

          • flamingo_pinyata · +2/−0 · 2 months ago

            It might also have to do with specialized vs. general models. Copilot is good at generating code, but ask it to write prose text and it fails completely. In contrast, ChatGPT is awful at code but handles human-readable text decently.

  • agamemnonymous · +3/−1 · 2 months ago

    I think the obvious answer is “Yes, some, but not all”.

    It’s not going to totally replace human software developers anytime soon, but it certainly has the potential to increase productivity of senior developers and reduce demand for junior developers.

  • Telorand · +2/−0 · 2 months ago

    Not until it’s better at QA than I am. Good luck teaching a machine how stupid end-users can be.

  • pathief · +1/−0 · 2 months ago

    Even if the AI were at the point of outputting exactly what you want correctly, decision makers would still need to be able to specify exactly what they want and need. “I want a website that pops” isn’t going to cut it.

  • A_A · +1/−0 · 2 months ago

    It will take many years, and designs will change considerably, before we are there.

  • OmnislashIsACloudApp · +1/−0 · 2 months ago

    people look at this stuff as a yes or no and that’s a major misunderstanding.

    I work in tech, and I can tell you 100% you could not just give a job to AI and call it a day.

    I cannot even imagine this type of response generation ever being capable of that without developing some sort of true intelligence if for no other reason than to turn bad prompts by people who do not understand what they want or what is possible into functional projects.

    that said, what I do believe is possible is that it makes like 5 to 10% of the job a little bit faster. programming is like 10 to 20% writing code and 80 to 90% understanding what that code should be and why it isn’t working that way yet.

    Even the code you get from it is generally wrong but sometimes useful.

    best case scenario I could see right now is not that it replaces jobs but that it makes people more effective, kind of like giving a framer a nail gun instead of a box of nails and a hammer except not that big of an efficiency gain.

    ultimately this might mean you do the job with 8 people instead of 10, or something like that.

    if it reduced the total number of jobs because it was a tool that made people more effective - did it take the job away?

  • tal · +2/−2 · 2 months ago

    In the long run, sure.

    In the near term? No, not by a long shot.

    There are some tasks we can automate, and that will happen. That’s been a very long-running trend, though; it’s nothing new. People generally don’t write machine language by physically flipping switches these days; many decades of automation have happened since then.

    I also don’t think that a slightly-tweaked latent diffusion model, of the present “generative AI” form, will get all that far, either. The fundamental problem, taking an incomplete specification in human language and translating it to a precise set of rules in machine language while making use of knowledge of the real world, isn’t something that I expect you can do very effectively by training on an existing corpus.

    The existing generative AIs work well on tasks where you have a large training corpus that maps from something like human language to an image. The resulting images don’t have a lot by way of hard constraints on their precision; you can illustrate that by generating a batch of ten images for a given prompt: they might all look different, but a fair number look decent enough.

    I think that some of that is because humans typically process images and language in a way that is pretty permissive of errors; we rely heavily on context and our past knowledge about the real world to come up with the correct meaning. An image just needs to “cue” our memories and understanding of the world. We can see images that are distorted or stylized, or see pixel art, and recognize it for what it is.

    But that’s not what a CPU does. Machine language is not very tolerant of errors.
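    A toy illustration of that intolerance (hypothetical code, not from any real project): a single small slip in otherwise plausible code changes the output entirely, where the same degree of error in an image or a sentence would go unnoticed by a human.

```python
def checksum(data):
    # Intended behaviour: sum of all bytes modulo 256.
    return sum(data) % 256

def checksum_buggy(data):
    # One small slip, stopping at len(data) - 1, silently drops the
    # last byte. A human reader barely notices the difference; the CPU
    # does not "fill in" the intended meaning the way we do with
    # distorted images or garbled prose.
    return sum(data[: len(data) - 1]) % 256

packet = bytes([10, 20, 30])
print(checksum(packet))        # 60
print(checksum_buggy(packet))  # 30
```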

    So I’d expect a generative AI to be decent at putting out content intended to be consumed by humans – and we have, in fact, had a number of impressive examples of that working. But I’d expect it to be less good at putting out content intended to be consumed by a CPU.

    I think that that lack of tolerance for error, plus the need to pull in information from the real world, is going to make translating human language to machine language less of a good match than translating human language to human language or human language to human-consumable image.

  • key · +2/−3 · 2 months ago

    “software developer says ai will not replace software developers” feels very John Henry

    • Eager Eagle · +4/−0 · 2 months ago

      tbh that is vastly more reliable than “seller of hardware used to train AI models says AI will replace developers”