• sobchak@programming.dev · 2 hours ago

    Meh, disagree with a lot of this.

    AI tools can generate functional, adequate, perfectly average code

    Not in my experience.

    The outsourcing era taught us that the expensive part of software was never writing it. It was understanding it well enough to change it safely, to debug it under pressure, to explain to the next person why a particular decision was made at 2 a.m. on a Tuesday.

    Since AI is adequate, just have AI change, debug, and explain it. You don’t even need devs running the AI. Have AI generate intent. Just have AI scrape Twitter for people complaining about applications they wish existed, and have the AI make them. Let AI do market research. It’s supposedly perfectly adequate.

    • fruitycoder@sh.itjust.works · 44 minutes ago

      It’s perfectly adequate at generating simple scripts if you know what it’s doing, or complex programs IF you have a “harness”, which is to say tons of well-defined scopes, design docs, coding guidelines, and a dev and test environment with written and automated unit and integration tests.

      Basically every dev’s wish list. Then you get adequate complex coding results.
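
      A minimal sketch of the “harness” idea described above: the spec is written down as tests that any generated code must pass before it is accepted. `slugify` and its tests are invented for the illustration; the reference body stands in for whatever the AI produces.

```python
# Sketch of a test harness: the written spec is encoded as unit tests,
# so "adequate" AI output is something that can actually be verified.

def slugify(title: str) -> str:
    # Stand-in for AI-generated code; the tests below pin down its scope.
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    # split() with no arguments collapses runs of whitespace.
    assert slugify("  Hello   World  ") == "hello-world"

test_slugify_basic()
test_slugify_collapses_whitespace()
```

      The point is that the harness, not the model, is what makes the output trustworthy: swap in a different implementation and the same tests still gate it.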

    • The Bard in Green@lemmy.starlightkel.xyz · 3 hours ago

      Just have AI scrape Twitter for people complaining about applications they wish existed, and have the AI make them.

      I mean… in 2026, this is probably a viable business strategy tbh.

  • luciole (they/them)@beehaw.org · 12 hours ago

    AI tools can generate functional, adequate, perfectly average code at a speed and cost that would have been unimaginable even five years ago. And like the outsourcing wave of the early 2000s, the economics are real and rational. Nobody is wrong for using these tools. The code they produce is often fine. It works. It passes tests. It might ship as-is.

    Not the first time I’ve read this kind of statement, and I always struggle to reconcile it with my personal experience. I seriously doubt the explanation is that I’m just not a “good enough prompter”. I know how to explain context from domain to tech and vice versa; that’s like, a good 20% of my job. I’d say that AI tools are good at producing code that already exists.

    The LLMs are an interface to a corpus of written material. They’ve never had a thought, a chat around the coffee machine, or any experience in the largest sense of the word. This is a hard barrier on any induction they may emulate.

    • BlameThePeacock@lemmy.ca · 8 hours ago

      You’re both correct, and also wrong.

      A lot of code already exists. Or at least in a close enough form that it can be easily adjusted to address a new situation.

      When someone comes up with an idea for a new App at this point, it’s almost never because it’s an entirely new branch of computing. It’s very likely just CRUD with a visual design, and then a small more complex algorithm to mix the data around behind the scenes.

      What’s the difference between a dating app and an automatic meal plan builder? The algorithm doesn’t care about whether or not the recipe swiped back when it matches it up to you.

      You’re right that they’re not going to be inventing entirely new things most of the time, that’s just not what’s needed of them most of the time.
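
      The dating-app/meal-planner comparison above can be sketched concretely: the “small, more complex algorithm” is the same generic scoring routine either way, and only the data fed to it differs. All names and scores here are invented for the illustration.

```python
# The same matching core serves two superficially different apps;
# only the records being scored change.

def best_match(candidate_scores: dict) -> str:
    """Return the key with the highest score, whatever the keys represent."""
    return max(candidate_scores, key=candidate_scores.get)

# Dating app: compatibility scores for profiles.
profiles = {"alex": 0.4, "sam": 0.9, "kim": 0.7}

# Meal planner: recipes scored against dietary preferences.
recipes = {"lentil curry": 0.8, "omelette": 0.5}

print(best_match(profiles))  # sam
print(best_match(recipes))   # lentil curry
```

      Everything else around that core (accounts, lists, editing records) is the CRUD part of the comment’s claim.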

      • luciole (they/them)@beehaw.org · 5 hours ago

        Fortunately software is much more than App ideas fishing for VC investments. A lot of us are building actual tools for nurses, teachers, technicians, artists, students, etc. We have to analyze these human beings’ role in society, their needs, their situation, which is different from merely preying on their attention span. Programming languages are still the most reliable way to specify how the software must behave. And once the software is done, it is merely born. It then lives through a steady flow of continuous adaptation until one day it dies as all things do. Downplaying the human condition is a mistake.

        • BlameThePeacock@lemmy.ca · 1 hour ago

          A medicine dispenser application for a nurse is still just CRUD operations for the most part. There’s nothing innovative about how the code would be written in an application like that.
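
          The claim above, reduced to code: the core of such an app is create/read/update/delete over dose records. The `Dose` fields and class names are invented for this sketch, not taken from any real dispenser system.

```python
# Minimal CRUD core for a hypothetical medicine-dispenser app,
# backed by an in-memory dict for illustration.
from dataclasses import dataclass

@dataclass
class Dose:
    patient: str
    drug: str
    mg: int

class DoseStore:
    def __init__(self):
        self._rows: dict = {}
        self._next_id = 1

    def create(self, dose: Dose) -> int:
        row_id = self._next_id
        self._rows[row_id] = dose
        self._next_id += 1
        return row_id

    def read(self, row_id: int) -> Dose:
        return self._rows[row_id]

    def update(self, row_id: int, dose: Dose) -> None:
        self._rows[row_id] = dose

    def delete(self, row_id: int) -> None:
        del self._rows[row_id]
```

          Whether that reduction also covers the safety-critical parts (dosing rules, audit trails) is exactly what the parent comment disputes.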

  • ell1e@leminal.space · 14 hours ago

    We need the equivalent investment now. If average code is cheap, then the scarce resource is no longer the ability to produce it. The scarce resource is the ability to read it, to navigate it

    You know what would help a lot with understanding the code one is working on? Writing it yourself without turning your brain off via AI.

    But that’s an insight the article somehow seems to be missing.

    • nomad@infosec.pub · 12 hours ago

      I always ask myself how many of these anti-AI warriors are actually proficient professional coders. And I’m talking like engineer level, not hobby level.

      LLMs are a tool. Give a power tool to a fool and the result is stupid at best, bloody at worst. Let’s call that vibe tooling and ask if there is a difference from vibe coding.

      Imho there is not. LLMs are a tool that can lift the quality of coding work up to a common level if used by proficient people. It helps with searching through and understanding vast outputs, as long as you know what to expect. It’s a miracle of intuition.

      It’s not a mind-reading tool that will just code your fantasy software for you. Hate it all you like, AI is here to stay; this is like hating cars in the age of horses. Cars are not magic, and neither is “AI”.

      • darkmarx@lemmy.world · 9 hours ago

        I have over 25 years of development experience. My current role is vice president of development and architecture where I lead a team of 80+ devs, QAs, and architects. By any measure, I am one of those “engineer level” developers you speak of.

        Yes, LLMs are a tool, but it’s a tool one should use sparingly. LLMs are pattern recognition machines and are great for routine, been-there-done-that type development. For anything that deviates from the norm, LLMs will try to force everything back into common patterns… even when those patterns are not correct. A well designed system can be mangled into junk because the LLM doesn’t have enough context or because something is new.

        Be skeptical of the rave reviews around coding agents and the use of LLMs for development. Much of the hype seems tied to developer skill. Less capable developers can use LLMs to appear more capable than they are. For good developers, LLMs seem to erode their skills as they rely on the tool instead of their own knowledge. I have seen this first hand.

        Overall, it seems LLMs raise skills of bad developers and hamper the skills of good developers. It’s creating a bunch of middling developers who are incapable of handling anything novel or complex.

        • nomad@infosec.pub · 4 hours ago

          Sounds good. Pretty sure you are correct on most points. Agentic coding is bullshit for sure. I’m mostly talking partner coding, code review, and some data interpretation, like screenshots of UI changes in a CI, for example.

          • Nate Cox@programming.dev · 6 hours ago

            The goalpost escalation I constantly see in these threads is both hilarious and deeply frustrating.

            “You need to be a good dev to use these!” “I am a good dev and these tools suck.”

            “No like you need to be enterprise level good” “I am an enterprise level dev with credentials far exceeding the baseline offered.”

            “No but you need to have written code recently!!” “I was writing code yesterday.”

            I am now waiting for the obligatory “well your coworkers must just be fixing all your code you screw up” because the pro-ai crowd has no argument for the tech not based on “u suk”.

            • entwine@programming.dev · 6 hours ago

              please review this Lemmy thread and come up with a good way to keep moving the goal posts so that I can feel like I’m right

              @onlinepersona prompting chatgpt right now

      • Dumhuvud@programming.dev · 11 hours ago

        LLMs are a tool that can lift up the quality of coding work

        Imagine telling on yourself like this.

        And that is right after implying that you are a “proficient professional coder” that is “like engineer level” unlike those pesky “anti ai warriors”. Jesus fucking Christ.

        • nomad@infosec.pub · 9 hours ago

          I’ve been training my own employees for years. And I’m suggesting you get a degree before playing keyboard warrior on the internet. ;)

          It makes it easy for bad coders to mask as passable, but good coders can still spot that in review.

          • Dumhuvud@programming.dev · 7 hours ago

            My entire point was in one single sentence, and yet you managed to shit out three sentences, not even remotely addressing that.

            I’m saying that if the output puked out by an LLM is of better quality than your own code, something you literally just confessed to, then you’re nothing but a hack. An impostor.

            What does the fact that you’ve been training anyone have to do with that? What does a degree, or lack thereof, have to do with anything? I’ve seen plenty of hacks employed as “seniors”, some with a CompSci degree. The kind of hacks that used to be overly reliant on StackOverflow in the past. The kind of hacks that write poorly performing garbage, yet quote Knuth’s “premature optimization is the root of all evil” (completely missing the context) when you confront them about it.

            • nomad@infosec.pub · 4 hours ago

              I’m not saying AI code is better than mine. But AI review catches quite a lot that normal humans would overlook. Pair programming works with AI just as well. Generally, agentic coding is shit. And I have nothing to prove nor get mad about. Somehow you can’t seem to bring up a sound argument, only rage. X)

              I’m running a successful business with plenty of devs trained and working for me, doing all kinds of specialized real-time engineering. You shout on Lemmy.

      • Jiral@lemmy.org · 11 hours ago

        That is right, it is a tool. But how useful will it be as a tool once it is sold by the token at real cost, where every mistake that tool makes costs money? And we are talking maybe ten times what people currently pay for Claude, at a minimum.

        Add to that the question of how the use of LLMs affects the career pipeline from junior dev to senior dev.

        There are not many tool analogies where the tool is especially good at making things look good, even when they aren’t once you dig deeper.

        • ell1e@leminal.space · 9 hours ago

          I also think there still hasn’t been a study showing a consistent, significant(!) long-term productivity gain for coders. (Other than total lines of code, but that alone is a poor measure.) The amount of new hidden bugs and other issues seems to outweigh most of the perceived gains.

          • Jiral@lemmy.org · 8 hours ago

            The key question is whether total costs along the pipeline, from requirements definition down to the final quality-controlled, fully debugged product, can be reduced at real LLM costs (not the currently vastly subsidised ones).

        • nomad@infosec.pub · 9 hours ago

          Well, I can’t disagree with that take. Skill still plays a role. You still can’t suggest people keep writing and reviewing solely by hand. That ship has sailed.

      • klankin@piefed.ca · 9 hours ago

        I mean, it’s more like self-driving cars than cars themselves; it can work, but steering wheels were created by the devs for a reason, even if most are too lazy to understand that reason.

        Like, I’d agree hand-coding in assembly is (mostly) useless these days, but honestly I feel like the efficiency problems AI is trying to solve were largely solved 50 years ago with compilers.

        (And isn’t digesting large outputs the entire point of being an engineering-level dev? Like, if you’re just there to pray to the software gods, you’d do much better as a CRUD script kiddie anyway.)

      • TrickDacy@lemmy.world · 10 hours ago

        Even if AI were the miracle people like you suggest, you’d still be destroying the environment. But it’s not miraculous. Which you, contradicting yourself, say both is and is not the case…

        • nomad@infosec.pub · 9 hours ago

          German engineer with 20 years of experience. It’s a big jump, believe me. I’m not suggesting most people use this tool the right way, nor that the industry is without flaws, but it’s like eating meat. I have no issue with it as long as it is ethically sourced.

          • ell1e@leminal.space · 9 hours ago

            All the studies I’ve found so far seem to disagree, so why should we believe you?

            https://www.anthropic.com/research/AI-assistance-coding-skills (2026 study)

            We found that using AI assistance led to a statistically significant decrease in mastery.

            Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

            https://futurism.com/artificial-intelligence/new-findings-ai-coding-overhyped (2025 study)

            But those claims appear to be massively overblown, as The Register reports, with researchers finding that productivity gains are modest at best — and at worst, that AI can actually slow down human developers.

            • nomad@infosec.pub · 4 hours ago

              That’s what I’m saying. AI does not help with speed; it potentially even takes longer. It helps with concept and design quality and completeness. For coding it’s just fancy autocomplete. Think about how LLMs can be used to improve the process instead of replacing yourself. Apply your skill with a lever instead.

          • TrickDacy@lemmy.world · 9 hours ago

            Explain how this amount of electricity use could ever be “ethically sourced”. That’s not even much of a thing for meat, which at least provides nutrients. AI slop is everywhere and most of it is not helping anyone with anything.

              • TrickDacy@lemmy.world · 3 hours ago

                Unless you get 100% of your power from solar panels, which is doubtful, you’re using solar power that could’ve gone to something actually necessary.