• CanadaPlus@lemmy.sdf.org · 1 point · edited 2 hours ago

          You know, I’d be interested to know what the critical size you can get to with that approach is before it becomes useless.

          • ByteOnBikes@slrpnk.net · 1 point · edited 36 minutes ago

            It can become pretty bad quickly, even on a small project with only 15-20 files. I’ve been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.

            And while it’s incredibly impressive how it creates all the steps, it then goes into chaos mode, where it will start ignoring all the rules. It’ll start changing tests, start pulling in random libraries, not at all thinking holistically about how everything fits together.

            Then you try to reel it in, and it continues to run rampant. And for me, that’s when I either take the wheel or roll back.

            I highly recommend every programmer watch it in action.

  • SoftestSapphic@lemmy.world · +51/−4 · edited 22 hours ago

    Watching the serious people trying to use AI to code gives me the same feeling as the cybertruck people exploring the limits of their car. XD

    “It’s terrible and I should hate it, but gosh, isn’t it just so cool?”

    I wish I could get so excited over disappointing garbage.

    • person420@lemmynsfw.com · 3 points · 1 hour ago

      You definitely can use AI to code; the catch is you need to know how to code first.

      I use AI to write code for mundane tasks all the time. I also review and integrate the code myself.

      • prettybunnys@sh.itjust.works · 1 point · 1 hour ago

        The AI code my “expert in a related but otherwise not helpful field” coworker writes helps me have a lot of extra work to do!

    • kkj@lemmy.dbzer0.com · +30/−1 · 16 hours ago

      And that’s what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it’s imitating, but with zero understanding of why the original looked that way.

      • CanadaPlus@lemmy.sdf.org · +4/−8 · edited 7 hours ago

        I mean, there’s about a billion ways it’s been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That’s how I know I and other humans have understanding, after all.

        What it’s not is aligned to care about anything other than making plausible-looking text.

        • Jtotheb@lemmy.world · 7 points · 5 hours ago

          Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

          Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

          And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.

          • CanadaPlus@lemmy.sdf.org · +1/−1 · edited 2 hours ago

            Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

            You got the “originality” part there, right? I’m talking about tasks that never came close to being in the training data. Would you like me to link some of the research?

            Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

            Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It’s true that one is based on continuous floats and the other on dynamic peaks, but the end result is often remarkably similar in function and behavior.

  • Xerxos@lemmy.ml · +85/−2 · 1 day ago

    All programs can be written with one less line of code. All programs have at least one bug.

    By the logical consequence of these axioms, every program can be reduced to one line of code - that doesn’t work.

    One day AI will get there.

    • gmtom@lemmy.world · 11 points · 21 hours ago

      All programs can be written with one less line of code. All programs have at least one bug.

      The humble “Hello world” would like a word.

      • Amberskin@europe.pub · 20 points · 16 hours ago

        Just to boast my old-timer credentials:

        There is a utility program, IEFBR14, in IBM’s mainframe operating system z/OS that has been there since the 60s.

        It has just one assembly instruction: a BR 14, which basically means ‘return’.

        The first version was bugged and IBM had to issue a PTF (a patch) to fix it.

        • DaPorkchop_@lemmy.ml · 9 points · 9 hours ago

          Okay, you can’t just drop that bombshell without elaborating. What sort of bug could exist in a program which contains a single return instruction?!?

          • Amberskin@europe.pub · 2 points · 6 hours ago

            It didn’t clear the return code. In mainframe jobs, successful executions are expected to return zero (in the machine’s R15 register).

            So in this case, fixing the bug required adding an instruction instead of removing one.

        • Rose@slrpnk.net · 2 points · 7 hours ago

          Reminds me of how in some old Unix system, /bin/true was a shell script.

          …well, if it needs to just be a program that returns 0, that’s a reasonable thing to do. An empty shell script returns 0.

          Of course, since this was an old proprietary Unix system, the shell script had a giant header comment that said this is proprietary information and if you disclose this the lawyers will come at ya like a ton of bricks. …never mind that this was a program that literally does nothing.

      • phx@lemmy.ca · 9 points · 16 hours ago

        You can fit an awful lot of Perl into one line too if you minimize it. It’ll be completely unreadable to almost anyone, but it’ll run.

  • LanguageIsCool@lemmy.world · 46 points · 1 day ago

    I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

    • MonkeMischief@lemmy.today · 12 points · 1 day ago

      It will have consumed the GigaWattHours capacity of a few suns and all the moisture in our solar system, but by Jeeves, we’ll get there!

      …but it won’t be that impressive once we remember concepts like “monkey, typing, Shakespeare” were already embedded in the training data.

  • 1984@lemmy.today · 35 points · edited 1 day ago

    It’s like having a junior developer with a world of confidence who just changes shit and spends hours breaking things and trying to fix them, while we pay big tech for the privilege of watching the chaos.

    I asked ChatGPT to give me a simple Squid proxy config today that blocks everything except HTTPS. It confidently gave me one, but of course it didn’t work: it let through HTTP, and despite many attempts to get a config that actually did that, it just failed.
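
    For reference, the thing I was asking for is only a handful of lines. A rough sketch of what such a config looks like (untested; the ACL names are arbitrary):

    ```
    # squid.conf sketch: allow HTTPS (CONNECT to port 443) only, block plain HTTP
    http_port 3128
    acl localnet src 192.168.0.0/16       # adjust to your network
    acl SSL_ports port 443
    acl CONNECT method CONNECT
    http_access deny CONNECT !SSL_ports   # tunnels only to port 443
    http_access deny !CONNECT             # refuse plain-HTTP requests outright
    http_access allow localnet
    http_access deny all
    ```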

    So yeah, in the end I have to learn Squid syntax anyway, which I guess is fine, but I spent hours trying to get a working config because we pay for ChatGPT to do exactly that…

    • Nalivai@lemmy.world · 4 points · 9 hours ago

      I have a friend who swears by LLMs; he says they help him a lot. I once watched him do it, and the experience was exactly as you described. He wasted a couple of hours fighting with a bullshit generator just to do everything himself anyway. I asked him wouldn’t it be better not to waste that time, but he didn’t really see the problem; he gaslit himself into believing that fighting with the idiot machine helped.

    • merc@sh.itjust.works · 21 points · 1 day ago

      It confidently gave me one

      IMO, that’s one of the biggest “sins” of the current LLMs, they’re trained to generate words that make them sound confident.

      • KairuByte@lemmy.dbzer0.com · 9 points · 23 hours ago

        They aren’t explicitly trained to sound confident, that’s just how users tend to talk. You don’t often see “I don’t know but you can give this a shot” on Stack Overflow, for instance. Even the incorrect answers coming from users are presented confidently.

        Funnily enough, lack of confidence in response is something I don’t think LLMs are currently capable of, since it would require contextual understanding of both the question, and the answer being given.

        • merc@sh.itjust.works · 4 points · 20 hours ago

          No, I’m sure you’re wrong. There’s a certain cheerful confidence that you get from every LLM response. It’s this upbeat “can do attitude” brimming with confidence mixed with subservience that is definitely not the standard way people communicate on the Internet, let alone Stack Overflow. Sure, sometimes people answering questions are overconfident, but it’s often an arrogant kind of confidence, not a subservient kind of confidence you get from LLMs.

          I don’t think an LLM can sound like it lacks in confidence for the right reasons, but it can definitely pull off lack of confidence if it’s prompted correctly. To actually lack confidence it would have to have an understanding of the situation. But, to imitate lack of confidence all it would need to do is draw on all the training data it has where the response to a question is one where someone lacks confidence.

          Similarly, it’s not like it actually has confidence normally. It’s just been trained / meta-prompted to emit an answer in a style that mimics confidence.

          • locuester@lemmy.zip · 2 points · edited 16 hours ago

            ChatGPT went through a phase of overly bubbly, upbeat responses, though they’ve chilled it out since. Not sure if that’s what you saw.

            One thing is for sure with all of them, they never say “I don’t know” because such responses aren’t likely to be found in any training data!

            It’s probably part of some system level prompt guidance too, like you say, to be confident.

            • merc@sh.itjust.works · 1 point · 15 hours ago

              I think “I don’t know” might sometimes be found in the training data. But, I’m sure they optimize the meta-prompts so that it never shows up in a response to people. While it might be the “honest” answer a lot of the time, the makers of these LLMs seem to believe that people would prefer confident bullshit that’s wrong over “I don’t know”.

        • derpgon@programming.dev · 3 points · 22 hours ago

          SO answers and questions are usually edited multiple times to sound professional and confident, and to be correct.

    • fibojoly@sh.itjust.works · +19/−1 · 1 day ago

      Man, I can’t wait to try out generative AI to generate config files for mission-critical stuff! Imagine paying all of us devops wankers when my idiot boss can just ask ChatGPT to sort out all this legacy mess we’re juggling with on the daily!

  • Pennomi@lemmy.world · +77/−8 · 1 day ago

    To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

    LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

    • iAvicenna@lemmy.world · 4 points · 13 hours ago

      I am on you with this one. It is also very helpful in argument heavy libraries like plotly. If I ask a simple question like “in plotly how do I do this and that to the xaxis” etc it generally gives correct answers, saving me having to do internet research for 5-10 minutes or read documentations for functions with 1000 inputs. I even managed to get it to render a simple scene of cloud of points with some interactivity in 3js after about 30 minutes of back and forth. Not knowing much javascript, that would take me at least a couple hours. So yeah it can be useful as an assistant to someone who already knows coding (so the person can vet and debug the code).
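
      For instance, the kind of question it reliably gets right looks like this (a sketch; the specific axis tweaks are made up):

      ```python
      import plotly.graph_objects as go

      fig = go.Figure(go.Scatter(x=[1, 2, 3], y=[2, 1, 3]))
      # "in plotly, how do I do this and that to the xaxis":
      fig.update_xaxes(title_text="time (s)", tickangle=45, showgrid=False)
      fig.show()
      ```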

      Though if you weigh the pros and cons of how LLMs are used (tons of fake internet garbage, tons of energy used, very convincing disinformation bots), I am not convinced the benefits are worth the damage.

        • iAvicenna@lemmy.world · 1 point · edited 58 minutes ago

          If you do it through AI, you can still learn. After all, I go through the code to understand what is going on. And for not-so-complex tasks, LLMs are good at commenting code (though they can bullshit from time to time, so you have to approach them critically).

          But anyway, the stuff I ask LLMs for is generally just one-off tasks. If I need to use something more frequently, I do prefer reading up for a more in-depth understanding.

      • Pennomi@lemmy.world · +34/−9 · edited 1 day ago

        Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and in many cases that’s still true, e.g. I don’t have the AI do anything with OpenCV or complex math), but it’s very routine these days for common use cases like web development.

        • Maalus@lemmy.world · +14/−1 · 1 day ago

          I recently tried it for scripting simple things in Python for a game. Y’know, change a character’s color if they are targeted. It output a shitton of word salad and code about my specific use case, in the specific scripting jargon for the game.

          It was all based on “Misc.changeHue(player)”. A function that doesn’t exist and never has, because the game is unable to color other mobs/players like that for scripting.

          Anything I tried with AI ends up the same way: broken code within 10 lines of a script, hallucinations and bullshit spewed as the absolute truth. Anything out of the ordinary is met with “yes, this can totally be done, this is how”, and the “how” doesn’t work, and after sifting through forums / asking devs you find out “sadly that’s impossible” or “we don’t actually use CPython, so libraries don’t work like that”, etc.

          • Pennomi@lemmy.world · +6/−11 · 1 day ago

            Well yeah, it’s working from an incomplete knowledge of the code base. If you asked a human to do the same they would struggle.

            LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.

            • Maalus@lemmy.world · +13/−1 · 1 day ago

              No, a human would just find an API that is publicly available. And the fact that it knew the static class “Misc” means it knows the API. It just hallucinated and responded with bullcrap. The entire concept can be summarized as “I want to color a player’s model in GAME using Python and SCRIPTING ENGINE”.

          • Pennomi@lemmy.world · +7/−2 · 1 day ago

            Not sure what you mean, boilerplate code is one of the things AI is good at.

            Take a straightforward Django project for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI.
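
            A sketch of what I mean (the model is hypothetical; the admin file is the part the AI fills in):

            ```python
            # models.py (given)
            from django.db import models

            class Article(models.Model):
                title = models.CharField(max_length=200)
                published = models.DateTimeField(auto_now_add=True)

            # admin.py (pure boilerplate derived from the model)
            from django.contrib import admin
            from .models import Article

            @admin.register(Article)
            class ArticleAdmin(admin.ModelAdmin):
                list_display = ("title", "published")
            ```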

            More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.

        • GreenMartian@lemmy.dbzer0.com · 9 points · 1 day ago

          They have been pretty good on popular technologies like python & web development.

          I tried to do Kotlin for Android, and they kept tripping over themselves; it’s hilarious and frustrating at the same time.

          • doktormerlin@feddit.org · +5/−1 · 1 day ago

            I use ChatGPT for Go programming all the time and it rarely has problems. I think Go is more niche than Kotlin.

            • Opisek@lemmy.world · 2 points · 1 day ago

              I get a bit frustrated at it trying to replicate everyone else’s code in my code base. Once my project became large enough, I felt it necessary to implement my own error handling instead of go’s standard, which was not sufficient for me anymore. Copilot will respect that for a while, until I switch to a different file. At that point it will try to force standard go errors everywhere.

              • doktormerlin@feddit.org · 1 point · 15 hours ago

                Yes, you can’t use Copilot to generate files matching your code structure if you start from scratch. I usually start by coding a scaffold and then use Copilot to complete the rest, which works quite well most of the time. Another possibility is to create comment templates that give instructions to Copilot, so every new Go file starts with coding-structure comments, and Copilot will respect that. Junior devs might also respect that, but I am not so sure about them.

        • Boomkop3@reddthat.com · +8/−2 · 1 day ago

          I tried; it can’t get through four lines without messing up. Unless I give it tasks that are so stupendously simple that I’m faster typing them myself while watching TV.

        • wischi@programming.dev · +9/−3 · 1 day ago

          Play ASCII tic tac toe against 4o a few times. A model that can’t even draw a tic tac toe game consistently shouldn’t write production code.

    • wischi@programming.dev · +15/−4 · edited 17 hours ago

      Practically all LLMs aren’t good for any logic. Try to play ASCII tic-tac-toe against one. All GPT models lost against my four-year-old niece, and I wouldn’t trust her to write production code 🤣

      Once a single model (it doesn’t have to be an LLM) can beat Stockfish in chess, AlphaGo in Go, and my niece in tic-tac-toe, and can one-shot (on the surface, scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

      Just take a look at the dotnet runtime source code, where Microsoft employees currently try to work with Copilot, which writes PRs with errors like forgetting to add files to projects, writing code that doesn’t compile, fixing symptoms instead of underlying problems, etc. (just take a look yourself).

      I don’t say that AI (especially AGI) can’t replace humans. It definitely can and will, it’s just a matter of time, but state-of-the-art LLMs are basically just extremely good “search engines” or interactive versions of “stack overflow”, and not good enough to do real “thinking tasks”.

      • MonkeMischief@lemmy.today · +5/−1 · 1 day ago

        extremely good “search engines” or interactive versions of “stack overflow”

        Which is such a decent use of them! I’ve used it on my own hardware a few times just to say “Hey give me a comparison of these things”, or “How would I write a function that does this?” Or “Please explain this more simply…more simply…more simply…”

        I see it as a search engine that connects nodes of concepts together, basically.

        And it’s great for that. And it’s impressive!

        But all the hype monkeys out there are trying to pedestal it like some kind of techno-super-intelligence, completely ignoring what it is good for in favor of “It’ll replace all human coders” fever dreams.

      • Pennomi@lemmy.world · +10/−8 · 1 day ago

        Cherry picking the things it doesn’t do well is fine, but you shouldn’t ignore the fact that it DOES do some things easily also.

        Like all tools, use them for what they’re good at.

        • wischi@programming.dev · +9/−5 · 1 day ago

          I don’t think it’s cherry-picking. Why would I trust a tool with way more complex logic when it can’t even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons typically requires a lot of planning and thinking, and those models clearly don’t have the capability to plan and think when they lose tic-tac-toe games.

          • Pennomi@lemmy.world · +8/−12 · 1 day ago

            Why would I trust a drill press when it can’t even cut a board in half?

            • wischi@programming.dev · +14/−2 · edited 1 day ago

              A drill press (or its inventors) doesn’t claim it can do that, but with LLMs they claim to replace humans in a lot of thinking tasks. They even brag with test benchmarks, claim Bachelor’s, Master’s and PhD-level intelligence, and call them “reasoning” models, but they still fail to beat my niece at tic-tac-toe, and she, by the way, doesn’t have a PhD in anything 🤣

              LLMs are typically good at things that happened a lot during training. If you are writing software, there certainly are things the LLM saw a lot of during training. But this actually is the biggest problem: it will happily generate code that might look OK, even during PR review, but might blow up in your face a few weeks later.

              If they can’t handle things they even saw during training (but sparsely, like tic-tac-toe), they won’t produce code you should use in production. I wouldn’t trust any junior dev who doesn’t set their O right next to the two Xs.

              • Pennomi@lemmy.world · +2/−1 · 1 day ago

                Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

                I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

                • wischi@programming.dev · 2 points · 1 day ago

                  Totally agree with that, and I don’t think anybody would see that as controversial. LLMs are actually good at a lot of things, but not thinking, and typically not if you are an expert. That’s why LLMs know more about the anatomy of humans than I do, but probably not more than most people with a medical degree.

              • wischi@programming.dev · +4/−2 · 1 day ago

                I can’t speak for Lemmy, but I’m personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree with that), LLMs are a tool, and we should use that tool for the things it’s good for. But “thinking” is not one of the things LLMs are good at, and software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates can help with that, and never was I bottlenecked by my typing speed when writing software.

                It was always the time I needed to plan the structure of the software, design good and correct abstractions, and the overall architecture. Exactly the things LLMs can’t do.

                Copilot even fails to stick to the coding style of the same file, just because it saw a different style more often during training.

                • Zexks@lemmy.world · +1/−1 · 20 hours ago

                  “I’m not against LLMs, I just never say anything useful about them and constantly point out how I can’t use them.” The other guy is right, and you just prove his point.

    • Opisek@lemmy.world · 8 points · 1 day ago

      Perhaps 5 LOC. Maybe 3. And even then I’ll analyze every single character it wrote. And then I will, in fact, find bugs. Most often it hallucinates some functions that would be fantastic to use - if they existed.

      • Buddahriffic@lemmy.world · 6 points · 1 day ago

        My guess is that what’s going on is there’s tons of pseudocode out there that looks like a real language but has placeholder functions that don’t exist, and the LLM noticed the pattern to the point where it just makes up functions, not realizing they need to be implemented (because LLMs don’t realize things, they just pattern-match very complex patterns).

  • haui@lemmy.giftedmc.com · +75/−2 · 1 day ago

    Welp. It’s actually very in line with the late-stage capitalist system. All polish, no innovation.

  • coherent_domain@infosec.pub · 146 points · edited 1 day ago

    The image is taken from Zhihu, a Chinese Quora-like site.

    The prompt asks for the design of a certain app, and the response seems to suggest some pages, so it doesn’t appear to match the text of the post.

    But this in general aligns with my experience coding with LLMs. I was trying to upgrade my eslint from 8 to 9, and asked ChatGPT to convert my eslint config file, and it proceeded to spit out complete garbage.

    I thought this would be a good task for an LLM, because eslint config is very common and well-documented, and the transformation is very mechanical, but it just cannot do it. So I proceeded to read the documentation and finished the migration in a couple of hours…

    • Lucy :3@feddit.org · +72/−1 · 2 days ago

      I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of “use printf for output” -> “there’s no printf with bare metal as the target” -> “use solution X” -> “doesn’t work” -> “use printf for output”… I had enough.

      • Björn Tantau@swg-empire.de · +17/−1 · 1 day ago

        I used ChatGPT to help me make a package with SUSE’s Open Build Service. It was actually quite good. I was pulling my hair out for a while until I noticed that the project I wanted to build had changed URLs and I was using an outdated one.

        In the end I just had to get one last detail right. And then my ChatGPT 4 allowance dried up, they dropped me back down to 3, and it couldn’t do anything. So I had to use my own brain, ugh.

        • noctivius@lemm.ee · 7 points · 1 day ago

          ChatGPT is the worst among the big chatbots at writing code. From my experience: DeepSeek > Perplexity > Gemini > Claude.

      • Scrubbles@poptalk.scrubbles.tech · 5 points · 1 day ago

        Yeah, you can tell it just ratholes on trying to force one concept to work, rather than realizing it’s not the correct concept to begin with.

        • formulaBonk@lemm.ee · 5 points · 1 day ago

          That’s exactly what most junior devs do when stuck. They rehash the same solution over and over, and it almost seems like LLMs trained on codebases infer that behavior from commit histories etc.

          It almost feels like one of those “we taught him these tasks incorrectly as a joke” scenarios.

      • qqq@lemmy.world · 2 points · edited 1 day ago

        QEMU makes it pretty painless to hook up gdb, just FYI; you should look into that (the -s -S flags start a gdbserver on port 1234 and pause the CPU at reset). I think you can also have it provide a memory-mapped UART for I/O, which you can use with newlib to get printf debugging.

    • MudMan@fedia.io · +23/−4 · 2 days ago

      It’s pretty random in terms of what is or isn’t doable.

      For me it’s a big performance booster because I genuinely suck at coding and don’t do too much complex stuff. As a “clean up my syntax” and a “what am I missing here” tool it helps, or at least helps in figuring out what I’m doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn’t do without one (or at least without getting berated by some online dick who doesn’t think he has time to give you an answer but sure has time to set you on a path towards self-discovery).

      How much of a benefit it is for a professional, I couldn’t tell. I mean, it’s definitely not a replacement. Maybe it helps to read something old or poorly commented quickly? Redundant tasks in very commonplace mainstream languages?

      I don’t think it’s useless, but if you ask it to do something by itself, you can’t trust that it’ll work without significant additional effort.

        • MudMan@fedia.io · +6/−7 · 1 day ago

          Sorta, kinda. It depends on where you put that line. I think because online drama is fun, once we got the “vibe coding” name we moved to the assumption that all AI assistance is vibe coding. But in practice there’s the percentage of what you do that you know how to do, the percentage you vibe code because you can’t figure it out off the top of your head, and the percentage you just can’t do without researching, because the LLM can’t do it effectively or what it produces is too crappy to use as part of something else.

          I think if the assumption is that you should just “git gud” and not take advantage of that grey zone where you can sooort of figure it out by asking an AI instead of going down a Google rabbit hole, then the performative AI hate is setting itself up for defeat, because there’s a whole range of skill levels where that is actually helpful for some stuff.

          If you want to deny that there’s a difference between that and just making code soup by asking a language model to build you entire pieces of software… well, then you’re going to be obviously wrong and a bunch of AI bros are going to point at the obvious way you’re wrong and use that to pretend you’re wrong about the whole thing.

          This is basic online disinformation playbook stuff and I may suck at coding, but I know a thing or two about that. People with progressive ideas should get good at beating those one of these days, because that’s a bad outcome.

      • wise_pancake@lemmy.ca · +9/−1 · 1 day ago

        It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.

        It’s very useful for throwaway work like writing scripts and automations.

        It’s useful, but not a 10x multiplier like all the CEOs claim it is.

        • MudMan@fedia.io · +3/−1 · 1 day ago

          Fully agreed. Everybody is betting it’ll get there eventually and trying to jockey for position being ahead of the pack, but at the moment there isn’t any guarantee that it’ll get to where the corpos are assuming it already is.

          Which is not the same as not having better autocomplete/spellcheck/“hey, how do I format this specific thing” tools.

          • wise_pancake@lemmy.ca · +5/−1 · 1 day ago

            Yeah, it’s still super useful.

            I think the execs want to see dev salaries go to zero, but these tools make more sense as an accelerator, like giving an accountant excel.

            I get a bit more done faster, that’s a solid value proposition.

      • vivendi@programming.dev · 5 points · 1 day ago

        It’s not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts

        • MudMan@fedia.io · 5 points · 1 day ago

          Yeah, the AI corpos are putting a lot of effort into parsing big contexts right now. I suspect because they think (probably correctly) that coding is one of the few areas where they could get paid if their AIs didn’t have the memory of a goldfish.

          And absolutely agreed that making sure the FOSS alternatives keep pace is going to be important. I’m less concerned about hating the entire concept than I am about making sure they don’t figure out a way to keep every marginally useful application behind a corporate ecosystem walled garden exclusively.

          We’ve been relatively lucky in that the combination of PR brownie points and general crappiness of the commercial products has kept an incentive to provide a degree of access, but I have zero question that the moment one of these things actually makes money they’ll enshittify the freely available alternatives they control and clamp down as much as possible.

    • Cethin@lemmy.zip · 9 points · 1 day ago

      I use it sometimes, usually just to create boilerplate. For actual functionality it’s hit or miss, and often it ends up taking more time to fix than it would have taken to write it myself.

    • TrickDacy@lemmy.world · +5/−1 · 1 day ago

      I wouldn’t say it’s accurate that this was a “mechanical” upgrade, having done it a few times. They even have a migration tool which you’d think could fully do the upgrade, but out of the probably 4-5 projects I’ve upgraded, the migration tool always produced a config that errored and needed several obscure manual changes to get working. All that to say, it seems like a particularly bad candidate for LLMs.

      • Scrubbles@poptalk.scrubbles.tech · +2/−1 · 1 day ago

        No, still “perfect” for LLMs. There’s nuance in seeing which patterns are being used, and it should be able to handle that perfectly. Enough people on Stack Overflow asked enough questions; if AI is what Google and Microsoft claim it is, it should have handled this.

        • TrickDacy@lemmy.world · 1 point · 1 day ago

          I searched this issue and didn’t find anything very helpful. The new config format can be written in many slightly different ways, and there is a lot of variation in how your plugins and presets can be set up. It made perfect sense to me that the LLM couldn’t do this upgrade for the OP, since one tiny mistake means it won’t work at all, usually with a weird error.

      • coherent_domain@infosec.pub · +2/−2 · edited 1 day ago

        Then I am quite confused about what an LLM is supposed to help me with. I am not a programmer, and I am certainly not a TypeScript programmer. This is why I postponed my eslint upgrade for half a year: I don’t have a lot of experience in TypeScript, besides one project in my college webdev class.

        So if I can sit down for a couple of hours and port my rather simple eslint config, which is arguably the most mechanical task I have seen in my limited programming experience, and the LLM can’t produce anything close to correct, then I am rather confused about what “real programmers” would use it for…

        People here say boilerplate code, but honestly I don’t quite recall the last time I needed to write a lot of boilerplate code.

        I have also tried to use LLMs to debug SELinux and a Docker container on my homelab; unfortunately, they were absolutely useless for that as well.

        • TrickDacy@lemmy.world · +4/−1 · edited 1 day ago

          With all due respect, how can you weigh in on programming so confidently when you admit to not being a programmer?

          People tend to despise or evangelize LLMs. To me, github copilot has a decent amount of utility. I only use the auto-complete feature which does things like save me from typing 2-5 predictable lines of code that devs tend to type all the time. Instead of typing it all, I press tab. It’s just a time saver. I have never used it like “write me a script or a function that does x” like some people do. I am not interested in that as it seems like a sad crutch that I’d need to customize so much anyway that I may as well skip that step.

          Having said that, I’m noticing the Copilot autocomplete seems to be getting worse over time. I’m not sure why it’s worsening, but if it ever feels not worth it anymore, I’ll drop it, no harm no foul. The binary thinkers tend to think you’re either a good dev who despises all forms of AI, or you’re an idiot who tries to have a robot write all your code for you. As a dev for the past 20 years, I see no reason to choose between those two opposites. It can be useful in some contexts.

          PS: Did you try the eslint 8 -> 9 migration tool? If your config was simple enough for it, it likely would’ve done all or almost all the work for you… It didn’t fully work for me; I had to resolve several errors, because I tend to add several custom plugins, presets, and rules that differ across projects.

          • coherent_domain@infosec.pub · 2 points · edited 1 day ago

            Sorry, the language of my original post might seem confrontational, but that is not my intention; I’m trying to find value in LLMs, since people are excited about them.

            I am not a professional programmer, nor do I program any industrial-sized project at the moment. I am a computer scientist, and my current research project does not involve much programming. But I do teach programming to undergrad and master’s students, so I want to understand what a good use case for this technology is, and when I can expect it to be helpful.

            Indeed, I am frustrated by this technology, and that might have shifted my language further than I intended. Everyone is promoting this as a magically helpful tool for CS and math, yet I fail to see any good applications for either in my work, despite going back to it every couple of months or so.

            I did try @eslint/migrate-config; unfortunately, it added a good amount of bloat and ended up not working.

            So I just gave up and read the docs.

            • TrickDacy@lemmy.world · 2 points · 1 day ago

              Gotcha. No worries. I figured you were coming in good faith but wasn’t certain. Who is pushing LLMs for programming that hard? In my bubble, which often includes Lemmy, most people HATE them for all uses. I get that tech bros and LinkedIn crazies probably push this tech for coding a lot, but outside of that, most devs I know IRL are either lukewarm on LLMs for dev work or dislike them.

    • Saleh@feddit.org · +50/−1 · 2 days ago

      My uncle. Very smart very neuronal. He knows the entire Internet, can you imagine? the entire internet. Like the mails of Crooked Hillary Clinton, that crook. You know what stands in that Mails? my uncle knows. He makes the best code. The most beautiful code. No one has ever seen code like it, but for him, he’s a genius, like i am, i have inherited all his genius genes. It is very easy. He makes the best code. Sometimes he calls me and asks me: you are even smarter than i am. Can you look at my code?

  • sturger@sh.itjust.works · 12 points · 1 day ago

    Honest question: I haven’t used AI much. Are there any AIs or IDEs that can reliably rename a variable across all instances in a medium-sized Python project? I don’t mean the easy stuff that an editor can do (e.g. rename QQQ in all instances and get lucky that there are no conflicts). I mean being able to differentiate between local and/or library variables, so it doesn’t change them, only the correct versions.
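
    For concreteness, the scope-aware kind of rename I mean is roughly what Python’s rope library does. A sketch (the paths and names are hypothetical):

    ```python
    from rope.base.project import Project
    from rope.refactor.rename import Rename

    project = Project("my_project")                # hypothetical project root
    mod = project.get_resource("pkg/module.py")    # file that defines the variable
    offset = mod.read().index("old_name")          # offset of the identifier
    changes = Rename(project, mod, offset).get_changes("new_name")
    project.do(changes)  # applies a scope-aware rename across the project
    ```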

    • trolololol@lemmy.world · 17 points · 24 hours ago

      I’m going to laugh in Java, where this has always been possible and reliable. Not “AI reliable”, but “expert reliable”. Because of static types.

    • derpgon@programming.dev · 10 points · 22 hours ago

      IntelliJ IDEA: if it knows it is the same variable, it will rename it. It usually works, unless the codebase is fucked up and uses eval, or obscure constructs like saving a variable name into a string and dynamically invoking it.

    • barsoap@lemm.ee · 23 points · 1 day ago

      Not reliably, no. Python is too dynamic to do that kind of thing without solving general program equivalence which is undecidable.

      Use a static language, problem solved.
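
      A tiny illustration of why no static rename can be fully reliable in Python (the names are hypothetical):

      ```python
      class Player:
          def heal(self) -> None:
              print("healed")

      # No rename tool can safely see this call site: the identifier
      # only comes into existence at runtime.
      method_name = "he" + "al"
      getattr(Player(), method_name)()  # still calls heal()
      ```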

      • sturger@sh.itjust.works · 4 points · 1 day ago

        Yeah, I’m looking for something that understands the semantics of the language well enough to rename intelligently.

    • lapping6596@lemmy.world · 11 points · 1 day ago

      I use pycharm for this and in general it does a great job. At work we’ve got some massive repos and it’ll handle it fine.

      The “find” tab shows where it’ll make changes and you can click “don’t change anything in this directory”

      • setVeryLoud(true);@lemmy.ca · +4/−1 · 23 hours ago

        Yes, all of JetBrains’ tools handle project-wide renames practically perfectly, even in weirder things like Angular projects where templates may reference variables.

    • killabeezio@lemm.ee · 6 points · 1 day ago

      IntelliJ is actually pretty good at this. Besides that, Cursor or Windsurf should be able to. I was using Cursor for a while, and when I needed to refactor something, it was pretty good at picking that up. It kept crashing on me though, so I am now trying Windsurf and some other options. I miss Cursor’s autocomplete features though, as I would use them all the time to fill out boilerplate stuff as I write.

      The one key difference with Cursor and Windsurf compared to other products is that they will look at the entire context again for any change, or at least a bit of it. You make a change, and they look at whether they need to make changes elsewhere.

      I still don’t trust AI to do much, but it’s an excellent helper.

    • LeroyJenkins@lemmy.world · 3 points · 1 day ago

      Most IDEs are pretty decent at it if you configure them correctly. I used IntelliJ and it knows the difference. Use the refactor feature and it’ll crawl references, not just rename all instances.

    • pinball_wizard@lemmy.zip · +4/−2 · 1 day ago

      Okay, I realize I’m that person, but for those interested:

      tree, cat and sed get the job done nicely.
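
      A rough sketch of the idea (purely textual, so unlike a scope-aware IDE rename it will happily clobber shadowed or dynamically built names):

      ```sh
      grep -rl 'old_name' src/ | xargs sed -i 's/\bold_name\b/new_name/g'
      ```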

      And… it’s my nap time, now. Please keep the Internet working, while I’m napping. I have grown fond of parts of it. Goodnight.

  • Drunk & Root@sh.itjust.works · 13 points · 1 day ago

    Can’t wait to see “we use AI agents to generate well-structured non-functioning code”, with off-centered everything and non-working embeds on the website.

  • TheReturnOfPEB@reddthat.com · 16 points · edited 1 day ago

    I’m pretty sure that is how we got CORBA.

    Now just make it construct UML models, and then abandon this and move on to version 2.