I have over 25 years of development experience. My current role is vice president of development and architecture where I lead a team of 80+ devs, QAs, and architects. By any measure, I am one of those “engineer level” developers you speak of.
Yes, LLMs are a tool, but it’s a tool one should use sparingly. LLMs are pattern recognition machines and are great for routine, been-there-done-that type development. For anything that deviates from the norm, LLMs will try to force everything back into common patterns… even when those patterns are not correct. A well designed system can be mangled into junk because the LLM doesn’t have enough context or because something is new.
Be skeptical of the rave reviews around coding agents and the use of LLMs for development. Much of the hype seems tied to developer skill. Less capable developers can use LLMs to appear more capable than they are. For good developers, LLMs seem to erode their skills as they rely on the tool instead of their own knowledge. I have seen this first hand.
Overall, it seems LLMs raise the skills of bad developers and hamper those of good developers. It’s creating a bunch of middling developers who are incapable of handling anything novel or complex.
Sounds good. Pretty sure you are correct on most points. Agentic coding is bullshit for sure. I’m mostly talking about partner coding, code review, and some data interpretation, like screenshots of UI changes in CI, for example.
The goalpost escalation I constantly see in these threads is both hilarious and deeply frustrating.
“You need to be a good dev to use these!”
“I am a good dev and these tools suck.”
“No like you need to be enterprise level good”
“I am an enterprise level dev with credentials far exceeding the baseline offered.”
“No but you need to have written code recently!!”
“I was writing code yesterday.”
I am now waiting for the obligatory “well your coworkers must just be fixing all your code you screw up” because the pro-ai crowd has no argument for the tech not based on “u suk”.
Wen was the last time you actually wrote something production level yourself?
Yes. It’s “AI can never fail. It can only be failed.”
Friday.
@onlinepersona prompting chatgpt right now
It seems it was recent enough to spell common words correctly