• sbeak@sopuli.xyz · 7 hours ago

    Conversely, most software developers look at LLM-gen code (LLMs are not proper artificial intelligence, since they don’t understand what you’re feeding them) and say “WTF were they even trying to do here?”

    Indeed, AI (not LLMs, mind you, but AI) does have its use cases. For instance, in science there are many fields where mass processing of data by conventional means is unfeasible. And in programming, AI can help detect bugs so that an experienced developer, one who knows how to troubleshoot and has context on the project’s aims and scope, can fix the issue more easily.

    LLM-gen code is very fragile and filled with loads of bugs, not to mention how the code it writes does not credit the original authors, as it ignores licensing and attribution requirements of projects that were scraped for its data set. And half the time, people producing LLM-gen code do not understand what it has produced and does not bother to review it before trying to push it to a large project, leaving the burden of filtering it out for those maintaining the project (when that effort could be directed at adding new features, fixing bugs, or doing anything else really)