I appreciate Simon’s balanced take on how LLMs can enhance a project when used responsibly.

I’m curious, though—what are this community’s opinions on the use of LLMs in programming?

  • Solemarc@lemmy.world · 2 days ago

    It’s funny: every time I’ve asked an LLM a question, it’s given me the wrong answer.

    The first time, I couldn’t remember how to read a file as a string in Python, and it got me most of the way there. But I trusted the answer, thinking “yeah, that looks right,” and it was wrong: I just got the io object back because the code never called the read() function.
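    For illustration, a minimal sketch of the mistake described above; the filename and contents are hypothetical:

```python
# Create a sample file so the sketch is self-contained;
# "notes.txt" is a hypothetical filename.
with open("notes.txt", "w") as f:
    f.write("hello")

# The mistake: open() returns a file object (io.TextIOWrapper),
# not the file's contents as a string.
handle = open("notes.txt")

# The fix: actually call read() on the handle, ideally via a
# context manager so the file is closed automatically.
with open("notes.txt") as f:
    contents = f.read()  # contents now holds the file's text

handle.close()
```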

    The other time it was an out-of-date answer. I asked it how to do something in Bevy, and it gave me an answer that was deprecated. I can sort of understand that, though; Bevy is new and not amazingly documented.

    On a different note, my senior, who is all PHP (no Python, no bash), has used LLMs to help him write Python and bash. It’s not the best code; I’ve had to optimise his bash so it runs on CI without taking 25 minutes. But it’s definitely been useful to him with Python and bash, given he was hired as a PHP dev.

    • Bio bronk@lemmy.world · 2 days ago

      Your problem is that you don’t understand how LLMs work. You treat it like a magic genie when it’s not. Treat it right and you can fly. I integrated a new messaging architecture into my stack the other day that would have taken me weeks before. But I isolated my problem set and targeted exactly what I needed to target, and I understood what to tell it and how to use it as a tool.

      In your case, it’s trivial to check the methods on the object, or to know that it’s a class you’re getting back in the first place. The AI can’t read your mind if you don’t frame the problem correctly.