Businesses that rush to use AI to write content or computer code often have to pay humans to fix it.

  • P03 Locke@lemmy.dbzer0.com · 6 points · 8 days ago

    Your co-worker is bad at his job, and doesn’t understand programming.

    LLMs are cool tech, but I’m gonna code review everything, whether it comes from a human or not.

    • Possibly linux@lemmy.zip · 3 points · edited · 8 days ago

      That plays into the pattern I’ve been seeing.

      The “AI prompting experts” are useless, as they don’t understand the fundamentals.

    • chaos@beehaw.org · 1 point · 8 days ago

      And doesn’t understand LLMs, which don’t “learn” a damn thing after the training is completed. The only variation after that is random numbers and the input it receives.
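The determinism claim can be illustrated with a toy sketch (a hypothetical miniature "model" with frozen hand-written weights, stdlib only — not any real LLM API): with the weights frozen, the same input plus the same random seed always yields the same output, so nothing is "learned" between calls.

```python
import random

# Toy stand-in for a frozen LLM: fixed "weights" mapping the current
# token to next-token scores. (Hypothetical numbers; a real model
# scores tens of thousands of tokens with billions of parameters.)
WEIGHTS = {
    "the": {"cat": 0.6, "dog": 0.3, "end": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "end": 0.1},
    "dog": {"ran": 0.6, "sat": 0.3, "end": 0.1},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def generate(prompt: str, seed: int) -> list[str]:
    """Sample tokens from frozen weights; output depends only on prompt + seed."""
    rng = random.Random(seed)  # the only source of variation besides the input
    token, out = prompt, []
    while token != "end":
        scores = WEIGHTS[token]
        token = rng.choices(list(scores), weights=list(scores.values()))[0]
        if token != "end":
            out.append(token)
    return out

# Same input + same seed -> identical output; the weights never change.
assert generate("the", seed=1) == generate("the", seed=1)
# A different seed can produce a different continuation of the same input.
print(generate("the", seed=1), generate("the", seed=2))
```

Run-to-run "learning" never happens here: only the seed and the prompt vary, which is the commenter's point.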

      • P03 Locke@lemmy.dbzer0.com · 1 point · 6 days ago

        That’s not true. There are other ways of influencing the numbers that tools use. Most of them have their own internal voting systems, so that humans can give feedback to directly influence the LLM.

        Diffusion models have LoRAs, and the models themselves can be trained on top of the base model.
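The LoRA idea can be shown in miniature (toy 4×4 matrix, pure Python — real LoRA applies this to large attention/MLP weight matrices inside the network): the base weights W stay frozen, and only two small low-rank factors B and A are trained, with the effective weights being W + B·A.

```python
# Toy LoRA sketch: adapt a frozen 4x4 "weight matrix" with a rank-1 update.
# Hypothetical numbers for illustration only.

def matmul(X, Y):
    """Plain matrix multiply for lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    """Element-wise matrix addition."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Frozen base weights (4x4 identity here) -- never modified during adaptation.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Trainable low-rank factors: B is 4x1, A is 1x4 (rank r = 1).
# That is 4 + 4 = 8 trainable numbers instead of 16; the saving
# grows dramatically for realistic matrix sizes.
B = [[0.5], [0.0], [0.0], [0.0]]
A = [[0.0, 1.0, 0.0, 0.0]]

# Effective weights at inference time: W' = W + B @ A
W_eff = add(W, matmul(B, A))
print(W_eff[0])  # first row now carries the learned delta
```

The base model is untouched, which is why many LoRAs can be swapped on and off the same base weights.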