One of our coworkers keeps telling us to trust AI.
We’re like, we use it to generate some code but we always check it. The coworker was like, nah, you should just trust it. We’re like, why? He said you should just train it until it gets to the state you want.
We were like, we’re competent in our fields and we wouldn’t want to ship anything to production that hasn’t been checked by a human. Even if it weren’t checked by a human, we’d want some automated checks at minimum. Not sure why he’s so adamant about not checking anything.
Your co-worker is bad at his job, and doesn’t understand programming.
LLMs are cool tech, but I’m gonna code review everything, whether it comes from a human or not.
That plays into the pattern I’ve been seeing.
The “AI prompting experts” are useless because they don’t understand the fundamentals.
And he doesn’t understand LLMs, which don’t “learn” a damn thing after training is complete. The only variation after that comes from random sampling and the input they receive.
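To make that concrete, here’s a minimal sketch using Hugging Face transformers with GPT-2 as a stand-in for whatever model a tool actually runs: the deployed weights are frozen at inference time, so the same input plus the same random seed gives identical output every time.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference only; nothing here updates the weights

inputs = tok("Trust, but verify:", return_tensors="pt")

torch.manual_seed(0)  # pin down the "random numbers"
out_a = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                       pad_token_id=tok.eos_token_id)
torch.manual_seed(0)  # same seed again
out_b = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                       pad_token_id=tok.eos_token_id)

# Same frozen weights + same input + same seed => same tokens, every run.
assert torch.equal(out_a, out_b)
```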
That’s not true. There are other ways of influencing the numbers these tools use. Most of them have their own built-in voting systems (thumbs-up/thumbs-down on responses), so humans can give feedback that’s used to influence the LLM.
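Worth being precise about what that voting actually does, though: it typically feeds a preference log that a *later* fine-tuning run (RLHF/DPO-style) learns from, rather than updating the deployed model on the spot. A hypothetical sketch of that kind of logging, with the function name and file format made up for illustration:

```python
import json
import time

def log_feedback(prompt: str, response: str, vote: int,
                 path: str = "feedback.jsonl") -> None:
    """Append a thumbs-up (+1) / thumbs-down (-1) record.

    The live model never sees this directly; a later training
    run can use the accumulated records as preference data.
    """
    record = {"ts": time.time(), "prompt": prompt,
              "response": response, "vote": vote}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_feedback("Explain LoRA", "LoRA adds low-rank adapters...", vote=1)
```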
Diffusion models have LoRAs, and new models can be fine-tuned on top of a base model.
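LoRA applies to language models as well as diffusion models. A minimal sketch of attaching LoRA adapters to a frozen base model, using the Hugging Face peft library with GPT-2 as a stand-in (hyperparameters are illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank adapter matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)

model = get_peft_model(base, config)  # base weights stay frozen; only adapters train
model.print_trainable_parameters()    # prints the tiny trainable fraction
```

The point being: the base model’s weights really are fixed, but a separate training step over the small adapter matrices is still a real way to change a model’s behavior after the initial training run.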