codeinabox@programming.dev to Programming@programming.dev · English · 2 days ago
Turn off Cursor, turn on your mind (allvpv.org)
Michal@programming.dev · 17 hours ago
You still need a software engineer to review the code. It’s naive to think that randomly generated code will work, and by “work” I mean not just do what it’s supposed to, but also handle edge cases and be secure.
onlinepersona@programming.dev · 15 hours ago
If you think it’s random, you don’t understand LLMs.
Michal@programming.dev · 14 hours ago
So, in your learned opinion, it’s deterministic?
thinkercharmercoderfarmer@slrpnk.net · 3 hours ago
You sent me down a bit of a rabbit hole, but it turned up an interesting answer. Turns out they are nondeterministic, and exactly why is still an open question: https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
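As I understand the linked post, the nondeterminism comes from inference kernels whose results depend on batch size and reduction order, and the basic ingredient underneath that is that floating-point addition is not associative. A one-liner illustration of that ingredient (my own example, not from the article):

```python
# Floating-point addition is not associative: the same three numbers
# summed in a different order round to different doubles.
a = 0.1 + (0.2 + 0.3)
b = (0.1 + 0.2) + 0.3
print(a)       # 0.6
print(b)       # 0.6000000000000001
print(a == b)  # False
```

Scale that tiny rounding difference up across billions of operations whose grouping changes from run to run, and logits can come out slightly different each time.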
Michal@programming.dev · 60 minutes ago
Interesting, I had assumed that turning the temperature down to 0, or hardcoding a seed, would make LLM inference deterministic. Especially after watching this video https://youtu.be/J9ZKxsPpRFk
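That intuition does hold at the sampling step itself: greedy decoding (temperature 0) never consults the RNG, and seeded sampling reproduces the same choices as long as the probabilities it sees are bit-identical between runs. The article's point, as I read it, is that in real serving stacks the probabilities themselves can drift. A toy sketch of the sampling step (hypothetical code, not any particular LLM's implementation):

```python
import numpy as np

def sample(logits, temperature, rng):
    """Pick a token index from logits at the given temperature."""
    if temperature == 0:
        # Greedy decoding: no randomness involved at all.
        return int(np.argmax(logits))
    z = logits / temperature
    p = np.exp(z - z.max())  # numerically stable softmax
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))

logits = np.array([2.0, 1.0, 0.5, 0.1])

# Temperature 0 is deterministic no matter how the RNG is seeded.
greedy = {sample(logits, 0, np.random.default_rng(s)) for s in range(5)}

# Temperature > 0 with a hardcoded seed reproduces the same sequence.
rng_a, rng_b = np.random.default_rng(42), np.random.default_rng(42)
seq_a = [sample(logits, 1.0, rng_a) for _ in range(5)]
seq_b = [sample(logits, 1.0, rng_b) for _ in range(5)]
```

But if the logits arrive with tiny numerical differences between runs (e.g. from batch-size-dependent GPU kernels, which is the failure mode the article describes), even this seeded sampler can diverge.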
thinkercharmercoderfarmer@slrpnk.net · 8 minutes ago
I thought I had seen both of those claimed as well.
thinkercharmercoderfarmer@slrpnk.net · 3 hours ago
Skipped over the opening graphic on first read, but I just read it. Could they have picked a creepier sample sentence?
onlinepersona@programming.dev · 7 hours ago
And are you perfectly deterministic? Because if you aren’t, by your own dichotomous logic, you’re random too.
Michal@programming.dev · 1 hour ago
So you say it’s not random, and now you do a 180 and say that randomness is a good thing? I should have known you were a troll.