I like watching it in action. I don’t know what the hell is going on, but it gives me a strange kind of peace, and if you stare at it long enough, you trick yourself into thinking that it makes some kind of sense.
I can tell a piece of software to do the maths for me.
Sometimes the results appear to work with reality.
People complain about LLMs hallucinating, but they have no idea how many assumptions and plain “everybody does it this way, I guess it works” there are in scientific research.
I didn’t read your critique as being about the outcome of the results, but about how they were achieved. LLMs hallucinating make computers commit “human errors”, which makes them less deterministic, and determinism is the key reason I prefer doing some things on a computer.
That’s a different domain though. LLM hallucinations may lead to catastrophe, while assuming an infinite mass of an electron in the absence of an electromagnetic field is neat.
I can do (some of) the maths, but I definitely can’t explain why any of it is like that, or how it works.
We all end up looking at pictures of cats in boxes on the internet whenever we start trying to understand... oh wow, this cat is funny.
It’s called the heuristic method, and those doing it know the limitations. Whereas LLMs will just confidently put out garbage and claim it’s true.
Scientific calculations, and other approaches as well, put out garbage all the time; that is the main point of what I said above.
Some limitations are known, just as it is known that LLMs have the limitation of hallucinating.
Calculations will happily tell you that an acutely toxic drug is the best way to cure cancer.
The reason that does not lead to catastrophe is that there are many checks and safety nets in place so that no result is blindly trusted.
The exact same approach can be applied to an LLM.
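A minimal sketch of that idea in Python, just to illustrate the point (the check names, dose values, and thresholds are all made up): a result, whether it comes from a calculation or from an LLM, is only accepted after passing independent sanity checks.

```python
def accept(result, checks):
    """Return the result only if every independent check passes; never trust it blindly."""
    failures = [name for name, check in checks if not check(result)]
    if failures:
        raise ValueError(f"Result rejected by checks: {failures}")
    return result

# Hypothetical safety nets for a dose prediction, whatever produced it
# (a simulation or an LLM). Numbers are invented for illustration only.
dose_checks = [
    ("positive", lambda d: d > 0),
    ("below assumed toxicity limit", lambda d: d < 50.0),
    ("within range of prior studies", lambda d: 0.1 < d < 10.0),
]

print(accept(2.5, dose_checks))    # passes every check, so it is accepted
# accept(400.0, dose_checks)       # would raise: rejected instead of trusted
```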
Which is ironic, because everyone has, at least once, been asked “but how does it work?” and has answered “dunno, but it does”.