That seems like they left debugging code enabled/accessible.
No, this is actually a completely different type of problem. An LLM isn't code, and it isn't manually configured, set up, or written by humans; its behavior lives in billions of learned weights. In fact, we don't really know what's going on internally when an LLM performs inference.
The actual software side of it is more like a video player that “plays” the LLM.
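To make the "player" analogy concrete, here's a toy sketch (everything in it is made up for illustration: the tiny vocabulary, the random weight matrix, greedy decoding). The point is that the inference code itself is a short, ordinary loop; all the "content" is in the weights it plays back, the same way a video player's code knows nothing about the movie:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "<end>"]  # hypothetical toy vocabulary
W = rng.normal(size=(len(VOCAB), len(VOCAB)))        # stand-in for billions of learned weights

def next_token(token_id: int) -> int:
    """One 'frame' of playback: look up scores in the weights, pick the max."""
    logits = W[token_id]           # the model's 'knowledge' lives in W, not in this code
    return int(np.argmax(logits))  # greedy decoding, for simplicity

def generate(start: str, max_len: int = 8) -> list[str]:
    """The whole 'player': a plain loop that repeatedly consults the weights."""
    tok = VOCAB.index(start)
    out = [start]
    for _ in range(max_len):
        tok = next_token(tok)
        out.append(VOCAB[tok])
        if VOCAB[tok] == "<end>":
            break
    return out

print(generate("the"))
```

Nothing in the loop encodes what the model will say; swap in different weights and the same player produces completely different output, which is why "they left debugging code enabled" doesn't map onto this kind of system.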