• 23 Posts
  • 858 Comments
Joined 3 years ago
Cake day: June 23rd, 2023

  • How have you been able to manage the issue of unreliability with the volumes of data you’re dealing with? Is the kind of data which you’re dealing with less likely to be unreliable since it is of a kind the LLM is more likely to process correctly?

    The same way as with any other information resource, like Wikipedia or some random Reddit post: trust, but verify. Always review the code, point out mistakes, call out potential edge cases. Especially with newer thinking models, hallucinations are minimal. It's mostly miscommunication in the request, which you can detect in the thinking stream, stop, and correct. Rubberducking makes you better at communicating ideas in general, and providing enough context for the request is everything.

    A lot of it also has to do with the type of model you're using, and having a decent global rules file tailored to how you want it to respond. If you don't like how the model is responding, try out another one. If it keeps making the same mistake, add that to a global rules file, or ask it to store a permanent memory.
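    As a hypothetical sketch (the filename and wording are my own illustration; the actual convention depends on your tool, e.g. `CLAUDE.md` for Claude Code or `.cursorrules` for Cursor), a global rules file is just a short list of standing instructions:

    ```markdown
    # Global rules (example; adapt to your tool's rules-file convention)

    - Ask before making changes that span more than one file.
    - Prefer standard-library solutions; flag any new dependency explicitly.
    - Never touch migration files or anything under deploy/.
    - When unsure about an API signature, say so instead of guessing.
    ```

    The point is that repeat corrections get written down once instead of re-typed every session.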

    Claude Opus does well at work, but it's rather expensive for home use. I use Kimi reasoning models in Kagi for search questions, and Qwen/GLM hybrid models for local use. It takes a bit of setup and tweaking to get the local stuff working, but LLMs are good at knowing how their own models work, so I just had Kimi help me out with some of the harder troubleshooting.




  • There was a pirate scene even in the 80s, during the 8-bit computer era. We transferred games to floppy over a 300 baud modem.

    My parents had a good friend who gave us a ton of games every time he visited. Most of the disks booted into game-selection menus, because the uploaders wanted to use up all of the space on the floppy, so they crammed 6-8 games onto each one. You can still find these disk copies in certain C64/ATARI XL game torrents.

    All the while, the SPA (Software Publishers Association) was still pushing anti-piracy commercials on PBS channels. "Don't copy that floppy" was their silly tagline.

    And yeah, once Napster became a household name, piracy went mainstream.


  • but is that something we can expect will ever revolutionize the economy? Can we replace the labor force with a technology which can’t do work but can convince the most credulous people that it can?

    LLMs are a tool. You and I use tools. They are not a replacement for humans, and rich CEOs that say otherwise are greedy fucking morons.

    It’s also untrue that it “can’t do work”. I literally just had several conversations with LLMs at work today to work through some programming tasks and troubleshooting issues. They can pore over details, logs, search results, and code way faster than I can. I would be working a helluva lot slower if I didn’t have LLMs running tasks in the background while I go do other things, review code they wrote, or talk through other support issues. I’ve been doing this shit for 20+ years, and I haven’t seen a technological leap this significant since the Internet.

    Don’t use blockchain, crypto, metaverse, or “VR goggles” as comparison points. This is not something that is going to just magically go away.






  • A junior developer is fundamentally untrustworthy. That’s why you don’t give them access to the fucking prod database and backups.

    AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.

    We don’t know what the prompt and past input was. Maybe it wasn’t as “random” as you make it out to be. A company stupid enough to let LLMs touch their prod database is going to include a bunch of other stupid inputs.

    You’re approaching this from the perspective of “all LLMs are bad so don’t use them”, which is its own version of unethical and immoral. A company that isn’t using LLMs is like a company not using the Internet.

    LLMs are useful, everybody should use them in some capacity, and understanding a technology is far, far better than spouting off ignorant bullshit like this.

    Do yourself a favor: download a free model from HuggingFace, learn how they work, experiment with the technology on your own video card. It doesn’t have to be some super-powered video card. You can get models that fit in an 8GB card just fine.
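    A rough back-of-the-envelope sketch of why that works (the function and the overhead constant are my own illustration, not a real tool): weight memory is roughly parameter count times bytes per parameter at a given quantization, plus some fixed overhead for context/KV cache.

    ```python
    def est_vram_gb(params_billion, bytes_per_param, overhead_gb=1.5):
        """Rough VRAM estimate: weights plus a fixed allowance for KV cache/context."""
        return params_billion * bytes_per_param + overhead_gb

    # Approximate bytes per parameter at common quantization levels:
    Q4 = 0.56   # ~4.5 effective bits (e.g. a Q4_K_M-style quant)
    Q8 = 1.0    # 8-bit

    print(f"7B @ Q4: ~{est_vram_gb(7, Q4):.1f} GB")  # comfortably under 8 GB
    print(f"7B @ Q8: ~{est_vram_gb(7, Q8):.1f} GB")  # over budget on an 8GB card
    ```

    So a 4-bit 7B model leaves headroom on an 8GB card, while the same model at 8-bit doesn't quite fit.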