It took only nine seconds for a rogue AI coding agent to delete a company’s entire production database and its backups, according to the company’s founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, founder Jeremy Crane said.
The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6, one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.
Crane said customers of PocketOS’s car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.

The difference that I’ve seen is that the internet was a development of communication technology, which has been in clear demand since at least the 1800s. Chatbots have been around for decades, and consumers have treated them as passing novelties at intervals throughout my life. LLMs are the most sophisticated chatbots ever designed and are better than ever at imitating Austin Powers, but is that something we can expect will ever revolutionize the economy? Can we replace the labor force with a technology which can’t do work but can convince the most credulous people that it can?
LLMs are a tool. You and I use tools. They are not a replacement for humans, and rich CEOs who say otherwise are greedy fucking morons.
It’s also untrue that it “can’t do work”. I literally just had several conversations with LLMs at work today to work through some programming tasks and troubleshooting issues. They can pore through details, logs, search results, and code way faster than I can. I would be working a helluva lot slower if I didn’t have LLMs running tasks in the background while I go do other things, review the code they wrote, or talk through other support issues. I’ve been doing this shit for 20+ years, and I haven’t seen a technological leap this significant since the Internet.
Don’t use blockchain, crypto, metaverse, or “VR goggles” as comparison points. This is not something that is going to just magically go away.
Thanks for specifying a legitimate use case for this tool. I understand that Google search has been the most valuable programming tool for a very long time, so it makes sense that LLMs would be more helpful in the same kind of way. Search engine technology is quite a bit different from blockchain or VR in terms of consumer and business demand.
For my purposes of news and history research, the unreliability of LLMs negates their usefulness as an assistant: since I have to check every claim and examine its references anyway, it’s more time-effective to skip the questionable output and just do the research myself in the first place. How have you been able to manage the issue of unreliability with the volumes of data you’re dealing with? Is the kind of data you work with less likely to come back unreliable because it’s the kind an LLM is more likely to process correctly?
The same way as for any other information resource, like Wikipedia or some random Reddit post: trust but verify. Always review the code, point out mistakes, and call out potential edge cases. Especially with newer thinking models, hallucinations are minimal. It’s mostly just miscommunication in the request, which you can detect in the Thinking stream, stop, and correct. Rubber-ducking makes you better at communicating ideas in general, and providing enough context for the request is everything.
A lot of it has to do with the type of model you’re using, too, and with having a decent global rules file tailored to how you want it to respond. If you don’t like how a model responds, try out another one. If it keeps making the same mistake, add a rule to the global rules file, or ask it to make a permanent memory.
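To give a concrete idea, a rules file can be as simple as a handful of plain-language instructions. This is just an illustrative sketch, not a canonical format; Cursor-style tools read something like a `.cursorrules` file, and other agents have their own equivalents:

```text
# Global rules file (illustrative example, wording is my own)
- Explain your plan before writing or editing any code.
- Never touch migrations, production configs, or anything destructive without asking first.
- Prefer small, reviewable diffs over large rewrites.
- If you are unsure about an API or a fact, say so instead of guessing.
- Run the relevant tests after a change and report the results.
```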
Claude Opus does well at work, but it’s rather expensive for home use. I use Kimi reasoning models in Kagi for search-style questions, and Qwen/GLM hybrid models for local use. It takes a bit of setup and tweaking to get the local stuff working, but LLMs are good at knowing how their own models work, so I just had Kimi help me out with some of the harder troubleshooting.
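If you want to try the local route, here’s a minimal sketch of the kind of setup I mean, assuming an Ollama-style server that exposes an OpenAI-compatible endpoint on its default port. The model tag and prompt are just placeholders for whatever you run locally:

```python
# Minimal sketch: chatting with a locally hosted model through an
# OpenAI-compatible endpoint. Ollama and llama.cpp's server both expose one.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default port; adjust for your server
    api_key="unused",                      # local servers generally ignore the key
)

response = client.chat.completions.create(
    model="qwen3",  # placeholder tag; use whatever model you've pulled locally
    messages=[{"role": "user", "content": "Summarize this error log for me: ..."}],
)
print(response.choices[0].message.content)
```

Once that works, pointing an editor or coding agent at the same endpoint is usually just a matter of swapping the base URL in its settings.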
I can tell you’re experienced with rubber-ducking. Thanks for the detailed answer.