

Well, only when they are forced to. The rest of the time, it’s all “here’s my social security number and credit card and all of my PII”.




Solid journalism from… *reads notes* Yahoo Finance.


No, no website does it. There is no such thing as 100% uptime. If it happens, great, but I can guarantee you that no website even aims for 5 nines of uptime.
Google is the benchmark for website availability and in 2022 they had an outage that lasted an hour, meaning they didn’t meet 4 nines for the year.
In 2022. In the other years, they had 100% uptime.
Also, yes, there are plenty of clients that ask for five-nines. Is it realistic? Probably not. But, they definitely ask.
If you miss your SLO target for the year, then you missed your SLO target. If you’re down for 60 minutes but fine for the other 11 months, 29 days and 23 hours, you still missed your yearly SLO.
I understand how SLO targets work. If somebody is asking for five nines as an SLO, they are basically asking for 100% uptime, because there is no such thing as a “five minute outage”, especially not one that is fixable without total automation.
Again, a human hasn’t even been paged and gotten out of bed within 5 minutes.
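For reference, the downtime budget implied by each “nine” is just arithmetic; a quick sketch (the 20%-ish leap-year fudge in the minutes-per-year constant aside, the numbers line up with the figures thrown around in this thread):

```python
# Yearly downtime budget allowed by an availability target of N nines.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_budget_minutes(nines: int) -> float:
    """Minutes of downtime per year allowed at e.g. 99.999% ('five nines')."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    pct = 100 * (1 - 10 ** (-n))
    print(f"{n} nines ({pct:.3f}%): {downtime_budget_minutes(n):.1f} min/year")
```

Four nines gives you roughly 52.6 minutes a year (so a one-hour outage blows it), and five nines gives you roughly 5.3 minutes, which is why “a human got paged” already means the budget is gone.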


No, that’s infinite nines, which isn’t possible.
It’s not impossible. Large reliable websites do it all the time. It’s called 100% uptime.
Sure, it’s measured per year, and sometimes they have some outage that breaks the record. But, it is possible to have 100% uptime throughout the year.


Just shout “lalalalala” until it goes away. Great strategy, people!


Five nines means that you need people at their desks in shifts ready to start fixing something the moment there’s a problem
No, it means you don’t have outages. Ever.
Five nines is about 5 minutes of downtime throughout the entire year. At best, you might have automated failover systems that incur tiny outages. No human involvement, though, unless you’re dealing with some major breakage that would have killed the five-nines commitment for that year, anyway.
It takes a human something like 5–10 minutes just to get out of bed and figure out the situation, anyway.


How have you been able to manage the issue of unreliability with the volumes of data you’re dealing with? Is the kind of data which you’re dealing with less likely to be unreliable since it is of a kind the LLM is more likely to process correctly?
The same way as for any other information resource like Wikipedia or some random Reddit post: trust but verify. Always review the code, point out mistakes, call out potential edge cases. Especially with newer thinking models, hallucinations are minimal. It’s mostly just miscommunication in the request, which you can detect in the Thinking stream, stop, and correct. Rubberducking makes you better at communicating ideas in general, and providing enough context for the request is everything.
A lot of it has to do with the type of model you’re using, too, and having a decent global rules file tailored to how you want it to respond. If you don’t like how the model is responding, try out another one. If it’s some repeat mistake it makes, put it in a global rules file, or ask it to make a permanent memory.
Claude Opus does well at work, but is rather expensive for home use. I use Kimi reasoning models in Kagi for searching questions, and Qwen/GLM hybrid models for local use. It takes a bit of setup and tweaking to get the local stuff working, but LLMs are good at knowing how their own models work, so I just had Kimi help me out with some of the harder troubleshooting.
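To make the “global rules file” idea concrete: it’s just plain text the model is shown on every request. A minimal hypothetical sketch (the path and the exact directives are made up, not any specific tool’s format):

```
# ~/.config/agent/rules.md (hypothetical path and format)
- Always ask before modifying files outside the current project.
- Prefer small, reviewable diffs over large rewrites.
- When unsure about an API signature, say so instead of guessing.
- Run the test suite after any code change and report failures verbatim.
```

Anything that would otherwise be a “repeat mistake” correction goes in a file like this once, instead of into every prompt.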


Holy shit… I finally found one of the screenshots for these loaders:

You could load up a disk full of games and tie it to a boot loader menu like this.


I mean, that’s how we ultimately got them. We must have had most of the popular ATARI XL games in two wooden floppy boxes.
But, you gotta respect the networked distribution even back then. Pirates would create their disk packs and upload them to some national BBS. They’d get picked up by more local BBSs, and tech-savvy modem users would download them to floppies. All the while sneakernet would carry it down the last mile to fill in the gaps. Some of this shit even went international, as long as somebody dealt with the long-distance fees (or phreaked their way out of them).
EDIT: Just to give you an idea of the network we were dealing with.


There was a pirate scene even in the 80s, during the 8-bit computer era. Transferring games to floppy from a 300 baud modem.
My parents had a good friend who gave us a ton of games every time he visited. Most of the disks booted into game-selection startup menus, because the uploaders wanted to use up all of the space on the floppy, so they crammed 6–8 games onto each one. You can still find these disk copies on certain C64/ATARI XL game torrents.
All the while SPA was still pushing anti-piracy commercials on PBS channels. “Don’t copy that floppy” was always their silly tagline.
And yea, once Napster turned into a household name, piracy was mainstream.


but is that something we can expect will ever revolutionize the economy? Can we replace the labor force with a technology which can’t do work but can convince the most credulous people that it can?
LLMs are a tool. You and I use tools. They are not a replacement for humans, and rich CEOs that say otherwise are greedy fucking morons.
It’s also untrue that it “can’t do work”. I literally just had several conversations with LLMs at work today to work through some programming tasks and troubleshooting issues. They can pore through details, logs, search results, and code way faster than I can. I would be working a helluva lot slower if I didn’t have LLMs running tasks in the background while I go do other things, review code they wrote, or talk through other support issues. I’ve been doing this shit for 20+ years, and I haven’t seen a technological leap this significant since the Internet.
Don’t use blockchain, crypto, metaverse, or “VR goggles” as comparison points. This is not something that is going to just magically go away.


This one just came out from Jamrock Hobo, one of the main Disco Elysium channels. I think he has been working on this for several months. To say that it explains everything is an understatement.
Oh, huh, this noclip one is recent, too. PMG’s second video did a really good breakdown of the ZA/UM drama. (A helluva lot better than their first attempt. PMG basically apologized for the tone and structure of that video.)


This is a technology community. LLMs are technology. If calling LLMs useful is considered glazing, then I’m not sure if you’ve eaten a proper doughnut.


LLMs are more like vr goggles with the force of the entire plutocracy pumping up the bubble.
The existence of a bubble doesn’t mean the technology is useless. The internet had its own bubble 25 years ago. That doesn’t mean it was useless, just that people were investing in anything even remotely related to the Internet, including stupid websites and wasteful ideas.


A junior developer is fundamentally untrustworthy. That’s why you don’t give them access to the fucking prod database and backups.
AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.
We don’t know what the prompt and past input was. Maybe it wasn’t as “random” as you make it out to be. A company stupid enough to let LLMs touch their prod database is going to include a bunch of other stupid inputs.
You’re approaching this from the perspective of “all LLMs are bad so don’t use them”, which is its own version of unethical and immoral. A company that isn’t using LLMs is like a company not using the Internet.
LLMs are useful, everybody should use them to some capacity, and understanding a technology is far far better than spouting off ignorant bullshit like this.
Do yourself a favor: download a free model from HuggingFace, learn how they work, experiment with the technology on your own video card. It doesn’t have to be some super-powered video card. You can get models that fit in an 8GB card just fine.
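The back-of-the-envelope sizing for “does it fit in my card” is simple: weight footprint ≈ parameter count × bits-per-weight ÷ 8, plus some headroom for the KV cache and activations. A rough sketch (the 20% overhead factor is a rule of thumb I’m assuming here, not an exact figure):

```python
def model_vram_gb(params_billion: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) to run a model: weights plus ~20% headroom
    for KV cache and activations. A rough rule of thumb, not exact."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization fits comfortably in an 8GB card:
print(f"{model_vram_gb(7, 4):.1f} GB")  # ~4.2 GB
```

Which is why 4-bit quantized 7B-class models are the usual starting point for an 8GB consumer card, while the same model at 16-bit would not fit.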


This week, players are being asked to pay $25 for early access to Masters of Albion, a god game throwback that legendary designer Peter Molyneux (Populous, Dungeon Keeper, Black and White) says will be the last game he ever works on.
Also, Curiosity and Godus. Let’s not forget about those “legendary” games.


Well, they have to hit the target on this one first.


Anna’s Archive should hire this same lawyer.


The NYT is now compromised. Has been for several years now.
This whole thread started with:
There’s no detecting and fixing something that fast. When you’re talking about less than 5 minutes of outage time a year, it basically means you can’t have outages. Which is possible for some, but only for large reliable websites that have the resources to pull that off, and they still don’t always make the mark.
I’m not sure why the OP finds that simple premise disagreeable.