

Holy shit… I finally found one of the screenshots for these loaders:

You could load up a disk full of games and tie it to a boot loader menu like this.


I mean, that’s how we ultimately got them. We must have had most of the popular ATARI XL games in two wooden floppy boxes.
But you gotta respect the networked distribution even back then. Pirates would create their disk packs and upload them to some national BBS. It’d get picked up by more local BBSs, and tech-savvy modem users would download it to floppies. All the while, sneakernet would carry it down the last mile to fill in the gaps. Some of this shit even went international, as long as somebody dealt with the long-distance fees (or phreaked their way out of them).
EDIT: Just to give you an idea of the network we were dealing with.


There was a pirate scene even in the 80s, during the 8-bit computer era, transferring games to floppy over a 300 baud modem.
My parents had a good friend who gave us a ton of games every time he visited. Most of the disks booted into game-selection menus, because the uploaders wanted to use up all of the space on the floppy, so they crammed 6-8 games onto each one. You can still find these disk copies on certain C64/ATARI XL game torrents.
All the while SPA was still pushing anti-piracy commercials on PBS channels. “Don’t copy that floppy” was always their silly tagline.
And yea, once Napster turned into a household name, piracy was mainstream.


but is that something we can expect will ever revolutionize the economy? Can we replace the labor force with a technology which can’t do work but can convince the most credulous people that it can?
LLMs are a tool. You and I use tools. They are not a replacement for humans, and rich CEOs that say otherwise are greedy fucking morons.
It’s also untrue that it “can’t do work”. I literally just had several conversations with LLMs at work today to work through some programming tasks and troubleshooting issues. They can pore through details, logs, search results, and code way faster than I can. I would be working a helluva lot slower if I didn’t have LLMs running tasks in the background while I go do other things, review code they wrote, or talk through other support issues. I’ve been doing this shit for 20+ years, and I haven’t seen a technological leap this significant since the Internet.
Don’t use blockchain, crypto, metaverse, or “VR goggles” as comparison points. This is not something that is going to just magically go away.


This one just came out from Jamrock Hobo, one of the main Disco Elysium channels. I think he has been working on this for several months. To say that it explains everything is an understatement.
Oh, huh, this noclip one is recent, too. PMG’s second video did a really good breakdown of the ZA/UM drama. (A helluva lot better than their first attempt. PMG basically apologized for the tone and structure of that video.)


This is a technology community. LLMs are technology. If calling LLMs useful is considered glazing, then I’m not sure if you’ve eaten a proper doughnut.


LLMs are more like vr goggles with the force of the entire plutocracy pumping up the bubble.
The existence of a bubble doesn’t mean the technology is useless. The internet had its own bubble 25 years ago. That doesn’t mean it was useless, just that people were investing in anything even remotely related to the Internet, including stupid websites and wasteful ideas.


A junior developer is fundamentally untrustworthy. That’s why you don’t give them access to the fucking prod database and backups.
AI is non-deterministic, sure, but selling these services with such a wide possibility space between “deterministic” and “random” behaviors is unethical and immoral.
We don’t know what the prompt and past input was. Maybe it wasn’t as “random” as you make it out to be. A company stupid enough to let LLMs touch their prod database is going to include a bunch of other stupid inputs.
You’re approaching this from the perspective of “all LLMs are bad so don’t use them”, which is its own version of unethical and immoral. A company that isn’t using LLMs is like a company not using the Internet.
LLMs are useful, everybody should use them to some capacity, and understanding a technology is far far better than spouting off ignorant bullshit like this.
Do yourself a favor: download a free model from HuggingFace, learn how they work, and experiment with the technology on your own video card. It doesn’t have to be some super-powered video card. You can get models that fit in an 8GB card just fine.
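If you’re wondering whether a given model fits your card, here’s a back-of-the-envelope sketch. The numbers are rough assumptions, not exact: Q4-style quantization averages somewhere around 4.5 bits per weight, and the KV cache plus runtime overhead is ballparked at ~1.5 GB (it actually scales with context length).

```python
# Rough VRAM estimate for a quantized local model (back-of-the-envelope).
# Assumptions: ~4.5 bits per weight for a Q4-style quant, and ~1.5 GB
# lumped in for KV cache and runtime overhead. Real numbers vary by
# quant format, context length, and runtime.

def vram_estimate_gb(params_billion: float, bits_per_weight: float = 4.5,
                     overhead_gb: float = 1.5) -> float:
    """Estimate VRAM in GB for a quantized model of the given size."""
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# A 7B model at ~4.5 bits/weight lands comfortably inside an 8 GB card;
# a 13B model at the same quant is already over budget.
print(f"7B @ ~Q4: ~{vram_estimate_gb(7):.1f} GB")
print(f"13B @ ~Q4: ~{vram_estimate_gb(13):.1f} GB")
```

This is why 7B-class models are the usual recommendation for 8GB cards, and why bigger models need either heavier quantization or partial CPU offload.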


This week, players are being asked to pay $25 for early access to Masters of Albion, a god game throwback that legendary designer Peter Molyneux (Populous, Dungeon Keeper, Black and White) says will be the last game he ever works on.
Also, Curiosity and Godus. Let’s not forget about those “legendary” games.


Well, they have to hit the target on this one first.


Anna’s Archive should hire this same lawyer.


The NYT is now compromised. Has been for several years now.


I just want to be able to add PeerTube channels on my Lemmy feed. Unified ActivityPub protocol, my ass.


Well, I can wait for those games’ developers to stop losing their minds and remove the cancer from their games. They aren’t getting one cent from me otherwise.


More like “AAA games are a big pile of shit, and only AAA games bother trying to fuck over the customer with Denuvo”.


I don’t need to install DeArrow. I just don’t subscribe to YouTubers that rely on clickbaity thumbnails and titles.


If that’s the conclusion you’re walking away with, then clearly you’re arguing in bad faith.


Having a face on them is one thing. Sometimes the person is the subject of the video, and it’s worth highlighting that in the video.
Having the person who made the video staring at the camera, in a pose that says “look at how attractive I am”, is another.
Also, this is not as hard and fast as you make it out. Here, I’ll post some random examples in my feed:
In fact, I had to dig to find videos that would really qualify:
Even most of the female YouTubers on my feed aren’t falling for it:
Meanwhile, LaurieWired’s entire catalogue is “look how attractive my face is!” This one doesn’t even have the subject in the thumbnail! It’s just her walking in a park. Also, what’s up with her videos from two years ago? Did she completely change her makeup style just to look more attractive in thumbnails?


All of her thumbnails are literally like that.


The same way as with any other information resource, like Wikipedia or some random Reddit post: trust, but verify. Always review the code, point out mistakes, and call out potential edge cases. Especially with newer thinking models, the hallucinations are minimal. It’s mostly just miscommunication in the request, which you can detect in the Thinking stream, stop, and correct. Rubberducking makes you better at communicating ideas in general, and providing enough context for the request is everything.
A lot of it has to do with the type of model you’re using, too, and having a decent global rules file tailored to how you want it to respond. If you don’t like how the model is responding, try out another one. If it’s some repeat mistake it makes, put it in a global rules file, or ask it to make a permanent memory.
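For anyone who hasn’t used one: a global rules file is just plain-text instructions the tool prepends to every session. The filename and format depend on the tool (Claude Code reads a `CLAUDE.md`, for example), and the rules below are a made-up illustration, not a recommended set:

```markdown
# Global rules (hypothetical example; filename/format depends on your tool)

- Ask before running shell commands or editing files outside the repo.
- Keep diffs small, and explain each change in one or two sentences.
- If unsure whether an API exists, say so instead of guessing.
- Project-specific repeat mistakes go here, e.g. "this codebase uses tabs".
```

The last line is where the “repeat mistake” fixes from above end up, so you’re not re-explaining the same thing every session.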
Claude Opus does well at work, but is rather expensive for home use. I use Kimi reasoning models in Kagi for searching questions, and Qwen/GLM hybrid models for local use. It takes a bit of setup and tweaking to get the local stuff working, but LLMs are good at knowing how their own models work, so I just had Kimi help me out with some of the harder troubleshooting.