







Meh, go play Terraria. It has less baggage.


At my work, anybody can have Windows, Mac, or Linux. Each has an approved set of software that people can use. If something’s not on the approved list, but it’s freely available and gets regular security updates, it’s usually not a problem to get it added.
I don’t have to explain to my co-workers what software I use. Most of the time, it’s cloud-based or web-based and universal, anyway.


Okay, so you’re just a straight up “right-wing” warmonger yourself.
Far from it. You seem unnecessarily reactionary, ready to hurl accusations at anybody who doesn’t agree 100% with your ideals. Again, absolutism is a bad thing, and the world is far more nuanced than your beliefs give it credit for.
You do not know me. Do not pretend that you do.
https://en.wikipedia.org/wiki/Civilian_casualties_from_the_United_States_drone_strikes
I could just as easily produce civilian casualties from the Iraq War, the Vietnam War, WWII, whatever. Shit, the US can never repay the terrible debt it owes the citizens of Laos.
None of that damage was caused by drones. The weapons change, but the horrible military decisions, unfortunately, do not.
https://www.aljazeera.com/news/2026/2/28/what-countries-has-trump-attacked-since-returning-to-office
This has nothing to do with LLMs. These are human decisions, made by terrible human beings that deserve to get shoved into the frontlines like the draft dodgers they are.


Didn’t someone at Google write a memo that was like “we’re kinda fucked b/c you can re-create this stuff with enough resources” like 2 years ago?
Basically, yes. They were specifically decrying the amount of open-sourcing they and their American competitors were doing, because capitalism, of course. Around that time, we had examples like StabilityAI’s Stable Diffusion and Meta’s LLaMA as open-source models. And around that time, everybody else started closing their models, even though the research kept going out in the open. StabilityAI kept their models open, mostly because they had no choice, but the industry attitude shifted toward profitability.
So, China took up the open-source mantle, and now the open/closed lines are being drawn strictly along national divisions, framed as this America vs. China fight. Which is mostly a diversion from the real battle.


Whoever wrote this article didn’t even bother to do the most basic of research.
DeepSeek fully admitted they started with ChatGPT outputs to train their model. And then they released it as an open-source model, so everybody else can “steal” their work. On the image/video front, the general public has created every possible variation on top of every model you can think of. And any model that has ever been released with full weights has been spun into whatever variation or VRAM size you want.
The ugly truth the American companies want to hide is that they are spending trillions of dollars on an oligopoly they can’t keep long-term. They hope they can just keep spending more money to add more billions of parameters to their models, and stay technologically ahead of the secondary open-source models. But they already ran into diminishing returns over a year ago, and the global compute sector physically cannot keep up with demand for another cycle of even more diminishing returns.
The other factor is that realistic miniaturization of models is already here. The smaller sizes aren’t as effective as the 250GB models they run on cloud-based services, but you can still do a lot on a 16GB or 24GB video card with models that fit. Optimization and LLM quantization are getting better every year. The AI bubble bursting is going to force a cascading shift into a new era of localization. Everybody is sick to fucking death of renting and subscribing to everything. We pirates already act on that on the media front, and soon localized LLMs are going to become way more popular.
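For a ballpark sense of why quantization makes localization feasible, here’s the standard bits-per-weight napkin math (the model sizes below are just illustrative assumptions, and this ignores KV cache and activation overhead):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough VRAM needed just to hold the weights, in GB."""
    return params_billion * bits_per_weight / 8

# A 70B-parameter model at fp16 needs ~140 GB for weights alone: cloud territory.
print(weight_vram_gb(70, 16))  # 140.0
# The same model quantized to ~4 bits per weight: ~35 GB.
print(weight_vram_gb(70, 4))   # 35.0
# A 13B model at 4 bits: ~6.5 GB, which fits a 16 GB card with room for context.
print(weight_vram_gb(13, 4))   # 6.5
```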
The question isn’t “Can people steal the tech?”. It’s “How long until people notice that it’s already happening?”


I can’t blame you because the media is complicit, and everybody loves a story of good guy versus bad guy, but this is the reality:
No, I acknowledge that the world is a helluva lot more nuanced than “AI bad, military bad, absolute stances good”. Absolutism is what we accuse our smooth-brained right-wing asshats of doing, so we certainly shouldn’t be caught doing the same thing.
Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy.
You mean drones? You’re talking about drones. What’s wrong with drones?
We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
They had a contract with the Pentagon. They literally deal with military operations on a regular basis.
Hell, most of the pivotal technologies developed in the last thousand years started as military inventions before civilian use. Including this internet thing you’re arguing on right now.


I don’t think it’s fair to take the two items they were fighting against, warp the details, and frame it as a statement of acceptance, especially given how Stinky Pete wouldn’t even budge on those two.
Anthropic took a stand on a couple of guardrails that you or I would consider very basic, acceptable rules, but that the rest of their competition immediately lambasted. They did it at the risk of their Pentagon contract, and even a national blacklisting that could completely take down the company. They lost, but at least they didn’t back down. And they are still at risk of being blackballed, because we live in a dictatorship where some fuckhead president can write whatever EO he wants without Congressional approval.
I sure as fuck ain’t going to call Dario a hero, or ignore all the shit Anthropic did to get this far. But I am going to call it out when a company and its CEO stick their necks out in front of a crazed axe-wielding executioner, in an environment where everybody else is cowering behind the curtains and in the closets.


ChatGPT is crap, compared to Claude Opus. Claude does a helluva lot better with any of the programming tasks I throw at it.
Trump and Stinky Pete are making the worst decision, as usual. I hope they don’t follow through with all of their threats to put Anthropic on a security-threat list like Huawei, but if they do, maybe that prompts Anthropic to do the right thing and open-source the model.
Domain search suffixes



(I could use that in so many places…)


Water is wet. Sky is blue. News at 11.


Has any pirate site ever been trustworthy?
Yes. Next question.


Tell them to pound sand.


Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they’re worried AI companies might use it as a sneaky “backdoor” to access their content.
This is a total lie. This has nothing to do with AI. They’ve hated archive sites for years, because forums like this one hate their paywalls, and we prefer to be able to actually read their articles and discuss them instead of hitting a paywall every time.
NYT is one of the worst offenders, and NYT as a company has turned for the worse in the last 5-10 years, maybe even worse than the Amazon Post. None of the old media companies really understand how to adapt in the Internet age, so they are slowly dying. It’s like they are perpetually in an economic bubble that hasn’t figured out how to pop itself. There’s so much damn news, and so many outlets copying and regurgitating each other’s news a hundred times over, that we’re forced to aggregate it and have YouTubers hawk shit like Ground News just to process it all.


Yeah, this is walking one thing back so they can get away with the rest of their still-bad behavior.


“Distances” isn’t the same as “severing ties from”.
Cancel the contract, you fucks!


open-weights aren’t open-source.
This has always been a dumb argument, and it lacks any modicum of practicality. It rejects 95% of the need because it is not 100% to your liking.
As we’ve seen in the text-to-image/video world, you can train on top of open-weight base models just fine. Or create LoRAs for specialization. Or convert them into various flavors of quantized GGUFs.
Also, you don’t need a Brazilian LLM because all of the LLMs are very multilingual.
Spending $3000 on training is still really cheap, and depending on the size of the model, you can get away with training on a 24GB or 32GB card, which costs you only the price of the card and the electricity. LoRAs take almost nothing to train. Any university worth anything is going to have the resources to train a model like that. None of these arguments hold water.
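To put “LoRAs take almost nothing to train” in numbers: a LoRA swaps a full d×d weight update for two thin rank-r matrices, so the trainable parameter count collapses. A quick sketch, with illustrative dimensions (roughly the hidden size of a 7B-class model):

```python
def full_update_params(d: int) -> int:
    # Fully fine-tuning one d x d projection matrix touches every weight.
    return d * d

def lora_params(d: int, r: int) -> int:
    # LoRA instead learns A (d x r) and B (r x d): only 2 * d * r weights.
    return 2 * d * r

d, r = 4096, 16
print(full_update_params(d))  # 16777216 weights per projection matrix
print(lora_params(d, r))      # 131072 -- about 0.8% of the full update
```

That ~99% reduction in trainable (and optimizer-state) parameters is why a consumer card can handle it.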