

Hmm, is the last staff thing just the death message from Sif Muna? I seriously don’t play often enough with Sif Muna, because Heplhjdtfhxhdh always seems so good… 🥴


Yeah, in particular, anything close to 100 million users presumes that non-gamedevs will use this. For anything beyond simple variations of existing games, like “Skyrim with spears”, you need an actual understanding of game design. It is not enough to have cool ideas.
So, I really don’t see many non-gamedevs using this, especially when they can pay less to play a properly designed game.


Yeah, I can understand the frustration when an external decision forces you to disappoint some of your users, but ultimately you have to pick your battles. When neither the Python nor the Rust ecosystem thinks those platforms are worth supporting, it’s probably not worth it for you to worry either…


Well, if he just got his tenure at 40, that means he presumably did something else in his life before tackling this path. Him saying the students can just call him Jeff may also be linked to him having been a student himself until recently. I assume it’s the kind of thing that’s funny if you actually study history and know a professor like that. 🫠


I think it’s just used as in “late bloomer”, i.e. someone who needed a bit longer but has now found their true potential. “Bloom” as in the thing flowers do.


The average lunar distance is approximately 385,000 km (239,000 mi), or 1.3 light-seconds.
https://en.wikipedia.org/wiki/Lunar_distance
*Moves mouse*
One Mississippi, two Mississippi, three Mississi
*Mouse cursor moves*
🥴
And 3 months later, I have not booted it once
Oh man, I know the feeling. It took me 5 months to remember that I had a Windows partition.
It was so important to me to have a way back (which is fair enough), and then I just completely forgot about it.


The problem is that in this case, the LLM just naively auto-completes a password from what it knows a password most likely looks like.
It is possible to enable an LLM to call external tools and to provide it with instructions so that it’s likely to auto-complete a tool call instead. Then you could have it call a tool that generates an xkcd-style “correct horse battery staple” passphrase, or a completely random password, e.g. by calling the pwgen command on Linux.
But yeah, that just isn’t what this article is about. It’s specifically about cases where an LLM is used without tool calls and therefore naively auto-completes the most likely password-like string.
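To sketch what such a tool could look like: a couple of Python functions using the stdlib secrets module. The names are my own invention, and the actual wiring into a model’s tool-calling API is left out.

```python
import secrets
import string

# Hypothetical tools an LLM could be instructed to call
# instead of auto-completing a password itself.

def random_password(length=20):
    """A cryptographically random password, similar in spirit to pwgen output."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(wordlist, words=4):
    """An xkcd-style 'correct horse battery staple' passphrase from a word list."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))
```

The point being: the randomness comes from the tool, and the LLM only decides to call it.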


I imagine it’s a matter of asking it to generate some configuration where one of the fields is a password, so the LLM just auto-completes what a password most likely looks like.
It’s Apple’s programming language, kind of intended as a successor to Objective-C.
From what I hear, it’s actually decently designed and has quite a few similarities to Rust. Still not sure how great it is outside of the Apple ecosystem…
My instance went down, so I’m way too late to make this joke, but anyways:
We’re not cantankerous, just a little …crabby. 🙃
I hear, it helps with saving up for treatment by not paying for nudes. 🥴


Considering that their policy for the majority of their existence has been that open source is cancer, it might as well be viewed like that. Just buy the central open-source exchange platform and slowly make it worse, to hurt all of open source.


Yeah, I might block a contributor on sight, if they post something like that.


In case you’re not aware, you can also email the dev. You code up your commits as normal and then use e.g. git format-patch -3 to turn the last 3 commits into patch files. You can then attach those files to an email, and the dev can apply those patches with git am.
It takes a bit of playing around, but it’s actually really easy.
The Linux kernel, one of the most complex projects on the planet, is developed like this.
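As a self-contained demo of that round trip, here are throwaway “sender” and “receiver” repos (all names and paths are made up for illustration):

```shell
tmp=$(mktemp -d)

# Sender side: commit as usual, then export the last commit as a patch file.
git init -q "$tmp/sender"
cd "$tmp/sender"
echo "hello" > greeting.txt
git add greeting.txt
git -c user.name=Dev -c user.email=dev@example.com commit -qm "Add greeting"
git format-patch -1 -o "$tmp/patches"  # writes 0001-Add-greeting.patch

# Maintainer side: apply the emailed patch, original authorship included.
git init -q "$tmp/receiver"
cd "$tmp/receiver"
git -c user.name=Maint -c user.email=maint@example.com commit -q --allow-empty -m "Initial commit"
git -c user.name=Maint -c user.email=maint@example.com am "$tmp/patches"/0001-*.patch
```

In real use, the .patch files travel as email attachments (or via git send-email) instead of a shared temp directory, but the format-patch/am pair is the same.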


I think you could open the same file multiple times and then have each handle skip ahead by some number of bytes before it starts reading.
But yeah, no idea whether this would actually be efficient. The bottleneck is likely still the hard drive, and trying to fit multiple sections of the file into RAM might end up being worse than reading linearly…
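A minimal Python sketch of that idea, with each worker opening its own handle and seeking to its slice (function names are mine; threads stand in for whatever parallelism you’d actually use):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def read_slice(path, offset, length):
    """Open an independent handle, seek to the offset, read one slice."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def read_in_chunks(path, workers=4):
    """Read one file through several handles at different offsets."""
    size = os.path.getsize(path)
    chunk = max(1, -(-size // workers))  # ceiling division, at least 1 byte
    offsets = range(0, size, chunk)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda off: read_slice(path, off, chunk), offsets)
        return b"".join(parts)
```

Whether this beats a single sequential read depends entirely on the storage: on a spinning disk the seeks probably make it slower, on an SSD it might help.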


Yeah, and the worst part is that submitting the PR is trivial. You just offload the review work onto the maintainer and then feed the review comments back into the AI. Effectively, you’re making the maintainer talk to the AI through you as a middleman, a.k.a. completely wasting their time.
I don’t feel like these positions are at odds with one another, unless you actively work to reduce the number of humans, of course.
Like, you can uplift and protect people by stopping them from destroying their environment, because you recognize that people are an invasive species that will do exactly that.
I’ve been wondering if you could combine LLMs with a logic programming language like Prolog. The latter can actually reason through things; you “just” have to express them as Prolog facts and rules.
Well, a quick online search tells me I’m most certainly not the first person to think of this, which does not surprise me at all…