

Yes, I just glossed over that detail by saying “similar to”, but that is a more accurate explanation.
I keep picking instances that don’t last. I’m formerly known as:
@EpeeGnome@lemm.ee
@EpeeGnome@lemmy.fmhy.net
@EpeeGnome@lemmy.antemeridiem.xyz
@EpeeGnome@lemmy.fmhy.ml




Unfortunately, the most probable response to a question is an authoritative answer, so that’s what usually comes out of them. They don’t actually know what they do or don’t know. If they happen to describe themselves accurately, it’s only because a similar description was in the training data, or they were specifically instructed to answer that way.
Dumpster concerns aside, I think these count as feral yeast.


A lot of it isn’t new, or is new but wasn’t published because it was redundant at the time. Everyone who wasn’t wearing blinders already knew Trump is a pedo creep who was best friends with Epstein. Now, finally, thanks to Trump’s broken promise on releasing “the Epstein files,” those blinders are off, or at least weakened, so it’s actually worthwhile to put this stuff out there again. Framing it like it’s some new revelation will get some of the people who ignored it before to maybe consider it this time around.


It wasn’t the user’s infrastructure, it was the LLM company’s. The selling point is that it’s all integrated together for you. You explain what you want, and the LLM not only codes it, but launches it too. Yes, his screenshots of the LLM “taking responsibility” are idiotic, but so many people don’t understand that LLMs don’t actually understand anything.


“Basically their methodology was that they asked ChatGPT whether the job could be automated,” he explained. “They also asked people whether the job could be automated and then they said ChatGPT and people agreed some portion of the time.”
lol. This is such an idiotic thing to do. “Hey, you know that linguistic pattern matcher that doesn’t actually reason or introspect? Since it can talk, why don’t we just ask it what it can and can’t do?” Seeing this, published by an AI research institute no less, is what inspired the creators of the actually rigorous test the article is about. It inspired me with a desire to smack those idiots upside their empty heads.
People think the stem is too tough, but it just needs to be cooked right. The trick is to start cooking the stem pieces first, then add the florets once the stems are just starting to soften. Exact timing depends on the cooking method, but done right, all of it will be tender and tasty.

If I recall correctly, the follow up was the same person complaining about being painfully constipated for several days.

It’s been a while, so I must have exaggerated it in my mind. Three days certainly sounds more plausible.
There is no lie. It’s just tough to admit that it’s fairly accurate.