• 0 Posts
  • 990 Comments
Joined 3 years ago
Cake day: June 30th, 2023


  • Just smell it (not just bay leaves but whatever). If it has a smell, that aroma can be infused into cooking, though you’ll want to make sure it’s edible before just throwing it into dishes.

    And you might need to sauté them for a bit (also called tempering) to infuse that aroma into oil, since it’s not all water soluble.




  • Lol I thought your link was “here’s a rocket designed by an LLM” rather than one designed by the non-LLM AI.

    LLMs are a local minimum that tech bros are stuck trying to optimize to a generally useful point because its language abilities are able to fool so many (just like how a real person talking with confidence can fool so many).

    This obsession with LLMs is making me question general human intelligence more lol. It’s looking more and more like we are just dumb apes but get lucky and every now and then a smart ape is born and teaches the other dumb apes how to bring their stupidity to whole new levels.


  • Anyone wanna take the bet that this guy secretly hopes it does? Using himself as one example, and hoping it means another mutual friend of an openly gay friend, the one OP is crushing on, is also secretly in the closet.

    It just feels like there’s some layers of denial involved in this question for them to even wonder it.


  • Yeah, it’s good enough that it even had me fooled, despite all my “it just correlates words” comments. It was getting to the desired result, so I was starting to think that the framework around the agentic coding AIs was able to give it enough useful context to make the correlations useful, even if it wasn’t really thinking.

    But it’s really just a bunch of duct tape slapped over cracks in a leaky tank they want to put more water in. While it’s impressive how far it has come, the fundamental issues will always be there because it’s still accurate to call LLMs massive text predictors.

    The people who believe LLMs have achieved AGI are either lying to try to prolong the bubble (in the hopes of actually reaching the singularity before it pops), or they’re revealing their own lack of expertise: they either haven’t noticed the fundamental issues, or they think those are minor things that can be solved because any individual instance can be patched.

    But a) they can only be patched by people who already know the correction (so the patches won’t happen at the bleeding edge until humans solve the very problems they wanted AI to solve), and b) it would require an infinite number of these patches just to cover all permutations of everything we do know.


  • Here’s an example I ran into, since work wants us to use AI to produce work stuff, whatever, they get to deal with the result.

    But I had asked it to add some debug code to verify that a process was working by saving the in-memory result of that process to a file, so I could check whether the next step was even possible based on the output of the first step (because the second step was failing). I get the file output and it looks fine, other than some missing whitespace, but that’s OK.

    And then while debugging, it said the issue was that the data from step 1 wasn’t being passed to the function that calls it at all. Wait, how can that be? The file looks fine. Oh: when it added the debug code, it added a new code path that just calls the step 1 code directly (and properly). Which does work for verifying step 1 on its own, but not for verifying the actual code path.

    The code for this task is full of examples like that, almost as if it is intelligent but it’s using the genie model of being helpful: it tries to technically follow directions while subverting expectations anywhere they aren’t specified.

    Thinking about my overall task, I’m not sure using AI has saved time. It produces code that looks more like final code, but adds a lot of subtle unexpected issues on the way.


  • I’ve written a microkernel for an embedded device before and enjoy that kind of thing. I haven’t had to use any of my kernel experience in the year or so I’ve been on Linux.

    My Linux install (Fedora) took a while because I was reading up on a bunch of the options instead of just taking the defaults. I ended up mostly using the defaults anyway, and the ones I did change I kinda regret: the snapshots I skipped to save disk space would probably come in handy if I break something and don’t know how to fix it.





  • An alternative that will avoid the user agent trick is to curl | cat, which just prints the result of the first command to the console. curl > filename.sh will write it to a script file that you can review, then mark executable and run if you deem it safe. That’s safer than doing a curl | cat followed by a curl | bash, because it’s still possible for the second curl to return a different set of commands.

    You can also control the user agent with curl: spoof a browser’s user agent for one fetch, then do a second fetch with curl’s normal user agent and compare the results to detect malicious URLs in an automated way.

    A command-line analyzer tool would be nice for people who aren’t as familiar with the commands and arguments (and to defeat obfuscation), though I believe the general problem is undecidable, so it won’t likely ever be completely foolproof. Though maybe it could be if it were run in a sandbox to observe what the script actually does, instead of just being statically analyzed.
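    A minimal sketch of the fetch, review, and compare flow described above. The URL, user agent string, and filenames are placeholders for illustration, not from the original comments:

```shell
# Fetch the script to a file first so it can be reviewed before anything runs.
# (Same payload you'd get from `curl | bash`, but nothing executes yet.)
curl -fsSL https://example.com/install.sh > install.sh
less install.sh                       # read it
chmod +x install.sh && ./install.sh   # only if it looks safe

# Detect user-agent cloaking: fetch once pretending to be a browser,
# once with curl's default user agent, then diff the two payloads.
curl -fsSL -A "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0" \
    https://example.com/install.sh > payload_browser.sh
curl -fsSL https://example.com/install.sh > payload_curl.sh
diff payload_browser.sh payload_curl.sh \
    && echo "same payload for both user agents" \
    || echo "payloads differ: possible cloaking"
```

    Note this only catches user-agent-based cloaking; the save-to-file step also happens to defeat timing tricks that detect streaming straight into bash, since the whole payload is downloaded before anything executes.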




  • On the other hand, I suspect I’m on the spectrum but can’t relate to this at all. I love a variety of food and would get bored to death if I had to eat the same thing all the time.

    I also like new and unexpected textures in foods (unless I’m already familiar with it and know it isn’t supposed to have that texture). That shit’s delightful.

    But maybe it’s just the ADD overpowering anything else in the constant search for new sources of easy dopamine (while also overriding any urgency towards actually context shifting to eating or cooking mode).


  • Which is pretty crazy when you think about it: hitting a target about 1.3 light-seconds away. As in, if you could sight it, you’d be looking at where it was 1.3 seconds ago. And it’s moving at about 1 km/s relative to us. Don’t read that as km/h; that’s one kilometer every second. So by the time you see it, it’s already about 1.3 km from where you see it, and your laser takes another 1.3 seconds to get there, so you need to lead it by about 2.6 km to hit it, while aiming your sensor at the apparent image.

    Though it’s so far away that the lead doesn’t look that hard: the angular difference between where you aim the laser and where you pick up the return signal is only about 0.0004 degrees (a 2.6 km lead subtended at roughly 384,400 km). So in practice you can solve that problem by ignoring it, but still, just hitting that tiny distant moving target at all is impressive.
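    The numbers above can be checked with a quick back-of-the-envelope calculation. The distance and transverse speed are the rough round figures from the comment, not precise ephemeris values:

```shell
# Round-trip light time to the Moon, the lead distance, and the lead angle.
awk 'BEGIN {
  d  = 384400        # km, rough Earth-Moon distance (~1.28 light-seconds)
  c  = 299792.458    # km/s, speed of light
  v  = 1.0           # km/s, rough transverse speed of the Moon
  rt = 2 * d / c     # round-trip light time, seconds
  lead = v * rt      # km the Moon moves during the round trip
  deg  = lead / d * 180 / 3.14159265358979  # lead angle, degrees
  printf "round trip: %.2f s, lead: %.2f km, lead angle: %.6f deg\n", rt, lead, deg
}'
# round trip: 2.56 s, lead: 2.56 km, lead angle: 0.000382 deg
```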


  • They don’t gaf about religion beyond the tribal shit and that it gives them a way to act better than those who disagree with them and appeal to an authority that will never contradict them.

    Edit: though also, there’s nothing in Christianity that I know of that says aliens can’t exist. The Catholic Church at least has stopped trying to oppose science, though it still hasn’t recovered from the egg on its face over Galileo; but that was more about the worship of Aristotle than anything directly related to the religion itself. The whole “the world is only 6k years old and fossils were placed there to test our faith!” thing isn’t supported by the Catholic Church. Not to defend them; “believes in science” is still a low bar.