  • Just realized that even if none of these age-tracking systems exposes an exact birthdate, whoever runs them can still infer it by watching when a user/device transitions to the next bracket: the birthday that starts the new bracket has to fall somewhere between the last check and the current one, and the more frequent the checks, the narrower that window gets (sketched below).

    Though maybe that data can be poisoned by occasionally transitioning backwards, so it looks like the user keeps editing their age up and down. On the other hand, missing or obviously poisoned data is going to become a flag in its own right at some point (if it isn't already).
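
    To make the inference concrete, here's a toy sketch. The bracket indices, data shapes, and infer_birthday_window helper are all made up for illustration; this isn't any real verification system's API.

```python
from datetime import date

def infer_birthday_window(
    observations: list[tuple[date, int]],
) -> tuple[date, date] | None:
    """Given (check_date, bracket_index) pairs sorted by date, return the
    date range that must contain the birthday which triggered the first
    observed bracket transition, or None if no transition was seen."""
    for (prev_date, prev_bracket), (cur_date, cur_bracket) in zip(
        observations, observations[1:]
    ):
        if cur_bracket > prev_bracket:
            # The user crossed into the next bracket somewhere in this gap,
            # so the birthday's anniversary falls in (prev_date, cur_date].
            return prev_date, cur_date
    return None

# Monthly checks narrow the birthday to ~30 days; daily checks to a single day.
checks = [(date(2025, 2, 1), 1), (date(2025, 3, 1), 1), (date(2025, 4, 1), 2)]
print(infer_birthday_window(checks))  # (2025-03-01, 2025-04-01)
```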

  • It’s not even a junior dev. It might “understand” a wider and deeper set of things than a junior dev does, but at least a junior dev brings some coherence to everything they build.

    I use gen AI at work (because they want me to) and holy shit is it “deceptive”. In quotes because it has no intent at all; it’s just good enough to make it seem like it mostly did what was asked. Look closer, though, and you’ll see it isn’t following any kind of paradigm: it’s still just predicting text.

    The amount of context it can include in those predictions is impressive, don’t get me wrong, but it has zero actual problem-solving capability. What it appears to “solve” is just pattern-matching the current problem to a previous one. The same goes for analysis, brainstorming, or whatever other activity gets labelled “intelligent”.

    Hallucinations are just cases where it matches a pattern that isn’t grounded in truth (either a misprediction or a predicted lie). But it also goes the other way: it misses patterns that are there, which is horrible for programming if you care at all about efficiency and accuracy.

    It’ll do things like write a great helper function, use it once, and never call it again, maybe even writing a second copy the next time it needs the same logic. Or it forgets instructions (in a context window of 200k tokens, a few lines can easily get drowned out).
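
    A contrived illustration of that duplication pattern (written by hand for this comment, not actual model output):

```python
def normalize_email(email: str) -> str:
    """Perfectly good helper the model writes early on..."""
    return email.strip().lower()

def register_user(email: str) -> dict:
    # ...uses exactly once here...
    return {"email": normalize_email(email)}

def update_email(user: dict, email: str) -> dict:
    # ...and later re-predicts inline instead of reusing, because nothing
    # forces the generated text to stay consistent with earlier output.
    user["email"] = email.strip().lower()
    return user
```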

    Code quality is going to suffer as AI gets adopted more widely, and I believe the problem is fundamental to the way LLMs work. The LLM-based patches I’ve seen so far aren’t going to fix it.

    Also, as nice as it is to not have to write so much code myself, my software dev skills aren’t being used well. It’s like I’m babysitting an expert programmer with Alzheimer’s who thinks they’re still in their prime and doesn’t realize they’ve forgotten what they did 5 minutes ago. Meanwhile, my company pays them big money, gets upset if we don’t use their expertise, and probably intends to use my AI chat logs to train my replacement, since everything I know can be parsed out of those conversations.

  • The first one would have been obvious by the time Europeans reached the Americas: reduction, boiling off water to intensify flavours, had been a known cooking technique for a long time by then (my guess is it was figured out soon after the invention of pots). From there, it just takes someone familiar with that technique tasting raw sap, realizing it’s sweet, trying to concentrate the sugar by reduction, and discovering that the result is still pretty good as a syrup rather than as dry sugar.

    And tapping trees for their sap goes way back; that’s essentially what frankincense and myrrh were (and it was disappointing to find out those “precious substances” just smell like church).

  • Ah, that’s efficiency of use, and it depends more on how familiar you are with the software, along with the design and the task. Editing an image or a video, say, is going to be a lot easier with a GUI than with a command-line interface (other than generating slop, I guess).

    When people talk about how efficient software is, though, they’re usually referring to the amount of resources (including time) it uses to do its work.

    E.g. an Electron app is running an entire browser: JavaScript (or other interpreted/semi-compiled code) manipulates HTML elements, so there’s an interpreter that has to process that code to do the manipulation, and then an HTML renderer to turn the result into an image on screen. The interpreter and renderer themselves run as machine code on the CPU, interacting with the window manager and the kernel.

    A native app skips the interpreter and HTML renderer entirely: it runs as machine code on the CPU itself and talks to the window manager and kernel directly. That saves a bunch of memory, since there’s no intermediate HTML state to store, and time, by cutting out the interpretation and HTML-rendering steps.
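
    As a loose analogy (Python standing in for the interpreted layer; this measures interpreter overhead in general, not Electron itself), here’s the cost of pushing the same work through an interpreter one step at a time versus handing it to already-compiled machine code:

```python
import timeit

N = 1_000_000

def interpreted_sum() -> int:
    total = 0
    for i in range(N):  # every iteration round-trips through the interpreter
        total += i
    return total

def builtin_sum() -> int:
    return sum(range(N))  # the loop runs inside the C-implemented builtin

print("interpreted:", timeit.timeit(interpreted_sum, number=10))
print("compiled:   ", timeit.timeit(builtin_sum, number=10))
# On a typical machine the builtin is several times faster; the gap is the
# price of interpreting each step instead of executing machine code directly.
```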