Sorry but webcams are older than OF. There would have been many OF models that were already doing webcamming.


Woah there, they didn’t add a keyboard button for AI. They replaced a button for it. My shitty Windows laptop now only has one Ctrl key to make room for this other key I’ll only ever press accidentally, just like the first fucking Windows key I didn’t want.


Just realized that even if there is no mechanism to get the exact date from any of these age tracking systems, they’ll be able to infer it anyway by watching for when a user/device transitions to the next bracket. Then they know the birthday that starts the new bracket falls somewhere between the last check and the current one.
Though maybe that data can be poisoned by occasionally transitioning backwards, so it looks like the user is editing their age up and down or something. But, on the other hand, missing or poisoned data is going to be a flag on its own at some point (if not already).
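The inference above is simple enough to sketch. This is a hypothetical toy, assuming a verification service that only ever returns a coarse bracket; the bracket names and dates are made up for illustration:

```python
# Hypothetical sketch: narrowing down a birthdate from coarse
# age-bracket checks. Brackets and dates are invented for illustration.
from datetime import date

# Observed (check_date, bracket) pairs for one user, in order.
observations = [
    (date(2025, 1, 10), "13-17"),
    (date(2025, 6, 1),  "13-17"),
    (date(2025, 9, 15), "18+"),   # transition spotted here
]

def birthday_window(obs):
    """Return the (earliest, latest) dates bounding the bracket
    transition: it happened after the last old-bracket check and
    on or before the first new-bracket check. None if no change."""
    for (d1, b1), (d2, b2) in zip(obs, obs[1:]):
        if b1 != b2:
            return d1, d2
    return None

lo, hi = birthday_window(observations)
print(lo, hi)  # the 18th birthday falls somewhere in that window
```

The more often the service is queried, the tighter the window gets, which is why frequent re-checks effectively leak the exact date.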


Not sure it will, as it would have to handle users older than that, so there’d be no reason for the default to be that age. Also depends on the UI (like my Steam bday is something like Jan 1 1900 because that’s the default date already entered).
I see that there’s an older version and a more recent remake (that may or may not have been cancelled). Guessing that young actor is in the original. Any comments on that?
Is that… Captain Jonathan Archer?


It’s not even a junior dev. It might “understand” a wider and deeper set of things than a junior dev does, but at least junior devs have some sense of coherence in what they build.
I use gen AI at work (because they want me to) and holy shit is it “deceptive”. In quotes because it has no intent at all, but it is just good enough to make it seem like it mostly did what was asked; look closer and you’ll see it isn’t following any kind of paradigm, it’s still just predicting text.
The amount of context it can include in those predictions is impressive, don’t get me wrong, but it has zero actual problem solving capability. What it appears to “solve” is just pattern matching the current problem to a previous one. Same thing with analysis, brainstorming, whatever activity can be labelled as “intelligent”.
Hallucinations are just cases where it matches a pattern that isn’t based on truth (either mispredicting or predicting a lie). But it also goes the other way: it misses patterns that are there, which is horrible for programming if you care at all about efficiency and accuracy.
It’ll do things like write a great helper function that it uses once but never again, maybe even writing a second copy of it the next time it would use it. Or forgetting instructions (in a context window of 200k, a few lines can easily get drowned out).
Code quality is going to suffer as AI gets adopted more and more. And I believe the problem is fundamental to the way LLMs work. The LLM-based patches I’ve seen so far aren’t going to fix it.
Also, as much as it’s nice to not have to write a whole lot of code, my software dev skills aren’t being used very well. It’s like I’m babysitting an expert programmer with Alzheimer’s who thinks they’re still at their prime and doesn’t realize they’ve forgotten what they did 5 minutes ago, except my company pays them big money, gets upset if we don’t use their expertise, and probably intends to use my AI chat logs to train my replacement, because everything I know can be parsed out of those conversations.


This is a big weakness in FOSS communities, hell, in capitalist existence. People with resources can afford to spend their own time, or hire someone else, to focus on their contributions like a full-time job, while most honest contributors will be doing it in their free time because they need to pay bills and such.


Or an alternate implementation of the API that fetches it to flag any programs that call it.


You must be the most dramatic person in the universe, calling that a “meltdown”.
The first one would have been obvious by the time Europeans reached the Americas, because reducing things to intensify flavours by removing water had been a known cooking technique for a long time by then (and I’m guessing was figured out soon after the invention of pots). From there, it would just take someone aware of that technique tasting raw sap, realizing it was sweet, trying to concentrate the sugar through reduction, and discovering it’s still pretty good as a syrup rather than dry sugar.
And extracting sap from trees goes way back, as that’s what frankincense and myrrh were (and it was disappointing to find out these “precious substances” just smell like church).
Even better if it’s butter made from peanuts.


Actually, I think that’s Windows 11. And even though it obviously never tries to get you to install Win 11, it’s still worse than the one that does.


Of magma or plasma, whichever is most convenient.
Apparently the win 12 rumours were just a hoax. Even Microslop isn’t that out of touch (at this point in time).
Big fish in a small pond.
Guessing I’m not the only one in here that had a similar pathway with video games. Maybe games in general, as chess was similar.
Some common ones: worker, consumer, commuter, fan, voter, reader, viewer, subscriber…
The whole “sex object” complaint never made sense to me tbh. Like I’ve always wanted “sex object” to be a part of my identity. It’s great for non-sexual stuff to be appreciated, but I’d want my partner to just be horny for me at least for a portion of the time.


Ah, that’s efficiency of use, and it depends more on how familiar you are with the software, as well as the design and the task. Editing an image or video, for example, is going to be a lot easier with a GUI than a command-line interface (other than generating slop, I guess).
When people talk about how efficient software is, they’re usually referring to the amount of resources it uses (including time) to run its processes.
E.g. an Electron app is running a browser that manipulates and renders HTML elements driven by JavaScript (or other scripted/semi-compiled code). There’s an interpreter that has to process that code to do the manipulation, and then an HTML renderer to turn the result into an image to display on the screen. The interpreter and renderer run as machine code on the CPU, interacting with the window manager and the kernel.
A native app skips the interpreter and HTML renderer: it itself runs as machine code on the CPU and interacts with the window manager and kernel directly. This saves a bunch of memory, since there’s no intermediate HTML state that needs to be stored, and time, by cutting out the interpret-and-render steps.
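The overhead of that intermediate state can be illustrated with a toy Python analogy. To be clear, this has nothing to do with Electron’s actual internals; the made-up “build markup, then parse it back” step just stands in for keeping an extra representation around before producing the final result:

```python
# Toy analogy only: compare producing a result directly vs. going
# through a throwaway HTML-like intermediate string first.
import sys
import time

items = [f"row {i}" for i in range(100_000)]

# "Electron-ish" path: build an intermediate markup string, then
# parse the data back out of it before "displaying".
t0 = time.perf_counter()
html = "".join(f"<li>{s}</li>" for s in items)             # intermediate state
parsed = [chunk[:-5] for chunk in html.split("<li>")[1:]]  # strip "</li>"
t1 = time.perf_counter()

# "Native-ish" path: use the data directly, no intermediate state.
t2 = time.perf_counter()
direct = list(items)
t3 = time.perf_counter()

assert parsed == direct  # same end result either way
print(f"with intermediate: {t1 - t0:.4f}s, {sys.getsizeof(html)} extra bytes held")
print(f"direct:            {t3 - t2:.4f}s")
```

Both paths end up with the same data; the first one just spends extra time and memory on a representation nobody asked for, which is the rough shape of the complaint about Electron.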
Personally, I prefer laziness as a motivator over greed. It’s much more likely to lead to low maintenance solutions that still keep maintenance as an option over replacement.
Though I’ve curated my laziness to the point where I’ll do chores out of laziness, because I know they’ll be more work later.
But I’d also spend 2 days writing a script to avoid spending 2 hours doing something tedious.