• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • I don’t understand how people can look at the insane progress gpt has made in the last 3 years and just arbitrarily decide that this is its maximum capability.

    So this is not entirely arbitrary, and probably part of it is also that they’re not just looking at the progress, but also at systemic issues.

    For example we know that larger models with more training material are more powerful. That’s probably the biggest contributing factor to the insane pace at which they’ve developed. But we’re also at a point where AI companies are saying they are running out of data. The models we have now are already trained on basically the entire open internet and a lot of non-public data too. So we can’t expect their capabilities to keep scaling with data unless we find ways to get humans to generate more of it. At the same time, the quality of data on the open internet is decreasing because more of it is generated by AI.
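    The scaling relationship above can be sketched with an illustrative power law. To be clear, the coefficients below are ballpark values in the style of published scaling-law fits, not exact numbers: the point is only the shape of the curve, where a fixed data supply caps how much extra model size alone can help.

```python
# Illustrative scaling-law sketch: loss falls as a power law in model
# parameters N and training tokens D, so with D fixed, growing N gives
# rapidly diminishing returns. Coefficients are ballpark/illustrative.
def est_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# With training data held fixed, each 10x jump in model size shaves
# off less loss than the previous one:
fixed_d = 15e12  # hypothetical fixed token budget
for n in (7e10, 7e11, 7e12):
    print(f"N={n:.0e}: loss = {est_loss(n, fixed_d):.3f}")
```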

    On the other hand, making them larger also has physical requirements, above all power. We are already at a point where AI companies are buying up nuclear power capacity for their data centers. So scaling in this way is close to the limit too; building new nuclear power plants takes ages.

    Another limiting factor is that LLMs can’t learn. They don’t need to learn to be useful, obviously we can use the current ones just fine, at least for some tasks. But it still caps the progress that’s possible for them.

    And then there is the entire AI bubble thing. The economic side, where we have a whole circular economy built on the idea that companies like OpenAI can spend billions on data centers. But they are losing money. Pretty much none of the AI companies are profitable other than the ones that only provide the infrastructure. Right now investors are scared enough of missing out on AGI to keep investing, but if they stopped, it would be over.

    And all this is super fragile. The current big players are all using the same approach. If one company makes that next step and finds a better approach than transformer LLMs, the others are toast. Or some Chinese company makes another breakthrough in energy efficiency. Or a hardware breakthrough removes the incentive to pay for hosted LLMs. Basically even progress can pop the bubble: if we can all run good-enough AI at home, the AI companies will never hit their revenue targets. Then the investment stops, and companies that bleed billions every quarter without investors backing them can die very quickly.

    Personally I don’t think they will stop getting better right now. And even if the models themselves plateau, I’m not convinced we understand them well enough to have run out of better ways to use them. But when people say that this is the peak, they’re looking at the bigger picture. They say that LLMs can’t get closer to human intelligence because fundamentally we have no way to make them learn, that the development model is not sustainable, and other reasons like that.



  • It’s definitely a mixed vibes game :)

    I think it’s decent as a roguelite shooter. But it’s a bit slow and you have to spend some time with it to appreciate it. I saw some reviews complaining that there are only four ranged and three melee weapons, which is true but completely disregards how different the variants of each can be. Take grenade launchers: besides the normal ones, there are launchers that shoot large plasma balls that linger where they land and do damage over time, launchers that shoot sticky timed explosives, and so on. And among those, some shoot one projectile at a time, some fire a whole set of bombs in one burst, some shoot smaller projectiles but really, really fast…

    The melee weapons are a bit unbalanced though, simply because the axe, the stereotypical slow heavy hitter, can generally kill normal enemies in one hit, while the other melee weapons, though not weak on paper, can’t. So quite often the axe is just the most convenient choice.

    They did a good job making us use both ranged and melee weapons via the energy system, though. On top of that there is a drone with one of several support functions, a shield, a dash, a dodge, and a super move for each weapon. So overall there is good variety in combat.

    They did make some puzzling decisions too, where it sometimes feels like they don’t understand the genre they picked. For a roguelite, the individual runs can be too long. If you beat a boss you get a shortcut to that level, but as a consumable that can be used only once. At the same time, the upgrades between runs take so many materials that skipping a few floors might not even pay off in the end. And if you die you lose some of the materials you collected, too (this can be eliminated with some relatively early upgrades, but who does that in a roguelite…). And the biggest one is just the price: for a roguelite it’s fairly expensive.

    The setting is pretty far out too, which for me is a huge plus just because it’s not the same as every other game I’ve been playing recently. I play so many fantasy games that playing a cyborg who livestreams shooting up a company in a dystopian future feels really fresh.



  • I have to do a bunch of relatively unsurmountable steps to do what should’ve taken half a minute. Like screenshot the profile and scrape the text with iOS Photos text recognition.

    The iOS workaround isn’t quite as unsurmountable as you don’t have to go through the Photos app at all. You can enter text selection mode directly from the screenshot without even saving it or leaving the app you’re in. And since iOS will look up any word you can select in the system dictionary and also translate any text you can select, you can do these things right there too.

    That said, I did once make a shortcut that lets me triple-tap the back of my phone to pop up a text version of everything the iOS OCR detects on screen. Not sure what I made it for though; I don’t really use it.


    I tried it and it’s way off for me because it gives too much weight to submitted posts. I don’t have very many submissions, so even when I selected recent only, it focused on one guide post I wrote many years ago for a game and made the profile 80% about that. But I guess that’s a problem at some stage before the LLM is involved. There are some other similar non-LLM problems too, like the most-used-terms section listing almost nothing but subreddit names.

    When I limited it to recent comments only it did a better job. It even listed “Humanity’s general incompetence” as the fifth of my “top 3” topics.



    I’m not sure about this… the build systems I work with are either so complex that something like cmake gets used, because that way we can support multiple build systems on different platforms without maintaining five separate sets of project files, or so simple that it frankly doesn’t matter whether it’s make or something else.

    And in the former case, i.e. the only case where I even care what tool we use, what I care about the most is speed, so I would want something like ninja instead of make (which cmake conveniently supports). I don’t want to micro-optimize by doing everything myself with redo.
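    As a sketch of how that combination works in practice (assuming CMake 3.13+ for the -S/-B flags and Ninja installed; the paths here are placeholders), switching the backend is just a generator flag:

```shell
# Same CMakeLists.txt, different backend: ask CMake for a Ninja build
# instead of Makefiles, then let CMake drive the build.
cmake -S . -B build -G Ninja
cmake --build build    # invokes ninja under the hood
```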




  • I had one of those laptops (a PowerBook). Yes, it had two slots that could be used for batteries. But that meant taking out the CD drive. Modern laptops don’t have that anymore so I’m not sure where the room for another battery would come from. The other thing is, it lasted at best 4-5 hours on one battery when doing light work. Its modern counterparts last 10-15 hours on one battery.

    The same thing actually happened with phones. But now we literally can’t spend half a minute not looking at them, and we also play energy-hungry games on them, etc. You can still get a phone with a replaceable battery though, e.g. a Fairphone or Volla.


    IMHO, Lisp is ok for theoretical fundamentals, but it won’t necessarily give you a practical understanding of how computers work. Functional languages are more like how mathematicians wish computers worked. All programming languages are abstractions, but functional languages abstract away how the underlying hardware works in ways that procedural languages don’t.

    And Java is pretty useful if you want to get a job, but if you don’t, there are less painful options. The difference from Python is mostly that Java often feels like it was intentionally made annoying to use. But it’s a pretty high-level language; I wouldn’t call it more fundamental or basic than Python. Java wins on performance, but that has nothing to do with how high-level it is.

    For practical fundamentals, if you actually want those, I’d recommend starting with microcontrollers and their assembly. Modern CPUs are so complex that learning fundamentals from scratch with assembly is quite difficult, but with smaller/older microcontrollers (like a PIC or something) it’s both more approachable and more useful. It’s almost a shame that hobbyist microcontroller platforms are pretty advanced now too. But you can still start with C and ignore MicroPython for a while if you want.

    If you want a gamified way to learn the absolute basics, there is a game called Turing Complete. It basically teaches you how to build your own CPU architecture from logic gates: you start with the basics of logic circuits, eventually build a simple CPU, and then another more complex one. I think it actually goes a bit too far in terms of fundamentals; it will take you forever to learn how to even make something write Hello World on a screen. But I guess this is the closest to how I started out. Of course for me the logic part was theory only, and the first CPU I learned to program was a 6502, not something I designed myself.
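    In the spirit of that bottom-up approach, here is a toy sketch of the first rungs of that ladder: a single NAND primitive, the other gates derived from it, and a half adder on top. (This mirrors the general gates-from-NAND idea, not the game’s exact level sequence.)

```python
# Everything below is built from one NAND primitive, the classic
# starting point for constructing logic from a single universal gate.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor_(a: int, b: int) -> int:
    # standard four-NAND XOR construction
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits: returns (sum_bit, carry_bit)."""
    return xor_(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```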






  • In 2004 I was still running a Usenet server. Online games were run by the community too. I spent so much time on MUDs.

    It seems like we are now in a cycle where someone builds something shinier and fancier, it briefly becomes the next big thing, then they find out it can’t make money (or just survive) unless it becomes significantly worse, and then the next big thing appears. But because of all the steps back, there is little real progress. Lemmy too is, functionally, not that different from Usenet. It has pictures and votes and is generally more modern. But what I see highlighted in contrast to reddit is that it’s distributed. Like Usenet. It’s not supposed to be a breakthrough, but after reddit it feels like one.