

  • Yeah, the Flint 3 seems like a worse router overall when it comes to computational power and chipset. The only things it has going for it are WiFi 7 (instead of 6) and 2.5G Ethernet on all ports. The Flint 3 is also more power-hungry, which isn’t great given the high energy costs in Europe.

    Most people don’t benefit from WiFi 7 (WiFi 6 is already good enough for almost everything), and if you want more than two 2.5G ports, consider extending the router with a (managed) switch.




  • GL.Inet products that use Mediatek chipsets are great, since you can usually flash standard OpenWRT on them. I would avoid their routers with other chipsets, since those are unlikely to get proper OpenWRT support.

    (My MT-6000 isn’t exactly cheap, but it is an extremely capable router. That is their top-of-the-line model though; they have cheaper options.)






  • V2 is roughly Nehalem. V3 is approximately Haswell (IIRC it corresponds to a least common denominator of AMD and Intel CPUs from around that time). V4 requires AVX-512 (that is really the only difference in enabled instructions compared to V3).

    Both my daily driver computers can do v3, but not v4. (I like retro computing, so I also have far older computers that can’t even do 64-bit at all, but I don’t run modern software on those for the most part.)
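    (If you are curious which level your own CPU reaches, a rough way to probe it from Rust is the standard runtime feature-detection macro. This sketch checks only a few representative flags per level, not the full official lists:)

```rust
// Rough sketch: approximate the x86-64 microarchitecture levels by probing a
// few representative CPU features at runtime. The real level definitions
// include more flags than this; it is a simplification.
fn main() {
    #[cfg(target_arch = "x86_64")]
    {
        let v2 = std::arch::is_x86_feature_detected!("sse4.2")
            && std::arch::is_x86_feature_detected!("popcnt");
        let v3 = v2
            && std::arch::is_x86_feature_detected!("avx2")
            && std::arch::is_x86_feature_detected!("fma")
            && std::arch::is_x86_feature_detected!("bmi2");
        let v4 = v3 && std::arch::is_x86_feature_detected!("avx512f");
        println!("x86-64-v2: {v2}, v3: {v3}, v4: {v4}");
    }
}
```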


  • I think a lot of modern software is bloated. I remember when GUI programs used to fit on a floppy or two. Nowadays we have bloated Electron programs taking hundreds of MB of RAM just to show a simple text editor, because each one drags a whole browser along with it.

    I love snappy software, and while I don’t think we need to go back to programs fitting on a single floppy and using hundreds of KB of RAM, the pendulum does need to swing back a fair bit. I rewrote some CLI programs in the last few years that I found slow (one of my own, previously written in Python; the other written in C++ but not properly designed for speed). I used Rust, which certainly helped compared to Python, but the real key was thinking carefully about the data structures up front and designing for performance, plus lots of profiling and benchmarking as I went along.
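    (The “benchmarking as I went along” part doesn’t need anything fancy. A minimal harness with the criterion crate, which is just one common choice and not necessarily what was used for these programs, looks roughly like this:)

```rust
// Minimal criterion benchmark sketch (criterion is one common choice, not
// necessarily what was used for the programs described here).
// Put this in benches/speed.rs and add criterion as a dev-dependency.
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

fn bench_sum(c: &mut Criterion) {
    let data: Vec<u64> = (0..100_000).collect();
    c.bench_function("sum 100k u64", |b| {
        // black_box keeps the optimiser from deleting the work being measured.
        b.iter(|| black_box(&data).iter().sum::<u64>())
    });
}

criterion_group!(benches, bench_sum);
criterion_main!(benches);
```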

    The results? The Python program was sped up by 50x, the C++ program by 320x. In both cases this changed them from “irritating delay” to “functionally instant for human perception”.

    The two programs:

    I also rewrote, in Rust, a program I use to manage my Arch Linux configs (originally written in bash). I added features I wanted as well, so it was never directly comparable (and I don’t have numbers), but it made “apply configs to system” take seconds instead of minutes. (https://github.com/VorpalBlade/paketkoll/tree/main/crates/konfigkoll)

    Oh and want a faster way to check file integrity vs the package manager on your Linux distro? Did that too.

    Now what was the point I was making again? Maybe I’m just sensitive to slow software. I disable all animations in GUIs, after all; all those milliseconds of waiting add up over the years. Computers are amazingly fast these days, and we shouldn’t make them slower than they have to be. So I think far more software should count as performance critical. Anything a human has to wait for should be.

    Faster software is more efficient as well: it uses less electricity and makes your phone or laptop battery last longer (since the CPU can go back to sleep sooner). It also saves you money in the cloud; imagine saving 30-50% on your cloud bill by renting fewer resources. Over the last few years I have seen multiple reports of exactly that happening when companies rewrite in Rust (C++ would also do it, but why would you want to move to C++ these days?). And hyperscalers save millions in electricity by optimising their logging library by just a few percent.

    Most modern software on modern CPUs is bottlenecked on memory bandwidth, so it makes sense to spend effort on data representation. Sure, start with some basic profiling to find the obvious stupid things (all non-trivial software that hasn’t been optimised has stupid things), but once you have exhausted those, you need to look at memory layout.
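    (To make the memory-layout point concrete, here is a toy struct-of-arrays versus array-of-structs sketch; all the names are made up for the example:)

```rust
// Toy illustration of array-of-structs (AoS) vs struct-of-arrays (SoA).
// If the hot loop only reads `health`, the SoA layout streams one dense
// array through the cache instead of dragging every other field of every
// element along with it.

struct MonsterAos {
    health: f32,
    position: [f32; 3],
    velocity: [f32; 3],
}

struct MonstersSoa {
    health: Vec<f32>,
    position: Vec<[f32; 3]>,
    velocity: Vec<[f32; 3]>,
}

// Touches 28 bytes per element but only uses 4 of them.
fn total_health_aos(monsters: &[MonsterAos]) -> f32 {
    monsters.iter().map(|m| m.health).sum()
}

// Touches exactly the 4 bytes per element it needs: dense, prefetch-friendly,
// and easy for the compiler to auto-vectorise.
fn total_health_soa(monsters: &MonstersSoa) -> f32 {
    monsters.health.iter().sum()
}

fn main() {
    let aos: Vec<MonsterAos> = (0..1000)
        .map(|i| MonsterAos { health: i as f32, position: [0.0; 3], velocity: [0.0; 3] })
        .collect();
    let soa = MonstersSoa {
        health: (0..1000).map(|i| i as f32).collect(),
        position: vec![[0.0; 3]; 1000],
        velocity: vec![[0.0; 3]; 1000],
    };
    println!("{} {}", total_health_aos(&aos), total_health_soa(&soa));
}
```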

    (My dayjob involves hard realtime embedded software. No, I swear that is unrelated to this.)


  • As far as I know they do a few things (though it is hard to find a comprehensive list), including building packages for newer microarchitecture levels such as the aforementioned x86-64-v3. The default on x86-64 Linux is still to build programs that work on the original AMD Athlon 64 from the early 2000s. That really doesn’t make sense any more, and v3 is a good default that still covers the last several years of CPUs.
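    (For a single Rust project the same idea is just a compiler flag; this sketch uses rustc’s generic target-cpu option, nothing Cachy-specific:)

```rust
// Sketch: the microarchitecture-level idea applied to one Rust program.
// Building with, for example:
//
//     RUSTFLAGS="-C target-cpu=x86-64-v3" cargo build --release
//
// lets the compiler use AVX2/FMA/BMI2 everywhere, so plain loops like this
// one can be auto-vectorised without any source changes. The resulting binary
// will crash with an illegal-instruction error on pre-v3 CPUs, which is
// exactly the trade-off distros have to weigh.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = vec![1.0_f32; 1024];
    let b = vec![0.5_f32; 1024];
    println!("{}", dot(&a, &b));
}
```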

    There are many interesting added instructions, and for some programs they can make a large difference, but that will vary wildly from program to program. Phoronix has also done some benchmarks of Arch vs Cachy, and since the Phoronix Test Suite mostly uses its own binaries, what that shows is the difference that the kernel, glibc and system tuning alone make. And those results do look promising.

    > I don’t want to spill some memes worth Arch elitism here, but I just doubt Arch derivatives crowd knows what x86-64-v3 thing is. Truth be told, I barely understand that myself.

    I think you just did show a lot of elitism and arrogance there. I expect software developers working on any distro to know about this, but not necessarily the users of said distros. (For me, knowing about low level optimisation is part of my dayjob.)

    Also, Cachy in particular does seem to have some decent developers. One of their devs is the guy who maintains the legacy Nvidia drivers on the AUR, which involves a fair bit of kernel programming to adapt them to changes in new kernel releases (Nvidia themselves stop doing that after the first year of a driver branch becoming legacy).



  • XOR lists are obscure and cursed but cool. And not useful on modern hardware as the CPU can’t predict access patterns. They date from a time when every byte of memory counted and CPUs didn’t have pipelines.
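    (For the curious, the trick itself is tiny. Here is a rough sketch that uses indices into a Vec as the arena instead of raw pointers; purely illustrative, not something you should actually ship:)

```rust
// Minimal sketch of the XOR-link trick, using indices into a Vec as an arena
// instead of raw pointers (all names are made up for the example).
// Each node stores prev ^ next in a single field; traversal needs the
// previous index to recover the next one.
const NIL: usize = usize::MAX;

struct Node {
    value: i32,
    link: usize, // prev_index ^ next_index
}

fn traverse(nodes: &[Node], head: usize) {
    let mut prev = NIL;
    let mut cur = head;
    while cur != NIL {
        println!("{}", nodes[cur].value);
        let next = nodes[cur].link ^ prev; // recover next from link and prev
        prev = cur;
        cur = next;
    }
}

fn main() {
    // Build the list 10 -> 20 -> 30 by hand (indices 0, 1, 2).
    let nodes = vec![
        Node { value: 10, link: NIL ^ 1 }, // prev = NIL, next = 1
        Node { value: 20, link: 0 ^ 2 },   // prev = 0,   next = 2
        Node { value: 30, link: 1 ^ NIL }, // prev = 1,   next = NIL
    ];
    traverse(&nodes, 0); // prints 10, 20, 30
}
```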

    (In general, all linked lists and trees are terrible for performance on modern CPUs. Prefer vectors, or B-trees with large fanout factors. There are still some niche use cases for linked lists, for example in kernels, but unless you know exactly what you are doing you shouldn’t use linked data structures.)

    EDIT: Fixed spelling






  • Agreed, I run Arch on my desktop and laptop, because it is more stable (in the sense of fewer bugs; things like suspend/resume work reliably, for example) than any other distro I have used.

    But on my VPS and my Pi I run Debian because it is more stable (in the sense of fewer upgrades that could break things). I can enable unattended upgrades there, which I would never do on my Arch system (though it is incredibly rare for those to break).

    Also: if someone said they were a (self-proclaimed) “semi noob”, I would not recommend Arch. I have used Linux since 2002, and as my main OS since 2006. (Furthermore, I’m a software developer in C/C++/Rust.) While Arch is a great distro, don’t start with Arch.