• 0 Posts
  • 89 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • Sure. Almost 40 years ago I started learning to program as a kid, and the only reason I knew the word “syntax” at all was because the default error message in my computer’s BASIC interpreter was “SYNTAX ERROR”. I didn’t learn what it actually meant until many years later, in English class.

    I taught myself with the excellent Usborne books, which are now all downloadable for free from their website. You won’t be able to use them as-is (unless you get your kids to use an emulator for an old 8-bit home computer), but I’m sure you can still get some useful ideas, and maybe even copy small sections here and there.

    As others have mentioned, my school also taught us a little LOGO, which was a bit of fun for me but rather simple. I remember that most of my classmates enjoyed it, though.


  • How many techie types have had someone come to them and say something like “Hey, you know tech thing XYZ? You know how it sucks? Well I’ve got a great idea: make a BETTER one! So what do you say? You whip it up in an afternoon, I’ll handle marketing, and we’ll be rich!”

    Like they really thought that the issue was just that no-one could see the flaws. They thought the fix was super easy and that they were just the first person clever enough to see it.






  • “the boss can detect headphones going on your head and music starting from 50 feet away and instantly be behind you with a burning question that doesn’t make any sense.”

    I’m sure you realize that the question doesn’t make any sense because they had to think of it on the spot, just to prove that you can’t wear headphones in the office due to all the important ambient office talk you need to be a part of.

    One of my best, most competent bosses once said to the team “I don’t understand how you guys can work while listening to music, but as long as your output stays high, I’m not going to interfere.”



  • How about that worst of both worlds, the tutorial where the author starts out writing as if their audience only barely knows what a computer is, gets fed up partway through, and vomits out the rest in a more obtuse and less complete form than they would’ve otherwise?

    1. Turn on your computer. Make sure you turn on the “PC” (the big box part) as well as the “monitor” (TV-like part).

    2. Once your computer is ready and you can see the desktop, open your web browser. This might be called “Chrome”, “Safari”, “Edge”, or something else. It’s the same program you open to use “the Google”.

    3. In the little bar near the top of the window where you can write things, type “https://www.someboguswebsite.corn/download/getbogus.html” and press the Enter key.

    4. Download the software and unarchive it to a new directory in your borklaving software with the appropriate naming convention.

    5. Edit the init file to match your frooping setup.

    6. If you’re using Fnerp then you might need to switch off autoglomping. Other suites need other settings.

    7. Use the thing. You know, the thing that makes the stuff work right. Whatever.

    Congratulations! You’re ready to go!







  • The PlayStation 1 had a copy protection system that measured physical properties of the disc which couldn’t be replicated by normal CD writers. There were a few ways to get around this, but being able to put a burned CD into your console and boot directly from it into the game (as you would with an original) required installing a fairly complex mod chip. A lot of people used the “swap trick” instead, which is how I used to play my imported original games.

    The Dreamcast’s copy protection relied heavily on using high-density GD-ROM discs rather than regular CDs, even though they look the same to the naked eye. There were other checks in place as well, but simply using GD-ROMs was pretty effective in and of itself.

    Unfortunately, Sega also added support for a thing called “MIL-CD” to the Dreamcast. MIL-CD was intended to allow regular music CDs to include interactive multimedia components when played on the console. However, MIL-CD worked with otherwise completely standard CDs, including burned ones, and had no copy protection of its own, because Sega wanted to make it as easy as possible for other companies to make MIL-CDs so the format could spread and hopefully become popular.

    Someone found a way to “break out” of the MIL-CD system and take over the console to run arbitrary code like a regular, officially released game, and that was the end of the Dreamcast’s copy protection. People couldn’t just copy an original game disc 1:1 and have it work; some work had to be done on the game to put it on a burned CD and still have it run (sometimes quite a lot of work, actually), but no console modification was needed. Anyone with a Dreamcast released before Sega patched this issue (which seems to be most of them) can simply burn a CD and play it on their console, provided they can get a cracked copy of the game.



  • I would also probably try to plug USB drives in once a year or so if I were being diligent, but in reality I recently found a handful of USB flash drives that I’d stored in a box in my parents’ unattached garage, and every one of them could be read completely without any issues. They ran the gamut of build quality from expensive, name-brand drives to no-name dollar-store keychains. They’d been sitting in that box, untouched, for a little over nine years, and I’m pretty sure that some of them hadn’t been used for several years even before that.

    I wouldn’t rely on it for critical data, but USB flash might not be so terrible.


  • Go for it, if it’s to satisfy your own curiosity, but there’s virtually no practical use for it these days. I had a personal interest in it at uni, and a project involving coding in assembly for an imaginary processor was a small part of one optional CS course. Over the years I’ve dabbled with asm for 32-bit Intel PCs and various retro consoles; at the moment I’m writing something for the Atari 2600.

    In the past, assembly was useful for squeezing performance out of low-powered and embedded systems. But now that “embedded” includes SoCs with clock speeds in the hundreds of MHz and several megabytes of RAM, and optimizing compilers have improved greatly, the tiny potential performance gain is almost always outweighed by the overhead of hand-writing and maintaining assembly (and you have to be very good at it before you can even match most optimizing compilers, let alone beat them).


  • I’m in a similar boat to you. I ripped almost all of my CDs to 320kbps mp3s for portability, but then I wanted to put all of them (a substantial number) plus a bunch more (my partner’s collection) on a physically tiny USB stick (that I already had) to just leave plugged into our car stereo’s spare port. I had to shrink the files somehow to make them all fit, so I used ffmpeg and a little bash file logic to keep the files as mp3s, but reduce the bitrate.
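
    The bash part was really just a loop over every file, calling ffmpeg to re-encode at a lower bitrate. Here’s a rough sketch of the same idea in Python (the folder names and the 192k target are only examples, not my actual setup):

        # Rough sketch: re-encode every .mp3 under SRC into DST at a lower bitrate.
        # Requires ffmpeg on the PATH; 192k (or the VBR option) is just an example target.
        import subprocess
        from pathlib import Path

        SRC = Path("music_320k")   # example input folder
        DST = Path("music_192k")   # example output folder

        for infile in SRC.rglob("*.mp3"):
            outfile = DST / infile.relative_to(SRC)
            outfile.parent.mkdir(parents=True, exist_ok=True)
            subprocess.run(
                ["ffmpeg", "-i", str(infile),
                 "-codec:a", "libmp3lame", "-b:a", "192k",   # or "-q:a", "2" for VBR
                 str(outfile)],
                check=True,
            )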

    128kbps mp3 is passable for most music, which is why the commercial industry focused on it in the early days. However, if your music has much “dirty” sound in it, like loud drums and cymbals or overdriven electric guitars, 128kbps tends to alias them somewhat and make them sound weird. If you stick to mp3 I’d recommend at least 160kbps, or better, 192kbps. If you can use variable bit rate, that can be even better.

    Of course, even 320kbps mp3 isn’t going to satisfy audiophiles, but it sounds like you just want to have all your music with you at all times as a better alternative to radio, and your storage space is limited, similar to me.

    As regards transcoding, you may run into some aliasing issues if you try to switch from one codec to another without also dropping a considerable amount of detail. But unless I’ve misunderstood how most lossy audio compression works, taking an mp3 from a higher to a lower bitrate isn’t transcoding, and should give you the same result as encoding the original lossless source at the lower bitrate. Psychoacoustic models split a sound source into thousands of tiny component sounds, and keep only the top X “most important” components. If you later reduce that to the top Y most important components by reducing the bitrate (while using the same codec), shouldn’t that be the same as just taking the top Y most important components from the original, full group?
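
    As a toy illustration of that intuition (nothing like how mp3 actually works internally, just the “keep the top components” idea): if you rank components by importance, the top Y of the top X is the same set as the top Y of everything, as long as X is at least Y. Real encoders also re-quantize whatever they keep, so a re-encode can lose a little more than encoding straight from the lossless source, but the basic picture is this:

        # Toy model only: rank "components" by magnitude and keep the top N.
        # Shows that top-Y of a top-X subset equals top-Y of the full set when X >= Y.
        import random

        def keep_top(components, n):
            return sorted(components, key=abs, reverse=True)[:n]

        full = [random.uniform(-1, 1) for _ in range(1000)]  # stand-ins for spectral components

        top_x = keep_top(full, 400)          # "high bitrate" encode of the original
        top_y_of_x = keep_top(top_x, 150)    # re-encode that at a lower "bitrate"
        top_y_direct = keep_top(full, 150)   # encode the original straight to the lower "bitrate"

        print(sorted(top_y_of_x) == sorted(top_y_direct))   # True in this toy model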


  • I’m not too knowledgeable about the detailed workings of the latest hardware and APIs, but I’ll outline a bit of history that may make things easier to absorb.

    Back in the early 1980s, IBM was still setting the base designs and interfaces for PCs. The last video card they released that was an accepted standard was VGA. It was a standard because no matter whether the system your software was running on had an original IBM VGA card or a clone, you knew that calling interrupt X with parameters Y and Z would have the same result. You knew that in 320x200 mode (you knew that there would be a 320x200 mode) you could write to the display buffer at memory location ABC, and that what you wrote needed to be bytes that indexed a colour table at another fixed address in the memory space, and that the ordering of pixels in memory was left-to-right, then top-to-bottom. It was all very direct, without any middleware or software APIs.
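
    To make “bytes that index a colour table” concrete, here’s a toy mock-up of that kind of indexed, linear framebuffer in Python (ordinary data structures, not real VGA programming; the sizes and values are only illustrative):

        # Toy mock-up of an indexed, linear framebuffer, conceptually like VGA's 320x200 mode.
        # Not real hardware access: the "framebuffer" is just a bytearray, not video memory.
        WIDTH, HEIGHT = 320, 200

        palette = [(0, 0, 0)] * 256      # 256 palette entries, each an (R, G, B) tuple
        palette[1] = (255, 0, 0)         # entry 1 = red (an arbitrary, illustrative choice)

        framebuffer = bytearray(WIDTH * HEIGHT)   # one byte per pixel; each byte is a palette index

        def put_pixel(x, y, colour_index):
            # Pixels run left-to-right, then top-to-bottom, so the offset is y*WIDTH + x.
            framebuffer[y * WIDTH + x] = colour_index

        put_pixel(160, 100, 1)                            # plot a red pixel mid-"screen"
        print(palette[framebuffer[100 * WIDTH + 160]])    # -> (255, 0, 0)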

    But IBM dragged their feet over releasing a new video card to replace VGA. They believed that VGA still had plenty of life in it. The clone manufacturers started adding little extras to their VGA clones. More resolutions, extra hardware backbuffers, extended palettes, and the like. Eventually the clone manufacturers got sick of waiting and started releasing what became known as “Super VGA” cards. They were backwards compatible with VGA BIOS interrupts and data structures, but offered even further enhancements over VGA.

    The problem for software support was that it was a bit of a wild west in terms of interfaces. The market quickly solidified around a handful of “standard” SVGA resolutions and colour depths, but under the hood every card had quite different programming interfaces, even between different cards from the same manufacturer. For a while, programmers figured out tricky ways to detect which card a user had installed, and/or let the user select their card in an ANSI text-based setup utility.

    Eventually, VESA standards were created, and various libraries and drivers were produced that took a lot of this load off the shoulders of application and game programmers. We could make a standardised call to the VESA library, and it would have (virtually) every video card perform the same action (if possible, or return an error code if not). The VESA libraries could also tell us where and in what format the card expected to receive its writes, so we could keep most of the speed of direct access. This was mostly still in MS-DOS, although Windows also had video drivers (for its own use, not exposed to third-party software) at the time.
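
    In other words, the shape of the fix was an abstraction layer: one standard set of calls in front, with the card-specific pokes hidden behind it. A very hand-wavy sketch of that shape in Python (every class and method name here is made up purely for illustration):

        # Hand-wavy sketch of a driver abstraction layer; all names are invented for illustration.
        from abc import ABC, abstractmethod

        class VideoDriver(ABC):
            """The one standard interface that applications call."""
            @abstractmethod
            def set_mode(self, width: int, height: int, bpp: int) -> bool: ...
            @abstractmethod
            def framebuffer_address(self) -> int: ...

        class BrandXDriver(VideoDriver):
            # Each vendor hides its card-specific register pokes behind the same interface.
            def set_mode(self, width, height, bpp):
                return (width, height, bpp) == (640, 480, 8)   # pretend this card only does 640x480x8
            def framebuffer_address(self):
                return 0xA0000                                 # illustrative value only

        def start_game(driver: VideoDriver):
            # The application only ever talks to the standard interface
            # and gets an error back if the card can't do what was asked.
            if not driver.set_mode(640, 480, 8):
                raise RuntimeError("video mode not supported")
            print(hex(driver.framebuffer_address()))

        start_game(BrandXDriver())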

    Fast-forward to the introduction of hardware 3D acceleration into consumer PCs. This was after the release of Windows 95 (sorry, I’m going to be PC-centric here, but 1: it’s what I know, and 2: I doubt that Apple was driving much of this as they have always had proprietary systems), and using software drivers to support most hardware had become the norm. Naturally, the 3D accelerators used drivers as well, but we were nearly back to that SVGA wild west again; almost every hardware manufacturer was trying to introduce their own driver API as “the standard” for 3D graphics on PC, naturally favouring their own hardware’s design. On the actual cards, data still had to be written to specific addresses in specific formats, but the manufacturers had recognized the need for a software abstraction layer.

    OpenGL on PC evolved from an effort to create a unified API for professional graphics workstations. PC hardware manufacturers eventually settled on OpenGL as a standard which their drivers would support. At around the same time, Microsoft had seen the writing on the wall with regard to games in Windows (they sucked) and had started working on the “WinG” graphics API back in Windows 3.1, which after a time became DirectX. Originally, DirectX only supported 2D video operations, but Microsoft worked with hardware manufacturers to add 3D acceleration support.

    So we still had a bunch of different hardware designs, but they shared a lot of fundamental similarities, which allowed for a standard API that could easily translate for all of them. And this is how the hardware and APIs have continued to evolve hand-in-hand: from fixed pipelines in early OpenGL/DirectX, to less-dedicated hardware units in later versions, to the extremely generalized parallel hardware that prompted the introduction of Vulkan, Metal, and the latest DirectX versions.

    To sum up, all of these graphics APIs represent a standard “language” for software to use when talking to graphics drivers, which then translate those API calls into the correctly formatted writes and reads that actually make the graphics hardware jump. That’s why we sometimes have issues when a manufacturer’s drivers don’t implement the API correctly, or when the API specification has a point that isn’t defined clearly enough, so different drivers interpret the same call in slightly different ways.