• Knusper@feddit.de · 1 year ago

    I’ve found debuggers practically unusable with asynchronous code. If there’s a timeout anywhere in the code path, it keeps counting wall-clock time while you’re stopped at a breakpoint, so it fires and the program misbehaves as soon as you resume.
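
    To make this concrete, here’s a minimal sketch in TypeScript (the names `withTimeout`, `doWork` and `step` are made up for illustration). Sit at the `debugger` statement for more than a second, and the outer timeout is already overdue when you resume, so the race resolves to the failure branch even though the work itself would have finished in time:

    ```typescript
    // A wall-clock timeout racing the actual work.
    const step = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

    function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
      const timeout = new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms)
      );
      return Promise.race([work, timeout]);
    }

    async function doWork(): Promise<string> {
      await step(100);
      debugger; // pause here for > 1000 ms ...
      await step(100); // ... and the timeout fires before this step resolves
      return "done";
    }

    withTimeout(doWork(), 1000)
      .then(console.log)
      .catch(console.error); // logs the timeout error after a long pause
    ```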

    Theoretically, this could be solved by ‘pausing’ the clock that drives the timeouts, but that’s not trivial.
    At least, I haven’t seen it solved anywhere yet.
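
    The closest existing thing I know of is the fake timers that test frameworks like Jest or Sinon offer: route every timeout through an injectable clock, which a debugger could then freeze while you’re paused. A purely hypothetical sketch (none of these names come from any real debugger API):

    ```typescript
    // An injectable clock: production code uses the real timers,
    // while a debugger or test harness swaps in one it controls.
    interface Clock {
      setTimeout(fn: () => void, ms: number): void;
    }

    const realClock: Clock = {
      setTimeout: (fn, ms) => void setTimeout(fn, ms),
    };

    // Time only advances when advance() is called, so "pausing"
    // is simply not advancing while stopped at a breakpoint.
    class ManualClock implements Clock {
      private now = 0;
      private timers: { due: number; fn: () => void }[] = [];

      setTimeout(fn: () => void, ms: number): void {
        this.timers.push({ due: this.now + ms, fn });
      }

      advance(ms: number): void {
        this.now += ms;
        const due = this.timers.filter((t) => t.due <= this.now);
        this.timers = this.timers.filter((t) => t.due > this.now);
        due.forEach((t) => t.fn());
      }
    }

    // Timeout-using code takes the clock as a dependency:
    function failAfter(clock: Clock, ms: number, onTimeout: () => void) {
      clock.setTimeout(onTimeout, ms);
    }
    ```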

    I mean, I’m still hoping I’m wrong about the above, but at this point I find it kind of ridiculous that debuggers are so popular to begin with, because it implies that synchronous code, or asynchronous code without timeouts, is still quite popular.

    • bort@feddit.de · 1 year ago

      because it implies that synchronous code […] is still quite popular.

      it isn’t?

      • Knusper@feddit.de · 1 year ago

        I’m sure it is; I’m just not terribly happy about that fact.

        Thing is, any code you write ultimately starts with input and funnels into output. Those two ends have to be asynchronous, because IO fundamentally is.
        That means if you want to write synchronous code anywhere between I and O, you have to block execution of that synchronous code while output is happening. And if you’re not at least spawning a new thread per input, you may even block your ability to handle new input.
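
        For example, a deliberately contrived sketch on a single-threaded runtime like Node (endpoints and durations made up): one synchronous handler stalls every other connection, because nothing else can run while it holds the thread.

        ```typescript
        import { createServer } from "node:http";

        // One synchronous, CPU-bound handler blocks the whole event
        // loop: every other request stalls until it finishes, and no
        // new input can be handled in the meantime.
        createServer((req, res) => {
          if (req.url === "/block") {
            const end = Date.now() + 5_000;
            while (Date.now() < end) {
              // busy-wait: synchronous "work" holding the only thread
            }
            res.end("done blocking\n");
          } else {
            res.end("fast\n"); // this response waits behind /block
          }
        }).listen(8080);
        ```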

        That can be fine if your program only has one job to do at a time. But as soon as it needs to do a second job, that blocking becomes a problem, and you’ll need to refactor lots of things into asynchronous code.
        If you just build it asynchronously from the start, it’s significantly less painful.
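
        Here’s the async-from-the-start version of the same sketch: the work is broken into awaited chunks (it could just as well go to a worker thread), so the event loop keeps serving other requests in between.

        ```typescript
        import { createServer } from "node:http";
        import { setTimeout as sleep } from "node:timers/promises";

        // The work yields back to the event loop between chunks,
        // so other requests are handled while it runs.
        createServer(async (req, res) => {
          if (req.url === "/work") {
            for (let i = 0; i < 50; i++) {
              await sleep(100); // one chunk of (stand-in) async work
            }
            res.end("done working\n");
          } else {
            res.end("fast\n"); // served immediately, even during /work
          }
        }).listen(8080);
        ```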

        But yeah, I guess it’s the usual case of synchronous code being fine for small programs, so tons of programmers never learn to feel comfortable with asynchronous code…