• 35 Posts
  • 1.83K Comments
Joined 6 years ago
Cake day: May 31st, 2020



  • We deployed a client application in a Docker container on Windows 10. It could not connect to the backend, even though we saw SYN packets originating from it.
    So, we ran Wireshark on the Windows host and saw that the SYN-ACK packets from the backend were arriving there, too, but no ACK came through to complete the TCP handshake.

    Eventually, we rolled out a network debugging container on that Windows host and could then see in its tcpdump that the SYN-ACK packets, which did arrive on the Windows host, just never showed up in the container. Hyper-V or something was quietly dropping them.

    Other network connections were working fine; only the SYN-ACKs from our backend triggered this. (A rough sketch of that kind of in-container check follows below.)
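    For reference, here is a minimal sketch of checking for the SYN-ACKs from inside the container, assuming a Linux-based debug container with Python and Scapy available; the backend address is a placeholder, and we actually just used plain tcpdump:

    ```python
    # Sketch: print every SYN-ACK from the backend that reaches this
    # network namespace. Run as root inside the debug container.
    # BACKEND_IP is a placeholder for the real backend address.
    from scapy.all import sniff

    BACKEND_IP = "203.0.113.10"

    # BPF filter: TCP segments from the backend with both SYN and ACK set,
    # i.e. the second step of the three-way handshake.
    bpf = (f"tcp and src host {BACKEND_IP} and "
           "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)")

    sniff(filter=bpf, prn=lambda pkt: print(pkt.summary()), store=False)
    ```

    If this stays silent while Wireshark on the host shows the SYN-ACKs arriving, they are being dropped somewhere between the host and the container, which is exactly what we saw.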





  • Yeah, we always try to automate as much as possible with generic language build tooling and scripts, so that ideally the call in the CI runner is just a single command, which can also be triggered locally (rough sketch below).

    Unfortunately, if you want to be able to re-run intermediate steps, then you do need to inform the runner of what you’re doing and deal with the whole complexity of uploading and downloading intermediate results.
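    As a rough illustration of what I mean by a single entry point (the file name, step names, and commands are invented for the sketch, not our actual setup):

    ```python
    #!/usr/bin/env python3
    # build.py -- hypothetical single entry point that both the CI runner and
    # developers call; step names and commands are made up for illustration.
    import subprocess
    import sys

    # Ordered pipeline; the CI runner simply calls `./build.py` with no arguments.
    STEPS = {
        "lint":    ["cargo", "clippy", "--", "-D", "warnings"],
        "test":    ["cargo", "test"],
        "package": ["cargo", "build", "--release"],
    }

    def main(requested: list[str]) -> int:
        # No arguments: run everything. With arguments: re-run only the named
        # steps locally, e.g. `./build.py test` after a failing test run.
        for name in requested or STEPS:
            if name not in STEPS:
                print(f"unknown step: {name}", file=sys.stderr)
                return 2
            print(f"--- {name} ---")
            result = subprocess.run(STEPS[name])
            if result.returncode != 0:
                return result.returncode
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))
    ```

    The part that cannot be hidden behind such a script is exactly the hand-off of intermediate results between separately re-runnable runner steps; that is where the runner has to be told what is going on.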





  • I feel like this isn’t really a new development. Back when LAN parties and local multiplayer were still a thing, games like TeeWorlds, Worms, etc. were popular because they ran on potatoes and you could often get them for free.

    The actual fun then came from dicking around with or competing against your friends. The game itself does not need to be ground-breaking for that.

    Hell, it technically started even earlier than that, with physical card games and board games and such. Just play them with friends and it’s fun.




  • I also always find that outsourcing is risky, whether it’s to other devs or to some AI, because it requires that you understand the problem as a whole upfront. In 99% of cases, when I’m implementing something myself, I will run into some edge case I had not considered before, where an important decision has to be made. And well, a junior or an LLM is unlikely to see all these edge cases and to make larger decisions that might affect the whole codebase.

    I can try to spend more time upfront to come up with all these corner cases without starting on the implementation, but that quickly stops being economical, because it takes me more time than when I can look at the code.




  • I mean, I don’t have a ton of skin in the game here, as I don’t care much for horror games either way.
    But yeah, I just assume they say they’re cautious to calm the fans, when they actually can’t afford to be cautious: they can only really delay by a whole year at a time, and if they do that, they end up with two games in the year afterwards.

    They only pre-planned a handful of years, so maybe they can just delay the following games by a year each, too.

    But yeah, it still just sounds like the decision-making here isn’t driven by logic or by what allows publishing good games, but rather by the Mr. Krabs meme where he says "Hello, I like money!".




  • In my corner of the embedded world, it feels like everyone is practically jumping to integrate Rust, in the sense that vendors that haven’t had to innovate for 10+ years will suddenly publish a Rust API out of the blue. And I’m saying “out of the blue”, but I do also regularly hear from other devs that they’ve been pestering the vendors to provide a Rust API or have even started writing their own wrappers for the C APIs.

    And while it’s certainly a factor that Rust is good, in my experience they generally just want to get away from C. Even our management is well aware that C is a liability.

    I guess I should add that while I say “jumping”, this is the embedded world where everything moves extremely slowly, so we’re talking about a multi-year jump. In our field, you need to get certifications for your toolchain and code quality, for example, so a lot of work is necessary to formalize all of that.