Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 763 Comments
Joined 1 year ago
Cake day: June 25th, 2023




  • It’s just not that good of a metric overall. Not just because it would be easy to fake, but also because it would inevitably divide into tribes that unconditionally upvote each other. See: politics in western countries.

    You can pile up a ton of reputation and still be an asshole and still get a ton of support from like-minded people.

    The best measure of someone’s reputation is a quick glance at their post history.


  • I think it is a circular problem.

    Another example that comes to mind: the sanctions on Huawei, and whether Google would be considered to be supplying software because Android is open-source. At the very least, any contributions from Huawei are unlikely to be accepted into AOSP. The EU is also becoming problematic with the software origin and quality certifications it’s trying to impose.

    This leads to exactly what you said: national forks. In Huawei’s case that’s HarmonyOS.

    I think we need to get back to being anonymous online: if you’re anonymous, nobody knows where you’re from, and your contributions are judged solely on their merit. The legal framework just isn’t set up for an environment like the Internet, which severely blurs the lines between borders and rarely has a clear “this company is supplying that company in the enemy country”.

    Governments can’t control it, and they really hate it.


  • The problem isn’t even just where the software is officially based; it can become a problem for individual contributors too.

    PGP, for example, used to be problematic because US export controls on encryption forbade exporting systems capable of strong encryption, since the US wanted to be able to break it when used by others. An American sending the tarball of the PGP software to the Soviets at the time could have been considered treason against the US, let alone letting them contribute to it. Heck, sharing 3D-printable gun models with a foreign country can probably be considered supplying weapons, as if they were real guns. So even if Linux were based in a more neutral country not subject to US sanctions, the sanctions would make it illegal to use or contribute to it anyway.

    As much as we’d love to believe in the FOSS utopia that transcends nationality, the reality is we all live in real countries with laws that restrict what we can do. Ultimately the Linux maintainers had to do what’s best for the majority of the community, which mostly lives in NATO countries honoring the sanctions against Russia and China.


  • Max-P@lemmy.max-p.me to Piracy@lemmy.ml · AI for torrenting? · 3 days ago

    No. It could maybe repair some files enough to make them playable, by extrapolating from the sections before and after, like a couple of seconds missing here and there in a movie, but all bets are off as to whether it’ll guess right. I’m not aware of such a tool existing.

    But if it’s a zip file, there’s no chance it can fix it. It’s much different from AI upscaling, because you don’t just need an answer that’s close enough, you need the exact bits: even one value off could mean the gravity of the whole game is wrong, as an example. And if some files are encrypted, all bets are off, as fixing them would imply breaking the encryption.
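    As a toy illustration of why “close enough” doesn’t exist for archives (file names here are made up), losing a single byte from a gzip file is enough to make its integrity check fail:

    ```shell
    # Create a small file and compress it (-k keeps the original).
    printf 'the exact bits matter\n' > sample.txt
    gzip -kf sample.txt

    gzip -t sample.txt.gz && echo intact    # integrity check passes

    # Remove just one byte from the end of the compressed file.
    truncate -s -1 sample.txt.gz
    gzip -t sample.txt.gz || echo corrupt   # CRC/length check now fails
    ```

    No amount of extrapolation gets that byte back: any guess that doesn’t match exactly still fails the check.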

    Also, I’d look at what the missing data actually is. Sometimes you’re stuck at 99% because the only seeder left didn’t download a readme file or something, but the whole content is there.



  • Those kinds of problems aren’t particularly new (PGP comes to mind, back when you couldn’t export it out of the US), but it’s a reminder that a lot of open-source comes from the US and Europe and is subject to western nations’ will. The US also apparently thinks China is “stealing” RISC-V.

    To me that goes against the spirit of open-source, where who you are and where you come from shouldn’t matter, because the code is by the people, for the people, and no money is exchanged. It’s already out there in the open; it’s not like this will stop the enemy from using the code. What’s also silly is that if those people had been contributing anonymously under a fake or generic name, nothing would have happened.

    The Internet got ruined when Facebook normalized/enforced using your real identity online.



  • Everyone’s approaching this from the privacy angle, but the real reason isn’t that the cashier thought you were weird: they’re just underpaid and under a lot of pressure from management to try multiple times, and in some cases they even get written up for not doing it because it’s deemed part of their job. They hate it just as much as you do. Same when you try to cancel your cable subscription or whatever: the calls are recorded, their performance is monitored, and they make damn sure they try at least 3 times to upsell you, even when it’s painfully obvious you’re done with them.

    Just politely decline until they’ve asked however many times they’re required to, and move on.


  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container has a blank iptables ruleset just for itself.
    • Traffic to and from containers goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to ensure this works as expected.

    The only thing that should be affected by the host firewall is the proxy service Docker uses to listen on a port on the host and send it to the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers, the rules just need to be in the right place to work.
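    The supported place for those rules is Docker’s DOCKER-USER chain, which sits in the FORWARD path ahead of Docker’s own rules (the interface name and subnet below are just examples):

    ```shell
    # Block one remote subnet from reaching any container (example subnet).
    iptables -I DOCKER-USER -i eth0 -s 203.0.113.0/24 -j DROP

    # Published ports are DNAT'ed before DOCKER-USER sees the packet, so match
    # the original destination port via conntrack instead of --dport.
    iptables -I DOCKER-USER -i eth0 -p tcp \
      -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j DROP
    ```

    These need root and survive Docker restarts, unlike rules inserted directly into the FORWARD chain, which Docker may reorder around.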


  • The sandboxing is almost always better because it’s an extra layer.

    Even if you gain root inside the container, you’re not necessarily root on the host. So an attacker has to exploit some software that uses a known vulnerable library, trigger the bug in that one application, get root in or escape the container, and then root the host too.

    The most likely outcome is that it messes up your home folder and anything your user has access to, and probably even less than that.

    Also, having a known vulnerability doesn’t mean it’s triggerable. If you use, say, a zip library and only use it to decompress your own assets, then it doesn’t matter what bugs it has: it will only ever decompress that one known-good zip file. It’s only a problem if untrusted files get involved, where you can trick the user into opening them and triggering the exploit.

    It’s not ideal to have outdated dependencies, but the sandboxing helps a lot, and the fact that only a few apps ship known vulnerable libraries further reduces the attack surface. You start having to chain a lot of exploits to do anything meaningful, and at that point you aim that kind of effort at bigger, more valuable targets.



  • Also, Series F but they’re only deploying on one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well that goes.

    And there’s no way their process passes ISO/SOC 2/PCI certifications. CI/CD isn’t just “make it do things”: it’s also the process, the logs, all the checks done, the mandatory peer reviews. You can’t just deploy without an audit log of who pushed what, when, and who approved it.





  • auto rollbacks and easy switching between states.

    That’s the beauty of snapshots: you can boot them. So you just need GRUB to generate the correct menu and you can boot any arbitrary version of your system. On the ZFS side of things there’s ZFSBootMenu, but I’m pretty sure I’ve seen the equivalent for btrfs too. You don’t even need rsync: you can use ssh $server btrfs send | btrfs receive and it should in theory be faster too (btrfs knows if you only modified one block of a big file).
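    A minimal sketch of that replication flow, assuming root, btrfs on both ends, and made-up snapshot paths:

    ```shell
    # Take a read-only snapshot of the running system.
    btrfs subvolume snapshot -r / /.snapshots/root-new

    # Incremental send relative to the previous snapshot: only the
    # changed blocks cross the wire.
    btrfs send -p /.snapshots/root-prev /.snapshots/root-new \
      | ssh backup@server btrfs receive /srv/snapshots
    ```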

    and the current r/w system as the part that gets updated.

    That kind of goes against the immutable thing. What I’d do is make a script that mounts a fork of the current snapshot read-write into a temporary directory, chroots into it, installs the packages, exits the chroot, unmounts, and then commits those changes as a new snapshot. That’s the closest easy-to-DIY thing I can think of, and it’s basically what rpm-ostree install does. It does it differently (a daemon that manages hardlinks), but filesystem snapshots do essentially the same thing without the extra work.
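    A rough sketch of that script, assuming btrfs, root, and hypothetical snapshot paths (pacman stands in for whatever package manager the target distro uses):

    ```shell
    # Fork the current snapshot into a writable working copy.
    btrfs subvolume snapshot /.snapshots/current /tmp/work

    # Make the chroot usable, install into it, then tear it down.
    for d in proc sys dev; do mount --rbind "/$d" "/tmp/work/$d"; done
    chroot /tmp/work pacman -S --noconfirm some-package
    for d in proc sys dev; do umount -R "/tmp/work/$d"; done

    # Commit the result as a new read-only snapshot to boot from.
    btrfs subvolume snapshot -r /tmp/work /.snapshots/new
    ```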

    However, I think it would be good to use OStree

    I found this, maybe it’ll help: https://ostreedev.github.io/ostree/adapting-existing/

    It looks like the fundamentals are the same: a temporary directory you run the package manager in, and then you commit the changes. So you can probably make it work with Debian if you want to spend the time.


  • All you really have to do for that is mount the root partition read-only, and have a designated writable data partition for the rest. That can be as simple as setting the ro option in your fstab.
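    As a sketch, the fstab could look something like this (UUIDs and the mount point are placeholders):

    ```shell
    # /etc/fstab -- read-only root, writable data partition
    UUID=1111-2222  /      ext4  ro,defaults   0 1
    UUID=3333-4444  /data  ext4  rw,defaults   0 2
    ```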

    How you ship updates can take many forms. If you don’t need your distro to be atomic, you can temporarily remount read-write, rsync the new version over, and make it read-only again. If you want it atomic, there’s the classic A/B scheme (Android, SteamOS), where you download the image to the inactive partition and switch over when it’s ready to boot into. You can also do btrfs/ZFS snapshots, where the current system is forked off a snapshot: on your builder you make your changes, take a snapshot, then zfs/btrfs send it to all your other machines, and they boot off that new snapshot (read-only).

    It’s really not that magic. Even Docker, if you dig deep enough, is essentially just tarballs being downloaded and extracted each into their own folder, with the layering coming from stacking them with overlayfs. What rpm-ostree does, from a quick glance at the docs, is leverage the immutability to build a new version of the filesystem using hardlinks, and you just switch root to it. If you’ve ever opened an rpm or deb file, it’s just a regular tarball, and the contents pretty much map directly onto the filesystem.
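    The A/B switch can be sketched like this (the device name, URL, and GRUB variable are all hypothetical):

    ```shell
    # Write the new image to the inactive slot while the active one keeps running.
    curl -sSf https://updates.example.com/rootfs.img \
      | dd of=/dev/sda3 bs=4M conv=fsync

    # Flip the bootloader to the other slot; only this last step changes what
    # boots, so an interrupted download never breaks the running system.
    grub-editenv /boot/grub/grubenv set active_slot=B
    ```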

    Here’s an Arch package example, but rpm/deb are about the same:

    max-p@desktop /v/c/p/aur> tar -tvf zfs-utils-2.2.6-3-x86_64.pkg.tar.zst 
    -rw-r--r-- root/root    114771 2024-10-13 01:43 .BUILDINFO
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/bash_completion.d/
    -rw-r--r-- root/root     15136 2024-10-13 01:43 etc/bash_completion.d/zfs
    -rw-r--r-- root/root     15136 2024-10-13 01:43 etc/bash_completion.d/zpool
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/default/
    -rw-r--r-- root/root      4392 2024-10-13 01:43 etc/default/zfs
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/zfs/
    -rw-r--r-- root/root       165 2024-10-13 01:43 etc/zfs/vdev_id.conf.alias.example
    -rw-r--r-- root/root       166 2024-10-13 01:43 etc/zfs/vdev_id.conf.multipath.example
    -rw-r--r-- root/root       616 2024-10-13 01:43 etc/zfs/vdev_id.conf.sas_direct.example
    -rw-r--r-- root/root       152 2024-10-13 01:43 etc/zfs/vdev_id.conf.sas_switch.example
    -rw-r--r-- root/root       254 2024-10-13 01:43 etc/zfs/vdev_id.conf.scsi.example
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/zfs/zed.d/
    ...
    

    It’s beautifully simple. You could, for example, install Arch Linux without pacman by mostly just tar -x’ing the individual package files directly to /. All the package manager adds is tracking which file is owned by which package (so packages are easy to remove), dependency solving (so it knows what else to pull in or it won’t work), and mirror/download management.
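    To see how thin that layer is, here’s a toy “package” being built and then “installed” with nothing but tar, into a fake root instead of / (all names made up):

    ```shell
    # Build a package: just a tarball whose layout mirrors the filesystem.
    mkdir -p pkgbuild/usr/bin fakeroot
    printf '#!/bin/sh\necho hello\n' > pkgbuild/usr/bin/hello
    chmod +x pkgbuild/usr/bin/hello
    tar -C pkgbuild -czf hello-1.0.pkg.tar.gz usr

    # "Install" it: extract onto the target root. A real package manager would
    # also record file ownership and solve dependencies before this step.
    tar -C fakeroot -xzf hello-1.0.pkg.tar.gz
    fakeroot/usr/bin/hello    # prints: hello
    ```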

    How you get that set up is all up to you. Packer+Ansible can make you disk images that you can just throw on a web server, download, and dd to the inactive partition of an A/B scheme; that’d be quite distro-agnostic too. You could build the image as a Docker container and export it as a tarball. You can build a chroot, or a systemd-nspawn instance. You can also just install a VM yourself, set it up to your liking, and then dd the disk image to your computers.

    If you want some information on how SteamOS does it, https://iliana.fyi/blog/build-your-own-steamos-updates/