“Better” in the sense that it actually has the ability to check for corruption at all, as all metadata and data are checksummed.
In Switzerland, at least where I am, people normally use rakes on their yard and they send a street sweeper along every road/sidewalk at least once a week.
I’ve spent the last year or so trying to get myself to not jump out of my pants every time I see one of these things. Then roughly a month ago, I get up at 2am to take a piss, and as I’m opening the door this little fucker falls down no more than 10cm away from my face, lands right on my foot and would have probably run up my pyjamas if I hadn’t managed to reflexively kick and launch it flying across the room. I think it must have been either sitting on top of the door or been trying to climb through the doorframe just as I opened it. Pretty sure I woke up everyone in the house, and probably the neighbors too.
Anyway, my phobia of these little shits is now 100 times worse than it was.
Okay, but the commenter said “my laptop with its integrated GPU”. Obviously, laptops with a dedicated AMD GPU would be affected by this change.
It specifically says the change only applies to dedicated GPUs, not integrated ones.
I had a double root canal a few months ago, no anesthesia, and literally couldn’t feel anything. The nerves on both teeth were already completely dead, there was simply no sensation at all.
Paint.NET is the only Windows-only software I really miss. The closest replacement I’ve found is Pinta, but the interface is a lot clunkier and it hangs/breaks often.
They probably mean EC code? That said, you can use checksums to “correct” errors if you have redundant copies of the data (by reading from the other copy if one copy has a bad checksum)
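Roughly this idea, in userspace terms (the paths and the stored checksum file are made up for illustration, assuming two plain copies of the same file):

    expected=$(cat /srv/data/file.sha256)
    actual_a=$(sha256sum /mnt/diskA/file | awk '{print $1}')
    if [ "$actual_a" != "$expected" ]; then
        actual_b=$(sha256sum /mnt/diskB/file | awk '{print $1}')
        if [ "$actual_b" = "$expected" ]; then
            # copy A is corrupt but copy B still matches: repair A from B
            cp /mnt/diskB/file /mnt/diskA/file
        fi
    fi

Checksumming filesystems with replication do essentially this internally, per-extent and automatically.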
Chiming in to say that I’ve also experienced this on systems with an unresponsive NFS mount, although in that case it hangs until the connection is restored or the network operation times out.
Yes, which is why these settings can also be configured per-directory as well as per-file.
It’s not that obscure - I had a use case a while back with multiple rocksdb instances running on the same machine, where I wanted each of them to store its WAL compressed on SSD storage only, and its main tables uncompressed on an HDD array with a write-through SSD cache (ideally using the same set of SSDs for both, for cost). I eventually did it, but it required partitioning the SSDs in half, using one half as a bcache (not bcachefs) in front of the HDDs, and creating a compressed filesystem on the other half, on which I made subdirectories and bind mounted each one into the corresponding rocksdb database.
Yes, it works, but it’s ugly as sin, and the SSD allocation between the cache and the WAL storage is also fixed (I’d like to use as much space as possible for caching). This would be just a few simple commands using bcachefs, and would also be completely transparent once configured (no messing around with dozens of fstab entries or bind mounts).
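Something like this - device names are made up, and I’m going from memory of the bcachefs docs, so the exact option names may differ:

    # label the devices into an "ssd" group and an "hdd" group
    bcachefs format \
        --label=ssd.ssd1 /dev/nvme0n1 \
        --label=ssd.ssd2 /dev/nvme1n1 \
        --label=hdd.hdd1 /dev/sda \
        --label=hdd.hdd2 /dev/sdb
    mount -t bcachefs /dev/nvme0n1:/dev/nvme1n1:/dev/sda:/dev/sdb /mnt/db

    # WAL: lives entirely on the SSDs, compressed
    setfattr -n bcachefs.foreground_target -v ssd /mnt/db/wal
    setfattr -n bcachefs.background_target -v ssd /mnt/db/wal
    setfattr -n bcachefs.compression -v lz4 /mnt/db/wal

    # tables: uncompressed on the HDDs, reads cached on the SSDs
    setfattr -n bcachefs.foreground_target -v hdd /mnt/db/tables
    setfattr -n bcachefs.promote_target -v ssd /mnt/db/tables
    setfattr -n bcachefs.compression -v none /mnt/db/tables

And since cached copies are evictable, the WAL and the cache should be able to share the full SSD capacity instead of a fixed split.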
ext4 aims to not lose data under the assumption that the single underlying drive is reliable. btrfs/bcachefs/ZFS assume that one or many of the perhaps dozens of underlying drives could fail entirely or start returning garbage at any time, and try to ensure that the bad drive can be kicked out and replaced without losing any data or interrupting the system. Both are aiming for stability, but stability at scale demands much more than a “dumb” filesystem can offer, because once you have enough drives, one of them WILL fail, and ext4 cannot save you in that situation.
Complaining that datacenter-grade filesystems are unreliable when using them in your home computer is like removing all but one of the engines from a 747 and then complaining that it’s prone to crashing. Of course it is, because it was designed under the assumption that there would be redundancy.
XFS still isn’t a multi-device filesystem, though… of course you can run it on top of mdraid/LVM, but that still doesn’t come close to the flexibility of what these specialized filesystems can do. Being able to simply run btrfs device add /dev/sdx1 /
and immediately having the new space available is far less hassle than adding a device to an md array, growing the array, and then resizing the filesystem on top (and removing a device is even worse; see the sketch at the end of this comment). Snapshots are a similar deal - sure, LVM can let you snapshot your entire virtual block device, but your snapshots are block devices themselves which need to be explicitly mounted, while in btrfs/bcachefs a snapshot is just a directory, and can be isolated to a specific subvolume rather than covering the entire block device.
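To make the device-add comparison concrete (assuming an ext4-on-RAID5 setup going from 4 to 5 disks):

    # btrfs: one step, the space is usable immediately
    btrfs device add /dev/sdx1 /

    # mdraid + ext4: add the disk, reshape the array (can take hours),
    # then grow the filesystem on top of it
    mdadm --add /dev/md0 /dev/sdx1
    mdadm --grow /dev/md0 --raid-devices=5
    resize2fs /dev/md0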
Data checksums are also substantially less useful when the filesystem can’t address the underlying devices individually, because that makes repairing the data from a replica impossible. If you have a file on an md RAID1 device and one of the replicas has a bad block, you might be able to detect the bitrot by verifying the checksum, but you can’t actually fix it: even though there is a second copy of the data on another drive, mdadm just exposes a plain block device and doesn’t provide any way to read from “the other copy”. mdraid can recover from total drive failure, but not from data corruption.
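You can see this in md’s scrub interface - it can count disagreements between replicas, but “repair” just makes the copies consistent again (assuming the array is md0):

    echo check > /sys/block/md0/md/sync_action   # scrub: compare the replicas
    cat /sys/block/md0/md/mismatch_cnt           # sectors where the copies disagree
    # 'repair' rewrites mismatched sectors, but md has no checksum telling it
    # which replica is the corrupted one - it just picks one copy to keep
    echo repair > /sys/block/md0/md/sync_action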
ext4 is intended for a completely different use case, though? bcachefs is competing with btrfs and ZFS in big storage arrays spanning multiple drives, probably with SSD cache. ext4 is a nice filesystem for client devices, but doesn’t support some things which are kinda fundamental at larger scales like data checksumming, snapshots, or transparent compression.
bcachefs is way more flexible than btrfs on multi-device filesystems. You can group storage devices together based on performance/capacity/whatever else, and then do funky things like assigning a group of SSDs as a write-through/write-back cache for a bigger array of HDDs. You can also configure a ton of properties for individual files or directories, including the cache+main storage group, the number of data replicas, the compression type, and quite a bit more.
So you could have two files in the same folder, one of them stored compressed on an array of HDDs in RAID10 and the other one stored on a different array of HDDs uncompressed in RAID5 with a write-back SSD cache, and wouldn’t have to fiddle around with multiple filesystems and bind mounts - everything can be configured by simply setting xattr values. You could even have a third file which is striped across both groups of HDDs without having to partition them up.
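As a sketch (group names are hypothetical, I’m going from memory so exact xattr names may vary, and the RAID5-style layout would go through bcachefs’s erasure coding option rather than anything shown here):

    # two files in the same directory with different storage policies
    setfattr -n bcachefs.background_target -v hdd_a /mnt/fs/data/file1
    setfattr -n bcachefs.data_replicas -v 2 /mnt/fs/data/file1
    setfattr -n bcachefs.compression -v zstd /mnt/fs/data/file1

    setfattr -n bcachefs.background_target -v hdd_b /mnt/fs/data/file2
    setfattr -n bcachefs.promote_target -v ssd /mnt/fs/data/file2
    setfattr -n bcachefs.compression -v none /mnt/fs/data/file2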