Linux is a branch of development of the old Unix family of systems. Unix is not necessarily open and free; FOSS is what is classified as open and free software. Unix since its inception was deeply linked to specific private industrial interests; let's not forget this while we examine the use of Linux by left-minded activists. FOSS is nice and cool, but nearly 99.99% of it runs on non-open and non-free hardware. Apolitical crowd-funding proposals and DIY construction attempts have led to ultra-expensive idealist solutions reserved for the very few and the eccentric affluent experimenters.
Linux vs Windows is cool and trendy, isn't it? But really, does the choice alone contain any political content? If there is such content, what is it? So let's examine it from the base.
FOSS: people, as small teams or individuals, "producing as much as they can and want" and offering what they produce to be shared, used, and modified by anyone, "as much as they need". This is as close to a communist system of production and consumption as we have experienced in the entirety of modern history: no exchange whatsoever, collective production according to ability and collective consumption according to need.
BUT we have corporations, some of them mega-corps, multinationals who nearly monopolize sectors of the computing market, creating R&D departments specifically to produce and offer open and free (or conditionally free) code. Why? Firstly, because others will join their projects and contribute further development (labor) for "free", while the corporations retain the leadership and ownership of the project. Somehow it is cool to use their code, without asking why they were willing to offer it in the first place, as long as we can say we are anti/against/MS-Windows-free.
Like a false class consciousness, we have fan-boys of IBM, Google, Facebook, Oracle, Qt, HP, Intel, AMD, … products lined up against MS.
Back when Unix would only run on ultra-expensive enterprise large-scale systems and expensive workstations (remember DEC, Sun, SGI, … workstations, each priced like two brand-new fast sports cars) and the PC market was restricted to MS or the alternative Apple crap, people tried and tried to port forms of Unix to the PC. Some really gifted hacking experts achieved such marvels, but the results were so hardware-specific that they couldn't be generalized and utilized on a mass scale.
Suddenly this genius Finn and his friends devised a kernel that could make most available PC hardware work, and Unix with a Linux kernel could boot and run.
IBM eventually saw a way back into the PC market it had lost by handing DOS out to a subcontractor (MS), and saw an opportunity to take over and steer this "project" by promoting RedHat. After two decades of behind-the-scenes guidance, once the projected outcome had succeeded in cornering the market, IBM openly bought RH.
Are we all still anti-MS and pro-IBM, Google, Oracle, FB, Intel/AMD?
The bait thrown to the dumb fish was an automated desktop that looked and behaved just like the latest MS-Windows edition.
What is the resistance?
Linus Torvalds and the few others who sign off on the kernel today make six-figure salaries, ALL paid by a handful of computing giants that, by offering millions to the foundation, control what it does. Traps like Rust, telemetry, … and other "options" are shoved into the kernel daily to satisfy the paying clients' demands and wishes.
And we on the left are fans of a multimillionaire's "team" against a trillionaire's "team". This is not football, cricket, or F1. This is your data in the hands of multinationals and their fellow customers/agencies. Don't forget which welfare system maintains the hierarchy of those industries, whether the market is rosy or gray. Do I need to spell out the connection?
Beware of multinationals bearing gifts.
Yes there are healthier alternatives requiring a little more work and study to employ, the quick and easy has a “cost” even when it is FOSS.
- Unix is not linux
- All of this does not have anything to do with windows. What is this both-sides-bad liberalism? Windows is clearly so much worse at all of this it isn’t even worth talking about in this context.
- It's not the fault of FOSS devs that many live in bourgeois dictatorships. Of course the bourgeoisie will steal the code. Of course they will subvert the license.
- Are you expecting FOSS devs to somehow conjure their own chip fabs? That’s not how material conditions work.
I don't really get the point of this post. If you want to say that quite a lot of FOSS code is funded by huge corporations, then yeah, sure. Most people, I would assume, know that. But I'm not really sure what that has to do with the title; even if Linux is mostly run by corporations, it is still much better than the alternatives.
Also, I'm not really sure what you mean by traps like Rust and telemetry. There is no telemetry on Linux, and the only reason I can think of for you including it is the recent Go telemetry, which I don't see how is relevant. With Rust, I also don't get it: Rust wasn't added because some company wanted it or whatever, it was added because it is a popular (and extremely loved) language that is suitable for kernel development. Not many people nowadays want to code in C.
https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config
This is a pretty standard vanilla config file with which you compile the kernel (6.3 in the example above). Search for words like telemetry, rust, IFS … and tell me what Linux you use without them.
The fact that it has the word telemetry in it doesn't mean it spies on you.

- CONFIG_WILCO_EC_TELEMETRY -> allows you to read telemetry from some Chrome-specific hardware
- CONFIG_INTEL_PMT_TELEMETRY -> allows you to access telemetry that Intel platform monitoring provides
- CONFIG_INTEL_TELEMETRY -> allows you to configure telemetry and query some events from Intel hardware
None of these options spy on you or do anything nefarious. It just means that you can have an application that queries some data from them, nothing more.
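You can check this on your own machine without hunting through a gitlab page. A small sketch (the config path is the usual distro location; adjust if yours differs):

```shell
# Sketch: list telemetry/Rust-related options actually enabled (=y or =m)
# in a kernel build config; "is not set" comments are skipped by the anchor.
list_kernel_opts() {
  grep -E '^CONFIG_(\w*TELEMETRY\w*|RUST|INTEL_IFS)=' "$1"
}
# Example: list_kernel_opts "/boot/config-$(uname -r)"
```

On distros that expose it, `zcat /proc/config.gz` gives you the same file for the running kernel.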
Again, not sure what your issue with Rust is.
And with IFS it is the same as above; someone here already linked you an article on it.
And that is exactly the point. Working with many servers means that you have to collect data. How am I supposed to know when it's time to replace something, and so on? I remember my boss not wanting to spend money on Nagios (servers etc.) until one day everything blew up. No one could work for two days. After that, the idiot finally spent money on a monitoring system, and you could finally see when a RAID failed.
Exactly. If anything, I want more telemetry in my system that is more easily accessible. I can't imagine living without SMART, for example.
Linux and Unix were built on alternatives. If you don't like a piece of code offered as a tool to do something, you write something better and offer it/share it with others. So you as a user have a choice among similar tools. Even the most basic ones, like the GNU utilities, have busybox and other specific alternatives.
The latest trend is NO-ALTERNATIVES: to get everyone to use one core system. So instead of diverging as a system (as some of the BSD-Unix projects did), Linux is showing a tendency to converge into one system (fedora, debian, arch) with few differences among them.
You get corporate media publishing articles on the "top-ten" Linux distributions, or "top-ten" desktops, all based on the very same edition of IBM software, no exception, as there is none. This is marketing, steering the public in a single direction. The question you should answer for yourself is why, without someone having to spell it out for you and draw the attention of three-letter agencies.
That just depends on what you use. There are loads of distros that allow you to use whatever you want. There are only so many ways you can do stuff, and it doesn’t make much sense to differentiate if you don’t have reason to. You have some genuinely diverging distros like NixOS that are significantly different.
Not really sure what corporate media you read. In my experience, most of those articles are just popularity contests. And usually they feature non-corporate distros like Arch, Debian, etc. And with desktops, I am not even sure there are ten desktop environments (at least with a reasonable number of users).
Okay, I can totally see why you wouldn't like Linux as a whole becoming "one thing", but what is your opinion on the growth of Linux on the desktop? By far the biggest factor pushing people away, in my opinion (consumers as well as devs), is having to deal with so many different distros and packaging apps with different libraries on so many different systems. Having standards that aim to reduce that load can only be beneficial for the masses adopting an objectively better, even if not perfect, operating system, wouldn't it? I.e. the rise of AppImages and Flatpaks as a means to curb that issue is, to me, a good thing, even if not "the most optimal way of doing things".
I always wonder whether that is an actual issue. Apart from some duplicated effort in packaging for different distros (which is something that distro maintainers do anyway), I don't really get this point. For me, this only makes sense for proprietary packages, not for open source.
Apart from some small differences in how you install packages, using most distros is basically the same.
I am always confused by this point because I see it repeated everywhere, but never with a good argument supporting it.
I only ever see people who work on proprietary software make this argument. For FOSS this is a non-issue. If you have the source code available, you can just compile it against the libs on your system and in most cases it will just work, unless there was a major change in some lib's API. And even then you can make some adjustments yourself to make it work. Distro maintainers tend to do this.
For many admittedly smaller apps, it's always a bit of a pain to have to install them manually because the dev simply gave up trying to package them for "the big 3" and distro maintainers can't care about every small program, although the current system works well enough for most programs.
However, I am not a developer, so I can't speak firsthand about the difficulty of packaging and maintaining an app across different distros over the years, and I'm not sure the brunt of maintaining all these apps should fall onto distro maintainers.
About users and using distros, I can agree that it's roughly the same either way, with the only real difference most of the time being "do you use apt or pacman to install packages".
Fair enough, but I only see that for some niche projects. And at that point you are probably not a regular user and can do it yourself.
There is an issue on the other side, if you only provide appimage/flatpak it is much less customizable. You can’t optimize your software for your CPU, you can’t mix and match what version of the libraries your software uses. Personally, I think it is always a good idea to provide a flatpak alternative for those that want it, but I don’t see it as a replacement for regular packaging.
Edit: I would much rather see something like nix being used to describe the dependencies. That is in my opinion the best solution, which also allows you to more easily port it to other systems.
Ideally, it'd be good enough to simply have, say, an AppImage/Flatpak plus the source code, and then let distro maintainers/end users build it how they want/need to. I have had the pleasure of trying to get NVENC working in OBS under Debian 10, and that was a massive pain: due to outdated nvidia-drivers I had to recompile ffmpeg with the right flags, and that would break after every update. The easiest way was to get an OBS Flatpak that came prebuilt with it all, IIRC. I guess my problems were mainly because I used Debian stable at the time; it's probably not as much of a pain now that I'm on sid.
I don't know anything about Nix. I've heard a lot of good things about it and how it's "all config files" or something, but the prospect of learning a whole new world scares me. I trust your judgment on that, though. I'll stick to what I know on my boring-ass Debian sid :D
I would imagine that if you weren’t on Debian stable, it would be much better. From what I’ve seen, dealing with anything Nvidia on stable distros is pain.
I just recently started working with it and it is really nice. You have NixOS, where you can define basically everything with just Nix config files. You want to run MPD on some port? Sure, just add that option, and it will create the config file and put it in the right place. It is really easy to define your entire system with all the options in one place. I don't think I've ever had to change anything in /etc; I just change an option in my system config. I think something like this is probably the future of Linux.
Nix by itself is just a language that is used to configure things. You can do things like define all the dependencies for your project with it, so it is easy to build by anyone with Nix (which you can install basically anywhere). By doing it like this you can be sure all the dependencies are defined, so it is really easy to port the software to other distros even if they aren't using Nix.
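To give a taste of the "all config files" thing: a minimal sketch of an /etc/nixos/configuration.nix (option names are from the standard NixOS module set; the hostname and user are placeholders):

```nix
{ config, pkgs, ... }:
{
  networking.hostName = "mybox";              # placeholder hostname
  services.openssh.enable = true;             # NixOS generates sshd_config for you
  environment.systemPackages = [ pkgs.htop ];
  users.users.alice = {                       # "alice" is a placeholder user
    isNormalUser = true;
    extraGroups = [ "wheel" ];                # allow sudo
  };
}
```

You apply the whole description at once with `nixos-rebuild switch`, and rolling back is just selecting a previous generation at boot.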
Use Redox OS. Completely community-run. I recommend this because I see how much you love Rust.
what kind of telemetry is pushed into the linux kernel, exactly?
Wilco, Intel, and possibly hidden AMD ones. There is also this Intel IFS, which is pushed as "good telemetry", or telemetry you want as a super-enterprise admin to know when to replace equipment.
https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config
Many of those things didn't exist in pre-6 editions; they have crept in due to pressure from manufacturers. The current 6.xx kernels are more than double the size of 5.10-lts and nearly double 5.15-lts. Much of the included firmware is not even in production, but alpha/beta versions for hardware still under testing by manufacturers.
What do users commonly do? Seek to have the latest and newest release, without ever reading release notes and changelogs. "Continuous development and modern equipment and code are always better."
Critical ability is now a characteristic of "toxic personalities", another capitalist buzzword absorbed uncritically by the masses.
I don't really understand your point. What is so bad about those telemetry drivers? They have to be loaded first, and there is no use for them for ordinary users.
When telemetry is enabled, it is not the user utilizing it but a manufacturer drawing data from the user's machine.
Genuine question then: do the distros using these kernels disable this telemetry upon installation, provided you tick the "no telemetry pls" option during the installation process?
What is so bad about allowing a large corporation to voluntarily draw data out of your system? For one, it is very much against the fundamentals and principles Unix and FOSS were based on. In the earlier days, a selling point of FOSS was to assure the user there was no "telemetry". On the other extreme, the public, through Android and macOS, has been conditioned to allow telemetry with pretty much every application they ever install.
Couldn't you just recompile your own kernel without these telemetry options, with Gentoo for example? The fact that you can do this at all, and can't with Windows, is a pretty big factor to me.
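For anyone wanting to try this: options in a saved .config can be switched off before rebuilding. A sketch (the option names are the ones discussed above as they appear in a 6.x config; inside a kernel source tree, `scripts/config --disable OPT` does the same job):

```shell
# Sketch: mark kernel config options as unset before rebuilding.
disable_opts() {  # usage: disable_opts path/to/.config OPT1 [OPT2 ...]
  cfg="$1"; shift
  for opt in "$@"; do
    # turn "CONFIG_FOO=y" or "=m" into the canonical "not set" comment
    sed -i -E "s/^CONFIG_${opt}=(y|m)/# CONFIG_${opt} is not set/" "$cfg"
  done
}
# Example: disable_opts .config INTEL_TELEMETRY INTEL_PMT_TELEMETRY INTEL_IFS
# then:    make olddefconfig && make
```

`make olddefconfig` afterwards resolves any options that depended on the ones you disabled.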
For sure FOSS is a night/day improvement over closed, non-free binary blobs; for all we know, Win 11 may be Linux in drag made to look like Windows. But the anti-MS-Windows identity is too short-sighted, for people on the left at least. How often do you see similar behavior among Linuxers, being for/against Intel/AMD, when in the recent and more distant past both have been caught red-handed forcing backdoor systems into the market, discovered long after and silenced by the corporate press? One of them a few years ago was briefly adopted into the Linux kernel before it was flushed: Speck is the one I readily remember. It was NSA code that Google suggested be added to the kernel.
I would say that especially leftist Linux users are against every corporation. But there are reasons why Linux users prefer AMD to Nvidia. They are more FOSS friendly. If you want to argue for free hardware, I 100% agree, but unfortunately that is basically impossible nowadays.
If you want to be alarmed about what your computer is doing, I would worry much more about things like the Intel Management Engine and AMD's version of it. Or the binary blobs that you load just to use these devices. Those are the actual issues we should be focused on.
It is being used in datacenters. You load the module, you can aggregate your data and then visualize it.
This is an example. I couldn't find anything showing that something is being sent to Intel.
This is the kind of telemetry which is useful: "The collected monitoring data is exposed to user-space via a new XML format for interested tools to parse."
And this is for that Wilco stuff. Also look here. I could not find anything indicating that it is being sent to a corporation.
What is wrong if I have a bunch of servers and want to collect telemetry data from them? I can collect it wherever I want.
If you can collect it, others may be able to as well, and if there is a way to collect one thing, that is a vehicle to access other things. As Snowden says, before his activism it was conspiracy theory; but the world "did change" since then. Or did it?
No, they can't. What are you talking about? It's like an agent writing specific things to /var/log. Telemetry means that data is collected for your usage, not for corporations. Intel PMT gives you the ability to access data; you can collect it and do stuff with it.
Have you ever seen software which helps you see how your assets are doing? Look at Nagios. It collects data from your assets and visualizes it. And that is a kind of telemetry. With Nagios you can see when it's time to replace the hard disks on a server because the SMART values are bad. This is all telemetry. And here you have the possibility, in a driver, to access certain data: no stupid workarounds, direct access, access which is under your control. Have you ever seen a datacenter, or worked somewhere where you have to manage a bunch of servers? You can check every instance one by one, or simply collect data and see what's going on.
You are simply misunderstanding the word “telemetry”.
First of all, for people on the left, as this community states it is, running datacenters and enterprise networks is a minor and rare use case (unless you are the admin of the party's or the union federation's headquarters). Telemetry means the machine has the ability to serve data to others: to the network, and to the admin collecting it. It is a way for a machine to passively allow another to collect data. Any chance this can be exploited? Why have it at all if your intention is to be the sole user/admin of a single machine?
With the complexity of a self-regulating system like systemd, such abilities can't be controlled or audited by a user, yet look at what most Linux users run. The collaboration of all those subsystems doing such things expands the exploitable surface of a machine's presence on any network.
For non-industrial use, no telemetry is needed or should be allowed. But you pick up on one detail of what the original post aims to state in order to discredit it on a technicality that is meaningless. There are hundreds of parts of a Linux system where such a discussion could be opened.
The point is: DO NOT let your anti-Windows rhetoric blind and confuse users into thinking this is an easy and safe alternative that provides security, privacy, and other goodies, when 99% choose a Linux that is just as automated and "user friendly" as Windows.
You tell me if your average Linux user (especially those using GNOME and Plasma) knows where, how, and why to disable kernel modules; whether those modules are optionally disabled, enabled, built into the kernel, or waiting for something to trigger them. Look at forums and boards: people mess up their boot loader or fstab, and their MS-Windows reaction is to format the disk and reinstall something like Ubuntu.
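For the record, keeping a module from loading doesn't even need a kernel rebuild, just a modprobe.d entry. A sketch (the module name below is an example; check `lsmod`/`modinfo` on your own kernel, and writing to /etc/modprobe.d needs root):

```shell
# Sketch: refuse to load a kernel module via modprobe.d.
blacklist_module() {  # usage: blacklist_module NAME [conf-file]
  name="$1"; conf="${2:-/etc/modprobe.d/blacklist-$name.conf}"
  {
    echo "blacklist $name"           # stop automatic loading by alias
    echo "install $name /bin/false"  # refuse even an explicit modprobe
  } > "$conf"
}
# Example: blacklist_module pmt_telemetry   (then reboot, or rmmod it now)
```

The `blacklist` directive alone only stops alias-based autoloading; the `install` line is what blocks explicit loads too.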
I mean, sure, but that is true for literally every single piece of info on your computer. If you can read data from these modules, you can read data from anything else. You can read the entire memory, query the file system, do basically anything you want. At that point, whether someone can query the capabilities of your Intel CPU is not something I would worry about.
The existence of enterprise contributors to Linux is symbiotic with volunteer devs and helps drive development. There are benefits to having full time talented devs and engineers paid for their time working on Linux, and for the most part, the whole community is better for it.
rust, telemetry
is there telemetry in the kernel?
why is rust a trap?
There is no telemetry in the sense that something is sent to Intel. Look here. And this is quite handy if you use Linux in a datacenter.
Not on 5.10, but most 6.xx kernels do have telemetry, and very few distros disable it, where possible.
Show me the code where something is being sent to Intel, not just that a module is loaded. Telemetry also means collecting data from your own assets in a datacenter. I couldn't find anything like that in the code of Intel PMT.
Rust, and maybe Go, in a way evade what open and free code really meant (which includes the characteristic of being self-contained). Much Rust-written software demands to-the-minute releases of dependencies, automatically fetched and utilized while you compile the piece of software. First, there is no way you can audit this; then, at any given moment, that fetched code can change, affecting what you compiled and exponentially increasing the difficulty of auditing and certifying it as secure. It also transfers responsibility for what the code contains to second and third parties, making it legally impossible to hold anyone responsible or accuse them of creating backdoors and other weaknesses in software.
But it is modern and it is being pushed everywhere. In general, when you hear buzzwords, terms, and technologies making noise and being utilized everywhere, beware of the Trojan.
Facebook, which had contributed zero to the FOSS community, suddenly released zstd, which they bought from someone (or so they say), making him rich. Within months this FOSS was incorporated and utilized all across the Linux community on very dubious data supporting its superiority, like publishing comparative compression/decompression numbers of multi-threaded software against a competitor mandated to run single-threaded. In the end, nobody even used the optimized condition under which zstd has a tiny superiority in speed while still lacking in compression ratio.
Someone and something drives this "rush", like gold in the Columbia river advertised by tool merchants to gold diggers.
At least on the left we should have a bit more of a critical tendency than the anti-Windows fan-boy clubs. The price you pay to have a USB stick automounted read-write, as a user, automatically upon insertion, is one of security and privacy. All this overhead instead of five lines of script.
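Those "five lines" could look something like this: a sketch of a manual mount helper that defaults to read-only with nosuid/nodev, so an inserted stick can't present setuid binaries or device nodes (needs root; device path and mountpoint are yours to choose):

```shell
# Sketch: mount a USB stick only on request, with restrictive options.
usbmount() {  # usage: usbmount /dev/sdXN [mountpoint]
  mnt="${2:-/mnt/usb}"
  mkdir -p "$mnt" || return 1
  mount -o ro,nosuid,nodev "$1" "$mnt"
}
# Example: usbmount /dev/sdb1   # remount rw explicitly only when you must
```

Compare that to the desktop stack (udisks, polkit, a D-Bus daemon) that a GNOME/Plasma automount pulls in for the same action.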
Most of the code written nowadays isn't self-contained, and basically it is impossible for it to be. I mean, I guess you have some exceptions like the Linux kernel itself and some low-level utilities, but you use libraries and other people's code everywhere. In that way, Rust is much better than most other options because it at least lets you pin your dependencies really easily. The idea that everyone who uses some code audits it is just ridiculous. You should be able to, sure, and in some cases it might be a good idea to do so, at least for parts of your code. But if you are using Linux, did you audit the entire Linux source? What about the C standard library? Even that alone would take a ridiculous amount of time.
I would also argue that rust isn’t pushed everywhere, people just like it because it is a wonderful language. There are much more people who use it in their own projects than do it professionally for example.
I could understand your argument if it was based on how Rust is run, what licenses it uses etc. But this is rather baffling to me. Basically the only thing you mention is the issue of statically linked vs. dynamically linked.
With zstd, again, I'm not really sure what you are even trying to say. That Facebook had an impact on what is used? OK, so? Zstd is completely open source, and if someone decides to use it, that is up to them. I am pretty sure every piece of software I use that supports zstd also lets me use other compression algorithms. And from what I found, zstd in some cases is superior to the alternatives, but feel free to provide sources; I am sure I could be incorrect.
Yes, you always have some dependencies; even the lowest-level Linux utilities usually depend on a C library (glibc or musl), but you choose and provide the specific dependencies needed. Here we have a dynamic process that (not always, but sometimes) draws the latest commit from someone's git as a dependency. A minute later I try to build the same thing, someone pushes a commit replacing the previous change, and my package builds as well. The two results are not identical; one may contain a backdoor, and we didn't even notice a difference.
When you build from glibc 2.3.4 and I build from the same, it IS the same.
Who says the distribution of glibc 2.3.4 you and I have is the same? It depends on where you got it from. And even then, we can build it with different flags, etc. I'm not really sure how Rust is worse on that count. On the contrary: usually when you build software in C/C++ you link dynamically, so you have no idea what version of the libraries someone is using or where they got them. In that sense, Rust's approach is actually safer.
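To make the pinning point concrete: Cargo records the exact version and checksum of every dependency in Cargo.lock, and you can go further and pin an exact version in Cargo.toml (the crate name and version below are illustrative):

```toml
# Cargo.toml fragment: "=x.y.z" forbids cargo from resolving anything newer.
[dependencies]
serde = "=1.0.200"
```

Building with `cargo build --locked` then fails outright instead of silently updating the lockfile, and `cargo vendor` copies every dependency's source into the tree so it can be audited and archived offline. That's the opposite of the "latest commit from someone's git" scenario.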
I'm anti-Windows because of its structural weaknesses.