- cross-posted to:
- linux@lemmy.ml
Article on the new wave of AI-generated bug reports, and how patches are quickly turned into exploits with automation assistance.
There really are plenty of them, including in commercial software - Firefox saw roughly twenty times the normal number of security bug reports in April.
I can’t tell how dramatic this really is. Maybe it is being cooked a tad hotter than it will be eaten - some reports on AI capabilities are basically clever marketing, or even outright misleading.
What is clear is that distros will need to fix more bugs, and it will take some time until most of the newly uncovered ones are fixed.
Users will need to update more frequently.
Frugal configurations might become even more attractive.
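For updating more frequently, one option on Debian-family distros is unattended security upgrades. A minimal sketch (the package and file names below are the Debian defaults; adjust for your distro):

```shell
# Debian/Ubuntu example: install and enable automatic security updates.
sudo apt-get install unattended-upgrades

# Enable the periodic run (writes /etc/apt/apt.conf.d/20auto-upgrades):
sudo dpkg-reconfigure -plow unattended-upgrades

# To limit automatic installs to security origins, edit
# /etc/apt/apt.conf.d/50unattended-upgrades (the default Origins-Pattern
# list already prioritizes the security archive).
```

This is a configuration sketch, not a full hardening guide; servers with strict change control will still want staged rollouts.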
The ones in for a bad time are probably vendors and users of “connected” devices that were never designed to be updated. Every smart TV, Amazon Echo, “smart” home device, or “smart” toothbrush will likely become open to black hats, or to enemies of peace and democracy, who invade your home network. Including medical stuff…
Some devices should probably be put in a Faraday cage - say, anything that would be able to start a fire.


One real concern I have is that there are now automated tools that can read a patch, along with the maintainer’s release notes describing the security vulnerability that patch fixes, and then create a working exploit for the pre-patch vulnerability.
At that particular moment, you know a vulnerability exists, you know it was serious enough to be described in release notes, and you can compare two versions of the code: one that is secure and one that is not. From there, an AI coding agent is working toward something that definitely exists, with plenty of description of what it might be.
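The asymmetry starts with something as simple as reading the patch itself. A minimal sketch of the first step such tools automate - pulling the removed (i.e. pre-patch, vulnerable) lines out of a unified diff. The diff here is a made-up example, not a real CVE fix:

```python
def changed_files_and_removals(diff_text):
    """Return {filename: [lines removed by the patch]} from a unified diff."""
    result = {}
    current = None
    for line in diff_text.splitlines():
        if line.startswith("--- a/"):
            # Old-file header names the file being patched.
            current = line[len("--- a/"):]
            result[current] = []
        elif line.startswith("-") and not line.startswith("---") and current:
            # A removed line: this is the code the patch deleted.
            result[current].append(line[1:])
    return result

# Hypothetical security fix: replacing an unbounded copy.
patch = """\
--- a/net/parser.c
+++ b/net/parser.c
@@ -10,7 +10,7 @@
-    char buf[64]; strcpy(buf, input);   /* unbounded copy */
+    char buf[64]; strncpy(buf, input, sizeof(buf) - 1);
"""

print(changed_files_and_removals(patch))
# The removed line alone already points an attacker at the buffer overflow.
```

A real tool would combine this with the release notes and the surrounding source context, but the point stands: the patch itself is a map to the bug.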
So the window between when a patch is released and when users actually apply it is going to matter more than ever. Downstream maintainers will be under a lot of time pressure to pull in changes from upstream, because every new security patch starts a race to create 1-day exploits against everyone still running the old version.
Open source is going to need to move more slowly, I think. Projects won’t be able to take full advantage of AI to speed up development, because the risk of pushing a bad release is now bigger.