Is there a reason these outages seem to have increased recently?

From the blog post OP linked in a comment:

We made an unrelated change that caused a similar, longer availability incident two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.
In particular, the projects outlined below should help contain the impact of these kinds of changes:
Enhanced Rollouts & Versioning: Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.
“Fail-Open” Error Handling: As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.
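To make the quoted “fail-open” item concrete: the idea is that a bad config update degrades gracefully instead of taking requests down with it. Below is a minimal sketch of that pattern in Go (not Cloudflare’s implementation; the Config type, the feature cap, and the file path are made up for illustration):

```go
package main

import (
	"encoding/json"
	"errors"
	"log"
	"os"
)

// Config is a stand-in for a propagated security/feature configuration file.
type Config struct {
	Version  int      `json:"version"`
	Features []string `json:"features"`
}

// maxFeatures is an illustrative cap, in the spirit of the feature limits
// mentioned in the post; a config exceeding it is treated as out-of-range.
const maxFeatures = 200

// validate rejects configs that parse cleanly but are out of range.
func (c *Config) validate() error {
	if c.Version <= 0 {
		return errors.New("missing or invalid version")
	}
	if len(c.Features) > maxFeatures {
		return errors.New("feature count exceeds cap")
	}
	return nil
}

// loadConfig applies the fail-open rule: if the new file is unreadable,
// corrupt, or out of range, log the error and keep the last known-good
// config instead of returning an error that would drop traffic.
func loadConfig(path string, lastGood *Config) *Config {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Printf("config read failed, keeping last known-good: %v", err)
		return lastGood
	}
	var next Config
	if err := json.Unmarshal(raw, &next); err != nil {
		log.Printf("config parse failed, keeping last known-good: %v", err)
		return lastGood
	}
	if err := next.validate(); err != nil {
		log.Printf("config rejected (%v), keeping last known-good", err)
		return lastGood
	}
	return &next
}

func main() {
	lastGood := &Config{Version: 1, Features: []string{"baseline"}}
	active := loadConfig("new-rules.json", lastGood)
	log.Printf("serving with config version %d", active.Version)
}
```

The point is just that every failure path still returns something servable; the per-customer choice to fail open or closed that the post mentions would essentially be a flag on that fallback branch.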
It seems that the method they have of specifically propagating new security configurations to their servers is not a gradual or group-based rollout; it pushes certain changes to all servers at once, so uncaught bugs end up hitting everything instead of just an initial test group.
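For contrast, the gradual, group-based version of that rollout is not exotic. A rough sketch of the staged pattern (the stage sizes, health signal, and error-rate threshold here are invented for illustration, not Cloudflare’s actual pipeline):

```go
package main

import (
	"fmt"
	"math/rand"
)

// stages defines progressively larger slices of the fleet: a small canary
// cohort first, then wider rings, instead of all servers at once.
var stages = []float64{0.01, 0.05, 0.25, 1.0}

// deployTo pretends to push a config version to a fraction of the fleet and
// returns the error rate observed from that slice (simulated here).
func deployTo(fraction float64, version int) float64 {
	fmt.Printf("pushing config v%d to %.0f%% of servers\n", version, fraction*100)
	return rand.Float64() * 0.002 // simulated health signal
}

// rollback reverts whatever slice already received the new version.
func rollback(version int) {
	fmt.Printf("health check failed, rolling back config v%d\n", version)
}

// rolloutConfig pushes a config version stage by stage, validating health
// after each stage and aborting with a rollback on regression, so an
// uncaught bug only ever hits the current ring rather than everything.
func rolloutConfig(version int, maxErrorRate float64) bool {
	for _, fraction := range stages {
		if errRate := deployTo(fraction, version); errRate > maxErrorRate {
			rollback(version)
			return false
		}
		fmt.Printf("stage %.0f%% healthy, continuing\n", fraction*100)
	}
	fmt.Printf("config v%d fully deployed\n", version)
	return true
}

func main() {
	// Abort the rollout if any stage shows more than a 0.1% error rate.
	rolloutConfig(42, 0.001)
}
```

The tension, per the quoted “Enhanced Rollouts & Versioning” item, is that threat-response data also needs to reach every server quickly, so the stages for that class of config would presumably be measured in minutes rather than days.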
That lack of gradual rollout is the actual answer with respect to Cloudflare. Their config system was fucked in November. It’s still fucked in December. React’s massive CVE just forced them to use it again.
More generally, the issue is companies aggressively accelerating feature development at the cost of stability, likely due to AI. That’s how it is at the company I’m at, anyway.
Lack of NSA funding to run their man-in-the-middle platform that everyone likes.
Something they (Cloudflare) said recently about the last big outage is that there’s a bug in a part of their system that isn’t their own code/product, and the developer of that component isn’t fixing it.
Interesting! Thanks for the information.
Without looking into this specific outage, I’d suggest that things like deferred maintenance and “cost optimizing” technical staffing are often contributing factors. (At least in my experience.)