Around the same time, Cloudflare’s chief technology officer, Dane Knecht, explained in an apologetic X post that a latent bug was responsible.

“In short, a latent bug in a service underpinning our bot mitigation capability started to crash after a routine configuration change we made. That cascaded into a broad degradation to our network and other services. This was not an attack,” Knecht wrote, referring to a bug that went undetected in testing and had not previously caused a failure.

  • floquant@lemmy.dbzer0.com · 2 hours ago

    Did they, though? Aside from the “every outage is a latent bug” angle, from their postmortem it doesn’t seem to me like they tried to blame it on anything but their own failure to contain the spread of (and promptly diagnose) the issue.

  • FauxLiving@lemmy.world · 8 hours ago

    If you want a technical breakdown that isn’t “lol AI bad”:

    https://blog.cloudflare.com/18-november-2025-outage/

    Basically, a permission change caused an automated query to return more data than was planned for. The query produced a configuration file with a large number of duplicate entries, which was pushed to production. The size of the file went over the preallocated memory limit of a downstream system, which died due to an unhandled error state resulting from the oversized configuration file. This caused a thread panic, leading to the 5xx errors.
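
    For illustration, a minimal Rust sketch of that failure mode. Everything here is made up (the names, the limit, the file format); it is not Cloudflare’s actual code, just the general shape of “preallocated limit + unhandled error = thread panic”:

    ```rust
    // Hypothetical sketch of the failure mode described above -- not Cloudflare's code.
    // A loader with a preallocated capacity receives an oversized config file, returns
    // an error, and a caller that assumes success panics, surfacing as 5xx errors.

    const MAX_FEATURES: usize = 200; // made-up preallocated limit

    #[derive(Debug)]
    struct ConfigError(String);

    fn load_features(entries: &[String]) -> Result<Vec<String>, ConfigError> {
        if entries.len() > MAX_FEATURES {
            // The limit is enforced, but only by returning an error...
            return Err(ConfigError(format!(
                "{} entries exceeds the preallocated limit of {}",
                entries.len(),
                MAX_FEATURES
            )));
        }
        let mut features = Vec::with_capacity(MAX_FEATURES);
        features.extend_from_slice(entries);
        Ok(features)
    }

    fn main() {
        // A query with duplicated rows produces far more entries than expected.
        let oversized: Vec<String> = (0..1_000).map(|i| format!("feature_{}", i % 60)).collect();

        // ...and the caller assumes loading can never fail. This unwrap() panics,
        // taking the worker thread (and every request it was serving) down with it.
        let features = load_features(&oversized).unwrap();
        println!("loaded {} features", features.len());
    }
    ```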

    It seems that CrowdStrike isn’t alone this year in the ‘A bad config file nearly kills the Internet’ club.

    • AldinTheMage@ttrpg.network · 3 hours ago

      So the actual outage comes down to pre-allocating memory, but not actually having error handling to gracefully fail if that limit is or will be exceeded… Bad day for whoever shows up on the git blame for that function
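
      For what it’s worth, the graceful path would look something like this hedged Rust sketch (made-up names and limit, not Cloudflare’s code): enforce the limit, and on a bad file keep the last known-good config and alert instead of panicking.

      ```rust
      // Hedged sketch of "fail gracefully when the limit is exceeded".
      // Names, limit, and config format are hypothetical.

      const MAX_ENTRIES: usize = 200;

      fn validate(candidate: &[String]) -> Result<Vec<String>, String> {
          if candidate.len() > MAX_ENTRIES {
              return Err(format!(
                  "config has {} entries, limit is {}",
                  candidate.len(),
                  MAX_ENTRIES
              ));
          }
          Ok(candidate.to_vec())
      }

      fn apply_update(current: Vec<String>, candidate: &[String]) -> Vec<String> {
          match validate(candidate) {
              // Accept the new config only if it passes the limit check.
              Ok(new_config) => new_config,
              // Otherwise: log/alert and keep serving with the previous config.
              Err(reason) => {
                  eprintln!("rejected config update, keeping old config: {}", reason);
                  current
              }
          }
      }

      fn main() {
          let current = vec!["feature_a".to_string(), "feature_b".to_string()];
          let oversized: Vec<String> = (0..1_000).map(|i| format!("f{}", i)).collect();
          // The oversized file is rejected; the service keeps running on the old config.
          let active = apply_update(current, &oversized);
          println!("active config has {} entries", active.len());
      }
      ```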

      • hue2hri19@lemmy.sdf.org · 2 hours ago

        This is the wrong take. Git blame only shows who wrote the line. What about the people who reviewed the code?

        • floquant@lemmy.dbzer0.com · 2 hours ago

          Plus the guys who are hired to ensure that systems don’t fail even under inexperienced or malicious employees, management who designs and enforces the whole system, etc… “one guy fucked up and needs to be fired” is just a toxic mentality that doesn’t actually address the chain of conditions that led to the situation

    • groet@feddit.org · 6 hours ago

      Yes but no. If you use a different service for the same purpose as Cloudflare, you will be just as offline when they make a mistake. The difference is just that with a centralized player, everyone is offline at the same time. For the individual websites, that does not matter.

        • aeronmelon@lemmy.world · 10 hours ago

          Fun fact time:

          That’s why they’re called computer bugs.

          In 1947, the Harvard Mark II computer was malfunctioning. Engineers eventually found a dead moth wedged between two relay points, causing a short. Removing it fixed the problem. They saved the moth and it’s on display at a museum to this day.

          The moth was not okay.

          And to be fair, the word bug had been used to describe little problems and glitches before that incident, but this was the first case of a computer bug.

          • FauxLiving@lemmy.world · 4 hours ago

            The moth was not okay.

            They didn’t tell us this part when they taught it in school. #RIP Bug, the OG bug who died to the OG pull request.

  • A_norny_mousse@feddit.org · 9 hours ago

    a routine configuration change

    Honest question (I don’t work in IT): this sounds like a contradiction, or at the very least a deliberately placating choice of words. Isn’t a config change the opposite of routine?

    • monkeyslikebananas2@lemmy.world · 9 hours ago

      Not really. Sometimes there are processes in place where engineers will make a change in reaction to, or in preparation for, something. They could easily have made a mistake while making a change like that.

      • 123@programming.dev · 8 hours ago

        E.g.: companies that advertise during a large sporting event might preemptively scale up (or warm up, depending on the language) their servers in preparation for a large load increase following an ad or a mention of a coupon or promo code. Failing to capture the market it could generate would be seen as wasted $$$.

        Edit: you can’t count on auto-scaling for non-essential products; people would not come back if the website failed to load on the first attempt.

      • NotMyOldRedditName@lemmy.world · 8 hours ago

        I don’t think there was a bug in making the configuration change; I think a bug surfaced as a result of that change.

        That specific combination of changes may not have been tested, or applied in production, for months, and it just happened to happen today, when they were needed for the first time since an update some time ago. Hence the “latent” part.

        But they do changes like that routinely.

        • monkeyslikebananas2@lemmy.world · 8 hours ago

          Yeah, I just read the postmortem. My response was more about the misconception that any configuration change is inherently non-routine.

    • Fushuan [he/him]@lemmy.blahaj.zone · 8 hours ago

      They probably mean that they made a change to a config file that gets uploaded in their weekly or bi-weekly change window, and that the file was malformed for whatever reason, which made the process that reads it crash. The main process depends on that process, and the whole chain failed.

      Things to improve:

      • Make the pipeline more resilient: if you have a “bot detection module” that expects a file, and that file is malformed, it shouldn’t crash the whole thing. If the bot detection module crashes, contain it, fire an alert, but accept the request until it’s fixed.
      • Have a check on updated files to ensure that nothing outside the expected values and format is uploaded: if the file doesn’t comply with the expected format, the upload fails and the prod environment doesn’t crash.
      • Have proper validation of updated config files so that if something is amiss, nothing crashes and the program makes a controlled decision: if the file is wrong, instead of crashing the module, return an informative value and let the main program decide whether to keep going or not.

      I’m sure they have several of these, and sometimes shit happens, but for something as critical as Cloudflare not to have automated integration tests in a testing environment before anything touches prod is pretty bad.
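
      As a rough illustration of that kind of gate, here’s a hedged Rust sketch of a pre-deploy check that refuses to push a feature file that is malformed, oversized, or full of duplicates. The file name, the limit, and the line-oriented format are assumptions for illustration only, not Cloudflare’s actual pipeline.

      ```rust
      // Hypothetical pre-deploy gate: run in CI / the deploy pipeline and exit
      // non-zero so a bad config file never reaches production.

      use std::collections::HashSet;
      use std::fs;

      const MAX_ENTRIES: usize = 200; // made-up limit

      fn check_feature_file(path: &str) -> Result<(), String> {
          let contents = fs::read_to_string(path).map_err(|e| format!("unreadable: {e}"))?;
          let entries: Vec<&str> = contents
              .lines()
              .filter(|l| !l.trim().is_empty())
              .collect();

          // Reject oversized files before they hit any preallocated limit downstream.
          if entries.len() > MAX_ENTRIES {
              return Err(format!("{} entries exceeds limit of {}", entries.len(), MAX_ENTRIES));
          }

          // Reject files bloated by duplicate rows.
          let unique: HashSet<&str> = entries.iter().copied().collect();
          if unique.len() != entries.len() {
              return Err(format!("{} duplicate entries found", entries.len() - unique.len()));
          }

          Ok(())
      }

      fn main() {
          // Failing here stops the pipeline instead of crashing prod later.
          if let Err(reason) = check_feature_file("bot_features.conf") {
              eprintln!("deploy blocked: {reason}");
              std::process::exit(1);
          }
          println!("config check passed");
      }
      ```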

      • groet@feddit.org · 6 hours ago

        it shouldn’t crash the whole thing. If the bot detection module crashes, contain it, fire an alert, but accept the request until it’s fixed.

        Fail open vs. fail closed. Bot detection is a security feature. If the security feature fails, do you disable it and allow unchecked access to the client’s data? Or do you value integrity over availability?

        Imagine the opposite: they disable the feature, and during that timeframe some customers get hacked. The hacks could have been prevented by the bot detection (which the customer is paying for).

        Yes, bot detection is not the most critical security feature and probably not the reason someone gets hacked, but having “fail closed” as the default for all security features is absolutely a valid policy. Changing that policy should not be the lesson from this disaster.
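
        To make the trade-off concrete, here’s a toy Rust sketch (entirely hypothetical, not Cloudflare’s design) of the decision point: when the bot-detection module errors out, a deliberate policy, rather than an uncontrolled crash, decides whether requests are still served.

        ```rust
        // Toy illustration of fail-open vs. fail-closed. Everything here is made up.

        enum FailurePolicy {
            Open,   // availability first: serve the request, alert operators
            Closed, // integrity first: block the request until the module recovers
        }

        enum Verdict {
            Allow,
            Block,
        }

        fn score_request(_req: &str) -> Result<f32, String> {
            // Stand-in for the bot-detection module; pretend it crashed on a bad config.
            Err("bot detection unavailable: malformed feature file".to_string())
        }

        fn decide(req: &str, policy: &FailurePolicy) -> Verdict {
            match score_request(req) {
                Ok(score) if score > 0.9 => Verdict::Block, // confident bot: block
                Ok(_) => Verdict::Allow,                    // looks human: allow
                Err(e) => {
                    // The module failed: apply an explicit policy instead of panicking.
                    eprintln!("bot detection failed ({e}); applying failure policy");
                    match policy {
                        FailurePolicy::Open => Verdict::Allow,
                        FailurePolicy::Closed => Verdict::Block,
                    }
                }
            }
        }

        fn main() {
            // Which policy is "correct" is exactly the question being debated here.
            match decide("GET /login", &FailurePolicy::Closed) {
                Verdict::Allow => println!("request served"),
                Verdict::Block => println!("request rejected"),
            }
        }
        ```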

        • Fushuan [he/him]@lemmy.blahaj.zone · 4 hours ago

          You don’t get hacking protection from bots, you get protection from DDoS attacks. Yeah, some customers would have gone down; instead, everyone went down… I said that instead of crashing the system they should have something that takes an intentional decision and informs properly about what’s happening. That decision might have been to close.

          You can keep the policy and still inform everyone much better about what’s happening. Half a day is a wild amount of downtime; if this had been properly managed, it wouldn’t have taken that long.

          Yes, bot detection is not the most critical…

          So you agree that if this were controlled, instead of everything crashing outright, they would be able to make an informed decision and open or close things (with the suggestion of failing open in the case of bot detection), and that this is the correct approach. What’s the point of your complaint if you do agree? C’mon.

          • groet@feddit.org · 3 hours ago

            You don’t get hacking protection from bots

            I disagree. I don’t know the details of Cloudflare’s bot detection, but there are many automated vulnerability scanners that this could protect against.

            I said that instead of crashing the system they should have something that takes an intentional decision and informs properly about what’s happening.

            I agree. Every crash is a failure by the designers. Instead, it should be caught by the program and result in a useful error state. They probably have something like that, but it didn’t work because the crash was too severe.

            What’s the point of your complaint if you do agree?

            I am not complaining. I am pointing out that you are missing an angle in your consideration. You can never prevent every crash, ever. So when designing your product, you have to consider what should happen if every safeguard fails and you get an uncontrolled crash. In that case you have to design for “fail open” or “fail closed”. Cloudflare fucked up: the crash should not have happened, and if it did, it should have been caught. It wasn’t. They fucked up. But I agree with the result of the fuck-up being a fail-closed state.

    • foo@feddit.uk · 7 hours ago

      They’re laying off testers because they think AI can do it all now.

  • PiraHxCx@lemmy.ml · 17 hours ago

    I wonder if all recent outages aren’t just crappy AI coding

    • MagicShel@lemmy.zip · 17 hours ago

      Shitty code has been around far longer than AI. I should know, I wrote plenty of it.

      • foo@feddit.uk · 7 hours ago

        But AI can do the work of 10 of you humans, so it can write 10 times the bugs and deploy them to production 10 times faster. Especially if pesky testers stay out of the way instead of finding some of the bugs.

        • MagicShel@lemmy.zip · 16 hours ago

          Shame on them. I mark my career by how long it takes me to regret the code I write. When I was a junior, it was often just a month or two. As I seasoned, it became maybe as long as two years. Until finally I don’t regret my code, only the exigencies that prevented me from writing it better.

        • FauxLiving@lemmy.world · 8 hours ago

          It’s always depressing when you ask the AI to explain your code and then you get banned from OpenAI

          • 123@programming.dev · 8 hours ago

            Who didn’t get hit by the fork bug the professor explicitly asked you to watch out for, since back then (with Windows systems being required to use the campus resources) it would take an admin with Linux access to eliminate it.

            It was kind of fun walking into the tech support area and having them ask for your login name with no context, already knowing what the issue was. Must have been a common occurrence that week of the course.

            • FauxLiving@lemmy.world · 8 hours ago

              It was kind of fun walking into the tech support area and having them ask for your login name with no context, already knowing what the issue was.

              I see this zip bomb was owned by user icpenis, someone track that guy down.

    • AbidanYre@lemmy.world · 17 hours ago

      Humans are plenty capable of writing crappy code without needing to blame AI.

    • renegadespork@lemmy.jelliefrontier.net · 17 hours ago

      Indirectly, this was. He said the outage was caused by a bug in their recent tool that lets sites block AI crawlers. It’s a relatively new tool, released in the last few months, so it makes sense that it might be buggy, given how urgent the rush to stop the AI DoS attacks has been.

      • iglou@programming.dev · 9 hours ago

        Obviousness? If you mass lay off your tech staff, you take the risk of more technical failures.

        A smaller staff cannot do the same work as a larger one, and I guarantee you they’re being asked to progress at the same speed. So, the tradeoff is on the quality of the product and the testing, not on the speed of development.

  • DaMummy@lemmy.world · 17 hours ago

    Why’s he saying it’s not an attack? Sounds like he’s protesting too much.

    • grumpasaurusrex@lemmy.world · 14 hours ago

      There’s nothing to be gained from Cloudflare lying about this. It honestly makes them look worse that the outage was caused internally than it would if it had been due to an attack.