The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems’ permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.

The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats. The software had a limit on the size of the feature file, and the doubled file exceeded that limit, causing the software to fail.
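
To make that failure mode concrete, here is a minimal, hypothetical sketch in Rust of a loader that enforces a hard cap on the number of feature entries. The cap value, names, and error handling are assumptions for illustration only, not Cloudflare’s actual code.

```rust
const MAX_FEATURES: usize = 200; // assumed hard limit, for illustration only

// Parse the feature file and refuse anything over the preset limit.
fn load_feature_file(contents: &str) -> Result<Vec<String>, String> {
    let features: Vec<String> = contents
        .lines()
        .filter(|line| !line.trim().is_empty())
        .map(|line| line.to_string())
        .collect();

    if features.len() > MAX_FEATURES {
        return Err(format!(
            "feature file has {} entries, limit is {}",
            features.len(),
            MAX_FEATURES
        ));
    }
    Ok(features)
}

fn main() {
    // Simulate a feature file whose entries have been duplicated, doubling it.
    let doubled: String = (0..400).map(|i| format!("feature_{i}\n")).collect();

    // If the caller treats the error as fatal, every machine that receives
    // the oversized file fails in the same way.
    match load_feature_file(&doubled) {
        Ok(features) => println!("loaded {} features", features.len()),
        Err(e) => panic!("bot management refresh failed: {e}"),
    }
}
```

The shape of the problem is what matters here: a fixed limit that used to sit comfortably above normal file sizes suddenly sits below the new size, and a routine configuration refresh becomes a fleet-wide failure.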

  • Echo Dot@feddit.uk · +52/-7 · 10 hours ago

    So I work in the IT department of a pretty large company. One of the things we do on a regular basis is staged updates: we take a small number of computers and update the software on them to the latest version or whatever. Then we leave it for about a week, and if the world doesn’t end we roll the update out to the next group, and then the next, and then the next, until everything is upgraded. We don’t just slap it onto production infrastructure and then go to the pub. (A rough sketch of the pattern is below.)

    But apparently our standards are slightly higher than those of an international organisation whose whole purpose is cyber security.
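
    A minimal sketch of that staged-rollout pattern, in Rust. The batch sizes, host names, health check, and soak delay are all placeholders, not any particular company’s process.

    ```rust
    use std::{thread, time::Duration};

    // Stand-in for real monitoring: error rates, crash loops, paging, etc.
    fn is_healthy(host: &str) -> bool {
        println!("checking {host}");
        true // placeholder: assume the batch came up fine
    }

    fn main() {
        // Rollout stages grow from a small canary batch to the rest of the fleet.
        let stages: Vec<Vec<&str>> = vec![
            vec!["canary-1"],
            vec!["web-1", "web-2"],
            vec!["web-3", "web-4", "web-5", "web-6"],
        ];

        for (i, stage) in stages.iter().enumerate() {
            for host in stage {
                println!("updating {host}");
            }

            // Soak period: "about a week" in the comment above, shortened here
            // so the sketch actually runs.
            thread::sleep(Duration::from_millis(100));

            if !stage.iter().all(|host| is_healthy(host)) {
                eprintln!("stage {} unhealthy, halting rollout", i + 1);
                return; // the remaining stages never receive the bad update
            }
        }
        println!("rollout complete");
    }
    ```

    The key property is that a bad update can only ever hurt the current stage; everything after it stays untouched until the soak period passes cleanly.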

    • IphtashuFitz@lemmy.world · +1 · 58 minutes ago

      You would do well to go read up on the 1990 AT&T long-distance network collapse. A single line of changed code, rolled out months earlier, ultimately triggered what you might these days call a DDoS attack, one that took down all 114 long-distance telephone switches in their network. Over 50 million long-distance calls were blocked in the 9 hours it took them to identify the cause and roll out a fix.

      AT&T prided itself on the thoroughness of its testing and rollout strategy for any code change. The bug that took them down was both timing-dependent and load-dependent, which made it extremely difficult to test for, and it required fairly specific real-world conditions to trigger. That’s how it went unnoticed for months before it finally fired.

    • floquant@lemmy.dbzer0.com · +30 · 8 hours ago

      Their motivation is that the file has to change rapidly to respond to threats. If a new botnet pops up and starts generating a lot of malicious traffic, they can’t just let it run for a week.

      • unexposedhazard@discuss.tchncs.de · +7/-4 · 7 hours ago (edited)

        How about an hour? 10 minutes? Either would have prevented this. I very much doubt that their service is so unstable and flimsy that they need to respond to stuff on such short notice. It would be worthless to their customers if that were true.

        Restarting and running some automated tests on a server should not take more than 5 minutes.

      • Echo Dot@feddit.uk · +4/-7 · 5 hours ago

        There are technical solutions to this. You update half your servers, and if they die you disconnect them from the network while you fix them and have the unaffected servers take up the load. Now yes, this doesn’t get a fix out quickly, but if your update kills your entire system, you’re not going to get the fix out quickly anyway.
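
        A rough sketch of that half-and-half idea, again purely illustrative: split the fleet into two pools, update one, and have the load balancer route only to pools that pass a health check, so the untouched half carries the load while the broken half is fixed.

        ```rust
        // All names and the "health check" below are made up for illustration.
        struct Pool {
            name: &'static str,
            version: &'static str,
            healthy: bool,
        }

        fn main() {
            let mut pools = vec![
                Pool { name: "pool-a", version: "v2-new", healthy: true },
                Pool { name: "pool-b", version: "v1-stable", healthy: true },
            ];

            // Pretend the new version falls over after deployment.
            pools[0].healthy = false;

            // The load balancer only considers healthy pools, so traffic shifts
            // entirely onto the pool still running the known-good version.
            let serving: Vec<&Pool> = pools.iter().filter(|p| p.healthy).collect();
            for pool in &serving {
                println!("routing traffic to {} ({})", pool.name, pool.version);
            }

            if serving.len() < pools.len() {
                println!("some pools are out of rotation; fix them before continuing the rollout");
            }
        }
        ```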

    • codemankey@programming.dev · +19/-2 · 9 hours ago

      My assumption is that the pattern you describe is doable at certain scales and with certain combinations of technologies. But doing it across a distributed system with as many nodes, and as many different kinds of nodes, as Cloudflare has, while still keeping the system quick to update (to respond to DDoS attacks, for example), is a lot harder.

      If you really feel like you have a better solution, please contact them and consult for them; the internet would thank you for it.

      • Echo Dot@feddit.uk · +1/-5 · 5 hours ago

        They know this; it’s not like any of this is a revelation. But the company has been lazy and would rather just test in production, because that’s cheaper and most of the time perfectly fine.