So I responded:
What we have just experienced is a synthesis of multiple, possibly unsolvable, systemic and regulatory problems. This kind of incident can (and does) happen to any and all platforms, but I would argue that Windows is disproportionately impacted because:
(a) competing operating systems were built for more hostile environments like multiuser computers in universities, and
(b) Windows was originally built to support a single user and then extended (whilst under the substantial restriction of maintaining backward compatibility) to support web, and eventually “enterprise”, workloads
This gradual evolution has led to the growth of an ecosystem of third-party tools to address shortcomings in the fundamental Windows platform… but with the consequent rise of “compliance capture” / “enterprise security by designer brand” – where insurance, financial, and even M&A auditors care more about seeing particular product checkboxes ticked than about security behaviours being practised.
“Oh you have Crowdstrike? Okay, you’re good”, etc.
…and then you have the regulatory prohibitions – where anti-monopoly types stoke public fear of letting companies build isolated “ivory towers” which lock up your data or functionality where you can’t get at it whilst simultaneously ignoring that sometimes you really do want your data in a very locked up, very secure, robust, available, ivory tower.
[Aside: the flip side of this issue is when the FBI complain about not being able to get data out of “criminal” iPhones which they have seized, and you in the media have to ask whether it is a greater general good for all data to be secure, or for data to be accessible to third and fourth parties. Me, I lean towards security. But I digress.]
Microsoft argue that regulators have prevented them from making the radical security changes necessary to improve their platform (including locking out third-party kernel modules), and I broadly agree; but they also started from a weaker security architecture with fewer controls, so they would have needed to put in substantial and more radical work to make such change effective – and I’m not aware that we’ve seen evidence of that intention.
So, realpolitik time:
We have global industries which believe – with some cause – that online platforms (Windows included) need, at a moment’s notice, to resist some new and horrifying malware… AND that the acceptable cost of such resistance is throwing away one’s own change-control mechanisms in favour of putting absolute trust in a third party to deliver flawless software at a daily cadence.
At which point you have to ask: what is the greater threat? (a) surprise malware from North Korea, or (b) somebody eventually pushing flaky software onto your computer? We do not have a checkbox for that on our compliance risk-appetite questionnaires, but having spent my early 2000s doing consultancy at investment banks I know that in those environments they will happily run 10-year-old operating systems with 6- or 12-month patching burn-in cycles, because reliability and well-understood behaviour are more important in some contexts.
Bugs are inevitable, and if you are an enterprise which assumes that fresh prophylactic perfection will be pushed to you daily by a third party, then either you do not understand how software actually works, or else you are an academic working in the field of software correctness, where nobody actually does any real work.