I just finished writing a massive comment on Ed Bott’s blog in response to this post and the first 3 comments that followed it. The comment is between 2 and 3 times the length of Ed’s original post, and quite probably way more than he was asking for when he decided to comment on the recent virus troubles he mentions. When I finished writing the comment, I was so utterly pleased with the way I made some of my points and outlined several of the problems faced by Systems Administrators in today’s world that I decided to post it here as well, since it more than qualifies as its own blog entry (from hell?). In order to understand it, please read Ed’s post (it’s short) that I linked to above first. Here goes:

I used to share these same views. Then I became the sole systems administrator for a small (~70 machine) company. It sounds so easy, doesn’t it? It’s just a simple little 2 MB patch… what’s so hard about keeping your machines up-to-date?

You say that when you have 1 computer to take care of. What about 10? 20? 50? 100? Then it becomes a significantly different beast. Even deploying Microsoft’s Software Update Services (SUS, now WSUS… one of those new acronyms) doesn’t make it foolproof. You still have to take the time to keep up to date on security news and then remember to go in and approve newly released and downloaded updates. After that, you have to wait until your next update cycle comes around before client machines will even start trying to download the patch.

Even if you’re on your toes, watching for new alerts 24 hours a day, you’re still probably looking at a response time of at least 24 hours before your update window rolls around again (ours is 3 a.m. every morning). And who’s to say even that is fast enough?
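
To put rough numbers on that, here’s a back-of-the-envelope sketch; the 3 a.m. window matches ours, but the publish and approval times are invented purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical timeline with made-up times, just to illustrate the math:
# a patch is published mid-afternoon, the admin approves it that evening,
# and clients only check in at the nightly 3 a.m. window.
patch_published = datetime(2005, 8, 15, 14, 0)          # patch goes public at 2 p.m.
admin_approves = patch_published + timedelta(hours=6)   # approved at 8 p.m. the same day

# The next 3 a.m. update window *after* the approval
next_window = admin_approves.replace(hour=3, minute=0, second=0, microsecond=0)
if next_window <= admin_approves:
    next_window += timedelta(days=1)

exposure = next_window - patch_published
print("Clients start pulling the patch at", next_window)
print("Unpatched exposure window:", exposure)           # about 13 hours in this example
```

And that’s close to the best case, where someone actually sees the alert and approves the patch the same day; a patch that drops on a Friday evening or over a holiday stretches that window out much further.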

The next problem you’ll run into is the randomness of the Windows environment. Oops, for some reason this client decided not to download the patch. Unless you’ve got Microsoft’s Systems Management Server (SMS) running, or some other package constantly auditing machines (which no small company is going to have), plus someone actually watching the reports, you’re not even going to know.
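
For what it’s worth, here’s the sort of quick-and-dirty audit I’m talking about — a minimal sketch, not something we actually run, assuming a plain-text hosts.txt of machine names, admin rights on each box, and Tim Golden’s third-party wmi package; the hotfix ID is a made-up placeholder:

```python
# Rough sketch of a "did everyone actually get the patch?" check.
# Assumes: Python on Windows, the third-party 'wmi' package (pip install wmi),
# admin credentials on every target, and a hosts.txt with one hostname per line.
import wmi

HOTFIX_ID = "KB123456"  # hypothetical patch we're checking for

with open("hosts.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

for host in hosts:
    try:
        conn = wmi.WMI(computer=host)
        installed = {fix.HotFixID for fix in conn.Win32_QuickFixEngineering()}
        status = "OK" if HOTFIX_ID in installed else "MISSING PATCH"
    except Exception as err:  # machine offline, access denied, WMI broken...
        status = f"UNREACHABLE ({err})"
    print(f"{host}: {status}")
```

Even something that crude beats finding out a machine was missed only when it starts misbehaving.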

Add on plain misconfigurations (which are going to happen, don’t even try to say they aren’t) and other anomalies, and a machine is bound to slip through unpatched.

As Chris G. said, hardware firewalls only protect against outside sources. We’ve also got email, laptops, PDAs, USB drives, floppy drives, CD-ROM drives, the list goes on and on. In this day and age, with the tools available (and at the prices of some of them), it’s impractical and almost downright impossible to run firewalls on each individual client machine, so once a machine on the network is infected, it will spread like wildfire (a 100 Mbit to 1 Gbit full-duplex bandwidth-loving wildfire, to be precise).

My point through all of this is that while it sounds incredibly simple to secure a network against a KNOWN vulnerability, the reality of the situation is far more complex and unpredictable. Sure, we would expect large multinational corporations to have the IT staff (and money) to combat all these issues upwards of 98% of the time, but in reality it doesn’t always happen that way.

Besides, for all we know they have. 100% is an unattainable goal, and we have no idea of the scope or impact of these “shutdowns” and “crashes”. This might have been within their margin of error. It might also have been three computers at each facility that just seem like big news once the old-school media get hold of the information from “sources”.

So let’s go easy on these guys and stop the name-calling, poo-flinging flame war before it begins, shall we? They’re just doing their jobs, and for all we know, very, very well…

As for Praveen’s comment about Windows bugs: I have never seen any hard proof that there are in fact more or fewer bugs in one operating system versus another. Your argument is a constant stand-by for open source advocates, particularly the *nix folk. The reality here is that Windows runs on something like 95% of the world’s computers. If we actually worked out the ratios and did the math, we might well find that Linux / Mac OS X / Your Toaster has exactly the same ratio of bugs as Windows relative to its scale of adoption and publicity.

Like I said about patching above, 100% is an unattainable goal. There will ALWAYS be bugs. Go find some bug trackers on SourceForge and see how many problems are reported for a simple little open source project. Now envision Windows, thousands of times more complex. Again, what’s the ratio of bugs to code in comparison to user base and popularity? Is Windows really less secure, or is it just more visible, more popular, and more media-focused? I don’t know, but I have a feeling it’s no less secure or buggy than anything else of its size, complexity, and use.
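
Just to make the arithmetic concrete, here’s the normalization I mean — every figure below is invented purely for illustration, not a real bug count or market-share number:

```python
# Hypothetical, made-up figures to illustrate the normalization, not real data.
platforms = {
    #            (reported bugs, installed base)
    "Windows":   (10_000, 950_000_000),
    "Linux":     (600, 50_000_000),
    "Mac OS X":  (250, 25_000_000),
}

for name, (bugs, installs) in platforms.items():
    per_million = bugs / installs * 1_000_000
    print(f"{name}: {per_million:.1f} reported bugs per million installs")
```

With raw counts the first row looks an order of magnitude worse; normalized by installed base, all three land in the same neighborhood, which is exactly the comparison the headlines never make.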

NOTE: As I glance back over the original entry Ed posted, I see that the bug apparently only affects pre-SP1 machines. That’s a patch that has been available for a lot longer than 24 hours. Not having SP1 on machines is pretty bad, but some of my other points still stand (such as the scope of the problem and the margin for error).

With that, I’m off to bed. I plan to post this entry to my blog in the morning, since I think I’ve made some good points. If you’re interested in flaming me, please do so there…

And before anyone labels me as a Windows / Microsoft addict, let me clear the air by pointing out that I am posting this comment from my laptop running Fedora Core 4…

And with that, as I said, I bid you goodnight… Bring on the commentary!
