According to many news outlets (but interestingly, not Panda Labs' blog, which most of the reports cite), a couple of days ago, Panda Labs said MS09-008 doesn't fix one of the vulnerabilities it set out to fix. This complaint may or may not be the same as the one from nCircle, in which the patch installs differently and does not provide future protection if the vulnerability has already been exploited (calling this behavior an incomplete fix seems pretty reasonable to me). There's a whoops or two in here somewhere, be it an unsuccessful fix from Microsoft, Panda publicly disclosing a vulnerability without giving the vendor time to provide a patch, news outlets propagating a bogus story, Panda not posting anything to back up their claim, or my inability to find the actual data Panda posted. Conveniently, I don't need the details to ramble on about patches & security in general: whether this patch in particular had problems or not, security patches have holes, too.
Why is that?
To start with, many security issues are correctness issues. What applies to software in general also applies to security patches: writing bug-free code is hard to the point of impossible (depending on whether you're talking to a formal methods geek). To approximate it, you typically need extreme attention to detail; very good knowledge of the surrounding code, intended architecture, and behavior of surrounding systems; a full-coverage test suite; a good development process; and enough time to carefully design, code, and test the software in question. Even this simplified view shows how much work is involved. There will be correctness issues in software, including security patches, and some of those correctness issues will cause security problems.
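As a tiny illustration (hypothetical code, not from any patch discussed here), a one-line correctness bug can be a security bug at the same time:

```python
def can_delete(user_role):
    # Intended check: only admins and superusers may delete.
    # The bug: `or "superuser"` is a non-empty string, so it is always
    # truthy -- this condition holds for EVERY role.
    return user_role == "admin" or "superuser"

def can_delete_fixed(user_role):
    # The correct version compares against both values.
    return user_role in ("admin", "superuser")

# The buggy version lets anyone through:
assert can_delete("guest")            # privilege escalation
assert not can_delete_fixed("guest")  # the fix rejects it
```

Nothing about the buggy line looks "security-related"; it's an ordinary logic mistake that happens to guard something valuable.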
For security issues that go beyond correctness issues, take the above list of what you need to write high-quality code, increase the existing knowledge, test & time requirements substantially, and add knowledge of the specific security issues involved; security issues that go beyond correctness are usually harder to understand and reproduce than "simple" correctness issues. Complicated tends to mean more bugs, and some of those bugs will be security issues.
Now let's add in the fact that it's a patch.
Even if you only maintain one supported version (i.e. you work for a SaaS ;) ), you're not going to work on that supported version every day. You're mostly going to work on some future release. Chances are good that you have unconscious expectations about what features are available to you, and how the surrounding code acts, based on the code you work with regularly. Quite a lot of this may be new since the supported release, but since your assumptions about its presence are unconscious, you probably don't have a complete list of new features in mind and you may not think to ask yourself whether what you are doing will always work safely given that X is not present. The wider the gap between the supported version & what you're working on now, the more likely you are to slip up in this way.
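Python's own standard library gives a concrete instance of this trap: `shlex.join()` only exists on Python 3.8 and later, so a patch written against a newer tree and backported unguarded to an older deployment crashes with an `AttributeError`. A defensive sketch (the `join_args` helper is my invention for illustration):

```python
import shlex

def join_args(args):
    # shlex.join() was added in Python 3.8. A patch developed on a
    # newer branch can reach for it out of habit; on the older
    # supported version it simply isn't there.
    if hasattr(shlex, "join"):
        return shlex.join(args)
    # Fallback that works on the older supported versions too
    # (shlex.quote has been around since 3.3).
    return " ".join(shlex.quote(a) for a in args)

print(join_args(["ls", "my file"]))  # ls 'my file'
```

The probe-and-fall-back pattern is clumsy, which is exactly why the unconscious assumption usually wins: nobody writes this unless they remember the older version exists.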
If you have multiple supported versions, you get to repeat the above for every supported version. And it gets worse: again, unless you work for a SaaS, you have to cover the possibility that not all your previously published patches (security or not) have been installed. Your fix needs to work whatever the patch state, for all your supported versions, which means you have to think it through and test for each supported version & possible patch state.
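The combinatorics alone are sobering. A hypothetical sketch (the version and patch names are invented) of how fast the test matrix grows:

```python
from itertools import product

# Hypothetical support matrix: three supported versions, and for each,
# customers may or may not have installed each of two earlier patches.
versions = ["4.2", "5.0", "5.1"]
earlier_patches = ["KB101", "KB102"]

# Each earlier patch contributes an installed/not-installed axis,
# so there are 2**len(earlier_patches) possible patch states.
patch_states = list(product([True, False], repeat=len(earlier_patches)))

test_matrix = [(v, s) for v in versions for s in patch_states]
print(len(test_matrix))  # 3 versions * 4 patch states = 12 configurations
```

Every additional supported version multiplies the matrix, and every previously shipped optional patch doubles it.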
For most developers, adding new features is way more fun than writing patches. You probably can't wait to get back to whatever you were working on when somebody reported this bug.
And for a security patch?
Security is a specialty & mindset of its own, and it can be hard for non-security developers to understand the issue they're fixing as deeply as they would understand a correctness issue.
There is extra pressure to get a security patch out fast. Distracted people in a hurry make more mistakes.
Typically, a security patch will reduce functionality. Backwards compatibility is a big issue: customers hardly ever want you to take something away. Ideally, you would remove the functionality the attacker can use without affecting the functionality legitimate users use, but this isn't always possible, and then you have some really tricky decisions to make.
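A hypothetical path-handling patch makes the trade-off concrete: the blunt fix below stops the attacker but also removes functionality legitimate users rely on, while the narrower fix takes more care to block only the attack (both functions are illustrative sketches, not anyone's actual patch):

```python
from pathlib import PurePosixPath

def filename_allowed_naive(filename):
    # Blunt fix: refuse anything containing "..". It stops directory
    # traversal, but also breaks legitimate names like "report..final.txt".
    return ".." not in filename

def filename_allowed_precise(filename):
    # Narrower fix: only a whole ".." path component (or an absolute
    # path) is dangerous; ".." inside a filename is harmless.
    parts = PurePosixPath(filename).parts
    return ".." not in parts and not filename.startswith("/")

assert not filename_allowed_naive("report..final.txt")   # legitimate name rejected
assert filename_allowed_precise("report..final.txt")     # still allowed
assert not filename_allowed_precise("../etc/passwd")     # attack still blocked
```

The precise version is clearly better for customers, but it took more analysis to write and has more edge cases to test, which is the tricky decision in miniature: ship the blunt fix today, or the careful one later.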
Because of all these factors, you can expect security patches to have holes now & then. Sometimes they'll be the same holes, not yet fixed, and sometimes they'll be shiny and different. Personally, I feel the same way as most of the security administrators I know: I'll take the new hole over the old hole every time, and keep on patching as quickly as I can.
(Yes, this counts as more fuel for the give-me-an-interim-patch-while-you-do-all-that-work fire.)