Bruce Schneier summed it up well: the good guys have to secure all the doors and windows; the bad guys only have to find one. In a nutshell, that’s why security is hard. Real-world security has to deal with that problem all the time.
If you think of writing software as filling a feature space, then in delivering the functionality you meant to provide, you sometimes also provide functionality you never intended. You meant to provide an exhaust port, but you also created a vulnerability.
In other words, the software satisfies the requirement that caused it to be written, but it can also be abused. Most software people spend all their time filling the feature space with functionality and calling the work done when simple manual tests succeed, but security consists of trimming functionality – removing the bad stuff – so that it exactly matches the requirements. This is the opposite of normal “just get it working” coding activity.
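To make the metaphor concrete, here is a minimal sketch (the function names, the directory, and the scenario are hypothetical, not taken from any particular codebase): a file-download feature that satisfies its requirement also, as a side effect, serves any file on the machine, and the security work consists of trimming that unintended functionality away.

```python
import os

BASE_DIR = "/srv/app/reports"  # the directory we intend to serve files from

def get_report_naive(filename):
    """Fills the feature space: returns the report the user asked for.
    It passes the simple manual test ("can I download my report?"), but the
    same code also happily serves ../../../etc/passwd -- functionality
    nobody asked for."""
    with open(os.path.join(BASE_DIR, filename), "rb") as f:
        return f.read()

def get_report_trimmed(filename):
    """Satisfies the same requirement, with the unintended functionality
    trimmed away: only paths that resolve inside BASE_DIR are served."""
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(BASE_DIR + os.sep):
        raise PermissionError("requested path escapes the report directory")
    with open(path, "rb") as f:
        return f.read()
```

Both functions meet the stated requirement; the second one simply refuses to do anything the requirement didn’t ask for, which is exactly the trimming described above.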
The worst part is that you can’t trivially see evidence of a computer exploit the way you can see that someone has forced your front door open, or that your kitchen has an ant problem. So you have to (a) secure all the windows and doors, (b) constantly look for new vulnerabilities and close them while trying to fulfill the main thing you’re paid for, which is to add functionality, and (c) build all sorts of tools to tell you when something goes wrong, or else you’ll never know.
This is just how the world is, and it’s not going to get magically easier through technology, although technological changes can help with specific vulnerabilities. It takes a commitment of a certain level of resources if you want to build something and attain a certain level of security. We hear all the time about companies that didn’t make that commitment, either because they didn’t understand the risk they were taking or because they didn’t estimate it correctly (or honestly could not afford the resources needed to prevent the incident). We never hear about the companies that get it right. That’s part of the problem, too: if you’re thinking that you can’t afford to hire a security guru and do the things they tell you to do, then hearing that everybody else is having security problems (especially big, wealthy companies) provides cover for your decision not to hire that guru.
I think the only way out of this is to increase accountability. Right now, identity theft is an externality for most companies; users pay the price when their identity is stolen, and companies that mishandle their data are embarrassed, but that’s where it ends. It’s also hard to clearly punish a company that mishandles your data, since often the reason it had the data in the first place is that it provides a service that’s hard to replace. I don’t know whether class-action lawsuits would lead to anything other than a ToS that says “we can mishandle your data and you can’t sue”.
Is information security a competitive differentiator? Not yet. Only when this becomes an issue that people really care about will they force their service providers to care about it too.