A website security survey by White Hat Security makes the point that 'a comprehensive metrics program' is valuable:
"The tactical key to improving a web security program is having a comprehensive metrics program in place – a system capable of performing ongoing measurement of the security posture of production systems, exactly where the proverbial rubber meets the road. Doing so provides direct visibility into which areas of the SDLC program are doing well and which ones need improvement. Failure to measure and understand where an SDLC program is deficient before taking action is a guaranteed way to waste time and money - both of which are always extremely limited."
Naturally, I agree with them that a 'comprehensive metrics system' (whatever that might be) is A Good Thing ... but it's not entirely clear to me how they reached that particular conclusion from the survey data. Worse still, the survey design raises serious questions - for example, whether 79 respondents is sufficient to generate statistically meaningful data, how those 79 respondents (and presumably not others) were selected, and exactly what they were asked ...
If you've been following our series about the Hannover/Tripwire survey (the introduction followed by parts one, two, three, four and five) this is an opportunity to think through the same kind of issues in the context of another vendor-sponsored survey.
Once again, I'd like to point out that I'm not saying such reports are worthless, rather that you need to read them carefully to counteract their natural bias. It's a rare vendor-sponsored survey that doesn't have an agenda and/or serious flaws in the methodology, analysis and reporting. Recognizing that is half the battle.
To be fair to White Hat Security, the report does outline some of its methods towards the end, mostly relating to the company's commercial website security assessment service, although the survey of 79 respondents is not well described.
Personally, I enjoy reading surveys to find out which metrics the authors have chosen to measure their chosen subjects, to learn both good and bad practices concerning experimental design, and to grab the odd soundbite such as the paragraph above (quoted out of context, I admit) for my own biased purposes. Vendor-sponsored studies may or may not be scientifically sound, but so long as they make us think about the underlying issues, that's better than nothing, isn't it?