When we first met and started discussing information security metrics, Krag and I soon realized we shared the view that there are loads of possible metrics out there. Anyone out shopping for security metrics is spoiled for choice, facing a bewildering array of things they could measure. Far from being short of possible metrics, we face the opposite problem: choosing which of the plethora on offer to go with.
Most people writing about security metrics propose or recommend specific ones. The better writers at least make the effort to explain what the metrics are about, and a few take the trouble to justify their choices. Here's a single example, a list of over 40 metrics recommended by Ben Sapiro on the LiquidMatrix blog:
Time to patch; time to detect; time to respond; system currency; time to currency; population size; vulnerability resilience/vulnerable population; average vulnerabilities per host; vulnerability growth rate versus review rate; infection spread rates; matched detection; unknown binaries; failure rates; restart rate; configuration mismatch rate; configuration mismatch density; average password age and length; directory health; directory change rate; time to clear quarantine; access error rates per logged-in user; groups per user; tickets per user; access changes per user; new web sites visited; connections to botnet C&C's; downloads and uploads per user; transaction rates; unapproved or rejected transactions; email attachment rates; email rejection/bounce rates; email block rates; log-in velocity and log-in failures per user; application errors; new connections; dormant systems; projects without security approval; changes without security approval; average security dollars per project; hours per security solution; hours on response; lines of code committed versus reviewed; and application vulnerability velocity.
That's not a bad list, as it happens, of readily-automated technical/IT security metrics. Ben briefly explains each one, averaging about 30 words per metric. He writes well and manages to squeeze quite a lot of meaning into those 30-odd words, hinting at what the metric really tells you, but inevitably there is far more left unsaid than said. Not least, there's the issue of what other metrics Ben may have considered and rejected when compiling his shortlist, and on what basis he chose those 40+ metrics.
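To see why Ben could call these metrics readily-automated, notice that many of them reduce to simple aggregations over event timestamps. Here's a minimal sketch of "time to patch" as the average gap between a vulnerability being disclosed and the fix being deployed. The data model and the figures are invented for illustration, not taken from Ben's post:

```python
# Illustrative sketch: "time to patch" as the mean gap between
# disclosure and deployment of the fix. All data here is made up.
from datetime import datetime

# Hypothetical (disclosed, patched) timestamp pairs, one per
# vulnerability instance, as might be pulled from a patch-management
# system's logs.
events = [
    (datetime(2023, 3, 1), datetime(2023, 3, 8)),
    (datetime(2023, 3, 2), datetime(2023, 3, 16)),
    (datetime(2023, 3, 5), datetime(2023, 3, 12)),
]

# Gap in whole days for each vulnerability, then the arithmetic mean.
gaps_days = [(patched - disclosed).days for disclosed, patched in events]
mean_time_to_patch = sum(gaps_days) / len(gaps_days)

print(f"Mean time to patch: {mean_time_to_patch:.1f} days")
```

Easy to compute, yes, but note how much the number hides: the distribution (one slow patch can lurk behind a comfortable mean), the criticality of the systems involved, and whether a shorter time to patch is even the business outcome that matters.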
If you're not yet convinced, try these lists, catalogs and sources of security metrics on for size: CIS, OWASP, NIST, MetricsCenter, nCircle, ProjectQuant, ThirdDefense ... I could go on, but I'll leave the last word to Debra Herrmann's remarkable Complete Guide to Security and Privacy Metrics, all 800+ pages of it.
It's a bit like a child being spoon-fed medicine. "Here, take this, it's good for you". It's the "Trust me" approach favored by vendors pushing complex technical products on an ignorant, naive or dubious market. To put that another way, there is a strong tendency for metrics proponents to offer solutions (often their pet metrics) without taking the trouble to understand the problems. Worse still, most are implicitly framing or bounding the problem space as a technical rather than a business issue by restricting the discussion to technical metrics derived from technical data sources.
What makes a given metric a good or a bad choice? On the whole, the existing body of research on this topic has failed to address that relatively straightforward question well enough to offer usable, practical advice to busy CISOs, ISMs, ITSMs, risk managers and executives grappling with information security issues. While Andrew Jaquith, Dan Geer, Lance Hayden and others have tackled various parts of it, each in their own way, something was definitely lacking. In particular, we noticed a strong tendency to focus on automated, technical metrics, i.e. the statistics spewed forth by most security systems, the logical extreme being SIEM (an expensive technical solution ... for what business problem, exactly?).
We wrote about this at some length in PRAGMATIC Security Metrics. Chapter 5 leads you on a voyage of discovery through a multitude of sources of candidate metrics, while chapter 6 lays out the PRAGMATIC criteria and method for honing a long list down to a short one, and for figuring out the problems that your metrics are hopefully going to solve. If you know what questions have to be answered, you know what information you need; from there, the metrics all but choose themselves.
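The winnowing step chapter 6 describes can be pictured as a simple scoring exercise: rate each candidate metric against each criterion, combine the ratings, and keep the top of the league table. The sketch below is our illustration of that idea, not the book's method verbatim; the candidate metrics, the 0-100 scores and the weighting scheme (a plain unweighted mean) are all placeholders:

```python
# Illustrative sketch of scoring candidate metrics against a set of
# criteria and ranking them. Candidates, scores and the combining
# rule are invented placeholders, not figures from the book.

# Nine criteria, one per letter of the PRAGMATIC acronym (names
# here stand in for the book's own definitions).
N_CRITERIA = 9

# Hypothetical candidates, each with one 0-100 score per criterion.
candidates = {
    "Time to patch":         [70, 80, 85, 75, 80, 70, 75, 60, 80],
    "Average password age":  [40, 50, 60, 55, 45, 80, 70, 65, 85],
    "Log-in failures/user":  [55, 60, 50, 60, 55, 75, 80, 70, 75],
}

def overall_score(scores):
    """Combine per-criterion scores; here, an unweighted mean."""
    assert len(scores) == N_CRITERIA
    return sum(scores) / len(scores)

# League table: highest-scoring candidate first.
league_table = sorted(candidates.items(),
                      key=lambda kv: overall_score(kv[1]),
                      reverse=True)

for name, scores in league_table:
    print(f"{name:22s} {overall_score(scores):5.1f}")
```

The arithmetic is trivial; the value lies in the discipline of arguing over each cell of the table, because that argument forces you to articulate what problem each metric is supposed to solve before you commit to collecting it.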