I've been pondering information security metrics for some years now, primarily from the angle of figuring out what might be the "few good metrics" actually worth measuring, whilst avoiding pitfalls such as reporting stuff that is simply easy to count or measure. I can't say I've truly bottomed out that line of thought, but I'm moving on to consider the issues around reporting metrics, particularly the concept of "visualization". I've been prompted to look into this by a visually appealing representation of the number of US men anticipated to die this year from various causes. The graphic stimulates viewers to explore the numbers, comparing and contrasting figures ... but that's about it. It's left entirely to the viewers to draw their own conclusions, and many will not bother. But does the eye-candy graphic achieve its purpose better than simple lists or tables of mortality figures? Oh yes! It's stimulating instead of boring.
Kaplan and Norton's original balanced scorecard expressed an organization's key metrics in four quadrants. If we were to design an information security balanced scorecard, how many sectors would it have, and what metrics would they report? The answer to these questions comes down to how to structure the statistics. Several common options are immediately obvious:
- People, Process, Technology
- Preventive, Detective, Corrective
- Confidentiality, Integrity, Availability
- Threats, Vulnerabilities, Impacts
- Past, Present, Future
... so it's looking like three is the magic number, perhaps suggesting a stacked set of three-way pie charts or Venn diagrams, one layer for each of these structures? Suddenly in my mind's eye I see a multidimensional image that's probably far too complex in practice. Utility, readability and maintainability are all important parameters for the visualization, as well as visual appeal. So, it's back to the drawing board for me.
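Before worrying about the picture, the underlying idea is just that each metric carries a tag from each of the three-way structures. Here is a minimal Python sketch of that tagging-and-grouping idea; the metric names, values and axis labels are purely illustrative assumptions, not real figures or any standard taxonomy:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical metric record, tagged along three of the three-way
# structures listed above. The example metrics and values are made up.
@dataclass(frozen=True)
class SecurityMetric:
    name: str
    value: float
    ppt: str   # People / Process / Technology
    pdc: str   # Preventive / Detective / Corrective
    cia: str   # Confidentiality / Integrity / Availability

metrics = [
    SecurityMetric("Staff completing awareness training (%)", 87.0,
                   "People", "Preventive", "Confidentiality"),
    SecurityMetric("Patches applied within SLA (%)", 92.5,
                   "Technology", "Corrective", "Availability"),
    SecurityMetric("Log reviews completed on schedule (%)", 74.0,
                   "Process", "Detective", "Integrity"),
]

def group_by(metrics, axis):
    """Group metrics by one tagging axis: 'ppt', 'pdc' or 'cia'."""
    groups = defaultdict(list)
    for m in metrics:
        groups[getattr(m, axis)].append(m)
    return dict(groups)

# Each call slices the same metrics a different way -- one "layer"
# of the stacked visualization per axis.
for axis in ("ppt", "pdc", "cia"):
    for sector, items in group_by(metrics, axis).items():
        print(axis, sector, "->", [m.name for m in items])
```

The point of the sketch is that the three structures are orthogonal views of one data set, not three separate scorecards; any chart is then just a rendering of one (or more) of these groupings.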
At this glacial pace, I'll have figured out how to measure, report and use information security metrics in, oh, about five years. Hopefully.