COBIT version 5, recently released by ISACA, suggests numerous sample metrics in the Enabling Processes document - typically four or five metrics for each of the 17 enterprise goals and 17 IT-related goals, giving approximately 150 metrics in total.
For example, supporting the third financial enterprise goal "Managed business risk (safeguarding of assets)", the following three metrics are suggested:
- Percent of critical business objectives and services covered by risk assessment.
- Ratio of significant incidents that were not identified in risk assessments vs. total incidents.
- Frequency of update of risk profile.
The text introduces the metrics thus: "These metrics are samples, and every enterprise should carefully review the list, decide on relevant and achievable metrics for its own environment, and design its own scorecard system." Fair enough, ISACA, but unfortunately COBIT 5 does not appear to offer any advice on how one might actually do that in practice. How should we determine which metrics are 'relevant and achievable'? What is involved in 'designing a scorecard system'?
Some readers may assume that they should be using most if not all of the 150 sample metrics. Others may feel less constrained by the examples, but may still assume that 150 is a reasonable number of metrics. We beg to differ.
The PRAGMATIC approach is well suited to this kind of situation. It is quite straightforward to assess and score ISACA's 150 metrics, comparing them alongside suggestions from various other sources, in order to identify those that deserve further investigation: we call them 'candidate metrics'. Working with the people who will actually receive and use the metrics to support business decisions, the shortlisted candidates can then be examined against the PRAGMATIC criteria and refined, before finally selecting and implementing the very few security metrics that are genuinely going to make a positive difference.
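To make the shortlisting step concrete, here is a minimal sketch of how one might rank candidate metrics by a simple PRAGMATIC-style score. The criterion ratings below are entirely invented for illustration (in practice they would come from the stakeholders who will use the metrics), and the simple mean is only one possible way to combine them:

```python
# Illustrative sketch: ranking candidate metrics by averaging
# per-criterion ratings (0-100). The candidate metric names echo the
# COBIT 5 samples quoted above; all ratings are hypothetical.

CRITERIA = ["Predictiveness", "Relevance", "Actionability", "Genuineness",
            "Meaningfulness", "Accuracy", "Timeliness", "Independence", "Cost"]

# Hypothetical ratings, one value per criterion in the order above.
candidates = {
    "Percent of critical business objectives covered by risk assessment":
        [70, 85, 60, 75, 80, 65, 50, 55, 60],
    "Ratio of incidents not identified in risk assessments to total incidents":
        [60, 80, 70, 70, 75, 55, 40, 60, 55],
    "Frequency of update of risk profile":
        [40, 60, 75, 80, 50, 85, 70, 50, 80],
}

def pragmatic_score(ratings):
    """Overall score: the simple mean of the nine criterion ratings."""
    return sum(ratings) / len(ratings)

# Rank all candidates, highest overall score first, and shortlist the top two.
ranked = sorted(candidates.items(),
                key=lambda kv: pragmatic_score(kv[1]), reverse=True)
shortlist = [name for name, _ in ranked[:2]]

for name, ratings in ranked:
    print(f"{pragmatic_score(ratings):5.1f}  {name}")
```

The point of such a sketch is not the arithmetic but the discipline: forcing each candidate metric to be rated criterion by criterion, in consultation with its intended audience, before it earns a place on the scorecard.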