Welcome to NBlog, the NoticeBored blog

You don't need eyes to see: you need vision

May 25, 2015

Low = 1, Medium = 2, High = 97.1


Naïve risk analysis methods typically involve estimating the threats, vulnerabilities and impacts, categorizing them as low, medium and high and then converting these categories into numbers such as 1, 2 and 3 before performing simple arithmetic on them e.g. risk = threat x vulnerability x impact.
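The naive method described above can be sketched in a few lines of Python (purely illustrative — the label-to-number mapping and the example inputs are assumptions, and the rest of this post explains why this arithmetic is technically invalid):

```python
# Sketch of the naive risk-scoring method: convert low/medium/high
# labels to 1/2/3, then multiply. Illustrative only.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def naive_risk_score(threat, vulnerability, impact):
    """risk = threat x vulnerability x impact, on ordinal labels
    coerced into the numbers 1, 2 and 3."""
    return LEVELS[threat] * LEVELS[vulnerability] * LEVELS[impact]

print(naive_risk_score("low", "medium", "high"))   # 1 * 2 * 3 = 6
print(naive_risk_score("high", "high", "high"))    # 3 * 3 * 3 = 27
```

The code runs happily, of course — the computer has no idea the inputs are ordinals, which is exactly the trap.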

This approach, while commonplace, is technically invalid, muddling up quite different types of numbers:
  • Most of the time, numeric values such as 1, 2 and 3 are cardinal numbers indicating counts of instances of something. The second value (2) indicates twice the amount indicated by the first (1), while the third value (3) indicates three times the first amount. Standard arithmetic is applicable here.
  • Alternatively, 1, 2 and 3 can indicate positions within a defined set of values - such as 1st, 2nd and 3rd place in a running race. These ordinal values tell us nothing about how fast the winner was going, nor how much faster she was than the runners-up: the winner might have led by a lap, or it could have been a photo-finish. It would be wrong to claim that the 3rd placed entrant was “three times as slow as the 1st” unless you had additional information about their speeds, measured using cardinal values and units of measure: by themselves, their podium positions don’t tell you this. Some would say that being 1st is all that matters anyway: the rest are all losers. Standard arithmetic doesn't apply to ordinals such as threat values of 1, 2 or 3.
  • Alternatively, 1, 2 and 3 might simply have been the numbers pinned on the runners’ shorts by the race organizers. It is entirely possible that runner number 3 finished first, while runners 1 and 2 crossed the line together. The fourth entrant might have hurt her knee and dropped out before the start, leaving the fourth runner as number 5! These are nominals: labels that just happen to be digits or strings of digits. Phone numbers and post codes are examples. Again, it makes no sense to multiply or subtract phone numbers or post codes: they don’t indicate quantities the way cardinal values do. If you treat a phone number as if it were a cardinal value and divide it by 7, all you have achieved is a bit of mental exercise: the result is pointless. If you ring the number 7 times, you still won’t get connected. Standard arithmetic makes no sense at all with nominals.
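The three kinds of "numbers" above can be made concrete with a little Python (the runners, times and bib numbers are invented for this sketch):

```python
# Cardinal: finishing times in seconds -- real quantities with a unit,
# so arithmetic on them is meaningful.
times = {"Alice": 600.0, "Bela": 615.0, "Chris": 900.0}
print(times["Chris"] / times["Alice"])  # 1.5 -- Chris genuinely took 1.5x as long

# Ordinal: podium positions -- only the ordering carries information.
positions = {"Alice": 1, "Bela": 2, "Chris": 3}
# positions["Chris"] / positions["Alice"] evaluates to 3.0, but Chris
# was NOT "three times as slow": a ratio of ordinals is meaningless.

# Nominal: race-bib numbers -- mere labels that happen to be digits.
bibs = {"Alice": 3, "Bela": 1, "Chris": 2}
# Runner number 3 finished first; no arithmetic on bibs makes any sense.
```

Python will cheerfully compute all three ratios; only the first one means anything, which is the point.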
When we convert ordinal risk values such as low, medium and high (or green, amber and red) into numbers, they remain ordinal values, not cardinals – hence standard arithmetic is inappropriate. If you convert back from ordinal numbers to words, does it make any sense to multiply something by “medium”, or to add “two reds”? “Two green risks” (two 1’s) are not necessarily equivalent to “one amber risk” (a 2). In fact, it could be argued that the risk scale is non-linear: “extreme” risks are materially more worrisome than most mid-range risks, which in turn are of not much more concern than low risks. Luckily for us, extremes tend to be quite rare! As ordinals, these risk numbers tell us only about the relative positions of the risks within the set of values, not how close or distant they are – but, to be fair, that is usually sufficient for prioritization and focus. Personally, a green-amber-red spectrum tells me all I need to know, with sufficient precision to make meaningful management decisions about treating the risks.

Financial risk analysis methods (such as SLE and ALE, or DCF) attempt to predict and quantify both the probabilities and the outcomes as cardinal values, hence standard arithmetic applies … but don’t forget that prediction is difficult, especially about the future (said Niels Bohr, shortly before losing his shirt on the football pools). If you honestly believe your hacking risk is precisely 4.83 times as serious as your malware risk, you are sadly deluded, placing undue reliance on the predicted numbers.
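For readers unfamiliar with SLE and ALE, here is the standard calculation in miniature (the asset value, exposure factor and occurrence rate are made-up figures, not recommendations):

```python
# Annualized Loss Expectancy: the classic quantitative (cardinal) risk
# calculation. All figures below are invented for illustration.

asset_value = 100_000.0     # value of the asset at risk, in dollars
exposure_factor = 0.25      # fraction of the value lost per incident
aro = 2.0                   # Annualized Rate of Occurrence (incidents/year)

sle = asset_value * exposure_factor   # Single Loss Expectancy: $25,000
ale = sle * aro                       # Annualized Loss Expectancy: $50,000

print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```

Because these are genuine cardinal quantities (dollars and incidents per year), multiplying them is legitimate – the weakness lies in how confidently anyone can estimate the inputs, per Bohr.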

May 24, 2015

Shining the spotlight on critical controls

Many information security controls that are intended to mitigate significant business- and/or safety-critical information risks are themselves critical. If critical controls are missing, ineffective, fail in service, or are disabled (whether accidentally or deliberately), the associated risks are more likely to materialize, leading to unacceptable impacts. Therefore, relative to less- or non-critical ones, critical controls deserve additional investment and attention throughout their lifecycle. 

For example, critical controls should ideally be:
  • Identified as such, implying that controls should be systematically assessed for criticality and ranked or categorized accordingly, in order to identify the most critical ones that deserve additional effort;
  • Carefully considered, specified and documented in detail;
  • Designed, developed and tested thoroughly by experienced professionals, applying sound security principles such as defense-in-depth;
  • Resilient and fail-safe or fail-secure in nature e.g. supported by additional controls to limit the damage and raise the alert if they were to weaken or fail;
  • Authorized by senior management, provided they have sufficient assurance as to their effectiveness and suitability;
  • Monitored routinely or continuously for effectiveness, triggering alerts/alarms at the earliest opportunity (wherever possible before serious incidents occur);
  • Used and managed properly e.g. with extra checks to prevent the implementation of unauthorized or inappropriate changes that might harm or threaten them in some way;
  • Tested, checked or audited more often and more thoroughly;
  • Proactively maintained;
  • Understood to be, and treated as, 'special' as in highly valuable and worth protecting.

That's all straightforward and obvious to me, yet I'm struggling to think of any standards, guidelines etc. in the information risk and security context that explicitly highlight the concept of control criticality.

Have I simply missed them?  Or is this a blind spot for the profession?

Regards,

May 16, 2015

Metrics to govern and manage information security

Section 9.1 of ISO/IEC 27001:2013 requires organizations to 'evaluate the information security performance and the effectiveness of the information security management system'.  The standard doesn't specify precisely what is meant by 'information security performance' and '[information security?] effectiveness' but it gives some strong hints:
"The organization shall determine:
a) what needs to be monitored and measured, including information security processes and controls;
b) the methods for monitoring, measurement, analysis and evaluation, as applicable, to ensure valid results;
c) when the monitoring and measuring shall be performed;
d) who shall monitor and measure;
e) when the results from monitoring and measurement shall be analysed and evaluated; and
f) who shall analyse and evaluate these results."
The standard specifies (much of) the measurement process without stating what to measure i.e. which metrics.  No doubt the committee would argue that it is not possible to be specific about the metrics since each organization is different - and there's a lot of truth in that - but it's a shame they didn't explain how to select metrics or offer a few examples ... which is where our security awareness paper, originally delivered in August 2008, picks up the pieces.

We drew on the IT Governance Institute's advice on information security governance for inspiration, suggesting metrics corresponding to the four aspects identified in the ITGI paper (governance outcomes; knowledge & protection of information assets; governance benefits; and process integration).

[The original hyperlink to the ITGI paper now gives a 404 page-not-found error, unfortunately.  It was a good paper.  Perhaps they moved or updated it?]

May 7, 2015

Infosec & risk management metrics

We've just republished the next in the series of management-level security awareness papers on metrics.  The latest one lays out a range of metrics for information security and risk management.

Leaving aside the conventional metrics that are typically used to manage any corporate function, the paper describes those that are peculiar to the management of information risk and information security, with an emphasis on business-focused metrics.

I spent last week teaching a CISM course for ALC in Sydney.  The business and risk focus is a unifying thread throughout CISM, from the governance and strategy angle through risk and security management to incident management.

In contrast to courses covering the more technical/IT aspects of information security intended for mid- to low-level information security professionals with operational responsibilities, CISM is intended for Information Security Managers and Chief Information Security Officers with governance, strategic and management responsibilities.  It promotes the value of elaborating on business objectives that are relevant to information risk and security management, and using those to drive the development and delivery of a coherent business-aligned risk-driven information security strategy.  Metrics are of course integral to the CISM approach, particularly governance and management metrics similar to those in the awareness paper.