On the SecurityMetrics.org discussion forum, Walt Williams posed a question about the value of 'time and distance' measures in information security, leading someone to suggest that 'speed of response' might be a useful metric. However, it's a bit tricky to define and measure: exactly when does an incident occur? And when does the response occur? Assuming we can define those points, do we time the start, the end, some intermediate point, or perhaps even measure the ranges?
Next month in the NoticeBored security awareness and training program, we're exploring a
new topic: 'incident detectability', concerning the ease, and hence the likelihood, of detecting information security incidents.
Incidents that are highly visible and obvious to all (e.g. a ransomware attack, at the point where service is denied and the ransom is demanded) are materially different from those that remain unrecognized for a long period, perhaps forever (e.g. a spyware attack), even if they are otherwise similar (both may use very similar remote-control Trojans). Detectability therefore might be a valuable third dimension to the classic Probability Impact Graphs for assessing and comparing risks.
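Plotted, that third dimension might come through as bubble size on an otherwise conventional PIG. Here's a minimal sketch in Python, with entirely invented incident types and scores, purely to show the idea:

```python
import matplotlib.pyplot as plt

# Entirely invented scores (percentages) for a handful of incident types.
incidents = {
    # name: (probability, impact, detectability)
    "Ransomware":         (40, 90, 95),  # blatant once the ransom is demanded
    "Spyware":            (35, 70, 10),  # may go unrecognized indefinitely
    "Website defacement": (20, 40, 90),
    "Insider data theft": (25, 80, 20),
}

for name, (prob, impact, detect) in incidents.items():
    # Invert detectability so that hard-to-detect risks get bigger bubbles.
    plt.scatter(prob, impact, s=10 * (100 - detect), alpha=0.5)
    plt.annotate(name, (prob, impact))

plt.xlabel("Probability (%)")
plt.ylabel("Impact (%)")
plt.title("PIG: bubble size indicates difficulty of detection")
plt.show()
```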
However, that still leaves the question of how one might measure detectability.
As is my wont, I'm leaning towards a subjective measure using a
continuous scale along these lines:
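Something like the following, expressed as indicative anchor points (the labels and percentages are illustrative placeholders, not the module's actual scoring norms):

```python
# Hypothetical anchor points for a subjective detectability scale,
# from effectively undetectable up to blatantly obvious. The labels
# and percentages are illustrative, not definitive.
DETECTABILITY_SCALE = {
      0: "Undetectable - the incident would never come to light",
     25: "Unlikely to be detected, except by specialists or sheer luck",
     50: "Might well be detected, given suitable monitoring and alerting",
     75: "Likely to be detected fairly promptly",
    100: "Blatant - immediately obvious to all",
}
```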
For the awareness module, we'll be defining four or five waypoints, indicators or scoring norms for each of several relevant criteria, helping users of the metric assess, compare and score whatever information risks or incidents they have in mind.
You may have noticed the implicit 'detection time' element to detectability, ranging from infinity down to zero. That's a fairly simple concept and parameter to explain and discuss, but not so easy to determine or measure in, say, a risk workshop situation. In practice we prefer subjective or relative scales, reducing the measurement issue from "What is the probable detection time for incidents of type X?" to "Would type X incidents generally be detected before or after types Y and Z?" - in other words a classic bubble-sort or prioritization approach, with which managers generally are comfortable. The absolute value of a given point on the measurement scale is almost incidental, an optional outcome of the discussion and prioritization decisions made rather than an input or driver. What matters more is the overall pattern and spread of values, and even more important is the process of considering and discussing these matters in some depth. The journey trumps the destination.
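To make that concrete, here's a minimal sketch of the mechanics in Python: the incident types are placeholders, and the workshop's collective judgment is stubbed out with a simple prompt. The sort itself is the classic bubble-sort mentioned above.

```python
# Rank incident types by relative detectability, using only pairwise
# "before or after?" judgments rather than absolute detection times.

def workshop_says_x_detected_before_y(x: str, y: str) -> bool:
    """Stand-in for the workshop discussion; replace with real judgments."""
    answer = input(f"Would '{x}' incidents generally be detected before '{y}'? [y/n] ")
    return answer.strip().lower().startswith("y")

def rank_by_detectability(incident_types: list[str]) -> list[str]:
    """Bubble-sort incident types from fastest-detected to slowest."""
    ranked = list(incident_types)
    for i in range(len(ranked)):
        for j in range(len(ranked) - 1 - i):
            # Swap whenever the later item would be detected sooner.
            if workshop_says_x_detected_before_y(ranked[j + 1], ranked[j]):
                ranked[j], ranked[j + 1] = ranked[j + 1], ranked[j]
    return ranked

print(rank_by_detectability(["ransomware", "spyware", "insider data theft"]))
```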
To those who claim "It's not a metric if it doesn't
have a unit of measurement!", I say "So what? It's still a useful way to understand,
compare and contrast risks ... which is more important in practice than
satisfying some academic, frankly arbitrary and unhelpful, definition!" As shown on the sketch, we normally do assign a range of values (percentages) to the scale for
convenience (e.g. to facilitate the discussion and for recording outcomes) but
the numeric values are only ever meant to be indicative and approximate. Scale linearity and scientific/mathematical precision don’t particularly
matter in the risk context, especially as uncertainty is an inherent factor
anyway. It's good enough for government
work, as they say.
Finally, circling back, 'speed of response' could
add yet another dimension to the risk assessment process, or more accurately the risk
treatment part of risk management. I
envisage a response-speed percentage scale (naturally), ranging from 'tectonic or never' up to 'instantaneous', with an
implied pressure to speed up responses, especially to certain types of incident
... sparking an interesting and perhaps enlightening discussion about those
types. "Regardless of what we are
actually capable of doing at present, which kinds of incidents should we
respond to most or least urgently, and why is that?" ... a discussion point that we'll be bringing out in the management materials for April.
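By way of a rough sketch, such a scale might carry waypoints along these lines. Only the two endpoints ('tectonic or never', 'instantaneous') come from the description above; the intermediate labels and the sample scores are invented placeholders:

```python
# Hypothetical waypoints for the envisaged response-speed scale.
# Only the endpoints are given in the text; the rest are placeholders.
RESPONSE_SPEED_SCALE = {
      0: "Tectonic or never - no effective response occurs",
     25: "Glacial - responses take weeks or months",
     50: "Routine - responses within days",
     75: "Prompt - responses within hours or minutes",
    100: "Instantaneous - automated, essentially immediate response",
}

# Given (invented) current scores, surface the incident types with the
# most room to improve, as fodder for the urgency discussion.
current_scores = {"ransomware": 60, "spyware": 15, "phishing": 40}
for incident in sorted(current_scores, key=current_scores.get):
    print(f"{incident}: currently {current_scores[incident]}%")
```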