Welcome to the SecAware blog

I spy with my beady eye ...

8 May 2012

PRAGMATIC metrics from security surveys

Like most of its kind, the latest information security breaches survey is stuffed with security-related statistics (metrics), mostly used to identify issues, compare trends against previous surveys, and contrast responses between certain categories of organizations.  Some of these statistics could potentially be adapted for use as security metrics within one organization, but which (if any) would make worthwhile corporate security metrics?  The PRAGMATIC method gives us a rational way to address the issue.

Suppose, for example, that management is concerned about the organization's security policy - or rather its policies since there are several in fact.  Maybe there is a general feeling that, although the policies are formally written and mandated, employees are paying scant attention to their security obligations.  Are there any metrics in the breaches survey that we might use or adapt for internal corporate use?

The breaches survey tells us on page 6: "Possession of a security policy by itself does not prevent breaches; staff need to understand it and put it into practice.  Only 26% of respondents with a security policy believe their staff have a very good understanding of it; 21% think the level of staff understanding is poor.  Three-fifths of large organisations invest in a programme of security awareness training, up by 10% on 2010 levels; less than half of small businesses, however, do this.  The survey results indicate a clear payback from this investment; 36% of organisations that have an ongoing programme feel their staff have a very good understanding of policy, versus only 13% of those that train on induction only and 9% of those that do nothing.  Similarly, only 10% of organisations with an ongoing programme feel their staff have a poor understanding, versus 36% of those that train on induction and 49% of those that do nothing.  There is some industry variation, with the property and construction sector least mature.  Sometimes, it takes a breach before companies train their staff."

Two metrics are implied by that paragraph:
  1. Extent of employee understanding of the security policies; and
  2. Amount of investment in security awareness training.
For the first metric, the survey measured respondents' opinions, presumably using a Likert scale, something along the lines of: "How well do you believe employees understand the security policies: (A) Not at all; (B) Poorly; (C) So-so; (D) Quite well; or (E) Completely?"  [This is not the actual question they asked - I didn't see the actual survey questionnaire so I'm guessing.]  We might consider using this kind of approach to survey opinions within our organization, although there are lots of issues to take into account when designing any kind of survey, such as:
  • Who will we survey?  Which kinds of people, and how many of them?  Do we intend to distinguish and contrast responses from different groups or types of respondent, or is it OK to lump them all together?  Should respondents be allowed to remain anonymous?
  • How many response options should we offer, and how should they be worded, precisely?
  • Should the responses be in ascending or descending alphabetical order?  Or mixed order?  The same order on every survey, or randomized?
  • Should we allow for responses that are off-the-scale, or intermediate values?  Will we collect respondents' comments or explanations?
  • What else do we need to know?  While we are at it, are we going to ask a bunch of questions (as is normal for a survey), or keep this simple, perhaps just the one question (a poll)?
  • Should this be administered as a self-selection survey, perhaps on the corporate intranet, or should someone physically go around asking employees, or email them, or phone them, or send them forms?
  • Should we offer incentives to encourage more responses?  What incentives are appropriate?  How might this affect the validity of the statistics?
  • Aside from the data collection itself, who will analyze the data?  How?  Which statistics are the most appropriate?
  • When should we conduct the survey?  When is the best time?  How long should we allow?  Should we do it once or more than once - regularly or in an ad hoc manner?
  • How will the survey results be used?  Will they be in a report, a presentation, online, used for background information or directly for decision support?  Line graphs, bar charts, pie charts, probabilities or what?
  • How much should we spend on the survey? ...
... That cost question raises several deeper ones: why are we measuring this?  Do we really understand what issues concern us?  Will a survey give us usable information, and what will we do with the results?  And most of all, what determines whether the value of the information from this metric will outweigh the cost of collecting it?
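The survey-style measurement discussed above boils down to tallying Likert-scale responses and summarizing them.  Here is a minimal sketch: the scale mirrors the question guessed at earlier, and the response data are entirely made up for illustration.

```python
from collections import Counter

# Guessed five-point Likert scale from the text; the responses are invented
# purely to illustrate the tallying, not real survey data.
SCALE = ["Not at all", "Poorly", "So-so", "Quite well", "Completely"]

responses = ["Quite well", "So-so", "Poorly", "Quite well", "Completely",
             "So-so", "Quite well", "Poorly", "So-so", "Quite well"]

counts = Counter(responses)
total = len(responses)
for option in SCALE:
    pct = 100 * counts[option] / total
    print(f"{option:>12}: {counts[option]:2d}  ({pct:4.1f}%)")

# One candidate summary statistic: the proportion answering "Quite well"
# or better, loosely comparable to the survey's "very good understanding"
# figure.
good = sum(counts[o] for o in SCALE[3:]) / total
print(f"Understanding 'Quite well' or better: {good:.0%}")
```

Which summary statistic to report (top-box proportion, median category, full distribution) is itself one of the survey-design decisions in the list above.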
The second metric seems pretty straightforward, although in practice it is surprisingly difficult to put an accurate and precise figure on the amount of most investments.  However, a rough estimate may be all we really need (Douglas Hubbard makes this point very well, at length, in "How to Measure Anything" - we will review the book soon).

OK, moving on, let's now consider the PRAGMATIC scores for these two metrics.  Cost-effectiveness is definitely an issue for metric 1, particularly if we intend to go ahead with a manually-administered survey, survey lots of people and/or offer substantial incentives.  There are also doubts concerning its Relevance (how well does it reflect information security?  Isn't it just one of many factors?), Meaningfulness (would we need to spend time explaining the results to the intended audience/s, or risk their misunderstanding?), Accuracy (depends heavily on the survey approach and the number of responses), Genuineness (might the numbers be manipulated deliberately by someone with an ax to grind?), Independence (both in terms of those we are surveying, and who conducts, analyzes and presents the results), Actionability (is it entirely obvious what ought to be done if the data are negative, or for that matter positive?) and Predictability (we may believe there is a causative link to the organization's security status, but are we certain about that?).

Metric 2, in contrast, could turn out to be much cheaper - perhaps awareness and training expenditure is already measured by Finance, for some other purpose.  Maybe it can simply be estimated from the budgets and project expenses in this area.  The metric's Relevance, Predictability, Independence, Actionability etc. would also have to be weighed up in scoring this metric, but we leave that as an exercise for you.

While the actual PRAGMATIC numbers in a specific organization depend on these and other factors, for the sake of this blog, let's assume metric 1 scores 44% while metric 2 scores, say, 67%.  On this basis alone, metric 2 clearly appears to be the better metric - however, we are not yet necessarily ready to go ahead with metric 2.  In reality, there are many other possible metrics, and many variations on any one metric, that we perhaps ought to consider.  In some ways, these two metrics could be considered complementary, hence we might even decide to use them both.  Or neither.
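The scoring arithmetic behind those percentages is simple.  Here is a minimal sketch, assuming each of the nine PRAGMATIC criteria is rated on a 0-100 scale and the overall score is their mean; the criterion list and all the per-criterion numbers below are illustrative assumptions chosen to land on the example scores, not values from the survey or the book.

```python
# Nine criteria implied by the PRAGMATIC acronym, as discussed in the text
# (Timeliness is assumed here to complete the acronym).
CRITERIA = ["Predictability", "Relevance", "Actionability", "Genuineness",
            "Meaningfulness", "Accuracy", "Timeliness", "Independence",
            "Cost-effectiveness"]

def pragmatic_score(scores: dict) -> float:
    """Overall PRAGMATIC score: simple mean of nine 0-100 criterion ratings."""
    assert set(scores) == set(CRITERIA), "rate every criterion exactly once"
    return sum(scores.values()) / len(CRITERIA)

# Invented criterion ratings for the two candidate metrics discussed above.
metric1 = dict(zip(CRITERIA, [40, 35, 30, 40, 45, 40, 60, 45, 61]))  # survey
metric2 = dict(zip(CRITERIA, [60, 55, 60, 75, 70, 65, 70, 68, 80]))  # spend

print(f"Metric 1 (policy understanding survey): {pragmatic_score(metric1):.0f}%")
print(f"Metric 2 (awareness training spend):    {pragmatic_score(metric2):.0f}%")
```

A simple unweighted mean keeps the comparison transparent; an organization that cares more about, say, Cost-effectiveness could just as easily use a weighted mean instead.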

Most of these issues could be resolved through a deeper understanding of management's security goals and the questions that the metrics are intended to address.  We might need to explore the data gathering and statistical techniques in more depth, and so on.  However, the PRAGMATIC method has at least prompted us to think more deeply about what we are trying to achieve, and helped us analyze some candidate metrics.  We have developed a richer appreciation of these metrics in the course of the analysis and, just as importantly, insight into our security metrics requirements.  The PRAGMATIC analysis is often more valuable than the actual PRAGMATIC scores.

There is of course much more detail on the PRAGMATIC method in our book ('in press').  There's a whole chapter, for example, about selecting a coherent suite - a measurement system - comprising mutually supportive metrics, which we'll no doubt bring up in future blog items.  Until the book is released, however, you'll have to glean what you can from the blog, browse the SecurityMetametrics website, come to one of our conference presentations (e.g. AusCERT or SANS Security West), read other security metrics books and articles, raise this on the SecurityMetametrics discussion forum or contact us directly.

The UK information security breaches survey that prompted this blog item is excellent, one of the best, but there are many other security surveys and loads more sources of inspiration for security metrics.  That's something else we'll blog about in due course.  So much to say, so little time ...
