Welcome to the SecAware blog

I spy with my beady eye ...

31 May 2013

Portable ICT & BYOD security

[Cynicism: high]

We have just delivered June's NoticeBored security awareness module to subscribers, covering portable ICT, BYOD (Bake|Bury|Bash|Bring Your Own Device|Disaster|Dog), mobile and home working, and various associated matters.

One of those 'associated matters' concerns the social changes that are going on around us, thanks in large measure to the freedom that comes from workers no longer being leashed to the office like so many dogs.  I've been pondering this issue for quite a while now, sitting here in my modest home office looking out over the beautiful New Zealand countryside.   When I think back to the days when I commuted to the city every day to sit in a series of dreary offices and stuffy meeting rooms, looking forward to a chance to escape to a nearby cafe or go for a lunchtime walk in the local park, I wonder how I put up with it - those seemingly endless wasted hours of traffic jams and pointless committees and (in some cases) ignorant, pig-headed bosses trying to tell me how to do the job that I had trained and self-trained for decades to do.  

I'm fascinated by the pre-industrial-age days of skilled craftsmen and tradesmen and women, selling their knowledge and capabilities by the hour, day or job to a number of customers without the need for "employment" as we understand the term today.   In the realm of the "knowledge worker", thanks to portable ICT and networking, there are so many more opportunities for creative collaboration that the whole employer-employee thing seems terribly dated and ridiculously constrained to me.  

Looking back over the past decade or so, I've done some fantastic work and achieved great things with people I've never met in person, and am unlikely ever to meet in the future.  For differing periods and over great physical and cultural distances, we've made productive connections, done stuff, and moved on, with no hint of the anger or resentment that so often accompanies resignation and redundancy.  Instead of petty office politics and power plays, there's mutual respect and admiration, sharing the joy instead of jealously guarding our respective turfs.

The BYOD situation exemplifies the mess we've got ourselves into.  The corporation expects employees who have the temerity to suggest that they might be more productive using modern, up-to-date ICT gizmos instead of those old clunkers in the office that the accountants say have another year to go before being written off, to permit some faceless PC technician to poke around inside their personal property using fully privileged remote management facilities, with no security controls to speak of?  You're having a laugh!  

As far as I'm concerned, nosy, incompetent and malicious MDM admins are every bit as much of a threat to employees' privacy and other personal interests as those naughty haxxors and VXers who might sneak inside a BYOD tablet.  But no, the corporate power balance gives management the big stick.  "Give away your rights by signing this BYOD policy and hand over your admin password, or it's the IBM PS/2 in the corner for you my lad."  What kind of a 'social contract' is that?

There's far more to mobile working than bashing out a company memo on a beige laptop or playing cellphone tag with some other poor sod, en route to the next excruciatingly pointless and demoralizing encounter.



29 May 2013

Hannover/Tripwire metrics part 1

I mentioned the Hannover Research/Tripwire CISO Pulse/Insight Survey recently on the blog.  Now it's time to take a closer look at the 11 security metrics noted in section 5 of the report.  

The report doesn't explain the origin of these 11 metrics.  How and why were they singled out for the study, from a vast population of possible security metrics?  To be precise, it doesn't actually say that survey respondents were presented with this specific choice of 11 metrics, nor how many metrics were on the list, leaving us guessing about the survey methods.

Furthermore, the report neglects to explain what the succinctly-named metrics really mean.  If survey respondents were given the same limited information, I guess they each made their own interpretations of the metrics and/or picked the ones that looked vaguely similar to metrics they liked or disliked.  

Anyway, for the purposes of this blog, I'll make an educated guess at what the metrics mean and apply the PRAGMATIC method against each one in turn to gain further insight. 

Metric 1: "Vulnerability scan coverage"

Using automated tools to scan the organization's IT systems and networks repeatedly for certain technical issues is a common approach in large organizations to identifying known technical vulnerabilities - old/unpatched software, for example, or unexpectedly active network ports.  The metric refers to 'coverage', which I take to mean the proportion of the organization's IT systems and/or network segments that are being regularly scanned for known technical security vulnerabilities.  
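To make the definition concrete, here's a minimal Python sketch of the coverage calculation as I've interpreted it - the asset names and scan records are invented purely for illustration:

```python
# Hypothetical sketch: scan coverage as the proportion of the known asset
# inventory that has been scanned within the agreed interval.
inventory = {"web01", "web02", "db01", "db02", "hr-app", "dmz-fw"}  # all systems that ought to be scanned
scanned_recently = {"web01", "web02", "db01"}                       # systems scanned recently enough to count

coverage = len(scanned_recently & inventory) / len(inventory)
print(f"Vulnerability scan coverage: {coverage:.0%}")  # -> 50%
```

Note that the hard part in practice is not the division but the denominator: knowing the true size of the inventory.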

Why would this be the most popular of the 11 metrics in the survey report, apparently used by up to two-thirds of the respondents?  Being naturally cynical, I'd say the fact that the survey was sponsored by Tripwire, a well-known supplier of vulnerability scanners, is a massive clue!

Anyway, let's lift the covers off the metric using the PRAGMATIC approach:
  • Predictiveness: an organization that scores low on this metric is probably unaware of technical vulnerabilities that it really ought to know about, betraying an immature approach to information security, whereas one firmly on top of its technical security vulnerabilities demonstrates a more mature approach ... to that one aspect of IT security anyway.  However, scan coverage per se doesn't tell us much about the system/network security - it merely tells us what proportion of our IT systems/networks are being scanned.  The scans themselves might reveal absolutely terrible news, an enormous mountain of serious vulnerabilities that need to be addressed, whereas the coverage metric looks fabulous, or indeed the converse ("We only scan a small proportion of our systems/networks because the scans invariably come up clean!").  At best, this metric gives an indication of the organization's information security management capabilities, and a vague pointer towards its probable status.
  • Relevance to information security is limited in the sense that known technical system/network security issues are only one type of information security vulnerability.  Patching systems and securing network configurations is a valuable security control, but there are many others.  This metric, like most technical or IT security measures, is fairly narrow in scope.
  • Actionability: on this criterion, the metric scores quite well.  If scan coverage is too low (whatever that means), the response obviously enough is to increase the coverage by scanning a greater proportion of the kinds of systems/networks already covered, and/or expanding the range of types of systems/networks being scanned.  There will be diminishing returns and, at some point, little if anything to be gained by expanding the coverage any further, but the metric should at least encourage the organization to reach that point.
  • Genuineness: if someone (such as the CIO or CISO) wanted to manipulate the metric for some ulterior purpose (such as to earn an annual bonus or grab a bigger security budget), how could they do so?  Since the metric is presumably reported as a proportion or percentage, one possibility for mischief would be to manipulate the apparent size of the total population of IT systems/networks being scanned, for instance by consciously excluding or including certain categories.  "We don't scan the systems in storage because they are not operational" might seem fair enough, but what about "Development or test systems don't count because they are not in production"?  It's a slippery slope unless some authority figure steps in, ideally by considering and formally defining factors like this when the metric is designed, assuming there is such a process in place.
  • Meaningfulness: aside from the issues I have just raised, the metric is reasonably self-evident and scores well on this point, provided the audience has some appreciation of what vulnerability scanning is about - which is likely if this is an operational security metric, intended for IT security professionals.  Otherwise, it could be explained easily enough to make sense of the numbers at least.  It's quite straightforward as metrics go.
  • Accuracy: a centralized vulnerability scanning management system can probably be trusted to count the number of systems/networks it is scanning, although that is not the whole story.  It probably cannot determine the total population of systems/networks that ought to be scanned, a figure that is essential to calculate the coverage proportion.  Furthermore, we casually mentioned earlier that vulnerability scans should be repeated regularly in order to stay on top of changes.  'Regularly' is another one of those parameters that ought to be formally defined, both as a policy matter and in connection with the metric.  At one ridiculous extreme, scanning a given IT system just once might conceivably be sufficient for it to qualify as "scanned" for evermore.  At the opposite extreme, mothballed IT systems might have to be dragged out of storage every month, week, day or whatever and turned on purely in order to scan them, pointlessly.
  • Timeliness: automated scan counts, calculations and presentation should be almost instantaneous.  Figuring out the total number of systems/networks may involve manual effort and would take a bit longer, but this is probably not a time-consuming burden.  With regard to the risk management process, the metric is related to vulnerabilities rather than incidents, hence the information is available in good time for the organization to respond and hopefully avert incidents caused by known technical attacks.
  • Independence and integrity: technical metrics are most likely to be measured, calculated and reported by technical people who often have a stake in them.  In this case, an independent assessor (such as an IT auditor) could confirm the scan counts easily enough by querying the scanner management console directly, and with more effort they could have a robust discussion with whoever calculated the 'total number of systems/networks' figure.  Someone might conceivably have meddled with the console to manipulate the scan counts, but we're heading into the realm of paranoia there.  It seems unlikely to be a serious issue in practice.  The fact that the figures could be independently verified is itself a deterrent to fraud.
  • Cost-effectiveness: the number of systems/networks that are being vulnerability scanned would most likely be available on the management console as a built-in report from the program.  Determining the total number of systems/networks that could or should be scanned would require some manual effort: although the management console may be able to generate an estimate from the active IP addresses that it discovers, offline systems (such as portables) and isolated network segments (such as the DMZ) would presumably be invisible to the console.  In short, the metric can be collected without much expense but what about the other part of the equation, the benefits?  Concerns about its predictiveness and relevance don't bode well.  There's no escaping the fact that vulnerability scanning is a very narrow slice of information security risk management.
On that basis, and making some contextual assumptions about the kind of organization that might perhaps be considering the vulnerability scanning metric, I calculate the PRAGMATIC score for this metric at about 64% - hardly a resounding hit but it has some merit.
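For what it's worth, here is how that overall score can be derived, assuming a simple unweighted mean of the nine criterion ratings - the individual ratings below are my own illustrative figures chosen to land near 64%, not numbers from the report or the book:

```python
# Illustrative only: PRAGMATIC score as an unweighted mean of the nine
# criterion ratings (each expressed as a percentage).  All ratings invented.
ratings = {
    "Predictiveness": 55, "Relevance": 50, "Actionability": 80,
    "Genuineness": 60, "Meaningfulness": 75, "Accuracy": 55,
    "Timeliness": 75, "Independence": 70, "Cost-effectiveness": 55,
}
score = sum(ratings.values()) / len(ratings)
print(f"PRAGMATIC score: {score:.0f}%")  # -> 64%
```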

This narrow-scope operational metric would of course be perfect if the organization just happened to need to measure vulnerability scanning coverage, for instance if the auditors had raised concerns about this particular issue.  It doesn't hold much promise as a general-purpose organization-wide information security management or strategic metric, however. 

So, that's our take on the first of the 11 metrics.  More to follow: if you missed it, see the introduction and parts two, three, four and five of this series.

SMotW #59: residual risk liability

Security Metric of the Week #59: total liability value of residual/untreated information security risks

This sounds like a metric for the CFO: tot up and report all the potential downside losses if untreated or residual information security risks were to materialize.  Easy peasy, right?

Err, not so quick, kimo sabe.

In order to report risk-related liabilities in dollar terms, we would presumably have to multiply the impacts of information security incidents by the probabilities of their occurrence.  However, both parameters can only be roughly estimated, hence the metric is subjective and error-prone, which naturally cuts down its Accuracy rating. 
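In code, the calculation just described is a simple sum of impact-times-probability products - the risks and figures below are entirely invented for illustration:

```python
# Minimal sketch: total residual-risk liability as the sum of
# (estimated impact x estimated annual probability) per risk.  Invented data.
residual_risks = [
    {"name": "laptop theft",        "impact": 250_000,   "probability": 0.30},
    {"name": "database breach",     "impact": 5_000_000, "probability": 0.02},
    {"name": "ransomware outbreak", "impact": 1_200_000, "probability": 0.10},
]

total_liability = sum(r["impact"] * r["probability"] for r in residual_risks)
print(f"Total residual-risk liability: ${total_liability:,.0f}")  # -> $295,000
```

The arithmetic is trivial; the subjectivity lies entirely in the estimated inputs.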

The skills and effort needed to calculate the liabilities, especially with the care needed to address that subjectivity, make this a relatively Costly security metric too, although arguably there are substantial benefits in doing the analysis, aside from the metric.  

The Actionability rating is depressed since it is unclear what management would be expected to do in response to the metric.  If the value is high, are they supposed to pump more money into information security?  And what if the value is low: is it safe to cut back on the security budget?  Either way, the metric alone does not indicate the extent or scale of the response.  There is no comparator or criterion, except perhaps for prior values, but unless you went to extraordinary lengths to control the measurement process, random variations arising from the subjectivity would generate a lot of noise masking the puny signal.
On a more positive note, the liabilities arising from residual risks are patently Relevant to information security and, in the form of large dollar figures, are likely to be highly Meaningful to management, given the common if crude impression among managers that "In the end, it all comes down to money".  Making the effort to express information security risks in dollar terms does at least help position security as a business issue, although there are better ways.

Acme managers rated the metric's overall PRAGMATIC score a disappointing 59%, which effectively put it out of the running in its present form given that there were several similar but higher-scoring candidate metrics on the table.  

It's not entirely obvious how the inherent weaknesses of this metric might be addressed to improve its PRAGMATIC score.  What, if anything, would you suggest?  Have you actually used a metric similar to this, and if so how did it work out?  We'd love to hear from you.

28 May 2013

Hannover/Tripwire security survey emphasizes culture

"Building a culture of security within the organization as well as compliance with regulations, standards, and policies are the most important security capabilities for executives and non-executives: the surveyed information security managers were most likely to give these capabilities the highest overall importance ranking."
So says Hannover Research's CISO Pulse Survey aka CISO Insight Survey*, a small-scale study on behalf of Tripwire.  Whether you consider the 100 or so mostly North American respondents a valid sample of the population is your decision, but let's just say that their conclusions are "unsurprising".

Unfortunately the report does not explain what 'building a culture of security' actually involves.  It's a shame that the security culture is so often mentioned glibly in such vacuous, throwaway statements.  The concept may get heads nodding sagely but, in my experience, with few exceptions information security professionals, managers and executives rarely have much of a clue about how to do it.  It's the elephant in the room.  Everyone agrees that something must be done, but presumably expects someone else to do it!

An information security awareness program is a vital part of establishing and maintaining the security culture provided it is done well - and by that I'm getting at things such as:
  • Being overtly supported by all levels of management, top-to-bottom;
  • Addressing the entire organization, not just "end-users" (a horribly demeaning term, and an IT-centric one at that);
  • Being creative, appealing and motivational;
  • Being topical and current, keeping up with what's hot in this dynamic area;
  • Presenting useful, interesting, well-written content in forms and styles that suit the intended audiences (note the plural: we each have our own communications needs and preferences, so carve up the population into distinct segments rather than trying to paint them all with the same broad brush);
  • Being broadly-based, taking in a wide variety of topics, some of which are tangential but still important in this sphere (compliance being a classic example: compliance with information security and privacy laws is but a small part of the compliance imperative);
  • Being relevant and applicable, promoting information security as a business issue with genuine business value rather than for its own sake.
When I get the chance, I'll be critiquing and scoring the specific metrics mentioned in the report using the PRAGMATIC method, here on the security metrics blog.  Meanwhile, read more on how to build a security culture (including why that is not the ultimate goal), how to measure it and about interpreting survey statistics:

PS As if that's not enough, we've just published a complete security awareness module on social engineering, social networking and human factors which includes a paper on security metrics in this area.

PPS  I did have time to continue the bloggings after this introduction.  By all means take a look at parts one, two, three, four and five of this series.

* The survey is, of course, part of Tripwire's marketing, hence they squeeze us for our contact details prior to releasing the report.  Let's hope they are responsible marketers with an appreciation of our privacy rights.

27 May 2013

Unusual information security metric: number of train passengers

An information security metrics piece in our local newspaper caught my eye recently.  To be honest, it didn't actually use the word "metric" as such, nor "information security" for that matter, but that's what it was.

Like many others, the train company in Wellington NZ has a problem with fare dodgers.  Some bright spark in their internal audit team, I guess, realized that comparing the number of people who use individual trains with the number of tickets sold would give them a huge clue about which trains and stations should be at the top of the ticket inspectors' hit list.

Counting passengers would be a tedious and error-prone job for a person, but an infra-red beam across the carriage doors would do nicely - particularly as the hardware may well already be installed as part of the door control and safety system.

The automated count will inevitably have errors (e.g. passengers who alight at the wrong stations then rejoin the same train), but provided the counting system is correctly configured and calibrated, the errors should be within known bounds and good enough for the purpose.  Likewise the number of tickets will have genuine errors, for example passengers with season tickets who neglect to swipe them.  The absolute number of passengers traveling is less important than the relative numbers of passengers and tickets: the further apart they are, the more likely something untoward is going on.

I imagine the statistics will be presented graphically, showing a breakdown of the number of passengers and corresponding number of tickets for various journeys.  Those with the greatest discrepancies would naturally be targeted by the inspectors.
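A back-of-the-envelope Python sketch shows how such a hit list might be derived - the journeys and figures are entirely made up:

```python
# Hypothetical sketch: rank journeys by the relative gap between counted
# passengers and tickets sold, to prioritize the inspectors' hit list.
journeys = {
    "Johnsonville 07:42": {"passengers": 310, "tickets": 221},
    "Hutt Valley 08:05":  {"passengers": 540, "tickets": 512},
    "Kapiti 17:30":       {"passengers": 420, "tickets": 300},
}

def shortfall(journey):
    """Proportion of counted passengers apparently traveling without a ticket."""
    d = journeys[journey]
    return (d["passengers"] - d["tickets"]) / d["passengers"]

hit_list = sorted(journeys, key=shortfall, reverse=True)
for j in hit_list:
    print(f"{j}: {shortfall(j):.0%} apparent fare evasion")
```

As noted above, both counts carry errors, so it's the relative discrepancy that matters, not the absolute numbers.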

I imagine also the graphs will have a few empty slots where the ancient rolling stock breaks down - which hints at another important metric for the railway: service reliability.  Conceivably more passengers would be prepared to pay their way if the trains were modern, comfortable, fast and reliable.  But perhaps I'm being overly cynical.  Fare-dodgers aren't helping, since the fares they dodge would help fund the upgrades needed.

Gary Hinson

24 May 2013

Oklahoma tornado scam already circulating

What's the betting that this is a scam?

The gaudy pink box on this screengrab shows the reply-to address is using gmail, not Redcross.org as in the (presumably spoofed) email sender's details.  If this was a legitimate request for funds from the Red Cross (or is it Organizing for Action?), why wouldn't they use their own corporate email address?  My guess is that the 'instructions' that will evidently be given to you if you reply to the message will (a) be malware infected and/or (b) be phishing for credentials or seeking advance fees.  I, for one, am not about to find out.

The only surprising thing about this incident is that I am still surprised that scumbag scammers are yet again picking up on a tragic news story as a lure for gullible victims.  They've done it many times before, and no doubt will do it again.  

[If by some miracle I am wildly mistaken, and this is in fact a genuine begging email from the Red Cross, or indeed Organizing for Action, we need to talk while I eat my hat!]

Gary (Gary@isect.com)

PS  I guess the phisher who warned of "several stormy rainfall" back in August 2011 has been on an intensive English course, or perhaps he's rich enough now to pay for decent translations of his scams.  Or maybe, just maybe, there is more than one scammer on the prowl.

Security metric #58: emergency changes

Security Metric of the Week #58: rate of change of emergency change requests

Graphical example

The premise for this week's candidate security metric is that organizations with a firm grip on changes to their ICT systems, applications, infrastructure, business processes, relationships etc. are more likely to be secure than those that frequently find the need for unplanned - and probably incompletely specified, developed, tested and/or documented - emergency changes.  

Emergency change requests are those that get forced through the normal change review, approval and implementation steps to satisfy some urgent change requirement, short-cutting or even totally bypassing some of the steps in the conventional change management process.  Often the paperwork and management authorization is done retroactively for the most desperate of emergency changes.  

Being naturally pragmatic, we appreciate that some emergency changes will almost inevitably be required even in a highly secure organization, for instance when a vendor releases an urgent security patch for a web-exposed system, addressing a serious vulnerability that is being actively exploited.  Emergency changes are a necessary evil, particularly when the conventional change management process lumbers along.  However, the clue is in the name: emergency changes should not be happening routinely!

Looking at the specific wording of the proposed metric, there are some subtleties worth expanding on.  

First of all, it would be simpler to track and report the number of emergency changes during the reporting period, in other words the rate of emergency changes.  Let's say for the sake of argument that the rate is reported as "12 emergency changes last month": is that good or bad news for management?    Is 12 a high, medium or low value?  What's the scale?  Without additional context, it's impossible to say for sure.  A line graph plotting the metric's value over time (vaguely similar to the one above) would give some of that context, in particular demonstrating the trend.  If instead we measure and report the rate of change of emergency changes, it would be even easier for management to identify when the security situation is improving (i.e. when the rate of change is negative) or deteriorating (a positive rate of change).  For instance, the up-tick towards the right of the rate graph above may cause concern since the rate of emergency changes has clearly increased.  However, the rate of change actually flipped from negative to positive at the bottom of the dip some months earlier, and that would have been a better, earlier opportunity to figure out what was going on in the process.  In this kind of situation, rate of change is a more Timely metric than rate.
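The distinction is easy to see in code.  This little Python sketch derives the rate of change as the month-on-month difference in the monthly counts, which are invented for illustration:

```python
# Sketch: 'rate' is the monthly count of emergency change requests;
# 'rate of change' is its month-on-month first difference.  Invented counts.
monthly_requests = [18, 15, 11, 8, 9, 12]  # e.g. January through June

rate_of_change = [b - a for a, b in zip(monthly_requests, monthly_requests[1:])]
print(rate_of_change)  # -> [-3, -4, -3, 1, 3]
```

The flip from negative to positive in the fourth interval flags the turning point well before the raw counts climb back to worrying levels.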

Next, note that the proposal is to measure not emergency changes made but emergency changes requested.  The idea is to emphasize that, by planning further ahead, fewer emergency changes need be requested.  Fewer requests, in turn, means less work for the change management committee and a greater opportunity to review the emergency changes that do come through.  Deliberately moving the focus upstream in the process from 'Make change' to 'Request change' again makes the metric more Timely.

Finally, consider what would happen if this metric was implemented without much thought and preparation, simply being used by management to bludgeon people into improving (i.e. reducing) the rate of change of emergency change requests.  The intended outcome, in theory, is obviously to improve advance planning and preparation such that fewer emergency changes are required: the unintended consequence may be that, in practice, roughly the same number of changes are put through the process but fewer of them are classed as emergencies.  Some might be termed urgent or obligatory if that would deflect management's wrath while still ensuring that the changes are pushed through, much as if they had been called emergencies in fact.  This is an example of the games people play when we start measuring their performance, especially if we use the numbers as a big stick to beat them.  In this case, the end result may be a worsening of information security since those urgent or obligatory changes may escape the intense, focused review that emergency changes endure.  There are things we could do to forestall the subversion of the metric, such as:
  • Using complementary metrics (e.g. the rate of all types of change);
  • Explicitly defining the classifications to be applied, along with compliance effort to make sure they are being used correctly;
  • Improving the efficiency and speed of the regular change management process (a spin-off benefit of doing something positive for emergency changes) ...
... and the best time to start all that is ahead of implementing the metric, hinting at the 'metric implementation process' (read more on that in the book).  

To close off this blog piece, let's take a quick look at Acme management's opinion of the metric:


They liked it: 72% is a pretty good score.  The PRAGMATIC ratings are fairly well balanced, although there is still some room for improvement.  Management were not entirely impressed at the metric's ability to Predict Acme's information security status since there are clearly many other factors involved besides the way it handles emergency changes.  On the other hand, they thought the metric had Meaning (particularly having discussed the things we've mentioned here in the blog, in the course of applying the PRAGMATIC method) and was Cost-effective - a relatively cheap and simple way to get a grip on the change management process, with benefits extending beyond the realm of information security.  [That's a topic to discuss another time: PRAGMATIC security metrics are not just good for security!] 

The Timeliness rating was not quite as high as you might have thought, given the earlier discussion, for the simple reason that Acme was not handling a huge number of changes as a rule.  Therefore, the metric only made sense if measured over a period of at least one month, preferably every two or three months, inevitably imposing a time-lag and perhaps causing the hysteresis effect noted in the book (pages 91-93).

15 May 2013

Security metric #57: % of information assets classified

Security Metric of the Week #57: Proportion of information assets correctly classified

Patently, this metric relates to the classification of information, an important form of control.  

The assumption underlying classification is that the majority of an organization's information is neither critical nor sensitive.  It is therefore wasteful to secure all the information to the extent that is appropriate for the small amount that is highly critical or sensitive.  Likewise, the basic or baseline controls that are appropriate for most information are unlikely to be sufficient for the more critical or sensitive stuff.

The classification process can be as simple or as complicated as you like, according to the number of classes.  Taken to extremes:
  • A single classification level such as "Corporate Classified" could be defined in which case everything would end up being protected to the same extent.   
  • More likely, certain important items of information would be deemed "Corporate Classified" with the remainder being "Corporate Unclassified", meaning a two-level classification scheme (OK, three if you count the information assets that have yet to be classified!).
  • At the opposite end of the scale, the classification could be so granular in detail that many classes contain just a single information asset with a unique set of security controls for that specific asset.
Classification is essentially a pointless exercise at both extremes.  Its value increases in the middle ground where 'a reasonable number' of classes are defined, each containing 'a reasonable number' of information assets.  It's up to you to determine what's reasonable!
The driver for classification is also a variable.  Although we mentioned 'criticality' and 'sensitivity', those are not the only parameters.  For example, picture a 3x3x3 Rubik's cube with low-medium-high categories for confidentiality, integrity and availability, or a classification scheme that depends on the value of the information, howsoever defined.  

Military and government classification schemes appear quite simple in that they are largely or exclusively concerned with confidentiality (e.g. Secret, Top Secret, Ultra), but there are numerous wrinkles in practice such as subtly different definitions of the classes by different countries, and subsidiary markings identifying who is authorized to access the information. 

Corporate classification schemes commonly distinguish personal information, trade secrets, other internal-use information and public information, but again there are numerous variations.

Classifying information involves two key steps: 
  1. The information is assessed to determine the appropriate class using defined classification criteria.  
  2. Information security controls deemed appropriate for the particular classification level are applied.  
This week's example metric concerns step 1, and is only indicative of step 2 if we assume that a sound process is being followed religiously.   Step 2 could be measured independently using a suitable compliance metric.

The illustrative graphic above shows a hypothetical organization systematically assessing and classifying its information assets, measuring and reporting the metric month-by-month.  The graph plots "Proportion of information assets correctly classified" by month.  The simple Red-Amber-Green color-coding makes it obvious that things have improved substantially since the start of the initiative, with two step-changes in the levels presumably representing discrete projects or stages that made significant progress.
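For illustration, here is a hypothetical Python sketch of the monthly calculation and Red-Amber-Green banding behind such a graph - the counts and thresholds are invented:

```python
# Sketch: monthly proportion of assets correctly classified, banded RAG.
# Thresholds are arbitrary examples, not recommendations.
def rag(proportion, amber=0.60, green=0.85):
    """Band a proportion as Red, Amber or Green."""
    return "Green" if proportion >= green else "Amber" if proportion >= amber else "Red"

monthly = [(120, 400), (260, 400), (350, 410)]  # (correctly classified, total assets) per month
for correct, total in monthly:
    p = correct / total
    print(f"{p:.0%} classified correctly -> {rag(p)}")
```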

Actually measuring this metric could be something of a mission if you insist on doing so accurately (more on that point below).  First, since you are reporting a proportion, you need to determine the size of the whole, in other words how many information assets are there to be classified, in total?  Answering that further requires clarity over what constitutes an information asset.  Leaving aside the question of whether the term includes ICT hardware and storage media, or just the information/data content, the unit of analysis is also unclear.  For instance, does a customer database containing 1,000 customer records each with 100 fields count as one information asset, or 100, or 1,000, or 100,000, or some other number?   The answer is not immediately obvious.

In the same vein, the metric explicitly refers to assets being 'correctly' classified, implying that, strictly speaking, someone should check the veracity of the classifications - potentially a huge amount of work and additional cost just for the sake of the metric.

On the other hand, clarity over 'information asset' and 'correctly classified' may have value to the organization's information security beyond the metric.

Anyway, let's pick up on that point about the accuracy requirement for this metric.  Since we are reporting a proportion, the absolute numbers are less important than their relative quantities.  Rather than accuracy, consistency of the measurement approach is the primary concern.  With that in mind, it doesn't particularly matter how we define 'information asset' or 'correctly classified' just so long as the definition remains the same from month to month.  For various other reasons, it may occasionally be necessary to alter the definitions, in which case we should probably re-base prior values in order to maintain consistency of the metric.
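The re-basing idea can be made concrete with a small sketch. The asset counts, month labels and the "definition change" below are all invented for illustration.

```python
# Hypothetical monthly data: (correctly classified, total assets)
monthly = {
    "Jan": (120, 400),
    "Feb": (180, 400),
    "Mar": (260, 400),
}

def proportion(correct, total):
    """Metric value as a percentage, rounded for reporting."""
    return round(100 * correct / total, 1)

# Metric series under the original definition of 'information asset'
series = {m: proportion(c, t) for m, (c, t) in monthly.items()}

# Suppose the definition later changes - say, databases are now counted
# per-table, doubling the asset population. Re-basing prior months with
# the new totals keeps the series consistent month to month.
rebased = {m: proportion(c, t * 2) for m, (c, t) in monthly.items()}
```

The absolute values halve after re-basing, but the shape of the trend, which is what the metric's audience actually cares about, is preserved.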

Another big advantage of reporting a proportion is that it is possible to select and measure a representative sample of the population - 'representative' being the crucial term.  We're not going to discuss sampling methods today, though. If you need more, there are brief notes about sampling in PRAGMATIC Security Metrics, while any decent statistics text covers it in laborious detail.
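A minimal sketch of the sampling approach, using a simulated population (the population size, true proportion and sample size are all made up for the example):

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Simulated population of 10,000 assets, 30% correctly classified
# (1 = correctly classified, 0 = not)
population = [1] * 3000 + [0] * 7000

# Simple random sample - the easiest way to get a representative one
sample = random.sample(population, 400)

# Sample proportion as an estimate of the population proportion
estimate = sum(sample) / len(sample)
```

With a sample of 400 the estimate lands close to the true 30%, at a fraction of the cost of a full census; stratified or other sampling designs are refinements on the same idea.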

The excellent PRAGMATIC ratings indicate this metric is a hit for Acme Enterprises Inc:


In discussing various candidate metrics, Acme's managers were particularly impressed with this one's Actionability and clarity of Meaning (notwithstanding the notes above - presumably they already had a clear picture in the areas mentioned).   Driving up the proportion of information assets correctly classified was seen as a valid and viable goal to improve information security - not so much a goal in itself but a means of achieving a general security improvement for Acme as a whole, on the reasonable assumption that, following classification, security resources would be applied more rationally to implement more appropriate security controls.

8 May 2013

Security metric #56: embarrassment factor


This naive metric involves counting the privacy breaches and other information security incidents that become public knowledge and so embarrass management and/or the organization.  The time period corresponds to the reporting frequency - for example it might be calculated and reported as a rolling count every 3-12 months, depending on the normal rate of embarrassing incidents.
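A rolling count is trivial to compute. The incident dates below are invented, and the 12-month window is just one of the 3-12 month options mentioned above.

```python
# (year, month) of each publicly embarrassing incident - invented data
incidents = [(2012, 3), (2012, 7), (2012, 8), (2013, 1), (2013, 2)]

def rolling_count(incidents, year, month, window=12):
    """Count incidents in the `window` months ending at (year, month)."""
    end = year * 12 + month        # months on a simple linear scale
    start = end - window + 1
    return sum(1 for y, m in incidents if start <= y * 12 + m <= end)
```

Reporting `rolling_count` each month smooths out lumpiness in the raw incident dates, which matters for a metric whose whole point is the trend rather than any individual data point.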

In bureaucratic or highly formalized organizations, it would be a challenge even to define what constitutes 'embarrassing', although most of us can figure it out for ourselves without getting too anal about it.

The metric's purpose, of course, is to reduce the number of embarrassing breaches/incidents that occur, which may involve reducing the rate of breaches/incidents and/or reducing the extent to which they are embarrassing.    With that end in mind, the precise definition of 'embarrassing' doesn't actually matter much, just so long as the audience appreciates that the metric fairly indicates the underlying trend.  Annotating the graph to remind viewers about specific incidents should have the desired effect.

In PRAGMATIC terms, Acme management rated this metric at 54%, in other words it would be unlikely to make the cut in their Information Security Measurement System or Executive Security Dashboard.  However, this is such a simple, easy and cheap metric to generate that the CISO might like to keep an informal tally of embarrassing incidents for his/her own purposes.  So long as the trend remains favourable, the metric has little impact.  On the other hand, if Acme experiences a rash of embarrassing incidents, mentioning the metric's adverse trend could be an opportunity for the CISO to raise the matter with senior management.

Sometimes, getting things on the agenda is half the battle.

3 May 2013

2013 Information Security Breaches Survey

The latest Information Security Breaches Survey is required reading if you care about information security risks.  The survey, commissioned from PwC by the British Government's Department for Business, Innovation and Skills, takes place every couple of years or so.  The statistics are useful ... provided you take the trouble to think carefully about what you are being told.

Take for instance the following graphs and the associated commentary on page 6 of the technical report:

"Having a security policy is just the start; to prevent breaches, senior management need to lead by example and ensure staff understand the policy and change their behaviour.  Less than a quarter of respondents with a security policy believe their staff have a very good understanding of it; 34% say the level of understanding is poor.  There's a clear payback from investing in staff training.  93% of companies where the security policy was poorly understood had staff-related breaches versus 47% where the policy was well understood.  Worryingly, levels of training haven't improved much - 42% of large organizations don't provide staff with any ongoing security awareness training, and 10% don't even brief staff on induction.  Many instead seem to wait until they have a serious breach before training staff."
That's a whole lot of information to take in for starters but let's take a closer look:
  • The two graphs represent answers from about 150 respondents each (not necessarily the same people) out of the 1,402 who took the survey.  Page 1 of the report told us the margin of error for 100 respondents was about 10% at the 95% confidence level, so without doing the calculation, it is not unreasonable to assume a similar level of error - maybe 8% - with 150 respondents.   
  • Page 1 also told us a little about the survey respondents.  Roughly half of the respondents were based in London and South-East England.  The survey is therefore biased towards that part of the world. 
  • The respondents were in roughly equal proportions infosec pros, IT pros and business managers/execs.  It seems fair to assume they have a reasonable understanding of their organizations' information security status.  Infosec pros tend to be risk-averse by nature, while business managers/execs see risk in a more positive light, so perhaps those opposing biases cancel out?  It's impossible to say for sure without more information.
  • Figure 9 separates out the numbers for large and small organizations in this year's survey, but those two categories were not identified separately in all the previous reports, making it tricky to compare.  The report indicates that the proportion of small businesses having a formally documented information security policy has fallen consistently from 67% in 2010, through 63% in 2012, to 54% now.  Given the ~8% margin of error, the differences may not be significant. 
  • Figure 10 has similar issues: the differences may not be significant.  Nevertheless, it is interesting that about one third of the respondents only cover awareness of security threats at induction (orientation) time, while about half have a programme of ongoing education (whatever that means!  Requiring staff to attend an awareness class once every year or so presumably qualifies as 'ongoing education' but we know just how ineffective that approach can be).
  • "Having a security policy is just the start" could be simply a throwaway phrase to kick off the commentary, although it clearly implies a sequence of events.  Furthermore, the text implies that policy is an important vehicle for changing behaviours.  Personally, I'm not totally convinced on either point - there are some unanswered questions there that could have been addressed by the survey or other research ... which reminds me: there are few if any references to other sources of information and statistics in the report.  Some of the topics discussed in the report have undoubtedly been examined by rigorous scientific studies, so why aren't they referenced?
  • The commentary provides some additional statistics, although the report's authors have been selective.  Stating "Less than a quarter of respondents with a security policy believe their staff have a very good understanding of it; 34% say the level of understanding is poor." gives the impression that most respondents think employees don't understand their policies, but that is an interpretation of data that are incompletely presented in the report.
  • We are none the wiser on how PwC concluded that "Many instead seem to wait until they have a serious breach before training staff."  Maybe there were one or more survey questions along these lines.  Maybe PwC reached this conclusion on the basis of their audit and consultancy work, independently of the survey.  Maybe the report's authors just made it up to fill a gap - pure conjecture perhaps.  We're left guessing.
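The margin-of-error figures quoted above are easy to sanity-check with the standard formula for a proportion at 95% confidence.  The z-value of 1.96 and worst-case p = 0.5 are conventional statistical assumptions, not figures taken from the report itself:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion:
    z * sqrt(p * (1 - p) / n), at 95% confidence for z = 1.96."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 100 gives roughly the 10% stated on page 1 of the report;
# n = 150 gives about 8%, as assumed in the bullets above.
```

So the "maybe 8%" guess for 150 respondents holds up without needing any further detail from PwC.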

While I have only discussed two graphs and about 130 words of commentary, a small part of the report's 19 or so pages, hopefully this has given you a clue about what I meant by 'thinking carefully about what we are being told' and, for that matter, what we are not being told.  The survey is well worth reading, although I recommend reading it critically to get the most value from it.  


PS  I wrote about security surveys over on the PRAGMATIC metrics blog some while back, concluding with "a very pragmatic bottom line: published security surveys are, on the whole, good enough to be worth using as security metrics.  While many of us take them at face value, they are even more valuable if you have the knowledge and interest to consider and ideally compensate for the underlying issues and biases, thinking about them in PRAGMATIC terms."