Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Jul 13, 2018

NBlog July 13 - ISO/IEC 27001 Annex A status


I've just completed an internal audit of an ISO27k ISMS for a client. By coincidence, a thread on ISO27k Forum this morning brought up an issue I encountered on the audit, and reminded me of a point that has been outstanding for several years now.

The issue concerns the formal status of ISO/IEC 27001:2013 Annex A arising from ambiguities or conflicts in the main body wording and in the annex. 

Is Annex A advisory or mandatory? Are the controls listed in Annex A required by default, or optional, simply to be considered or taken into account?

The standard is distinctly ambiguous on this point; in fact, there are direct conflicts within the wording - not good for a formal specification against which organizations are being audited and certified compliant.

Specifically, main body clause 6.1.3 Information security risk treatment clearly states as a note that "Organizations can design controls as required, or identify them from any source." ... which means they are not required to use Annex A.

So far so good .... however, the very next line of the standard requires them to "compare the controls determined in 6.1.3 b) above with those in Annex A and verify that no necessary controls have been omitted". This, to me, is a badly-worded suggestion to use Annex A as a checklist. Some readers may interpret it to mean that, by default, all the Annex A controls are "necessary", but (as I understand the position) that was not the intent of SC 27. Rather, "necessary" here refers to the organization's decision to treat some information risks by mitigating them using specific controls, or not. If the organization chooses to use certain controls, those controls are "necessary" for the organization, not mandatory for compliance with the standard.
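To illustrate that reading, here's a minimal sketch (in Python, with invented control identifiers, nothing like the real Annex A catalogue) of using Annex A purely as a completeness checklist: gaps prompt a documented decision, they are not automatic non-conformities:

```python
# Treat Annex A as a checklist: flag Annex A entries that our chosen
# control set does not cover, so each gap can be consciously justified.
# Identifiers are illustrative placeholders, not the real Annex A list.

ANNEX_A = {"A.5.1", "A.6.1", "A.9.2", "A.12.4"}      # checklist (partial, made up)

chosen_controls = {                                   # controls from ANY source
    "A.9.2",            # drawn from Annex A
    "NIST-AC-2",        # drawn from NIST SP 800-53
    "custom-vault-01",  # designed in-house
}

def annex_a_gaps(chosen: set[str], checklist: set[str] = ANNEX_A) -> set[str]:
    """Annex A entries not directly matched by the chosen controls.

    Each gap is a prompt for a documented decision, not an automatic failure.
    """
    return checklist - chosen

print(sorted(annex_a_gaps(chosen_controls)))
```

The point of the sketch: the comparison runs one way only, from the checklist to your chosen set, so controls drawn from other sources are never "wrong", they simply don't appear as gaps.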

To make matters worse still, a further note describes Annex A as "a comprehensive list of control objectives and controls", a patently false assertion. No list of control objectives and controls can possibly be totally comprehensive since that is an unbounded set. For starters, someone might invent a novel security control today, one that is not listed in the standard since it didn't exist when it was published. Also, there is a near-infinite variety of controls including variants and combinations of controls: it is literally impossible to identify them all, hence "comprehensive" is wrong.

The standard continues, further muddying the waters: "Control objectives are implicitly included in the controls chosen. The control objectives and controls listed in Annex A are not exhaustive and additional control objectives and controls may be needed." This directly contradicts the previous use of "comprehensive".

As if that's not bad enough already, the standard's description of the Statement of Applicability yet again confuses matters. "d) produce a Statement of Applicability that contains the necessary controls (see 6.1.3 b) and c)) and justification for inclusions, whether they are implemented or not, and the justification for exclusions of controls from Annex A". So, despite the earlier indication that Annex A is merely one of several possible checklists or sources of information about information security controls, the wording here strongly implies, again, that it is a definitive, perhaps even mandatory set after all.
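For what it's worth, a Statement of Applicability along those lines can be mocked up as simple records: one verdict and one justification per control, whatever its source. Everything here - control IDs, risk references, reasons - is invented for illustration:

```python
# A toy Statement of Applicability: every control gets an applicability
# verdict plus a justification, covering both Annex A entries and
# "necessary" controls drawn from other sources. All entries are invented.

soa = [
    {"control": "A.9.2",  "source": "Annex A", "applicable": True,
     "implemented": True,  "justification": "mitigates access-abuse risk R-7"},
    {"control": "A.11.1", "source": "Annex A", "applicable": False,
     "implemented": False, "justification": "no physical premises in scope"},
    {"control": "NIST-AC-2", "source": "NIST SP 800-53", "applicable": True,
     "implemented": False, "justification": "planned for Q3; treats risk R-3"},
]

def unjustified(entries):
    """SoA rows lacking the justification that 6.1.3 d) demands."""
    return [e["control"] for e in entries if not e["justification"].strip()]

# Every inclusion AND exclusion carries a justification in this toy SoA
assert unjustified(soa) == []
```

Notice that the exclusion (A.11.1) is justified just as explicitly as the inclusions: that, at least, is one requirement of 6.1.3 d) that is unambiguous.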

Finally, Annex A creates yet more problems. It is identified as "Normative", a key word in ISO-land meaning "mandatory". Oh. And then several of the controls use the key word "shall", another word reserved for mandatory requirements in ISO-speak.

What a bloody mess!

Until this is resolved by wording changes in a future release of the standard, I suggest taking the following line:
  • Identify and examine/analyse/assess/evaluate your information risks;
  • Decide how to treat them (avoid, mitigate, share and/or accept);
  • Treat them however you like: it is YOUR decision, and you should be willing to justify your decision … but I generally recommend prioritizing and treating the most significant risks first and best, working systematically down towards the trivia where the consequences of failing to treat them so efficiently and effectively are of less concern;
  • For risks you decide to mitigate with controls, choose whatever controls suit your situation. Aside from Annex A, there are many other sources of potential controls, any of which might be more suitable and that’s fine: go right ahead and use whatever controls you believe mitigate your information risks, drawing from Annex A or advice from NIST, DHS, CSA, ISACA, a friend down the pub, this blog, whatever. It is your choice. Knock yerself out;
  • If they challenge your decisions, refer the certification auditors directly to the note under 6.1.3 b): "Organizations can design controls as required, or identify them from any source." Stand your ground on that point and fight your corner. Despite the other ambiguities, I believe that note expresses what the majority of SC 27 intended and understood. If the auditors are really stubborn, demonstrate why your controls are at least as effective as, or even better than, those suggested in Annex A;
  • Perhaps declare the troublesome Annex A controls “Not applicable” because you prefer to use some other more appropriate control instead;
  • As a last resort, declare that the corresponding risks are acceptable, at least for now, pending updates to the standard and clearer, more useful advice;
  • Having supposedly treated the risks, check that the risk level remaining after treatment (“residual risk”) is acceptable, otherwise cycle back again, adjusting the risk treatment accordingly (e.g. additional or different controls).
If you are still uncertain about this, talk it through with your certification auditors – preferably keeping a written record of their guidance or ruling. If they are being unbelievably stubborn and unhelpful, find a different accredited certification body and/or complain about this to the accreditation body. You are the paying customer, after all, and it’s a free market!
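For the programmers among you, the bullet-point cycle above (assess, treat, check the residual risk, repeat) can be caricatured in a few lines of Python. The scores, appetite threshold and control "reductions" are entirely made up:

```python
# Caricature of the risk-treatment cycle: assess each risk, apply controls
# until the residual risk is within appetite, and never pretend to reach
# zero. All numbers are illustrative, on an assumed 1-10 scoring scale.

RISK_APPETITE = 4          # max acceptable residual risk score (assumed)

risks = {"laptop theft": 8, "typo in report": 2}
treatments = {             # estimated score reduction per applied control
    "laptop theft": [("disk encryption", 3), ("asset tracking", 2)],
}

def treat(name: str, score: int) -> int:
    """Apply controls for one risk, biggest reduction first, until acceptable."""
    for control, reduction in sorted(treatments.get(name, []),
                                     key=lambda c: -c[1]):
        if score <= RISK_APPETITE:
            break                 # residual risk acceptable: stop adding controls
        score -= reduction
    return max(score, 1)          # 100% security is literally unattainable

residual = {name: treat(name, score) for name, score in risks.items()}
print(residual)                   # every residual score should be within appetite
```

The loop stops adding controls the moment the residual risk drops within appetite, which is exactly the "cycle back again, adjusting the risk treatment" step in reverse: you only keep going while the residual risk is unacceptable.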

Jul 12, 2018

NBlog July 12 - looking for inspiration

Over on the new CISSPforum, in a thread about helping our corporate colleagues understand what information security is all about, someone asked about raising awareness among the general public - specifically whether we might learn from how other industries explain fraud and abuse.


Well, they may not all concern fraud and abuse but there are loads of 'public awareness' activities going on all the time, some much more successful than others. Examples include:
  • Health and safety awareness (about the H&S legislation mostly)
  • Health awareness (with much broader objectives about living healthier lifestyles, getting fit, reducing obesity, not smoking etc.)
  • Illness awareness e.g. cancer, mental ill-health etc. (aiming to support sick people and get them to seek professional help ... such as the breast cancer awareness ad I'm hearing right now on NZ local radio)
  • Safety awareness (such as driving more carefully ... a n d   s l o w l y ... and preparing for various disasters)
  • Political awareness (promoting the policies and objectives of political parties)
  • Social awareness (mostly about or supporting 'disadvantaged' groups for various values and causes of disadvantage)
  • Marketing and advertising of products, branding And All That (by far the most widespread, creative and successful form of awareness, I'd argue)
  • Global awareness (on a wide range of global issues such as warming, poverty, trade, travel ...)
  • Business awareness (ranging from tax and other compliance stuff to good business practices)
  • Finance awareness (mostly marketing but some genuine efforts to help people manage their money and debts more effectively)
  • Life awareness a.k.a. the education system generally, not just skool
  • Trades and professions, with their courses and badges galore, plus codes of practice and so forth
  • Celebrity awareness (Kardashian-itis, Trump-itis ...)
  • Art awareness and appreciation
  • Science awareness and appreciation
  • Engineering awareness ....
  • More: over to you! What have I missed?
There is no shortage of examples varying widely in scope, focus, delivery methods, objectives and success ... which means a bewildering array of approaches to consider and perhaps adapt or simply apply in "our" field/s and context/s (for there are several of both).

Jul 1, 2018

NBlog July 1 - security frameworks awareness module released


The NoticeBored security awareness module for July concerns conceptual or architectural frameworks, standards, methods and good practices in the area of information risk and security – ‘security frameworks’ or ‘frameworks’ for short.

Both the organization and individual workers are obliged to comply with various rules concerning information security.  Some rules are imposed on us by external authorities in the form of laws and regulations, others we impose on ourselves through corporate policies and procedures, contracts etc. 

There are numerous laws and regulations relating to information security, far too many for us to cover in detail.  We can only talk in general terms. 

We face a similar practical constraint with corporate security policies, procedures etc.: we are not familiar with NoticeBored subscribers' policies, nor with their current internal compliance challenges. But the ‘policy pyramid’ is a near universal structure or framework, so the generalities apply again ... and for good measure we're supplying an updated suite of 71 security policy templates along with July's awareness content (the policies are sold separately too).



The module provides a sound platform or starting point to raise awareness of good security practices, frameworks and structured approaches. 


Next month we’ll move on to cover insider threats - threats originating within the organization from its employees, contractors, consultants, temps, interns and more.  August’s module will be simpler and more practical, less conceptual than July’s.

Jun 26, 2018

NBlog June 26 - critiquing NIST's Cyber Security Framework


Today in the final stages of preparing the awareness module on "Security frameworks", I'm thinking and writing about the NIST Cyber Security Framework (CSF). For awareness purposes, there's no point describing and elaborating on the CSF in great detail, but I need to read and evaluate it in order to sum it up and comment meaningfully for our subscribers. I'm investing my time and effort partly on their behalf, partly for my own education: I'm interested in infosec standards, keen to discover NIST's take on 'cyber security', and on the lookout for good security practices.

So, indulge me for a moment as I talk you through the evaluation of just one small part of the CSF, specifically the core framework's advice on awareness and training (denoted "PR.AT", making it the prat section :-).
"The organization’s personnel and partners are provided cybersecurity awareness education and are trained to perform their cybersecurity related duties and responsibilities consistent with related policies, procedures, and agreements."
Reading that paragraph literally and narrowly, precisely who are "the organization's personnel and partners"? The "organization's personnel" are presumably its employees ... but that's a presumption. "Employee" is legally defined in at least some jurisdictions. Are temporary workers, interns and so on included in or excluded from that category? It depends.

"The organization's ... partners" is even less clear and more open to interpretation: various third parties may or may not be included in that category. Does it mean 'business partners' only, for example joint venture partners with binding contracts in place? Or suppliers and customers? Consultants? Contractors? Owners/stockholders? Assorted authorities? The general public and society at large? Current and former partners, perhaps future ones too (e.g. potential partners currently in negotiation)? 

Hmmm. There are many possible concerns at this stage for those (like me) who are anal enough to critique the wording. Many users of the CSF will not even notice these issues, or if they do will gloss-over them. Some may even actively exploit issues like these for their own advantage, or perhaps dismiss the entire CSF out of hand as "ambiguous and unhelpful".

The underlying issue I'm getting at here is common to most public security standards and advisories. There are several prospective audiences with a variety of expectations and interests, concerns and constraints. Most readers/users of the standards are not lawyers, and many are not trained or experienced in this area - which is precisely why some go looking to the standards for help. We all either seek or welcome easy answers, simple and elegant solutions to our immediate needs, without necessarily recognizing or accepting that the standards aren't written for us, personally. They are inevitably generalized or generic. They need to be interpreted, which in turn frees the authors from writing too narrowly and specifically but at the same time increases the risk of the standards becoming hand-waving, bland and unactionable. It's a fine line they tread.

NIST's approach in the CSF involves layered structures within the standard. The paragraph above is in fact one of 23 "categories" grouped within 5 areas called "functions". The structure reflects a process view of cybersecurity, a timeline relative to the point at which an incident occurs. That's certainly not the only way to structure the CSF but, presumably, it suits their purpose and has the advantage of roughly even amounts of content in each part - an example of symmetry or balance that, for some obscure reason, seems to matter.
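If it helps, that layered structure is easy to picture as nested data. Here's a toy Python rendering with only the PR.AT fragments discussed in this post filled in (everything else elided, and the reference lists abridged):

```python
# The CSF's layers as nested data: functions contain categories, categories
# contain subcategory statements, each with informative references to other
# standards. Only a PR.AT fragment is sketched; the rest of the CSF is elided.

csf = {
    "Protect": {
        "PR.AT": {
            "statement": "Awareness and Training",
            "subcategories": {
                "PR.AT-1": {"text": "All users are informed and trained",
                            "refs": ["ISO/IEC 27001:2013 A.7.2.2",
                                     "NIST SP 800-53 Rev. 4 AT-2"]},
                "PR.AT-2": {"text": "Privileged users understand their "
                                    "roles and responsibilities",
                            "refs": ["NIST SP 800-53 Rev. 4 AT-3"]},
            },
        },
    },
}

def refs_for(category: str) -> list[str]:
    """Collect every informative reference cited under one category."""
    for cats in csf.values():
        if category in cats:
            return sorted({r for sub in cats[category]["subcategories"].values()
                             for r in sub["refs"]})
    return []

print(refs_for("PR.AT"))
```

Walking the structure like this makes the design choice obvious: the CSF itself stays terse, and all the substance lives in the cited sources at the bottom layer.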

Moving further down into the structure, the 23 categories across 5 functions are supported by additional recommendations plus references to other standards, for example these support the awareness and training category:
"All users are informed and trained (CIS CSC 17, 18; COBIT 5 APO07.03, BAI05.07; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.7.2.2, A.12.2.1; NIST SP 800-53 Rev. 4 AT-2, PM-13)"
"Users" there presumably refers to IT users. "Informed and trained" is not the ultimate objective of awareness and training, but the process or mechanism used to achieve the (unstated) objective. While admirably succinct, notice the total lack of details about the form or nature of the awareness and training activities, their content and topics, motivation, frequency, reception etc. The reader is left to figure all that out for themselves, perhaps exploring those cited resources for further advice. 
"Privileged users understand their roles and responsibilities (CIS CSC 5, 17, 18; COBIT 5 APO07.02, DSS05.04, DSS06.03; ISA 62443-2-1:2009 4.3.2.4.2, 4.3.2.4.3; ISO/IEC 27001:2013 A.6.1.1, A.7.2.2; NIST SP 800-53 Rev. 4 AT-3, PM-13)"
Again, readers have to interpret "privileged users" but at least this time the statement is somewhat closer to being an objective or intended outcome. 'Understanding' is helpful, yes, but doesn't achieve much in isolation unless people go on to comply with the requirements and fulfill the organization's expectations, which means behaving in certain ways, making sound decisions etc. The reader is left to flesh out all those unstated details. Easy enough for those of us who live and breathe this stuff, not so easy for readers who have come here for guidance.
"Third-party stakeholders (e.g.,suppliers, customers, partners) understand their roles and responsibilities (CIS CSC 17; COBIT 5 APO07.03, APO07.06, APO10.04, APO10.05; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.6.1.1, A.7.2.1, A.7.2.2; NIST SP 800-53 Rev. 4 PS-7, SA-9, SA-16)"
This statement takes even more interpretation. It's good of them to offer three examples of "third-party stakeholders", but there's no advice on those "roles and responsibilities" - no examples there. Given the context, the roles and responsibilities presumably relate in some way to cybersecurity, but what are they, even generally speaking? 
"Senior executives understand their roles and responsibilities (CIS CSC 17, 19; COBIT 5 EDM01.01, APO01.02, APO07.03; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.6.1.1, A.7.2.2; NIST SP 800-53 Rev. 4 AT-3, PM-13)"
I have the same concerns here as with the first supporting statement. Who are "senior executives"? Are senior, middle and junior managers excluded? What about team leaders, shift leaders, project managers and others? What are their roles and responsibilities, and is 'understanding' sufficient?
"Physical and cybersecurity personnel understand their roles and responsibilities (CIS CSC 17; COBIT 5 APO07.03; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.6.1.1, A.7.2.2)"
Ditto. Let's make sure our 'physical personnel' understand what they're meant to be doing, eh?  :-)

Take another look at the overall PR.AT sentence though: notice there's no mention or supporting detail for the final clause "related policies, procedures, and agreements".

There are similar issues with the cited sources: they are all generic and fairly high-level, needing to be interpreted (within their own contexts, plus those of the organizations using them) and applied sensibly. 

Summing up, the Cyber Security Framework, plus those other standards and methods cited by it and more besides, all need to be interpreted carefully and applied sensibly to have any real value to a given organization. They are skeletal, the bare bones: simply add flesh and bring to life.  If only it were that simple.

Jun 25, 2018

NBlog June 25 - ISO27k updates

Slogging away tediously for 3 full days, I've caught up with a 3-month backlog of emails from the ISO/IEC JTC 1/SC 27 committee, picking out and checking through all the ISO27k-related items and updating our website. It's a laborious process but worth it, I think, to keep up with developments, especially as the ISO27k standards will feature heavily in July's NoticeBored awareness module on security frameworks.

Here's a potted selection of news highlights on the ISO/IEC 27000-series standards:
  • 27001 (ISMS) is likely to see some changes in the wording around risks and opportunities, and the Statement of Applicability. Hopefully the end result will be an improvement!
  • The 27002 (controls) revision is starting to get to grips with reorganizing and tagging the information security controls. This is going to be a slog ... but at the end of it, there will be more flexibility for users of the standard, for example if you are auditing, reviewing or (re)designing the IT suite, it should be possible to pick out "all the preventive, physical security controls" without having to pore through the entire standard.
  • A stop-gap minor update to 27005 (risks) should surface later this year, at last, while work progresses on the full revision in parallel. 
  • 27034 (appsec) is falling into place: this multi-part standard describes a highly structured method for managing the information security controls within a software development function, with fascinating features such as proper architecture, specification, design, hardening, testing and parameterization of controls. Users of the standard are encouraged to invest in building inherently strong controls, then reap the rewards by re-using those controls in multiple applications or situations - an intriguing approach, one that some organizations are already using. It works!
  • The development of 27045 (big data security) is just starting. I suspect 'big data' actually means 'complex IT systems' to the project team, rather than truly vast amounts of data, but I could be wrong. Either way, it is a brave move to develop security standards in this evolving area.
  • The fun continues with 27100 and others on "cybersecurity", particularly as none of the existing or developing ISO27k cyber standards adequately define the terms. The committee appears to be drifting vaguely towards basic Internet security (despite that being adequately served by existing ISO27k standards). Some remain curiously obsessed with "the Cyberspace" (whatever that actually means: the formal definition is distinctly unhelpful and bears little relation to what most people think cyber is all about), while critical infrastructure protection against cyberwarfare (a dramatically different interpretation of cyber in government and defense) remains poorly addressed within ISO27k.
  • IoT security standards are showing some signs of life. It's early days though, involving lots of interaction with other committees and industry bodies actively developing the technology standards behind IoT.
  • Privacy and information security are quietly sliding closer together. A number of new ISO27k standards will cover privacy matters, and the committee is considering a change of name from "IT Security Techniques" to "Information Security and Privacy" (or possibly something to do with cybersecurity, perhaps "Protecting the Cyberspace"?!). There is a substantial overlap between these areas, not 100% though.
For more info on these and other ISO27k news items, please browse ISO27001security.com or contact your national standards body for details of the shadow-committee slaving away on SC 27 matters.

Jun 22, 2018

NBlog June 22 - critical of the critical infrastructure


A comment at the end of a piece in The Register about the safety aspects making it tricky to patch medical equipment caught my beady eye:
"Hospitals are now considered part of the critical national infrastructure in Israel and ought to be given the same status elsewhere".
Personally, I'm not entirely sure what being 'considered part of the critical national infrastructure' really means, in practice. It may well have specific implications in Israel or elsewhere, but I suspect that's just stuff and nonsense.

Those of you who don't work in hospitals, or in Israel, or in critical national infrastructure industries and organizations, please don't dismiss this out of hand. Ultimately, we are all part of the global infrastructure known as human society, or, wider still, life on Earth. It is becoming increasingly obvious that we are materially harming the environment (i.e. the Earth, our home), and if Space Force is real (not Space Farce) then even the sky's not the limit.

Within recent weeks on the Korean peninsula, the prospect of something 'going critical' has risen and receded, again. 'Nuff said.

Since we are all to some extent interdependent, we are all 'critical' in the sense of the butterfly effect within chaos theory. It is conceivable/vaguely possible that a seemingly trivial information security incident affecting a small apparently insignificant organization, or even an individual, could trigger something disastrous ... especially if we humans carry on building complex, highly interdependent, inherently unreliable, non-resilient, insecure information infrastructures, consistently glossing-over the fine details. 

I hear you. "It's OK, Gary, calm down. It's just 'the cloud'. Don't you worry about that." But I'm paid to worry, or at least to think. As a knowledge worker, it's what I do.

Oh and by the way, not all critical infrastructure is global or national in scope. Some is organizational, even individual. I've just done the rounds feeding our animals, lit the fire and made a cup of tea, tending to my personal critical infrastructure.

So if we tag bits of various infrastructures critical, is that going to achieve a material change? No. It's just another label giving the appearance of having Done Something because, of course, Something Must Be Done. Unless it actually leads on to something positive, we are deluding ourselves, aren't we?

It's much the same bury-your-head-in-the-sand self-delusion as 'accepting' information risks. Having identified and analyzed the risks, having considered and rejected other treatments, we convince ourselves that the remaining risks are 'acceptable' and promptly park them out of sight, out of mind, as if they no longer exist. Hello! They are still risks! The corresponding incidents are just as likely and damaging as ever!

Whatever happened to security engineering? Is that in the clouds too? Or am I being too critical for my own good?

Happy Friday everyone. Have a good weekend. Keep taking the Pils.

Jun 21, 2018

NBlog June 21 - happy solstice!

10 o'clock this evening on June 21st is the Winter solstice for us down here in the Southern hemisphere. According to Wikipedia, we should be celebrating with "Festivals, spending time with loved ones, feasting, singing, dancing, fires". I lit the wood fire to warm the IsecT office before 8 this morning as usual. Having just fed the animals, I'm singing along to the radio as usual while I work. As to feasting, maybe we'll splash out on a special meal this weekend.

Up there on the Far Side, it's Midsommerfest which means festivals, spending time with loved ones, feasting, singing, dancing ... but no fires, hopefully. Most people are looking forward to summer holidays, I guess. We're looking forward to longer, warmer days and spring lambs, talking of which, our Prime Minister is in hospital having a baby. It's OK though because we have a caretaker PM keeping an eye on things. The next few weeks will be interesting in NZ politics.

Jun 15, 2018

NBlog June 15 - parting messages

Advertisers know the value of a parting message at the end of an advertisement. It's something catchy to stick in the memory, reminding people about the advertisement or rather the messages the ad was meant to convey, generally concerning the brand rather than the specific product.

Making ads memorable is one thing: making them influential or effective is another. Some ads are memorable for the wrong reasons, annoying and intrusive rather than enticing and beneficial. However, one man's hot button is another's cancel/exit. Ads are usually targeted at particular audience segments or categories rather than everyone, though, so don't be surprised that you hate some ads and love others.

Translating that approach to security awareness, the end of an awareness event is just as important as the start and the main body of the session. It’s your final chance to press home the key awareness messages and set people thinking about the session as they wander off. 

In the closing remarks at the end of your seminars, workshops, courses, management presentations etc., try these ideas for size: 

  1. Give a brief summary/recap of the main points;
  2. Specifically mention anything that noticeably resonated with the audience, created a stir, got people talking or made them laugh;
  3. Mention - or better still promote - further awareness and training events/sessions, policies, briefings etc. that attendees might enjoy;  
  4. Persuade attendees to put something from the session into action that very day or week;  
  5. Invite attendees to hang back and ‘have a quick word with you’, handling any further questions, issues, concerns, comments and (most valuable of all for you) feedback on the awareness session.   

Personally, I despise the formulaic approach often recommended for inexperienced presenters, namely "Tell 'em what you're going to tell 'em, tell 'em, then tell 'em what you told 'em". It is crude, manipulative and counterproductive, I feel ... but then I present a lot and don't like being too predictable. 

Another little tip is to front-load the session with the most important messages, if you can, especially if it is a long session or follows a lunch break. Catch their attention before they doze off, or better still keep them awake with your best possible performance. If you need to cover other stuff first, let them know that there's something big coming up later, and remind them again before you deliver it. Punctuate the session in some way as you move from segment-to-segment. I'll blog about punctuating and structuring sessions another time.

Jun 14, 2018

NBlog June 14 - metrics maturity metric, mmm


Given that measurement can both establish the facts and drive systematic improvement, I wonder whether I might develop a metric to measure organizations' approach to security metrics? 

Specifically, I have in mind a security metrics maturity metric (!). 

Immature organizations are likely to have few if any security metrics in place, with little appreciation of what they might be missing out on and little impetus to do anything about it. In short, they are absolutely rubbish at it.

Highly mature organizations, in contrast, will have a comprehensive, well-designed system of metrics that they are both actively using to manage their information risk and security, and actively refining to squeeze every last ounce of value from them. They are brilliant.

Those two outlines roughly describe the end points of a maturity scale, but what about those in the middle? What other aspects or features have I seen in my travels, what other characteristics are indicative of the maturity status?

Eating my own dogfood: before deciding on the Metric, I should first have elaborated on the Goals of security metrics and the Questions arising (the GQM method). Even now, with a maturity metric already in mind, the same process of determining the Goals and Questions can help me work out the characteristics against which to assess organizations: the maturity Metric's measurement scale, as it were.
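Thinking aloud in code for a moment: a GQM-ish sketch might map characteristics (the answers to the Questions) onto a crude 0-4 scale and average them. The characteristics and scores below are invented placeholders, not a finished maturity model:

```python
# GQM applied to the metrics-maturity metric itself: observable
# characteristics scored 0-4 (0 = "absolutely rubbish", 4 = "brilliant"),
# averaged into a percentage. Criteria and scores are invented examples.

criteria = {                 # characteristic -> this organization's score (0-4)
    "metrics exist and are documented":   1,
    "metrics drive management decisions": 0,
    "metrics are reviewed and refined":   0,
}

def maturity(scores: dict[str, int]) -> float:
    """Mean score across all characteristics, as a % of the maximum (4)."""
    return round(100 * sum(scores.values()) / (4 * len(scores)), 1)

print(f"metrics maturity: {maturity(criteria)}%")
```

A deliberately simple average like this hides a lot (weighting, evidence, gaming), but it's enough to anchor the two end points I described and leave room for the middle ground.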

Sorry if this is gibberish. I'm thinking aloud here, making lots of assumptions and skipping ahead while doing other stuff ... which I really ought to get on with, so I'll stop for now and pick this thread back up later on, unless I completely lose the plot.


Jun 12, 2018

NBlog June 12 - infosec priorities


I'm rapidly bringing myself back up to speed on information security frameworks for July's security awareness materials. Today, I've been updating my knowledge on the wide range of frameworks in this area, thinking about the variety of concepts, approaches and recommendations out there.

There are several space-frame models. For some reason presumably relating to our visual perception, they are almost always symmetrical, often triangular or pyramidical in shape such as the ICIIP (Institute for Critical Information Infrastructure Protection) one above, developed at the USC Marshall School of Business in Los Angeles. The ICIIP model caught my eye back in 2008 shortly before ISACA adopted it as BMIS (Business Model for Information Security).

Alternatively the shape might represent the magic number 3, or perhaps 9 (3 squared) counting the nodes and links of a triangular 'pyramid' (glossing over the fact that the ancient Egyptian pyramids have square bases and hence 5 faces, not 4).

Talking of numbers, the dreaded Pareto principle or Pareto rule or 80:20 or whatever the thumbnail MBA guides and assorted self-proclaimed experts are calling it this year, rears its ugly head in some of the advice on information security. Speaking as an infosec pro with a scientific background and an interest in security metrics, I am more than a little cynical about Pareto under any and all circumstances. It's a very vague rule of thumb, at best, derived and wildly extrapolated from, of all things, an observation about the distribution of incomes in England at the end of the 19th Century. I kid you not.

In the context of information risk and security, it's misleading in the extreme. To my mind, 80% secure is woefully short of good practice, no matter how you determine the percentage (which, conveniently, virtually nobody advising Pareto in this space is inclined to do). I totally accept that 100% security is literally unattainable but 80 - really? Well OK then, you might claim to be able to get to 80% of the required level of security with 20% of the controls, or effort, or investment, or whatever. I might equally counterclaim that the remaining 20% of security takes 150% of the effort, maybe 200%. The figures are pure bunkum, made up on the spot. All Pareto really tells us is that life is a shit sandwich, and we should focus on the Stuff That Matters - prioritize in other words. Gosh. 

Priorities interest me in relation to information security. We have a huge array of possibilities in the future, far too many to handle in fact. We can only realistically deal with some of them, rather few when it comes down to it. It is inevitable that we need to focus. Yes, I hear you, "Focus on the 20%"! Whatever. Focus is the point, not your fake mathematics. So what should we focus on? Here it gets fascinating.

In ISO27k-land, we are advised to focus on the risks, the information risks (although they don't - yet - say so). "Tackle the big risks first and work your way down from there, reviewing and revising constantly as your approach to information (risk and) security management matures", we're told. Hmmm.

Some (including me, at times!) would argue that we need to prioritize on business value, taking account of the effectiveness and efficiency of our information security arrangements AND, ideally, the projected real costs of incidents involving information - meaning both the impact and probability parts of risk.

Splitting that apart, it is feasible to address some high-impact incidents particularly if you limit yourself in some manner to credible scenarios. That's what the Business Impact Analysis component of Business Continuity Management does, extremely well. Better than us infosec wonks, anyway. It's a tiny wee tweak to use the BIA results to prioritize preventive activities in information security, so wee in fact that nobody except us will probably even notice. Cool! That's a substantial tranche of our security strategy and next budget proposal in the bag already, courtesy of those nice BCM people.

Addressing high-probability incidents is more science than art: simply look at your incident metrics to find out what is really going on. 
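As a sketch of that 'more science than art' idea: combine BIA-style impact estimates with incident-frequency metrics to rank risks by a rough annualized exposure. All figures below are made up for illustration:

```python
# Prioritizing preventive work: BIA-style impact estimates combined with
# incident-frequency metrics give a rough annualized exposure per risk,
# covering both the impact and probability parts. Figures are invented.

risks = [                          # (name, impact $ per incident, incidents/yr)
    ("phishing compromise",  50_000, 2.0),
    ("data-centre outage",  400_000, 0.1),
    ("laptop theft",         10_000, 3.0),
]

def prioritized(entries):
    """Rank by annualized exposure (impact x expected frequency), biggest first."""
    return sorted(entries, key=lambda r: r[1] * r[2], reverse=True)

for name, impact, freq in prioritized(risks):
    print(f"{name}: ${impact * freq:,.0f}/yr")
```

Note how the ranking differs from sorting on impact alone: the frequent-but-cheap incidents climb the list, which is exactly what raw BIA figures tend to miss.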

Oh, hang on a moment, 'incident metrics' is an alien term for some, while incident reporting is, let's say, lackluster at best, even in a fairly mature and compliant organization. 

Now that's an issue we can address through security awareness. 

My case rests, m'lud. Get in touch to subscribe to NoticeBored, soon. If you get your act in gear, you will receive July's awareness materials on security frameworks, helping your organization close the loop on infosec priorities.