From time to time, as we chat about scoping and designing Information Security Management Systems on the ISO27k Forum, someone naively suggests that we should Keep It Simple, Stupid. After all, an ISO27k ISMS is, essentially, just a way of managing information security, isn't it?
At face value, then, KISS makes sense.
In practice, however, factors that complicate matters for organizations designing, implementing and using their ISMSs include different:
- Business contexts – different organization sizes, structures, maturities, resources, experiences, resilience, adaptability, industries etc.;
- Types and significances of risks – different threats, vulnerabilities and impacts, different potential incidents of concern;
- Understandings of ‘information’, ‘risk’ and ‘management’ etc. – different goals/objectives, constraints and opportunities, even within a given organization/management team (and sometimes even within someone’s head!);
- Perspectives: the bungee jumper, bungee supplier and onlookers have markedly different appreciations of the same risks;
- Ways of structuring things within the specifications of ‘27001, since individual managers and management teams have the latitude to approach things differently, making unique decisions based on their understandings, prejudices, objectives and priorities, choosing between approaches according to what they believe is best for the organization (and themselves?) at each point;
- Pressures, expectations and assumptions by third parties … including suppliers, partners and customers, certification auditors and specialists just like us … as well as by insiders;
- Dynamics: we are all on constantly shifting sands, experiencing/coping with and hopefully learning from situations, near-misses and incidents, adapting and coping with change, doing our best to predict and prepare for uncertain futures.
As with computer applications and many other things, simplicity obviously has a number of benefits, whereas complexity has a number of costs. Less obviously, the opposite also applies: things can be over-simplified or over-complicated:
- An over-simplified ISMS, if certifiable, will typically be scoped narrowly to manage a small subset of the organization's information risks (typically just its "cyber" risks, whatever that actually means), missing out on the added value that might be gained by managing a wider array of information risks in the same structured and systematic manner. A minimalist ISMS is likely to be relatively crude, perhaps little more than a paper tiger implemented purely for the sake of the compliance certificate rather than as a mechanism to manage information risks (an integrity failure?). Third parties who take an interest in the scope and other details of the ISMS may doubt the organization's commitment to information risk management, information security, governance, compliance etc., increasing the risks of relying on the certificate. There's more to this than tick-box due diligence - accountability and compliance, for instance.
- Conversely, an over-complicated ISMS may also be a paper tiger, this time a bureaucratic nightmare that bogs down the organization's recognition and response to information risks and incidents. It may take "forever" to get decisions made and implemented, outpaced by the ever-changing landscape of security threats and vulnerabilities, plus changes in the way the organization uses and depends on information. The ISMS is likely to be quite rigid and unresponsive - hardly a resilient, flexible or nimble approach. If the actual or perceived costs of operating the ISMS even vaguely approach the alleged benefits, guess what: managers are unlikely to support it fully, and will be looking hard for opportunities to cut funding, avoid further investment and generally bypass or undermine the red tape.
So, despite its superficial attraction, KISS involves either:
- Addressing these and other complicating factors, which implies actively managing them in the course of designing, using and maintaining the ISMS, and accepting that simplicity per se may not be a sensible design goal; or
- Ignoring them, pretending they don't exist or don't matter, turning a blind eye to them and hoping for the best.
To explore the first option, let's walk through an illustrative risk assessment of the ISMS itself:
- Establish the scope: for the purposes of this blog, the scope of this illustrative risk assessment is the design and governance of an ISMS, in the context of any organization setting out to apply ISO/IEC 27001 from scratch or reconsidering its approach for some reason (perhaps having just read something provocative on a blog ...).
- Identify viable information risks: I've given you a head start on that, above. With sufficient head-scratching, you can probably think of others, either variants/refinements of those I have noted or risks I have missed altogether. To get the most out of this exercise, don't skip this step. It's a chance to practice one of the trickier parts of information risk management.
- Analyze the risks: this step involves exploring the identified risks in more depth to gain a better understanding/appreciation of them. I've been 'analyzing' the risks informally as I identified and named them ... but you might like to think about them further, perhaps considering the threats, vulnerabilities, potential incidents and the associated impacts. For example, what are the practical implications of an over-simplified or over-complicated ISMS? What are the advantages of getting it just right? How much latitude is there in that? Which are the most important aspects, the bits that must be done well, as opposed to those that don't really matter as much?
- Evaluate the risks: my personal preference is to draw up a PIG - a Probability vs. Impact Graph - then place each of the risks on the chart area according to your analysis and understanding of them on those two scales, relative to each other. Alternatively, I might just rank them linearly. If you prefer some other means of evaluating them (FAIR for example), fine, go ahead, knock yourself out. The real point is to get a handle on the risks, ideally quantifying them to help decide what, if anything, needs to be done about them, and how soon it ought to be done (i.e. priorities).
- Treat the risks: this has at least two distinct steps: (5a) decide what to do, then (5b) do it. Supplementary activities may include justifying, planning, gaining authorization for and seeking resources to undertake the risk treatments, plus various management, monitoring and assurance activities to make sure things go to plan - and these extras are, themselves, risk-related. "Critical" controls typically deserve more focus and attention than relatively minor ones, for instance. Gaining sufficient assurance that critical controls are, in fact, working properly, and remain effective, is an oft-neglected step, in my experience.
- Communicate: the written and spoken words, notes, diagrams, PIGs, priority lists, control proposals, plans etc. produced in the course of this effort are handy for explaining what was done, what the thinking behind it was, and what the outcome was. It's worth taking a moment to figure out who needs to know about this stuff, what the key messages are, and where appropriate how to gain engagement or involvement with the ISMS work. There are yet more information risks in this area, too e.g. providing inaccurate, misleading or out-of-date information, communicating ineptly with the wrong people, and perhaps disclosing sensitive matters inappropriately.
- Monitor and review the risks, risk treatments etc.: this is (or rather, should be!) an integral part of managing the ISMS design and implementation project, and a routine part of governance and management once the ISMS is operational. The ISMS management reviews, internal audits and external/certification audits are clear examples of techniques to monitor and review, with the aim of identifying and dealing with any issues that arise, exploiting opportunities to improve and mature, and generally driving up the business value achieved by the ISMS. For me, ISMS metrics are an important part of this, and once more there are risks relating to measuring the wrong things, or measuring things wrong.
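If you fancy trying the "evaluate" step on a keyboard rather than a whiteboard, here's a minimal sketch in Python of a toy risk register scored on the two relative PIG scales (probability and impact) and ranked crudely by their product. The risk names and scores below are purely illustrative assumptions of mine, not an authoritative evaluation - plug in your own risks and your own judgments.

```python
# Toy risk register illustrating the "evaluate the risks" step:
# each risk gets relative probability and impact scores (1-5),
# akin to placing points on a PIG (Probability vs. Impact Graph).
# Names and scores are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int  # relative scale, 1 (rare) to 5 (almost certain)
    impact: int       # relative scale, 1 (trivial) to 5 (severe)

    @property
    def rating(self) -> int:
        # Crude linear ranking: probability x impact.
        return self.probability * self.impact

register = [
    Risk("ISMS scoped too narrowly (cyber only)", 4, 3),
    Risk("ISMS over-complicated and bureaucratic", 3, 4),
    Risk("Critical controls lack assurance", 3, 5),
    Risk("Inept communication with stakeholders", 2, 3),
]

# Rank the register highest-rated first - the top-right corner of
# the PIG, i.e. the prime candidates for risk treatment.
for risk in sorted(register, key=lambda r: r.rating, reverse=True):
    print(f"{risk.rating:>2}  {risk.name}")
```

A simple probability-times-impact product is about the bluntest evaluation method there is; it merely forces you to commit to relative scores and confront the resulting priorities. Methods such as FAIR would quantify the same risks far more rigorously.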