Welcome to the SecAware blog

I spy with my beady eye ...

25 May 2021

Stepping on the cracks

Anyone seeking information security standards or guidance is spoilt for choice e.g.:

Studying these is hard work. Aside from simply keeping up with developments as they all evolve in parallel, taking in their distinct perspectives on essentially the same subject area, plus the often subtle differences in their use of language, consumes a lot of brain cycles.

Naturally there is a lot in common since they all cover [parts of] information security. Commonality and consensus reinforce the conventional approaches of 'generally accepted good security practices', and fair enough. Personally, however, I am fascinated by the differences in their structures, emphasis and content, reflecting divergent purposes and scopes, authors, histories and cultures.

Some focus on the paving slabs. I'm looking out for the cracks.  

ISACA's COBIT, for instance, emphasizes the business angle (satisfying the organization's objectives), whereas various certification standards, laws and regs emphasize the formalities of specification and compliance, addressing societal aspects of information security. At the same time, privacy concerns the rights and expectations of the individual. Three different perspectives.

The recently-published ISO/IEC TS 27570 "Privacy guidelines for smart cities" neatly illustrates the creativity required to tackle new information risks arising from innovation in the realm of IoT, AI and short-range data communications between the proliferating portable, wearable and mobile IT devices now roaming our city streets. Likewise with the ongoing efforts to develop infosec standards for smart homes and offices. 

There are opportunities as well as risks here: striking the right balance between them is crucial to the long term success of the technologies, suppliers and human society. Spotting opportunities and responding proactively with sound, generally-applicable advice is an area where standards can really help. It's not easy though.

24 May 2021

News on ISO/IEC 27002

Today I’ve slogged my way through a stack of ~50 ISO/IEC JTC1/SC27 emails, updating a few ISO27001security.com pages here and there on ongoing standards activities.

The most significant thing to report is that the project to revise the second (2013) edition of ISO/IEC 27002 appears on track to reach final draft stage soon and will hopefully be approved this year, then published soon after (during 2022, I guess).  

The standard is being extensively restructured and updated, collating and addressing about 300 pages of comments from the national standards bodies at every stage.  The editorial team are doing an amazing job!  

The new ‘27002 structure will have the controls divided into four broad categories or types, i.e. technical, physical, people and ‘organizational’ [=other].

For comparison, the standard is currently structured into 14 security domains.

‘27002 will nearly double in size, going from 90 to 160 pages or so, thanks to new controls and additional advice including areas such as cloud and IoT security.  Virtually all of the original controls have been retained but most have been reworded for the new structure and current practice … and there’s an appendix mapping the old clauses to the new. 

27001 Annex A is being updated to reflect the changes, and a new version of that standard is due to be published in the 2nd quarter of 2022.  

I presume other standards based on ‘27002 (such as ‘27011 and ‘27799) will also be revised accordingly, at some point.

24 Apr 2021

Pre-shocks and after-shocks

Just a brief note today: it's a lovely sunny Saturday morning down here and I have Things To Do.

I'm currently enjoying another book by one of my favourite tech authors: Yossi Sheffi's The Resilient Enterprise*. As always, Yossi spins a good yarn, illustrating a strong and convincing argument with interesting, relevant examples leading to sound advice.

Specifically, I'm intrigued by the notion that major incidents/disasters leading to severe business disruption don't always come "out of the blue". Sometimes (often?), there are little warning signs, hints ahead of time about the impending crisis, little chances to look up from the daily grind and perhaps brace for impact. It ought to be possible to spot fragile supply chains, processes, systems and people, provided we are looking out for them ...   

Here in NZ at the moment, we are being treated to a public safety campaign using the analogy of meerkats, encouraging Kiwis to be constantly on the alert for signs of danger, thinking ahead and hopefully avoiding accidents rather than taking silly chances.  It makes sense. 

So I'm thinking perhaps we should update our template policies on incident reporting and/or incident management to encourage workers to report early warning signs, troubling concerns or worrying situations before they turn into actual incidents (which also need to be reported, of course). It's a nice example of the value of security awareness.

* Less than ten bucks from Amazon in hardback, I see today. Even at full price, this book is a bargain, well worth it: now it's a steal! Grab it while it's hot!   

23 Apr 2021

KISS or optimise your ISO27k ISMS?

From time to time as we chat about scoping and designing Information Security Management Systems on the ISO27k Forum, someone naively suggests that we should Keep It Simple Stupid. After all, an ISO27k ISMS is, essentially, simply a structured, systematic approach for information risk management, isn't it?

At face value, then, KISS makes sense.

In practice, however, factors that complicate matters for organizations designing, implementing and using their ISMSs include different:

  • Business contexts – different organization sizes, structures, maturities, resources, experiences, resilience, adaptability, industries etc.;
  • Types and significances of risks – different threats, vulnerabilities and impacts, different potential incidents of concern;
  • Understandings of ‘information’, ‘risk’ and ‘management’ etc. – different goals/objectives, constraints and opportunities, even within a given organization/management team (and sometimes even within someone’s head!);
  • Perspectives: the bungee jumper, bungee supplier and onlookers have markedly different appreciations of the same risks;
  • Ways of structuring things within the specifications of ‘27001, since individual managers and management teams have the latitude to approach things differently, making unique decisions based on their understandings, prejudices, objectives and priorities, choosing between approaches according to what they believe is best for the organization (and themselves?) at each point;
  • Pressures, expectations and assumptions by third parties … including suppliers, partners and customers, certification auditors and specialists just like us … as well as by insiders;
  • Dynamics: we are all on constantly shifting sands, experiencing/coping with and hopefully learning from situations, near-misses and incidents, adapting and coping with change, doing our best to predict and prepare for uncertain futures.

As with computer applications and many other things, simplicity obviously has a number of benefits, whereas complexity has a number of costs. Not so obviously, the opposite also applies: things can be over-simplified or over-complicated:

  • An over-simplified ISMS, if certifiable, will typically be scoped narrowly to manage a small subset of the organization's information risks (typically just its "cyber" risks, whatever that actually means), missing out on the added value that might be gained by managing a wider array of information risks in the same structured and systematic manner. A minimalist ISMS is likely to be relatively crude, perhaps little more than a paper tiger implemented purely for the sake of the compliance certificate rather than as a mechanism to manage information risks (an integrity failure?). Third parties who take an interest in the scope and other details of the ISMS may doubt the organization's commitment to information risk management, information security, governance, compliance etc., increasing their risks of relying on the certificate. There's more to this than ticking-the-box due diligence - accountability and compliance, for instance.
  • Conversely, an over-complicated ISMS may also be a paper tiger, this time a bureaucratic nightmare that bogs down the organization's recognition and response to information risks and incidents. It may take "forever" to get decisions made and implemented, outpaced by the ever-changing landscape of security threats and vulnerabilities, plus changes in the way the organization uses and depends on information. The ISMS is likely to be quite rigid and unresponsive - hardly a resilient, flexible or nimble approach. If the actual or perceived costs of operating the ISMS even vaguely approach the alleged benefits, guess what: managers are unlikely to support it fully, and will be looking hard for opportunities to cut funding, avoid further investment and generally bypass or undermine the red tape.

So, despite its superficial attraction, KISS involves either:

  • Addressing these and other complicating factors, which implies actively managing them in the course of designing, using and maintaining the ISMS, and accepting that simplicity per se may not be a sensible design goal; or
  • Ignoring them, pretending they don't exist or don't matter, turning a deaf ear to them and hoping for the best.

Paradoxically, it is quite complicated and difficult to keep things simple! There are clearly several aspects to this, some that are very tough to ‘manage’ or ‘control’ and many that are interrelated.

I'm hinting at information risks associated with the governance, design and operation of an ISMS - information risks that can be addressed in the conventional manner, meaning whatever convention/s you prefer, perhaps the ISO27k approach. So, using this situation as a worked example, what does that entail?

  1. Establish context: for the purposes of the blog, the scope of this illustrative risk assessment is the design and governance of an ISMS, in the context of any organization setting out to apply ISO/IEC 27001 from scratch or reconsidering its approach for some reason (perhaps having just read something provocative on a blog ...).

  2. Identify viable information risks: I've given you a head start on that, above. With sufficient head-scratching, you can probably think of others, either variants/refinements of those I have noted or risks I have missed altogether. To get the most out of this exercise, don't skip this step. It's a chance to practice one of the trickier parts of information risk management.

  3. Analyze the risks: this step involves exploring the identified risks in more depth to gain a better understanding/appreciation of them. I've been 'analyzing' the risks informally as I identified and named them ... but you might like to think about them, perhaps consider the threats, vulnerabilities, potential incidents and the associated impacts. For example, what are the practical implications of an over-simplified or over-complicated ISMS? What are the advantages of getting it just right? How much latitude is there in that? Which are the most important aspects, the bits that must be done well, as opposed to those that don't really matter as much?
  4. Evaluate the risks: my personal preference is to draw up a PIG - a Probability vs. Impact Graph - then place each of the risks on the chart area according to your analysis and understanding of them on those two scales, relative to each other. Alternatively, I might just rank them linearly. If you prefer some other means of evaluating them, fine, go ahead. The real point is to get a handle on the risks, ideally quantifying them to help decide what, if anything, needs to be done about them, and how soon it ought to be done (i.e. priorities).

  5. Treat the risks: this involves at least two distinct steps: (5a) decide what to do, then (5b) do it. Supplementary activities may include justifying, planning, gaining authorization for and seeking resources to undertake the risk treatments, plus various management, monitoring and assurance activities to make sure things go to plan - and these extras are, themselves, risk-related. "Critical" controls typically deserve more focus and attention than relatively minor ones, for instance. Gaining sufficient assurance that critical controls are, in fact, working properly, and remain effective, is an oft-neglected step, in my experience.

  6. Communicate: the written and spoken words, notes, diagrams, PIGs, priority lists, control proposals, plans etc. produced in the course of this effort are handy for explaining what was done, what the thinking behind it was, and what the outcome was. It's worth a moment to figure out who needs to know about this stuff, what the key messages are, and where appropriate how to gain engagement or involvement with the ISMS work. There are yet more information risks in this area, too, e.g. providing inaccurate, misleading or out-of-date information, communicating ineptly with the wrong people, and perhaps disclosing sensitive matters inappropriately.

  7. Monitor and review the risks, risk treatments etc. is (or rather, should be!) an integral part of managing the ISMS design and implementation project, and a routine part of governance and management once the ISMS is operational. The ISMS management reviews, internal audits and external/certification audits are clear examples of techniques to monitor and review, with the aim of identifying and dealing with any issues that arise, exploiting opportunities to improve and mature, and generally driving out the business value achieved by the ISMS. For me, ISMS metrics are an important part of this, and once more there are risks relating to measuring the wrong things, or measuring things wrong.
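To make steps 2 to 4 a little more concrete, here's a toy sketch in Python of an informal risk register scored for a PIG-style ranking. The risks and the probability/impact figures are invented purely for illustration - a real assessment reflects your own analysis, and a real PIG would be drawn and debated rather than computed:

```python
# Toy illustration of steps 2-4: an informal risk register evaluated for a
# Probability vs. Impact Graph (PIG). All risks and scores are invented
# for illustration - substitute your own organization's analysis.

risks = [
    # (risk description, probability 0-1, impact 1-10) - hypothetical values
    ("Over-simplified ISMS: narrow scope misses key information risks", 0.6, 7),
    ("Over-complicated ISMS: bureaucracy outpaced by change", 0.5, 6),
    ("Certificate-driven 'paper tiger' undermines stakeholder trust", 0.4, 8),
    ("Managers bypass the red tape, cutting ISMS funding", 0.3, 5),
]

def evaluate(register):
    """Rank risks by probability x impact, highest first (step 4)."""
    scored = [(prob * impact, name) for name, prob, impact in register]
    return sorted(scored, reverse=True)

for score, name in evaluate(risks):
    print(f"{score:4.1f}  {name}")
```

Whether you multiply, rank linearly or simply eyeball a chart matters less than the outcome: a defensible sense of which risks deserve treatment first.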

So, there we have it. You may still feel that KISS is the obvious way to go, and good luck if you do. Personally, I believe I can improve on KISS to design an optimal ISMS that best satisfies the organization's business objectives, generating greater value. Would you like to put me to the test? Do get in touch: I'm sure I'll enjoy advising you ... at my usual bargain rate!

19 Apr 2021

Policy development process: phase 2

Today we completed and published a new "topic-specific" information security policy template on clear desk and screen.

Having previously considered information risks within the policy scope, writing the policy involved determining how to treat the risks and hence what information security or other controls are most appropriate.  

Here we drew on guidance from the ISO27k standards, plus other standards, advisories and good practices that we've picked up in the course of ~30 years in the field, working with a variety of industries and organizations - and that's an interesting part of the challenge of developing generic policy templates. Different organizations - even different business units, departments, offices or teams within a given organization - can take markedly different attitudes towards clear desk and screen. The most paranoid are obsessive about it, mandating controls that would be excessive and inappropriate for most others. Conversely, some are decidedly lax, to the point that information is (to my mind) distinctly and unnecessarily vulnerable to deliberate and accidental threats. We've picked out controls that we feel are commonplace, cost-effective and hence sensible for most organizations.

COVID19 raises another concern, namely how the risks and controls in this area vary between home offices or other non-corporate 'working from home' workplaces, compared to typical corporate offices and other workplaces. The variety of situations makes it tricky to develop a brief, general policy without delving into all the possibilities and specifics. The approach we've taken is to mention this aspect and recommend just a few key controls, hoping that workers will get the point. Customers can always customise the policy templates, for example adding explicit restrictions for particular types of information, relaxing things under certain conditions, or beefing-up the monitoring, oversight and compliance controls that accompany the policies - which is yet another complicating factor: the business context for information security policies goes beyond the written words into how they are used and mandated in practice.

Doing all of this in a way that condenses the topic to just a few pages of good practice guidance, well-written in a motivational yet generic manner, and forms a valuable part of the SecAware policy suite, explains the hours we've sunk into the research and writing. Let's hope it's a best seller!



13 Apr 2021

Policy development process: phase 1

On Sunday I blogged about preparing four new 'topic-specific' information security policy templates for SecAware. Today I'm writing about the process of preparing a policy template.

First of all, the fact that I have four titles means I already have a rough idea of what the policies are going to cover (yes, there's a phase zero). 'Capacity and performance management', for instance, is one requested by a customer - and fair enough. As I said on Sunday, this is a legitimate information risk and security issue with implications for confidentiality and integrity as well as the obvious availability of information. In my professional opinion, the issue is sufficiently significant to justify senior management's concern, engagement and consideration (at least). Formulating and drafting a policy is one way to crystallise the topic in a form that can be discussed by management, hopefully leading to decisions about what the organisation should do. It's a prompt to action.

At this phase in the drafting process, I am focused on explaining things to senior management in such a way that they understand the topic area, take an interest, think about it, and accept that it is worth determining rules in this area. The most direct way I know of gaining their understanding and interest is to describe the matter 'in business terms'. Why does 'capacity and performance management' matter to the business? What are the strategic and operational implications? More specifically, what are the associated information risks? What kinds of incident involving inadequate capacity and performance can adversely affect the organization?

Answering such questions is quite tough for generic policy templates lacking the specific business context of a given organisation or industry, so we encourage customers to customise the policy materials to suit their situations. For instance:

  • An IT/cloud service company would probably emphasise the need to maintain adequate IT capacity and performance for its clients and for its own business operations, elaborating on the associated IT/cyber risks.
  • A healthcare company could mention health-related risk examples where delays in furnishing critical information to the workers who need it could jeopardise treatments and critical care.
  • A small business might point out the risks to availability of its key workers, and the business implications of losing its people (and their invaluable knowledge and experience i.e. information assets) due to illness/disease, resignation or retirement. COVID is a very topical illustration.
  • An accountancy or law firm could focus on avoiding issues caused by late or incomplete information - perhaps even discussing the delicate balance between those two aspects (e.g. there are business situations where timeliness trumps accuracy, and vice versa).

The policy templates briefly discuss general risks and fundamental principles in order to orient customers in the conceptual space, stimulating them (we hope) to think of situations or scenarios that are relevant to their organisations, their businesses or industries, and hence to their management.

'Briefly' is an important point: the discussion in this blog piece is already lengthier and more involved than would be appropriate for the background or introductory section of a typical policy template. It's easy for someone as passionate and opinionated as me to waffle on around the policy topic area, not so easy to write succinctly and remain focused ... which makes policy development a surprisingly slow, laborious and hence costly process, given that the finished article may be only 3 or 4 pages. It's not simply a matter of wordsmithing: distilling any topic down to its essentials takes research and consideration. What must be included, and what can we afford to leave out? Which specific angles will stimulate senior managers to understand and accept the premise that 'something must be done'?

OK, that's it for today. Must press on - policy templates to write! I'll expand on the next phase of the policy development process soon - namely, how we flesh out the 'something that must be done' into explicit policy statements.

11 Apr 2021

Infosec policy development

We're currently preparing some new information risk and security policies for SecAware.com.  It's hard to find gaps in the suite of ~80 policy templates already on sale (!) but we're working on these four additions:

  1. Capacity and performance management: usually, an organization's capacity for information processing is managed by specialists in IT and HR.  They help general management optimise and stay on top of information processing performance too.  If capacity is insufficient and/or performance drops, that obviously affects the availability of information ... but it can harm the quality/integrity and may lead to changes that compromise confidentiality, making this an information security issue.  The controls in this policy will include engineering, performance monitoring, analysis/projection and flexibility, with the aim of increasing the organisation's resilience. It's not quite as simple as 'moving to the cloud', although that may be part of the approach.

  2. Information transfer: disclosing/sharing information with, and obtaining information from, third party organisations and individuals is so commonplace, so routine, that we rarely even think about it.  This policy will outline the associated information risks, mitigating controls and other relevant approaches.

  3. Vulnerability disclosure: what should the organisation do if someone notifies it of vulnerabilities or other issues in its information systems, websites, apps and processes? Should there be mechanisms in place to facilitate, even encourage notification? How should issues be addressed?  How does this relate to penetration testing, incident management and assurance?  Lots of questions to get our teeth into!

  4. Clear desks and screens: this is such a basic, self-evident information security issue that it hardly seems worth formulating a policy. However, in the absence of policy and with no 'official' guidance, some workers may not appreciate the issue or may be too lazy/careless to do the right thing. These days, with so many people working from home, the management oversight and peer pressure typical in corporate office settings are weak or non-existent, so maybe it is worth strengthening the controls by reminding workers to tidy up their workplaces and log off.  It's banal, not hard! 
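As a taster of the 'analysis/projection' control mentioned under capacity and performance management (topic 1 above), here's a minimal, hypothetical sketch: a straight-line projection of usage towards a capacity limit. The figures and the function are mine, invented for illustration, not drawn from the policy template:

```python
# Minimal sketch of capacity analysis/projection: fit a straight line
# through recent usage samples and estimate when a capacity limit will be
# hit. All figures below are invented for illustration.

def weeks_until_full(samples, capacity):
    """Least-squares linear projection: samples are (week, units_used) pairs."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    if slope <= 0:
        return None  # usage flat or falling: no projected exhaustion
    intercept = mean_y - slope * mean_x
    return (capacity - intercept) / slope

# Hypothetical weekly storage usage in GB, against a 500 GB limit
usage = [(1, 200), (2, 220), (3, 245), (4, 260)]
print(weeks_until_full(usage, 500))  # ~15.6 weeks to exhaustion
```

The point of the control is the early warning, of course, not the arithmetic: the projection prompts management to act before availability (or integrity, or confidentiality) suffers.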

The next release of ISO/IEC 27002 will call these "topic-specific information security policies" focusing on particular issues and/or groups of people in some detail, whereas the organisation's "information security policy" is an overarching, general, high-level framework laying out (among other things) the fundamental principles. Our corporate information security policy template is a mature product that already includes a set of principles, so it may not need changes to comply with the updated ISO/IEC 27002 when published later this year or early next ... but we'll seize the opportunity to review it anyway. 

11 Mar 2021

Book review: "Cyber Strategy"

Cyber Strategy

Risk-driven Security and Resiliency

Authors: Carol A. Siegel and Mark Sweeney

Publisher: Auerbach/CRC Press

ISBN: 978-0-367-45817-1

Price: ~US$100 + shipping from Amazon


This book lays out a systematic process for developing corporate strategy in the area of cyber (meaning IT) security and resilience.  


Pros:

  • An in-depth exposition on an extremely important topic
  • It emphasises risks to the business, to its information, and to its IT systems and networks, in that order
  • Systematic, well structured and well written, making it readable despite the fairly intense subject matter
  • Lots of diagrams, example reports and checklists to help put the ideas into action
  • Treating strategy development as a discrete project is an intriguing approach


Cons:

  • Describes a fairly laborious, costly and inflexible approach, if taken literally and followed STEP-by-STEP
  • Implies a large corporate setting, with entire departments of professionals specializing and willing to perform or help out in various areas 
  • A little dogmatic: alternative approaches are not only possible but may be sufficient, appropriate or even better under various circumstances, yet strategic options and choices are seldom mentioned
  • As described, the strategy planning horizon is very short
  • A defensive risk-averse strategic approach is implied, whereas more proactive, even offensive strategies can take things in a different direction: sometimes risks should not just be accepted but relished!
  • Little mention of architectural approaches e.g. business, information and IT architectures with risk and security implications and opportunities


Despite being described as a sequence of six STEPS (all in capitals, for some reason), there are of course way more than six activities to perform, and some are parallel or overlapping rather than sequential.

Reading, thinking about and implementing the ideas in this book should result in a soundly-constructed cyber strategy, generating far more value than the book's purchase price.  However, studying a book, even one as well-written as this one, is not sufficient to turn just anyone into a cyber strategist!  This stuff is hard.  The book makes it a little easier.

10 Jan 2021

Y2k + 20: risk, COVID and "the Internet issue"

It feels like 'just the other day' to me but do you recall "Y2k" and all that? 

Some of you reading this weren't even born back then, so here's a brief, biased and somewhat cynical recap.

For a long time prior to the year 2000, a significant number of software programmers had taken the same shortcut we all did back in "the 90s". Year values were often coded with just two decimal digits: 97, 98, 99 ... then 00, "coming ready or not!".

"Oh Oh" you could say. "OOps".

When year counters went around the clock and reset to zero, simplistic arithmetic operations (such as calculating when something last happened, or should next occur) would fail causing ... well, potentially causing issues, in some cases far more significant than others.
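In Python terms, the failure mode looks something like this (a minimal sketch; the maintenance scenario and function name are mine, purely for illustration):

```python
# The classic Y2k bug in miniature: storing years as two digits makes
# simple "how long ago" arithmetic go wrong when the counter wraps to 00.

def years_since(last_service_yy, now_yy):
    """Naive two-digit-year subtraction, as many old programs did it."""
    return now_yy - last_service_yy

# Serviced in '97, checked in '99: fine
print(years_since(97, 99))   # 2

# Serviced in '99, checked in '00: the machine appears serviced 99 years hence
print(years_since(99, 0))    # -99
```

Depending on how the result was used, the wrap could make equipment look absurdly overdue, brand new, or a century past its safe lifetime.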

Failing coke can dispensers and the appropriately-named Hornby Dublo train sets we could have coped with but, trust me, you wouldn't want your heart pacemaker, newfangled fly-by-wire plane or the global air traffic control system to decide that it had to pack up instantly because it was nearly 100 years past its certified safe lifetime. Power grids, water and sewerage systems, transportation signalling, all manner of communications, financial, commercial and governmental services could all have fallen in a heap if the Y2k problem wasn't resolved in time, and this was one IT project with a hard, immutable deadline, at a time when IT project slippage was expected, almost obligatory. 

Tongue-in-cheek suggestions that we might shimmy smoothly into January 1st [19]9A were geekly-amusing but totally impracticable. 

In risk terms, the probability of Y2k incidents approached 100% certain and the personal or societal impacts could have been catastrophic under various credible scenarios - if (again) the Y2k monster wasn't slain before the new year's fireworks went off ... and, yes, those fancy public fireworks display automated ignition systems had Y2k failure modes too, along with the fire and emergency dispatch systems and vehicles. The combination of very high probability and catastrophic impact results in a risk up at the high end of a tall scale. 

So, egged-on by information security pro's and IT auditors (me, for instance), management took the risk seriously and invested significant resources into solving "the Y2k issue". 

Did you spot the subtle shift from "Y2k" to "the Y2k issue"? I'll circle back to that in just a moment. 

Individual Y2k programming updates were relatively straightforward on the whole with some interesting exceptions, mostly due to prehistoric IT systems still in use well past their best-before dates, with insurmountable hardware, software and wetware limitations. The sheer overwhelming scale of the Y2k problem was the real issue, though. Simply finding all those IT systems was an enormous global challenge, let alone testing and where necessary fixing or replacing them all. The world discovered, during '98 and '99 (there I go again!), that rather few "computers" were as obvious as the beige boxes proliferating on desktops at the time, nor even the massive machines humming away in air conditioned sanctuaries known as "the mainframe". Counting the blue IBM labels was no longer considered an adequate form of computer stock-taking. Computers and chips were "everywhere", often embedded in places that were never intended or designed to be opened once sealed in place. It was almost as if they had been deliberately hidden. Conspiracy theories proliferated almost as fast as Y2k jokes. 

Flip forward 20 years and we see similar horrors unfolding today in the form of myriad IoT things and 'the cloud', so indistinct and unclear that people long since gave up trying to draw meaningful network diagrams - only now the year encoding aspect is the least of our security problems. But I digress. Back to the plot.

From what I saw, for reasons of expediency and ignorance, the general solution to "the Y2k problem" was to treat the superficial symptoms of an underlying disease that we still suffer today. We found and corrected Y2k issues in software. I believe the world as a whole missed a golden opportunity to change our software design, development, testing and maintenance processes to prevent Y2k-like issues ever arising again. Oh sure, some organizations implemented policies on date encoding, and presumably some were far-sighted enough to generalise the issue to all counters and maybe coding shortcuts etc. but, on the whole, we were far too busy bailing out the hold to worry about where the ship was heading. Particularly during 99, we were in crisis mode, big time. I remember. I was there.

Instead of thinking of the Y2k work as an investment for a better future, it was treated as a necessary expense, a sunk cost. If you don't believe me, just ask to see your organisation's inventory containing pertinent details of every single IT device - the manufacturers, models, serial numbers, software and firmware revisions, latest test status, remediation/replacement plans and so on. We had all that back in 99. Oh wait, you have one? Really? So tell me, when was it last updated? How do you know, for sure, that it is reasonably comprehensive and accurate? Go ahead, show me the associated risk profiles and documented security architectures. Tell me about the IT devices used in your entire supply network, in your critical infrastructure, in everything your organisation depends upon. 

Make my day.

Even the government and defence industries would be very hard pressed to demonstrate leadership in this area.  

That's not all. Following widespread relief that January 1st 2000 had not turned out to be a cataclysmic global disaster, we slipped into a lull and all too soon "the Y2k problem" was being portrayed in the media as "the Y2k debacle". Even today, two decades on, some pundits remain adamant that the whole thing was fake news created by the IT industry to fleece customers of money.

It was a no-win situation for the IT industry: if things had gone horribly wrong, IT would definitely have copped the blame. Despite the enormous amount of hard work and expense to ensure that things did not go horribly wrong, IT still cops the blame. 

Hey, welcome to the life of every information risk and security professional! If we do our jobs well, all manner of horribly costly and disruptive incidents are prevented ... which leaves our organisations, management and society at large asking themselves "What have the infosec pros ever done for us? OK, apart from identifying, and evaluating, and treating information risks ...".

For what it's worth, I'm very happy to acknowledge the effort that went into mounting an almost unbelievably successful Y2k rescue mission - and yet, at the same time, we were saved from a disaster of our own making, a sorry tale from history that we are destined to repeat unless things change.

As I mentioned, two major areas of risk have come to the fore in the past decade, namely the information risks associated with IoT and cloud computing. They are both global in scope and potentially disastrous in nature, and worse still they are both linked through the Internet - the big daddy of all information risks facing the planet right now. 

The sheer scale of the Internet problem is the real issue. Simply finding all those Internet connections and dependencies is an enormous global challenge, let alone testing and where necessary securing or isolating them all.

You do have a comprehensive, risk-assessed, supply-chain-end-to-end inventory of all your Internet dependencies, including everyone now working from home under COVID lockdown, right? Yeah, right.

If you don't see the parallel with Y2k, then you really aren't looking ... and that's another thing: how come "the Internet issue|problem|risk|crisis ..." isn't all over the news?

Yes, obviously I appreciate that COVID-19 is dominating the headlines, another global incident with massive impacts. The probability and impact of global pandemics have been increasing steadily for decades in line with the rise of global travel, mobility and cultural blending. Although the risk was known, we failed to prevent a major incident ... and yet, strangely, the health industry isn't in the firing line, possibly because we are utterly dependent on them to dig us out of the cesspit, despite the very real personal risks they face every day. They are heroes. IT and infosec pros aren't. I get it. Too bad.

OK, that's enough of a rant for today. I will expand on "the Internet issue|problem|risk|crisis" in a future episode. Meanwhile, I'll click the Publish button in just a moment, while it still works.

15 Nov 2020

NBlog Nov 15 - the trouble with dropping controls

I literally don’t understand a question that came up on the ISO27k Forum this week. A member asked:

‘Should a control be discontinued because a reassessment showed a lower acceptable risk score?’ 

I find it interesting to pick apart the question to explore the reasons why I don't understand it, and the implications. See what you think ... 

  • Any control may legitimately be ‘discontinued’ (removed, unimplemented, retired, replaced, modified etc.) provided that change has been duly thought-through, assessed, justified, and deemed appropriate for whatever reasons. It may be important, though, to be reasonably certain that discontinuation is, in fact, in the best interests of the organization, and that’s often hard to determine as controls can be quite complex in themselves, and are part of a highly complex ‘control environment’. A seemingly trivial, unimportant, even redundant control (such as an alert) might turn out to be critical under specific circumstances (where other alerts fail, or were accidentally disabled, or were actively and deliberately bypassed by an attacker or fraudster). So, it may be preferable to ‘suspend’ the control for a while, pending a review to determine what the effects truly are … since it is probably easier and quicker to reinstate a ‘suspended’ control if needs be, than it would have been if the control was completely removed and trashed. A dubious firewall rule, for example, might be set to 'warn and log only', rather than simply being dropped from the ruleset, the reverse of how new firewall rules can be introduced. On the other hand, a control that is patently failing, clearly not justifying its existence, is a strong candidate to be removed … and potentially replaced by something better (which opens a whole new topic).

  • A ‘reassessment’ might be a reassessment of the risks, the control, the control effectiveness, the business situation, the compliance obligations/expectations, the alternatives and supporting/compensating controls, or something else: ‘reassessment’ is a very vague term. It might mean anything in the range from ‘someone changed their mind’ to ‘a full independent investigation was launched, producing a lengthy report that formally discussed all the options including a recommendation to remove the control, which the management body duly considered and authorized, with various caveats or controls around the way it was to be done …’!

  • ‘Lower acceptable risk’ might mean ‘We reduced our risk acceptance level’ but that’s ambiguous – it could mean that management will now only accept a lower level of risk than before (management is more risk-averse) or the polar opposite, i.e. the threshold for accepting risks has been raised (management is more risk-tolerant)! More likely, the member who posed the question simply missed a comma, intending to say ‘a lower, acceptable risk score’, suggesting they had decided the risk does not warrant retaining the control, hence ‘discontinuation’ is an option to be considered, as already discussed. 

  • ‘Risk score’ hints at yet another potential minefield - one I've discussed repeatedly here on NBlog. How are risks being ‘scored’, exactly? How certain are you that a reduction in the score genuinely reflects a reduction in the risk? If you are totally happy with your risk evaluation and scoring process, why has this question even arisen? If you have some doubts or concerns about the process, discontinuation of a control may not be a sensible approach without additional assurance and assessment, and perhaps the ability to reinstate the control efficiently if it turns out to be needed after all.

  • More generally, removal of, or deliberate decisions not to implement, controls can be a challenging, problematic concept for risk-averse information security professionals. We are naturally biased towards risk reduction through controls. It’s an inherent part of our mind-set, a default approach.  The rest of the world does not necessarily think the same way! To ‘a level-headed business person’, controls may be perceived as costly constraints on business … which means they need to be justified, appropriate and necessary, and worth having i.e. they have a positive net value to the business (benefits less costs, ideally taking full account of ALL the benefits and ALL the costs). Ineffective controls, then, have a negative net value (no benefits, only costs) and are clearly candidates for removal … but removing controls is itself an activity that has risks, costs and benefits too.
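To make the ‘risk score’ concern concrete, here is a deliberately naive, entirely hypothetical sketch of likelihood-times-impact scoring. It shows how a score can fall simply because the scoring scales changed between assessments, with no change whatsoever in the underlying risk:

```python
# Hypothetical illustration only: a naive likelihood x impact "risk score".
# The scales and values are invented to show how a "lower score" need not
# mean a lower risk.

def risk_score(likelihood: int, impact: int) -> int:
    """Classic qualitative scoring: multiply likelihood by impact."""
    return likelihood * impact

# The same risk, assessed twice by the same team:
before = risk_score(4, 5)  # first assessment on 1-5 scales -> 20
after = risk_score(2, 3)   # reassessment on 1-3 scales     -> 6

# The score dropped from 20 to 6 purely because the scales shrank.
# Taken at face value, that "lower risk score" might be used to justify
# discontinuing a control - which is exactly the trap described above.
print(before, after)
```

In other words, before acting on a changed score, check whether the measurement itself changed.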

That's a confusion of complexity and doubts arising from such a short question! Am I seriously over-thinking it? Well, yes, maybe I am. Still, it amuses me to exercise my grey matter, and I hope I've stimulated you to dig a little deeper when you see a question that furrows your brow. I've said before that some of the most insightful discussion threads on ISO27k Forum arise from seemingly naïve or trivial questions that might easily have been overlooked.

PS  Sorry for the lack of NBloggings lately - too busy with/engrossed in work, which is A Good Thing.