Welcome to the SecAware blog

I spy with my beady eye ...

29 Jul 2021

Pinball management

It could be argued that ‘management’ of all kinds (including information risk and security management) is, or rather should be, a rational process: managers systematically gather and evaluate information, take account of sound advice, make sensible decisions, and put in place whatever is necessary to implement those decisions, all the while acting in the organization's best interests and furthering its business objectives, strategies, policies etc.

In practice, there are all manner of issues with that approach that complicate matters, frustrate things, and lead to ‘suboptimal’ situations that may be - or at least appear to be - irrational, inappropriate or unnecessary. 

In particular, there are numerous paradoxes. For example:

  • The obvious core objective of a typical commercial company to make a substantial profit for its owners may conflict with various ethical and legal objectives to spend money on protecting and furthering the wider interests of society and individuals - including their privacy. 
  • There's a fine line between motivating/supporting/encouraging/directing and demotivating/micro-managing/exploiting employees. 
  • Efficiency in most matters comes at the cost of effectiveness, and vice versa. They say quality is free, but is that a lie? 
  • Locking secrets or other valuables in a vault limits their utility and hence practical value, but releasing them puts them at greater risk of theft and illegitimate exploitation.
  • There is literally no end of potential investment opportunities, but finite resources to invest, plus unavoidable costs of simply being in business.
  • Bonuses may be achieved selfishly in the short term by sacrificing the long game, presenting social and ethical challenges that are difficult to counter. 

Faced with all that and more, it occurs to me that corporate management is a bit like pinball. Managers are:

  • Identifying and hopefully hitting the targets that score points while simultaneously avoiding various static and dynamic hazards, some of which come out of left field;
  • Using and refining whatever techniques and resources are available, perhaps nudging the table tentatively or finally getting the hang of that cool ball-spinning back-flip maneuver;
  • Coping bravely with the challenges and setbacks, while also creating/engineering and taking advantage of opportunities that arise along the way.

As with the pinball table in play, there’s a lot going on in and around any organization, of any size. [Senior] management’s high-level perspective and involvement extends across the entire enterprise, while most individual [mid-level and junior] managers tend to be focused on and able to deal with just part of it, and staff are mostly heads-down, slogging at the coal face, creating actual value: it’s a team effort.

Experienced managers appreciate that things don't always go to plan. Where possible, they prefer to retain their options and flexibility as long as practicable, and yet making real progress on almost anything requires commitment and decisive action, collapsing those options to a much smaller subset.  

11 Jul 2021

Managing certainty

'Reducing uncertainty' is the prime focus of information risk management today. We do our level best to identify, characterise, quantify, evaluate and where possible reduce the probabilities and/or adverse consequences of various possible events.

Uncertainty is an inherent part of the problems we typically face. We don't know exactly what might happen, nor how or when, and we aren't entirely sure about the consequences. We worry about factors both within and without our control, and about dependencies and complex interactions that frustrate our efforts to predict and control our fortunes. We adopt fallback and recovery arrangements, and apply contingency thinking with the intention of being better prepared and resourced for unanticipated situations ahead.    

A random comment on LinkedIn set me thinking about the converse: 'reducing uncertainty' is the flip side of 'increasing certainty'. In other words, information risk management is equally about increasing the certainty of beneficial, valuable outcomes, such as not suffering the adverse consequences of incidents as often and/or as severely. It's also about increasing certainty in general, which is why we put so much effort into gathering and assessing information, monitoring and measuring things, and implementing mitigating 'information security controls' that give us some semblance of control over the risks.
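To make that flip side concrete, here's a minimal sketch (in Python, with entirely hypothetical figures) of the classic annualised-loss-expectancy way of quantifying how a control both reduces uncertainty about losses and increases the certainty of avoided losses:

```python
# Hedged sketch: the classic annualised-loss-expectancy (ALE) model.
# All rates, costs and the claimed effect of training are illustrative
# assumptions, not real data.

def ale(annual_rate: float, impact_per_incident: float) -> float:
    """Annualised Loss Expectancy = expected incidents/year x loss/incident."""
    return annual_rate * impact_per_incident

# Before: say phishing incidents occur ~6 times/year at ~$20k each.
before = ale(6, 20_000)

# After awareness training (assumed, for illustration, to halve the rate).
after = ale(3, 20_000)

# The 'increased certainty' of avoided losses, expressed as value per year.
risk_reduction = before - after

print(f"ALE before: ${before:,.0f}, after: ${after:,.0f}, "
      f"benefit: ${risk_reduction:,.0f}/yr")
```

Crude as it is, framing the control's effect as an annual benefit rather than a reduced loss is exactly the positive positioning argued for below.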

Assurance is a big part of reducing uncertainty. We check and test things, review stuff and conduct audits to increase both our knowledge of, and our confidence in, the arrangements. We seek to identify and tease out potential issues that need to be addressed in order to avoid nasty surprises. 

Resilience is another chunk. Building the strength and capability to respond effectively and efficiently to whatever might happen, maintaining critical activities throughout, is a powerful approach that extends from individuals through families, teams and departments, to organisations, industries and society at large.

Thanks to those uncertainties, we are inevitably building on shaky foundations. Our information risk management practices and information security controls are imperfect ... but at the same time they earn their keep by generating more value than they cost, for example by:

  • Providing credible information about various situations, allowing us to make rational decisions, prioritise and plan things, allocate appropriate resources etc.;
  • Reducing or constraining the problem space where possible, increasing our ability to focus on The Stuff That Really Matters;
  • Allowing us to consider and deal with potential incidents in advance, knowing that we will struggle to do so during some future crisis. 

Along with assurance and resilience, that added value is clearly a positive, beneficial aspect to information risk management ... in contrast to the rather negative edge on 'reducing uncertainty'. 

I'm not arguing that 'increasing certainty' should be our new mantra, rather that we might be more business-like in how we go about what we do, putting more effort into increasing and talking-up the positives and less into reducing and warning about the negatives. In my experience, managers are more inclined to invest willingly in activities that are positioned as and appear to be value-enhancing and beneficial to the organisation, rather than loss-reducing, even though they amount to the same thing in this context. It's all about perception and emphasis.

More carrot, less stick please.

26 Jun 2021

Are our infosec controls sufficient?

Although it's tempting to dismiss such questions as rhetorical, trivial or too difficult, there are reasons for taking them seriously*. Today I'm digging a little deeper into the basis for posing such tricky questions, explaining how we typically go about answering them in practice, using that specific question as an example.

OK, here goes.

The accepted way of determining the sufficiency of controls is to evaluate them against the requirements. Adroitly sidestepping those requirements for now, I plan to blabber on about the evaluation aspect or, more accurately, assurance.

Reviewing, testing, auditing, monitoring etc. are assurance methods intended to increase our knowledge.  We gather relevant data, facts, evidence or other information concerning a situation of concern, consider and assess/evaluate it in order to:

  • Demonstrate, prove or engender confidence that things are going to plan, working well, sufficient and adequate in practice, as we hope; and
  • Identify and ideally quantify any issues i.e. aspects that are not, in reality, working quite so well, sufficiently and adequately. 

Assurance activities qualify as controls to mitigate risks, such as information risks associated with information risk and security management e.g.:

  • Mistakes in our identification of other information risks (e.g. failing to appreciate critical information-related dependencies of various kinds);
  • Biases and errors in our assessment/evaluation of identified information risks (e.g. today’s obsessive focus on “cyber” implies down-playing, perhaps even ignoring other aspects of information security, including non-cyber threats such as physical disasters and human/cultural issues more generally – COVID for instance, just one of many people-related risks), leading to inappropriate risk treatment decisions, priorities, plans and resources;
  • Failures in our treatment of identified and unacceptable information risks (e.g. controls inadequately specified, designed, implemented, used, managed, monitored and maintained, that do not sufficiently mitigate the risks we intended to mitigate, in practice; inattention, incompetence, conflicting priorities and plain mistakes in the processes associated with using, managing and maintaining security controls);
  • Changes in the information risks such as: novel or more/less significant threats; previously unrecognized vulnerabilities; evolving business processes, systems, relationships and people; and myriad changes in the ‘the business environment’ or ‘the ecosystem’ within which our risks and controls exist and (hopefully!) operate;
  • Changes in the information security controls including those that, for various reasons, gradually decay and/or suddenly, unexpectedly and perhaps silently fail to operate as intended, plus those that are overtaken by events (such as the availability of even better, more cost-effective controls); 
  • Invalid or inappropriate assumptions (e.g. that an ISO27k ISMS is sufficient to manage our information risks, management fully supports it, it is well designed and sufficiently resourced etc., and it represents the optimal approach for any given situation); it is unwise to assume too much, especially regarding particularly important matters ... begging questions about which infosec-related matters are particularly important, and how they stack up in relation to other business priorities, issues, pressures etc.;
  • Blind-spots and coverage gaps that leave potentially significant information risks partially or wholly unaddressed because everyone either doesn’t appreciate that they exist (a failure of risk identification), or blithely assumes that someone else is dealing with them (failing to evaluate and treat them appropriately).

Assurance activities also generate and involve metrics - another can of worms there. Whereas certification is an example of a binary pass/fail metric, most forms of assurance aim to measure by degrees, quantifying issues and acknowledging that the world is mostly shades of grey, not black-or-white. The sufficiency of our infosec controls, for instance, may range from zero (wholly inadequate or missing) through barely sufficient, and on through appropriately or perfectly sufficient, to excessive. Yes, it is possible to be 'too secure', wasting resources on unnecessarily strong controls, being so risk averse that legitimate business opportunities are missed. You might even say that excessive security inadequately satisfies general business objectives relating to the optimal use of resources. It harms the organization's overall efficiency.  
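To illustrate measuring by degrees rather than binary pass/fail, here's a tiny hedged sketch in Python. The 0-100 scale and the thresholds are my own illustrative assumptions, not drawn from any standard:

```python
# Hedged sketch: control sufficiency measured by degrees, not pass/fail.
# The 0-100 scale and thresholds below are illustrative assumptions only.

def sufficiency_label(score: float) -> str:
    """Map a 0-100 sufficiency score onto the spectrum described above."""
    if score == 0:
        return "missing"
    if score < 40:
        return "wholly inadequate"
    if score < 60:
        return "barely sufficient"
    if score <= 90:
        return "appropriately sufficient"
    # Beyond 'sufficient' lies over-engineering: wasted resources and
    # missed business opportunities through excessive risk aversion.
    return "excessive"

print(sufficiency_label(75))   # a control scoring 75 lands in the sweet spot
```

Note that the scale deliberately has a ceiling as well as a floor: scoring "too high" flags the 'too secure' problem, whereas a binary certification metric cannot express it at all.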

There’s a lot to think about here … and I’m not finished yet!

Consider that various forms of assurance are controls just like any other - controls that may themselves be inadequate or excessive, and may partially or wholly fail in practice. Although assurance generally has value, it too has its limits as a control mechanism, such as:

  • Sophisticated and reactive threats such as targeted hacks and fraud – Nick Leeson’s book “Rogue Trader” illustrates the lengths that determined fraudsters will take to undermine, bypass, mislead and essentially evade general and financial management controls and even focused audits, taking advantage of little weaknesses in the control systems and ‘opportunities’ that arise. Information security is replete with examples of malware and hackers;
  • I don't know about you but I’ll freely admit I’ve had my off-days - I’ve made mistakes, missed things, misinterpreted situations, made errors of judgement etc.

Speaking as a reformed IT auditor, software tester, information risk and security specialist, consultant, technical author and proofreader, I've learned to temper my perfectionist streak by accepting that finite resources, imposed timescales and competing priorities mean I have to accept 'good enough for now' in order to move on to other things. Having already consumed a good couple of hours, I could continue writing and wordsmithing this very article indefinitely, if it weren't for Having A Life and Other Stuff On My Plate. 

So, since essentially everything (including assurance) is fallible, it is worth considering and adopting suitable resilience, recovery and contingency measures designed to help cope with possible failures – particularly as I said in relation to ‘important matters’, where failures would cause serious problems for the organization. An example of this is the way customers typically probe into the information security, privacy and governance arrangements, the financial stability, capability etc. of their “critical suppliers”, accepting that various assertions, certifications, assurances and legal obligations may not, in fact, totally avoid or prevent incidents. Supplier assessments and the like are forms of assurance to mitigate information risks. Wise businesses have their feelers out, remain constantly alert to the early signs of trouble ahead in their supply networks, have suitable information processes to collect, collate, evaluate and respond to the assurance and other information flowing in, and have strategies to deal with issues arising (e.g. alternative sources of supply; stocks; strong relationships and understandings with their customers and partners plus other suppliers …; oh and an appreciation that, under some circumstances, even supposedly non-critical suppliers may turn out to be critically important after all).

It should be obvious that (given enough resources) we could continue circling around risks indefinitely, using assurance to identify and help address some risks on each lap without ever totally eliminating them as a whole. At the end of the day, even the most competent and paranoid risk-averse organizations and individuals have to accept some residual risks. Too bad! Life’s a bitch! Suck it up! 

Congratulations (or should I say commiserations?!) if you have read this far. I hope to have convinced you that there’s much more to assurance than checking various cyber or IT security controls, given the organization’s interests and objectives, the business context for this stuff. In addition to the technical and human aspects of infosec, there are broader governance, strategic and commercial implications of [information] risk management and assurance. 

Assurance is just a piece of a bigger puzzle. I've sketched the picture on the box.  Have I given you something interesting to mull over this weekend?

* Along with "Are we secure enough?" and "How are things going in information security?", these are classic examples of the naïve, vague, open-ended challenges that are occasionally tossed at us by colleagues, including senior management. Tempting as it is to offer equally vacuous, non-committal or dismissive responses, they can also indicate genuine concerns or doubts that we infosec pros should be willing and able, even keen, to address. If you are serious about doing just that, I recommend studying PRAGMATIC Security Metrics for further clues about how to frame the issues, gather relevant data and come up with more credible and convincing responses. But then I would, wouldn't I? Lance Hayden's IT Security Metrics and Doug Hubbard's How to Measure Anything are further valuable contributions to the field. This blog piece barely even scratches the surface. 

25 May 2021

Stepping on the cracks

Anyone seeking information security standards or guidance is spoilt for choice e.g.:

Studying these is hard work. Aside from simply keeping up with developments as they all evolve in parallel, taking in their distinct perspectives on essentially the same subject matter, plus often subtle differences in their use of language, consumes a lot of brain cycles.

Naturally there is a lot in common since they all cover [parts of] information security. Commonality and consensus reinforce the conventional approaches of 'generally accepted good security practices', and fair enough. Personally, however, I am fascinated by the differences in their structures, emphasis and content, reflecting divergent purposes and scopes, authors, histories and cultures.

Some focus on the paving slabs. I'm looking out for the cracks.  

ISACA's COBIT, for instance, emphasizes the business angle (satisfying the organization's objectives), whereas various certification standards, laws and regs emphasize the formalities of specification and compliance, addressing societal aspects of information security. At the same time, privacy concerns the rights and expectations of the individual. Three different perspectives.

The recently-published ISO/IEC TS 27570 "Privacy guidelines for smart cities" neatly illustrates the creativity required to tackle new information risks arising from innovation in the realm of IoT, AI and short range data communications between the proliferating portable, wearable and mobile IT devices now roaming our city streets. Likewise with the ongoing efforts to develop infosec standards for smart homes and offices. 

There are opportunities as well as risks here: striking the right balance between them is crucial to the long term success of the technologies, suppliers and human society. Spotting opportunities and responding proactively with sound, generally-applicable advice is an area where standards can really help. It's not easy though.

24 May 2021

News on ISO/IEC 27002

Today I’ve slogged my way through a stack of ~50 ISO/IEC JTC1/SC27 emails, updating a few ISO27001security.com pages here and there on ongoing standards activities.

The most significant thing to report is that the project to revise the 3rd (2013) edition of ISO/IEC 27002 appears on-track to reach final draft stage soon and will hopefully be approved this year, then published soon after (during 2022, I guess).  

The standard is being extensively restructured and updated, collating and addressing about 300 pages of comments from the national standards bodies at every stage.  The editorial team are doing an amazing job!  

The new ‘27002 structure will have the controls divided into four broad categories or types, i.e. technical, physical, people and ‘organizational’ [=other].

For comparison, the standard is currently structured into 14 security domains.

‘27002 will nearly double in size, going from 90 to 160 pages or so, thanks to new controls and additional advice including areas such as cloud and IoT security.  Virtually all of the original controls have been retained but most have been reworded for the new structure and current practice … and there’s an appendix mapping the old clauses to the new. 

27001 Annex A is being updated to reflect the changes, and a new version of that standard is due to be published in the 2nd quarter of 2022.  

I presume other standards based on ‘27002 (such as ‘27011 and ‘27799) will also be revised accordingly, at some point.

24 Apr 2021

Pre-shocks and after-shocks

Just a brief note today: it's a lovely sunny Saturday morning down here and I have Things To Do.

I'm currently enjoying another book by one of my favourite tech authors: Yossi Sheffi's The Resilient Enterprise*. As always, Yossi spins a good yarn, illustrating a strong and convincing argument with interesting, relevant examples leading to sound advice.

Specifically, I'm intrigued by the notion that major incidents/disasters leading to severe business disruption don't always come "out of the blue". Sometimes (often?), there are little warning signs, hints ahead of time about the impending crisis, little chances to look up from the daily grind and perhaps brace for impact. It ought to be possible to spot fragile supply chains, processes, systems and people, provided we are looking out for them ...   

Here in NZ at the moment, we are being treated to a public safety campaign using the analogy of meerkats, encouraging Kiwis to be constantly on the alert for signs of danger, thinking ahead and hopefully avoiding accidents rather than taking silly chances.  It makes sense. 

So I'm thinking perhaps we should update our template policies on incident reporting and/or incident management to encourage workers to report early warning signs, troubling concerns or situations early, before they turn into actual incidents (which also need to be reported, of course). It's a nice example of the value of security awareness.

* Less than ten bucks from Amazon in hardback, I see today. Even at full price, this book is a bargain, well worth it: now it's a steal! Grab it while it's hot!   

23 Apr 2021

KISS or optimise your ISO27k ISMS?

From time to time as we chat about scoping and designing Information Security Management Systems on the ISO27k Forum, someone naively suggests that we should Keep It Simple Stupid. After all, an ISO27k ISMS is, essentially, simply a way of managing information security, isn't it?

At face value, then, KISS makes sense.

In practice, however, factors that complicate matters for organizations designing, implementing and using their ISMSs include different:

  • Business contexts – different organization sizes, structures, maturities, resources, experiences, resilience, adaptability, industries etc.;
  • Types and significances of risks – different threats, vulnerabilities and impacts, different potential incidents of concern;
  • Understandings of ‘information’, ‘risk’ and ‘management’ etc. – different goals/objectives, constraints and opportunities, even within a given organization/management team (and sometimes even within someone’s head!);
  • Perspectives: the bungee jumper, bungee supplier and onlookers have markedly different appreciations of the same risks;
  • Ways of structuring things within the specifications of ‘27001, since individual managers and management teams have the latitude to approach things differently, making unique decisions based on their understandings, prejudices, objectives and priorities, choosing between approaches according to what they believe is best for the organization (and themselves?) at each point;
  • Pressures, expectations and assumptions by third parties … including suppliers, partners and customers, certification auditors and specialists just like us … as well as by insiders;
  • Dynamics: we are all on constantly shifting sands, experiencing/coping with and hopefully learning from situations, near-misses and incidents, adapting and coping with change, doing our best to predict and prepare for uncertain futures.

As with computer applications and many other things, simplicity obviously has a number of benefits, whereas complexity has a number of costs. Not so obviously, the opposite also applies: things can be over-simplified or over-complicated:

  • An over-simplified ISMS, if certifiable, will typically be scoped narrowly to manage a small subset of the organization's information risks (typically just its "cyber" risks, whatever that actually means), missing out on the added value that might be gained by managing a wider array of information risks in the same structured and systematic manner. A minimalist ISMS is likely to be relatively crude, perhaps little more than a paper tiger implemented purely for the sake of the compliance certificate rather than as a mechanism to manage information risks (an integrity failure?). Third parties who take an interest in the scope and other details of the ISMS may doubt the organization's commitment to information risk management, information security, governance, compliance etc., increasing their risks of relying on the certificate. There's more to this than ticking-the-box due diligence - accountability and compliance, for instance.
  • Conversely, an over-complicated ISMS may also be a paper tiger, this time a bureaucratic nightmare that bogs down the organization's recognition and response to information risks and incidents. It may take "forever" to get decisions made and implemented, outpaced by the ever-changing landscape of security threats and vulnerabilities, plus changes in the way the organization uses and depends on information. The ISMS is likely to be quite rigid and unresponsive - hardly a resilient, flexible or nimble approach. If the actual or perceived costs of operating the ISMS even vaguely approach the alleged benefits, guess what: managers are unlikely to support it fully, and will be looking hard for opportunities to cut funding, avoid further investment and generally bypass or undermine the red tape.

So, despite its superficial attraction, KISS involves either:

  • Addressing these and other complicating factors, which implies actively managing them in the course of designing, using and maintaining the ISMS, and accepting that simplicity per se may not be a sensible design goal; or
  • Ignoring them, pretending they don't exist or don't matter, turning a blind ear to them and hoping for the best.

Paradoxically, it is quite complicated and difficult to keep things simple! There are clearly several aspects to this, some that are very tough to ‘manage’ or ‘control’ and many that are interrelated.

I'm hinting at information risks associated with the governance, design and operation of an ISMS - information risks that can be addressed in the conventional manner, meaning whatever convention/s you prefer, perhaps the ISO27k approach, so (using this situation as a worked example) what does that entail?

  1. Establish context: for the purposes of the blog, the scope of this illustrative risk assessment is the design and governance of an ISMS, in the context of any organization setting out to apply ISO/IEC 27001 from scratch or reconsidering its approach for some reason (perhaps having just read something provocative on a blog ...).

  2. Identify viable information risks: I've given you a head start on that, above. With sufficient head-scratching, you can probably think of others, either variants/refinements of those I have noted or risks I have missed altogether. To get the most out of this exercise, don't skip this step. It's a chance to practice one of the trickier parts of information risk management.

  3. Analyze the risks: this step involves exploring the identified risks in more depth to gain a better understanding/appreciation of them. I've been 'analyzing' the risks informally as I identified and named them ... but you might like to think about them, perhaps consider the threats, vulnerabilities, potential incidents and the associated impacts. For example, what are the practical implications of an over-simplified or over-complicated ISMS? What are the advantages of getting it just right? How much latitude is there in that? Which are the most important aspects, the bits that must be done well, as opposed to those that don't really matter as much?
  4. Evaluate the risks: my personal preference is to draw up a PIG - a Probability vs. Impact Graph - then place each of the risks on the chart area according to your analysis and understanding of them on those two scales, relative to each other. Alternatively, I might just rank them linearly. If you prefer some other means of evaluating them (FAIR for example), fine, go ahead, knock yourself out. The real point is to get a handle on the risks, ideally quantifying them to help decide what, if anything, needs to be done about them, and how soon it ought to be done (i.e. priorities).

  5. Treating the risks involves at least two distinct steps: (5a) decide what to do, then (5b) do it. Supplementary activities may include justifying, planning, gaining authorization for and seeking resources to undertake the risk treatments, plus various management, monitoring and assurance activities to make sure things go to plan - and these extras are, themselves, risk-related. "Critical" controls typically deserve more focus and attention than relatively minor ones, for instance. Gaining sufficient assurance that critical controls are, in fact, working properly, and remain effective, is an oft-neglected step, in my experience.

  6. Communicate: the written and spoken words, notes, diagrams, PIGs, priority lists, control proposals, plans etc. produced in the course of this effort are handy for explaining what was done, what the thinking behind it was, and what was the outcome. It's worth a moment to figure out who needs to know about this stuff, what are the key messages, and where appropriate how to gain engagement or involvement with the ISMS work. There are yet more information risks in this area, too e.g. providing inaccurate, misleading or out of date information, communicating ineptly with the wrong people, and perhaps disclosing sensitive matters inappropriately.

  7. Monitoring and reviewing the risks, risk treatments etc. is (or rather, should be!) an integral part of managing the ISMS design and implementation project, and a routine part of governance and management once the ISMS is operational. The ISMS management reviews, internal audits and external/certification audits are clear examples of techniques to monitor and review, with the aim of identifying and dealing with any issues that arise, exploiting opportunities to improve and mature, and generally driving out the business value achieved by the ISMS. For me, ISMS metrics are an important part of this, and once more there are risks relating to measuring the wrong things, or measuring things wrong.

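As a rough illustration of steps 2 to 4, here's a hedged Python sketch that ranks a few of the illustrative ISMS risks from this worked example on crude probability and impact scales. The 0-1 scores are invented purely for demonstration; your own analysis will differ:

```python
# Hedged sketch of steps 2-4: a crude, linear alternative to plotting a
# Probability vs Impact Graph (PIG). Risk names come from the worked
# example above; the 0-1 scores are illustrative assumptions only.

risks = {
    "over-simplified ISMS (paper tiger)":      {"probability": 0.6, "impact": 0.7},
    "over-complicated ISMS (bureaucracy)":     {"probability": 0.5, "impact": 0.6},
    "invalid assumptions about ISMS adequacy": {"probability": 0.7, "impact": 0.8},
    "coverage gaps / blind spots":             {"probability": 0.4, "impact": 0.9},
}

# Rank by probability x impact - a simple priority score standing in for
# the relative placement of risks on the PIG's chart area.
ranked = sorted(risks.items(),
                key=lambda kv: kv[1]["probability"] * kv[1]["impact"],
                reverse=True)

for name, r in ranked:
    print(f"{r['probability'] * r['impact']:.2f}  {name}")
```

The point of the exercise is not the arithmetic but the conversation it forces: whichever risk tops the list is the one that deserves treatment decisions, resources and assurance first.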
So, there we have it. You may still feel that KISS is the obvious way to go, and good luck if you do. Personally, I believe I can improve on KISS to design an optimal ISMS that best satisfies the organization's business objectives, generating greater value. Would you like to put me to the test? Do get in touch: I'm sure I'll enjoy advising you ... at my usual bargain rate!

19 Apr 2021

Policy development process: phase 2

Today we completed and published a new "topic-specific" information security policy template on clear desk and screen.

Having previously considered information risks within the policy scope, writing the policy involved determining how to treat the risks and hence what information security or other controls are most appropriate.  

Here we drew on guidance from the ISO27k standards, plus other standards, advisories and good practices that we've picked up in the course of ~30 years in the field, working with a variety of industries and organizations - and that's an interesting part of the challenge of developing generic policy templates. Different organizations - even different business units, departments, offices or teams within a given organization - can take markedly different attitudes towards clear desk and screen. The most paranoid are obsessive about it, mandating controls that would be excessive and inappropriate for most others. Conversely, some are decidedly lax, to the point that information is (to my mind) distinctly and unnecessarily vulnerable to deliberate and accidental threats. We've picked out controls that we feel are commonplace, cost-effective and hence sensible for most organizations.

COVID19 raises another concern, namely how the risks and controls in this area vary between home offices or other non-corporate 'working from home' workplaces, compared to typical corporate offices and other workplaces. The variety of situations makes it tricky to develop a brief, general policy without delving into all the possibilities and specifics. The approach we've taken is to mention this aspect and recommend just a few key controls, hoping that workers will get the point. Customers can always customise the policy templates, for example adding explicit restrictions for particular types of information, relaxing things under certain conditions, or beefing-up the monitoring, oversight and compliance controls that accompany the policies - which is yet another complicating factor: the business context for information security policies goes beyond the written words into how they are used and mandated in practice.

Condensing the topic into just a few pages of good-practice guidance, well written in a motivational yet generic manner that forms a valuable part of the SecAware policy suite, explains the hours we've sunk into the research and writing. Let's hope it's a best seller!



13 Apr 2021

Policy development process: phase 1

On Sunday I blogged about preparing four new 'topic-specific' information security policy templates for SecAware. Today I'm writing about the process of preparing a policy template.

First of all, the fact that I have four titles means I already have a rough idea of what the policies are going to cover (yes, there's a phase zero). 'Capacity and performance management', for instance, is one requested by a customer - and fair enough. As I said on Sunday, this is a legitimate information risk and security issue with implications for confidentiality and integrity as well as the obvious availability of information. In my professional opinion, the issue is sufficiently significant to justify senior management's concern, engagement and consideration (at least). Formulating and drafting a policy is one way to crystallise the topic in a form that can be discussed by management, hopefully leading to decisions about what the organisation should do. It's a prompt to action.

At this phase in the drafting process, I am focused on explaining things to senior management in such a way that they understand the topic area, take an interest, think about it, and accept that it is worth determining rules in this area. The most direct way I know of gaining their understanding and interest is to describe the matter 'in business terms'. Why does 'capacity and performance management' matter to the business? What are the strategic and operational implications? More specifically, what are the associated information risks? What kinds of incident involving inadequate capacity and performance can adversely affect the organization?

Answering such questions is quite tough for generic policy templates lacking the specific business context of a given organisation or industry, so we encourage customers to customise the policy materials to suit their situations. For instance:

  • An IT/cloud service company would probably emphasise the need to maintain adequate IT capacity and performance for its clients and for its own business operations, elaborating on the associated IT/cyber risks.
  • A healthcare company could mention health-related risk examples where delays in furnishing critical information to the workers who need it could jeopardise treatments and critical care.
  • A small business might point out the risks to availability of its key workers, and the business implications of losing its people (and their invaluable knowledge and experience i.e. information assets) due to illness/disease, resignation or retirement. COVID is a very topical illustration.
  • An accountancy or law firm could focus on avoiding issues caused by late or incomplete information - perhaps even discussing the delicate balance between those two aspects (e.g. there are business situations where timeliness trumps accuracy, and vice versa).

The policy templates briefly discuss general risks and fundamental principles in order to orient customers in the conceptual space, stimulating them (we hope) to think of situations or scenarios that are relevant to their organisations, their businesses or industries, and hence to their management.

'Briefly' is an important point: the discussion in this blog piece is already lengthier and more involved than would be appropriate for the background or introductory section of a typical policy template. It's easy for someone as passionate and opinionated as me to waffle on around the policy topic area, not so easy to write succinctly and remain focused ... which makes policy development a surprisingly slow, laborious and hence costly process, given that the finished article may be only 3 or 4 pages. It's not simply a matter of wordsmithing: distilling any topic down to its essentials takes research and consideration. What must be included, and what can we afford to leave out? Which specific angles will stimulate senior managers to understand and accept the premise that 'something must be done'?

OK, that's it for today. Must press on - policy templates to write! I'll expand on the next phase of the policy development process soon - namely, how we flesh out the 'something that must be done' into explicit policy statements.

11 Apr 2021

Infosec policy development

We're currently preparing some new information risk and security policies for SecAware.com.  It's hard to find gaps in the suite of ~80 policy templates already on sale (!) but we're working on these four additions:

  1. Capacity and performance management: usually, an organization's capacity for information processing is managed by specialists in IT and HR.  They help general management optimise and stay on top of information processing performance too.  If capacity is insufficient and/or performance drops, that obviously affects the availability of information ... but it can harm the quality/integrity and may lead to changes that compromise confidentiality, making this an information security issue.  The controls in this policy will include engineering, performance monitoring, analysis/projection and flexibility, with the aim of increasing the organisation's resilience. It's not quite as simple as 'moving to the cloud', although that may be part of the approach.

  2. Information transfer: disclosing/sharing information with, and obtaining information from, third party organisations and individuals is so commonplace, so routine, that we rarely even think about it.  This policy will outline the associated information risks, mitigating controls and other relevant approaches.

  3. Vulnerability disclosure: what should the organisation do if someone notifies it of vulnerabilities or other issues in its information systems, websites, apps and processes? Should there be mechanisms in place to facilitate, even encourage notification? How should issues be addressed?  How does this relate to penetration testing, incident management and assurance?  Lots of questions to get our teeth into!

  4. Clear desks and screens: this is such a basic, self-evident information security issue that it hardly seems worth formulating a policy. However, in the absence of policy and with no 'official' guidance, some workers may not appreciate the issue or may be too lazy/careless to do the right thing. These days, with so many people working from home, the management oversight and peer pressure typical in corporate office settings are weak or non-existent, so maybe it is worth strengthening the controls by reminding workers to tidy up their workplaces and log off.  It's banal, not hard!
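To make the 'analysis/projection' control in item 1 a little more concrete, here's a minimal sketch: given periodic utilisation samples (say, monthly disk or CPU figures), fit a linear trend and estimate how many periods remain before a capacity threshold is breached. The sample data and the 80% threshold are illustrative assumptions, not part of the policy template.

```python
def periods_until_threshold(samples, threshold=0.8):
    """Estimate periods until utilisation crosses `threshold`.

    `samples` is a sequence of utilisation fractions (0.0-1.0), one per
    period. Returns None if the trend is flat or falling, 0 if the
    threshold is already breached, else the projected number of periods
    of headroom remaining.
    """
    n = len(samples)
    if n == 0:
        raise ValueError("need at least one sample")
    if samples[-1] >= threshold:
        return 0  # already over the line
    # Least-squares slope and intercept of utilisation vs. period index.
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    if slope <= 0:
        return None  # no growth trend, hence no projected breach
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # period index at breach
    return max(0, round(crossing) - (n - 1))

# Example: utilisation creeping up roughly 5% per month.
history = [0.50, 0.55, 0.61, 0.65, 0.70]
print(periods_until_threshold(history))  # → 2 (about two months of headroom)
```

A real implementation would pull samples from a monitoring system and feed breaches into alerting, but the point for the policy is simply that projection, not just measurement, is part of the control.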

The next release of ISO/IEC 27002 will call these "topic-specific information security policies", focusing on particular issues and/or groups of people in some detail, whereas the organisation's "information security policy" is an overarching, general, high-level framework laying out (among other things) the fundamental principles. Our corporate information security policy template is a mature product that already includes a set of principles, so it may not need changes to comply with the updated ISO/IEC 27002 when published later this year or early next ... but we'll seize the opportunity to review it anyway.
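On the vulnerability disclosure question (item 3 above), one commonplace, lightweight mechanism for facilitating notification is publishing a security.txt file under the draft security.txt convention (securitytxt.org), so well-meaning reporters know where to send findings. The addresses below are placeholders, of course:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2022-12-31T23:59:00.000Z
Preferred-Languages: en
Policy: https://example.com/vulnerability-disclosure-policy
```

The Policy line is where the organisation's disclosure policy - the very document we're templating - would be published.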