Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Mar 21, 2018

NBlog March 21 - down to Earth

Since "assurance" is a fairly obscure concept, April's awareness materials inevitably have to explain it in terms simple enough for people to grasp, without glossing over things to the point that nothing meaningful registers.

Tricky that!

Harder still, our purpose for raising this at all is to emphasize the relevance of assurance to information security - another conceptual area that we're trying hard to make less obscure!

The approach we've come up with is to draw parallels between assurance for information security and assurance for safety. Safety is clearly something that matters. People 'get it' without the need to spell it out in words of one syllabub. With just a little gentle help, they understand why safety testing, for instance, is necessary, and why safety tags and certificates mean something worthwhile - valuable in fact ... and that gives us a link between assurance and business.

For awareness purposes, we'll be using bungy-jumping as a safety-, business- and assurance-related situation that catches attention and sparks imaginations. It's something risky that people can relate to, regardless of whether they have personally done it or not. You could say it is well-grounded. Aside from the emotional connection, it has the added bonus of striking images - great for seminar slides and to break up the written briefings.

We still face the challenge of linking from there across to information security, and that's what the bulk of the awareness materials address, covering assurance in the context of information risk, security, integrity, testing, auditing, trust and more - quite a swathe of relevant issues to discuss in fact. 

Mar 20, 2018

NBlog March 20½ - Facebook assures

Facebook is facing a crisis of confidence on stockmarkets already jittery about interest rates and over-priced tech stocks, thanks to a privacy breach with overtones of political interference:
"Facebook fell as much as 8.1 percent to $170.06 on Monday in New York, wiping out all of the year's gains so far. That marked the biggest intraday drop since August 2015. Facebook said Friday that the data mining company Cambridge Analytica improperly obtained data on some of its users, and that it had suspended Cambridge while it investigates. Facebook said the company obtained data from 270,000 people who downloaded a purported research app that was described as a personality test. The New York Times and the Guardian reported that Cambridge was able to tap the profiles of more than 50 million Facebook users without their permission. Facebook first learned of the breach more than two years ago but hadn't disclosed it. A British legislator said Facebook had misled officials while Senator Amy Klobuchar of Minnesota said Facebook CEO Mark Zuckerberg should testify before the Senate Judiciary Committee ... Daniel Ives, chief strategy officer and head of technology research for GBH Insights, said this is a crisis for Facebook, and it will have to work hard to reassure users, investors and governments."
[NZ Herald, 20th March 2018, emphasis added] 

Attempting to halt and ideally reverse the decline in third-party trust in the organization following a major incident is tough, and expensive. Can anyone believe its claims and assurances in future? Will they inspire the same level of confidence that they might once have done? What additional hoops will they be expected to jump through in future to reassure others? Will they ever rebuild their credibility and reputation, or is this incident going to haunt them in perpetuity? A lot depends on how the incident is handled.

Facebook and its management will, I guess, spend large to scrape through the crisis with the usual flurry of denials, excuses, explanations/justifications and apologies. Lawyers will profit. Heads may roll, and the suspended relationship with Cambridge Analytica will be 'strained', perhaps to breaking point.

But what of the ongoing relationship with "users, investors and governments"? I wonder if Facebook had a strategy in place to 'reassure' them following a privacy breach or some other major incident? Does it have a business continuity plan for this eventuality? We will see how it plays out over the next few days and weeks, perhaps months given the political and regulatory ramifications.

I'm looking forward to finding out, in due course, whether the controls imposed by GDPR would have helped avoid or mitigate this incident. It's an obvious line of inquiry. The first hints have already emerged with claims that it wasn't a theft of personal information since users gave their permission to share it - but was that a fully-informed free choice, or were they hoodwinked and pressured into it? 

Meanwhile I'm contemplating the lessons to be learned, and wondering if we might use this incident as well as, or instead of, dieselgate as a case study for April's assurance module.

NBlog March 20 - a critique of CIS netsec metrics

Perusing a CIS paper on metrics for their newly-updated recommended network security controls (version 7), I'm struck by several things all at once, a veritable rash of issues.

Before reading on, please at least take a quick squint at the CIS paper. See what you see. Think what you think. You'll get more out of this blog piece if you've done your homework first. You may well disagree with me, and we can talk about that. That way, I'll get more out of this blog piece too!

[Pause while you browse the CIS paper on metrics]

[Further pause while you get your thoughts in order]

OK, here's my take on it:
  1. The recommended controls are numerous, specific and detailed cybersecurity stuff, hence the corresponding metrics are equally granular since the CIS team has evidently decided that each control should be measured individually ... whereas, in practice, I'd be more inclined to take the metrics up a level or three since my main interest in metrics is to make decisions in order to manage things, not to do them nor to 'prove' that someone is doing something. I'm not entirely sure even the network security wonks would welcome or appreciate such detailed metrics: they should already know how they are doing, pretty much, without the need to measure and prove it (to themselves!). Management, on the other hand, could do with something more than just the tech guys telling them "Oh it's all OK!  We're using the CIS guidance!  Nothing to see here - move along!" or "Of course we are terribly insecure: we've told you a million times we need more resources!". I contend that overview/status or maturity metrics would be far more useful for management. [I'll circle back to that point at the end. Skip the rest if this is all too much.]

  2. I guess if all the individual metrics were generated, it would be possible to generate an overall score simply by averaging them (taking the mean and maybe the variance too since that relates to consistency). That could be used as a crude indication of the status, and a lever to drive up implementation, but it would be better to at least group the detailed metrics into categories (perhaps relating to the categories of control) and report each category separately, providing a better indication of where the strengths and weaknesses lie. However, I'm still troubled by the first part: "If all the individual metrics were generated" implies a very tedious and potentially quite costly measurement process. Someone familiar with the organization's network security controls (a competent IT auditor, for instance, or consultant - a reasonably independent, unbiased, diligent and intelligent person anyway) ought to be able to identify the main strengths and weaknesses directly, categorize them, measure and report them, and offer some suggestions on how to address the issues, without the tedium. I figure it's better for the network security pros to secure the network than to generate reams of metrics of dubious value. [More on this below]
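To make the grouping idea concrete, here's a minimal Python sketch - the category names and per-control scores are invented for illustration, not taken from the CIS paper:

```python
from statistics import mean, pvariance

# Hypothetical per-control implementation scores (0-1), grouped by
# control category. Categories and values are made up for illustration.
scores = {
    "Inventory & Control of Assets": [0.9, 0.8, 0.85, 0.7],
    "Continuous Vulnerability Mgmt": [0.4, 0.5, 0.3],
    "Security Awareness & Training": [0.6, 0.9, 0.2, 0.7],
}

for category, values in scores.items():
    # Mean indicates the level; variance hints at (in)consistency
    # within the category - both worth reporting separately.
    print(f"{category}: mean={mean(values):.2f}, variance={pvariance(values):.3f}")

# The crude single-number version, for comparison
overall = mean(v for values in scores.values() for v in values)
print(f"Overall (crude average): {overall:.2f}")
```

Reporting the per-category lines rather than the single overall number is the point: the weak category stands out immediately instead of being averaged away.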

  3. I'm sure most of us would challenge at least some of the CIS recommended controls: they mean well but there are situations where the controls won't work out in practice, or they go too far or not far enough, or there are other approaches not considered, or the wording isn't right, or ... well, let's just say there are lots of potential issues way down there in the weeds, and that's bound to be an issue with such prescriptive, detailed, "do this to be secure" check-the-box approaches (I know, I know, I'm exaggerating for effect). Plucking but one example from my own specialism, control 17.4 says "Update Awareness Content Frequently - Ensure that the organization's security awareness program is updated frequently (at least annually) to address new technologies, threats, standards and business requirements."  Updating awareness and training program content to reflect the ever-changing information risk landscape is good practice, I agree, but annually is definitely not, especially if that also implies that it is only necessary to check for changes in the information risks on an annual basis. Hello! Wakey wakey! There is new stuff happening every hour, every day, certainly every few weeks, with potentially significant implications that ought to be identified, evaluated and appropriately responded-to, promptly. Annual updates are way too slow, a long way short of "frequent" to use their word. Furthermore, the metric for 17.4 is equally misleading: "Has the organization ensured that the organization's security awareness program is updated frequently (at least annually) to address new technologies, threats, standards and business requirements: yes/no?"  
Using their metric, any sort of 'update' to the awareness program that happens just once a year justifies answering yes - ticking the box - but to me (as an awareness specialist) that situation would be woefully inadequate, indicative of an organization that patently does not understand the purpose and value of security awareness and training. In that specific example, I would suggest that the frequency of meaningful reviews and updates to the information risk profile and the awareness and training program would be a much more useful metric - two in fact since each aspect can be measured separately and they may not align (hinting at a third metric!). 

  4. The underlying problem is that we could have much the same discussion on almost every control and metric in their list. How many are there in total? Over 100, so that's roughly 100 discussions. Pains will be taken! Set aside a good few hours for that, easily half a day to a whole day. You could argue that we would end up with a much better appreciation of the controls and the metrics ... but I would counter that there are better ways to figure out worthwhile metrics than to assess/measure and report the implementation status of every individual control. That half a day or so could be used more productively.

  5. My suggestion to use 'frequency of risk and awareness updates' reminds me of the concern you raised, Walt. Binary metrics are crude while analog metrics are more indicative of the true status, particularly in boundary cases where a straight yes or no does not tell the whole story, and can be quite misleading (e.g. as I indicated above). Binary metrics and crude checklists are especially problematic if the metrician has skin in the game (which would be true if the CIS network security metrics were being measured and reported by network security pros), and if the outcome of the measurement may reflect badly or well on them personally. The correct answer is of course "Yes" if the situation clearly and completely falls into the "Yes" criterion, but what if the situation is not quite so clear-cut? What if the appropriate, honest answer would be "Mostly yes, but slightly no - there are some issues in this area"? Guess what: if "Yes" leads to fame and fortune, then "No" doesn't even get a look-in! In extreme cases, people have been known to ignore all the "No" situations, picking out a single "Yes" example and using that exception, that outlier, to justify ticking the "Yes" box. This is of course an information risk, a measurement bias, potentially a significant concern depending on how the metrics are going to be used. The recipient and user of the metrics can counter the bias to some extent if they are aware of it and so inclined, but then we're really no better off than if they just discussed and assessed the situation without binary metrics. If they are unaware of the bias and unwisely trusting of the metric, or if they too are biased (e.g. an IT manager reporting to the Exec Team on the network security status, using the 'facts' reported up the line by the network security team as their get-out-of-jail-free card - plausible deniability if it turns out to be a tissue of lies), then all bets are off. 
There are situations where such biased metrics can be totally counterproductive - leaving us worse off than if the metrics did not exist (consider the VW emissions-testing scandal, plucking a random example out of the air, one that I brought up yesterday in relation to assurance).

  6. Furthermore, I have concerns about the CIS version of an analog metric in this document. Someone at CIS has clearly been on the 'six sigma' training, drunk the Kool-Aid, and transferred the concept directly to all the analog metrics, with no apparent effort to adapt it to the situation. Every CIS analog metric in the paper has the identical form with identical criteria for the six levels: 69% or less; 31% or less; 6.7% or less; 0.62% or less; 0.023% or less; 0.00034% or less. That categorization or gradation really doesn't make sense in every case, leading to inconsistencies from one metric or one control to the next. I challenge anyone to determine and prove the distinction between the upper three values on their scale for any real-world network security measurement in the table, at least not without further measurement data (which sort of defeats the purpose) ... so despite the appearance of scientific rigour, the measurement values are at least partially arbitrary and subjective anyway. Trying to shoe-horn the measurement of a fair variety of network security control implementation statuses into the same awkward set of values is not helpful. For me, it betrays a lack of fundamental understanding of six-sigma, continuous improvement and process maturity. Frankly, it's a mess.
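Incidentally, those six thresholds look like the standard six-sigma long-term defect rates (normal tail probabilities with the conventional 1.5-sigma process shift) copied straight across, which is easy to check in a few lines:

```python
from math import erf, sqrt

def defect_rate(sigma_level: float, shift: float = 1.5) -> float:
    """Long-term defect proportion for a given sigma level,
    using the conventional 1.5-sigma process shift."""
    z = sigma_level - shift
    phi = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    return 1 - phi                      # upper-tail probability

for level in range(1, 7):
    print(f"{level} sigma: {defect_rate(level):.5%} defects or less")
# ... which reproduces the CIS thresholds:
# ~69%, ~31%, ~6.7%, ~0.62%, ~0.023%, ~0.00034%
```

That explains where the numbers came from, but not why a manufacturing defect-rate scale should fit every network security control equally well.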

  7. Returning to the idea of averaging scores to generate overall ratings, that approach is technically invalid if the individual values being averaged are not equivalent - which they aren't, for the reasons given above. It seems to me The Big Thing that's missing is some appreciation and recognition of the differing importance or value of each control. If all the controls were weighted, perhaps ranked or at least categorized (e.g. vital, important, recommended, suggested, optional), there would be a better basis for generating an overall or section-by-section score. [In fact, the process of determining the weightings or ranking or categorization would itself generate valuable insight ... a bonus outcome from designing better security metrics! The CIS controls are supposedly 'prioritized' so it's a shame that approach didn't filter down to the metrics paper.] One thing we could do, for example, is ignore all except the vital controls on a first pass: getting those properly specified, fully implemented, operational, actively managed and maintained would be an excellent starting point for an organization that has no clue about what it ought to be doing in this space. Next pass, add in the important controls. Lather-wash-rinse-repeat ...
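As a rough sketch of the weighting idea (the weights, control IDs and scores below are invented for illustration, not CIS figures):

```python
# Hypothetical importance weights per category of control
WEIGHTS = {"vital": 5, "important": 3, "recommended": 2, "suggested": 1}

# (control id, category, implementation score 0-1) - invented examples
controls = [
    ("1.1", "vital", 0.9),
    ("1.2", "vital", 0.4),
    ("2.1", "important", 0.8),
    ("3.1", "recommended", 1.0),
    ("3.2", "suggested", 0.2),
]

def weighted_score(controls):
    """Weighted average: a weak vital control drags the score down
    far more than a weak 'suggested' control ever could."""
    total_weight = sum(WEIGHTS[cat] for _, cat, _ in controls)
    return sum(WEIGHTS[cat] * score for _, cat, score in controls) / total_weight

print(f"Weighted score: {weighted_score(controls):.2f}")
```

The interesting output is less the number itself than the arguments over the weights: deciding which controls count as 'vital' is where the insight lies.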

Overall the CIS paper, and bottom-up metrics in general, generate plenty of data but precious little insight - quantity not quality.

Earlier I hinted that as well as their use for decision-making and managing stuff, metrics are sometimes valued as a way of ducking accountability and reinforcing biases. I trust anyone reading this blog regularly knows where I stand on that.  Integrity is a core value. 'Nuff said.

If I were asked to design a set of network security metrics, I would much prefer a top-down approach (e.g. the goal-question-metric method favoured by Lance Hayden, or a process/organizational maturity metric of my own invention), either instead of, or as well as, the bottom-up control implementation status approach and other information sources (e.g. there is likely to be a fast-flowing stream of measurement data from assorted network security boxes and processes). 

Perhaps these alternatives are complementary? I guess it depends on how they are used - not just how they are meant or designed to be used, but what actually happens in practice: any metric (even a good one, carefully designed, competently measured, analyzed and reported with integrity) can be plucked out of context to take on a life of its own as people clutch at data straws that reinforce their own biases and push their own agendas. See any marketing-led "survey" for clear evidence of that! 

Mar 19, 2018

NBlog March 19 - a thinking day

Today was a thinking day - time away from the office doing Other Stuff meant my reluctant separation from the keyboard and a chance to mull over the awareness materials for April, free of distractions.

I returned sufficiently refreshed to catch up with emails and press ahead with the writing, and inspired enough to come up with this little gem:

I say 'gem' because that single (albeit convoluted) statement helps us explain and focus the awareness module.  We will explain assurance in terms of confidence, integrity, trust, proof etc. and discuss the activities that get us to that happy place, or not as the case may be. 

Discovering any problems that need to be addressed is an important and obvious part of various forms of testing, but so too is giving the all-clear. Gaining assurance, either way, is the real goal, supporting information risk management: if you discover, later, that the testing was inept, inadequate, biased, skipped or otherwise lame, the whole thing is devalued, and worse still the practice of testing is undermined as an assurance measure. 

Take for example dieselgate - the diesel emissions-testing scandal involving Volkswagen vehicles: in essence, some bright spark at VW allegedly came up with a cunning scheme to defeat the emissions testing lab by switching the vehicle's computer control unit to a special mode when it detected the conditions indicating a test in progress, reverting to a less environmentally-friendly mode for normal driving. Ethics and legality aside, the scandal brought a measure of doubt onto the testing regime, and yet the trick was (eventually) discovered and the perpetrators uncloaked, bringing greater disrepute to VW. 

Hmmm, that little story might make an interesting case study scenario for the module. If it makes people think and talk animatedly about the information risk aspects arising (assurance in particular but there are other relevant issues too), that's a big awareness win right there. Job's a good 'un. Thank you and good night.

Mar 18, 2018

NBlog March 18 - building a sausage machine

We've been engaged to write a series of awareness materials on a variety of information security topics - a specific type of awareness product that we haven't produced before. So the initial part of the assignment is to clarify what the client wants, come up with and talk through our options, and draft the first one. 

That's my weekend spoken for!

Once the first one is discussed, revised and agreed, stage two will be to refine the production process so future products will be easier and quicker to generate, better for the client and better for us.

Like sausages. We're building a sausage machine. We'll plug in a topic, turn the handle and extrude a perfectly-formed sausage every time.

Sounds fine in theory but on past experience that's not quite how it will work out, for two key reasons:
  1. Since the topics vary, the content of the awareness product will vary, naturally ... but so too may the structure and perhaps the writing style. Awareness content on, say, viruses or passwords is conceptually and practically a bit different to that on, say, privacy or cybersecurity. The breadth and depth of cover affects how we write, so the machine needs some 'give'. It can't be too rigid.
  2. As the string of sausages gets ever longer, we will continually refine the machine and think up new wrinkles ... which may even mean going back and reforming some of the early products. It's possible an entirely new approach may emerge as we progress, but more likely it will evolve and mature gradually. What starts out producing a string of plain beef sausages may end up churning out Moroccan lamb and mint - still definitely sausages but different flavours. 
Knowing that, now, the sausage machine has to be capable of being modified to some extent in the future, within certain constraints since the customer expects a reasonably consistent product. Some features being designed into the process today will remain in a month or three, while others will evaporate to be replaced by others and we're cool with that. Hopefully the client will be too!

In more practical terms, the sausage machine itself consists of a document template with defined styles in MS Word. The template and styles can be tweaked as we go along. While the production process is presently undocumented, it is sufficiently close to our normal everyday activities that there's really no need to formalise it: we are well practiced at this stuff, running on auto. It helps that the template and styles are self-evident.

If you are 'doing' awareness with a planned series of awareness items or activities, I'd encourage you to adopt a similar, structured and planned sausage-machine approach, investing some effort up front into designing the production machinery and process. It's an obvious way to gain consistency and take advantage of continuous improvement. Once the production line is running sweetly, it lets you focus on the meat of the topic - the creative content - rather than on the production process. While it may need care and maintenance from time to time, the mechanistic process makes it easier to keep on going.

Mar 17, 2018

NBlog March 17 - assurance functions

Of all the typical corporate departments or functions or teams, which have an assurance role?
  • Internal Audit - audits are all about gaining and providing assurance;
  • Quality Assurance plus related functions such as Product Assurance, Quality Control, Testing and Final Inspection, Statistical Process Control and others;
  • Risk Management - because assurance reduces uncertainty and hence risk;
  • IT, Information Management, Information Risk and Security Management etc. - for example, ensuring the integrity of information increases assurance, and software quality assurance is a big issue;
  • Information Security Management - which is of course why this is an information security awareness topic;
  • Business Continuity Management - who need assurance on everything business-critical;
  • Health and Safety - who need assurance on everything safety-critical;
  • Production/Operations - who use QA, SPC and many other techniques to ensure the quality and reliability of production methods, processes and products;
  • Sales and Marketing who seek to assure and reassure prospects and customers that the organization is a quality outfit producing reliable, high-quality products, building trust in the brands and maintaining a strong reputation;
  • Procurement - who need assurance about the raw materials, goods and services offered and provided to the organization, and about the suppliers in a more general way (e.g. will they deliver orders within specification, on time, reliably? Will the relationship and transactions be worry-free?);
  • Finance - who absolutely need to ensure the integrity of financial information, and who perform numerous assurance measures to achieve and guarantee that;
  • Human Resources - who seek to reassure management that the organization is finding and recruiting the best candidates and making the best of its people; 
  • Legal/Compliance - need to be sure that the organization complies sufficiently with external obligations to avoid penalties, and that internal obligations are sufficiently fulfilled to achieve business advantage;
  • Every other department, function or team that depends on information, or that delivers important information to others ... in other words, everyone;
  • Management as a whole - for instance governance and oversight are both strongly assurance-related, and most metrics are designed to assure recipients that everything is on-track, going to plan, working well etc.;
  • The workforce as a whole - since everyone needs to know they can depend on their jobs and livelihoods.
Looking further afield, outside the organization, assurance is also of concern to third-parties such as:
  • External Audit and similar external inspection functions such as certification auditors for ISO27k, PCI;
  • Customers - who need to know the products they are buying will deliver the benefits promised and anticipated;
  • Suppliers - who need to know they will be paid and would like to rely on future business;
  • Owners of the organization, with an obvious interest in its health and prosperity;
  • Various authorities, the tax man for instance;
  • Society at large - since discovering something unexpected and untoward about any organization is generally shocking.
So it turns out that assurance is a widespread issue, stretching well beyond the obvious assurance-related functions such as Audit and QA ... which makes it a surprisingly strong candidate for security awareness purposes. Although we haven't produced an assurance awareness module before, we've covered integrity, audit, oversight and other related things. This time around it's an opportunity to focus in on and explore the assurance element in more depth, while once again reinforcing the core security awareness messages on integrity, trust, risk, control etc.

The lists of corporate functions and third-parties above will make their way into the train-the-trainer guide in April's awareness module, encouraging the security awareness people to figure out who they might contact within the organization for help with their awareness efforts, and for genuine examples, incidents or business situations where assurance is crucial. The external interested parties might also be of interest: just imagine the awareness impact of an important customer representative talking honestly about the value of being able to trust in and depend upon the organization, and the negative impact of quality or other issues.

Mar 16, 2018

NBlog March 16 - word games

The assurance word-art tick (or boot?) that we created and blogged about a few days ago is still inspiring us. In particular, some assurance-related words hint at slightly different aspects of the same core concept:
  • Assure
  • Assurance
  • Assured
  • Assuredly
  • Ensure
  • Ensured
  • Insure
  • Insurance
  • Reassure
Along with the tongue-in-cheek terms 'man-sure' and 'lady-sure', they are all based on 'sure', being a statement of certainty and confidence.

Insure is interesting: in American English, I believe it means the same as ensure in the Queen's English (i.e. being certain of something), but in the Queen's English, insure only relates to the practice of insurance, when some third-party offers indemnity against particular risks.

Assured, ensured and insured are not merely the past tenses of the respective verbs, but have slightly different implications or meanings:
  • If someone is assured of something, they have somehow been convinced and accept it as true. They internalize and no longer question or doubt their belief to the same extent as if they were not assured of it. They rest assured, generally as a result of a third party providing them the assurance if they don't convince themselves;
  • Someone who ensured something made certain it was so or at least made the effort to do so (they don't always succeed!). This often means passing responsibility to a third-party who they believe will do as required;
  • In the Queen's English, a company that insured something provided the indemnity (insurance cover) to whoever had it insured. In American English, the previous bullet applies, presumably.
Reassure is different again, with connotations of comfort and relief when doubt is dispelled.

The point of this ramble (finally!) is that there are some interesting subtleties to assurance that we can use in the awareness and training materials to get people thinking about it and maybe re-evaluating their own beliefs. The words aren't the intriguing bit so much as the concept, but the jumble of words is a way to get the brain cells in gear.

Mar 15, 2018

NBlog March 15 - scheduling audits

One type of assurance is audit, hence auditing, and IT auditing in particular, is very much in scope for our next security awareness module.

By coincidence, yesterday on the ISO27k Forum, the topic of 'security audit schedules' came up.

An audit schedule is a schedule of audits, in simple terms a diary sheet listing the audits you are planning to do. The usual way to prepare an audit schedule is risk-based and resource-constrained. Here's an outline (!) of the planning process to set you thinking, with a sprinkling of Hinson tips:

  1. Figure out all the things that might be worth auditing within your scope (the 'audit universe') and list them out. Brainstorm (individually and if you can with a small group of brainstormers), look at the ISMS scope, look for problem areas and concerns, look at incident records and findings from previous audits, reviews and other things. Mind map if that helps ... then write them all down into a linear list.
  2. Assess the associated information risks, at a high level, to rank the rough list of potential audits by risk - riskiest areas at the top (roughly at first - 'high/medium/low' risk categories would probably do - not least because until the audit work commences, it's hard to know what the risks really are). 
  3. Guess how much time and effort each audit would take (roughly at first - 'big/medium/small' categories would probably do - again, this will change in practice but you have to start your journey of discovery with a first step).
  4. In conjunction with other colleagues, meddle around with the wording and purposes of the potential audits, taking account of the business value (e.g. particular audits on the list that would be fantastic 'must-do' audits vs audits that would be extraordinarily difficult or pointless with little prospect of achieving real change). If it helps, split up audits that are too big to handle, and combine or blend-in tiddlers that are hardly worth running separately. Make notes on any fixed constraints (e.g. parts of the business cycle when audits would be needed, or would be problematic; and dependencies such as pre/prep-work audits to be followed by in-depth audits to explore problem areas found earlier, plus audits that are linked to IT system/service implementations, mergers, compliance deadlines etc.).
  5. Sketch out the scopes and purposes of the audits, outline the risks they address, scribble notes to be used by the auditors and auditee/clients when it comes to detailed audit planning and authorization of individual audits.
  6. Starting at the top of the list, add a column for a cumulative running total of the resources needed (e.g. with an estimated 20 man-days required for audit 1, 10 man-days for audit 2, 25 man-days for audit 3, the cumulative resource column shows 20 then 30 then 55 man-days ...).
  7. If you have an audit person or team already assigned, figure out how many man-days of audit resources you have in the year/s ahead. Hinson tip: be conservative. It's never a problem to find more work to do, but it's always a problem to try to squeeze too much out of the person/team so that tempers fray and quality suffers. Be sure to leave some unassigned resources to cope with 'special investigations' (e.g. fraud work), time for audit planning and admin, time for team-building, training and personal development, and (trust me) plenty of contingency for jobs that run over and extra must-do jobs that materialize out of nowhere during the planned period. Draw a pencil line on the list under the audits you can complete with the available resources, and those you probably cannot do. Add a grey area (above the line!) to show that there is significant uncertainty in the plan. Tidy-up the rough plan so it is not quite so rough - presentable even.
  8. Present and discuss the outline plan with senior management. Use your prep-work and notes to outline and explain/justify the audit jobs towards the top of the list, or any stand-outs of particular note. Impress on them that this is not some random noise but there has been thought put into it. Negotiate the contents (audits planned, scopes and purposes, resources needed, resources available, contingency remaining) until you reach a tentative settlement, firming-up your audit schedule. If they insist on moving your pencil line down the list to complete more audits, then insist on the additional resources necessary (more auditors - employees or contractors or secondees) ... and preferably put it down in writing (make sure it is minuted)! Hinson tip: although there will undoubtedly be pressure, stick to your guns on the man-days you estimated are required for each audit. Do not arbitrarily cut back the resources for audits unless they agree to reduce the scope of work accordingly ("minute that, please"): do not allow the quality of audit work to be compromised - together you are investing in assurance, and the reputation of the audit function is an extremely important part of that. Hinson tip: you have some leeway on the timing, title and detailed scope of each audit, but do not chop planned audits from the list without putting up a spirited defense. This is where your prep-work and notes come into play. Play hard-ball if a manager seems determined to chop out an audit in their area: why is that? Do they have something to hide? Or are there genuine business reasons that mean the planned audit would not help the organization? Under extreme pressure to chop a legitimate audit off the plan, 'take the discussion off-line' and work privately with the manager concerned, plus their manager, to evaluate the situation and reassess the risks - or perhaps ask the management team as a whole to make the decision there and then.
As a last resort, try to convince the CEO or Chairman of the Board that, in your professional judgment, they need additional assurance in that specific area. And if the final answer is "Chop it!", get that in writing.
  9. Turn the list into a schedule that works, in theory. This step is tricky as it involves juggling audits, resources, objectives, dependencies and constraints (e.g. an internal audit to make sure your ISMS is running sweetly before a scheduled external ISMS certification or surveillance audit obviously has a fixed completion date, so work back from there ... and add slack time/contingency too). Involve the team and colleagues if you can. Hinson tip: version control or date the plan.
  10. Once firmed-up, have the finalized plan formally approved by senior management e.g. the CEO, CISO, CIO, President or Chairman of the Board. Don't neglect this simple but critical step.
  11. Build and brief the team and run the plan. Make it happen. Do and manage stuff. Deal with all the wrinkles that come up In Real Life. Remind auditors and auditees that senior management agreed and formally approved the plan and the resources (that's why step 10 is crucial). Motivate, lead, encourage. Jiggle resources and scopes to make the best of it. Adjust the plan and audits as necessary ... and keep notes for the next round of planning or re-planning. Do your level best not to have to go back to senior management with a request for more resources or an explanation about why you cannot possibly complete the approved plan. Hinson tip: use your contingency sparingly throughout the entire period and monitor it carefully. If a quarter of the plan is complete but you've used half your contingency already, we have a problem Houston.
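The ranking-and-cumulation arithmetic behind steps 6 and 7 can be sketched in a few lines of Python. The audit names, risk scores and man-day figures below are purely illustrative, but the cumulative column reproduces the 20-then-30-then-55 example above, and the "pencil line" falls wherever the running total exceeds the available resources:

```python
# Illustrative sketch: risk-ranked audit list with a cumulative man-day
# column and the resource "pencil line" (all figures hypothetical).
audits = [
    {"name": "ISMS pre-certification review", "risk": 9, "man_days": 20},
    {"name": "Payroll application audit",     "risk": 7, "man_days": 10},
    {"name": "Supplier security review",      "risk": 6, "man_days": 25},
    {"name": "Physical access controls",      "risk": 4, "man_days": 15},
]

AVAILABLE_MAN_DAYS = 60   # what's left after contingency, training and admin

ranked = sorted(audits, key=lambda a: a["risk"], reverse=True)
running_total = 0
for audit in ranked:
    running_total += audit["man_days"]
    audit["cumulative"] = running_total
    audit["above_line"] = running_total <= AVAILABLE_MAN_DAYS

for audit in ranked:
    marker = "" if audit["above_line"] else "   <-- below the pencil line"
    print(f'{audit["name"]:32} {audit["man_days"]:3}d'
          f'  cum {audit["cumulative"]:3}d{marker}')
```

In practice a spreadsheet does this job perfectly well; the point is simply that the prioritized list plus a running total makes the resource conversation with management concrete rather than hand-wavy.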

If that's all too much for you and way over the top, then a much simpler starting point is to map-out the audits you think you will be doing on a wall-planner or the year-to-a-view page in your desk diary. Hinson tip: use dry-wipe erasable markers or pencil!

It gets easier and better with practice, like anything really. Except finding things in the fridge: that's always impossible, for men.

[We will turn that into some sort of pro briefing, procedure or checklist for the awareness module, with a process diagram, a succinct summary and careful layout/formatting to make it more readable - e.g. isolating the tips as side notes in text boxes in a contrasting color. Easy when you know how! We're already working on similar guidance for other types of assurance work, such as testing.]

Mar 13, 2018

NBlog March 13 - normal service ...

... will be resumed, soon. We've been slaving away on a side project, putting things in place, setting things up, trying things out. It's not quite ready to release yet - more tweaking required, more polishing, lots more standing back and admiring from a distance - but it's close.

Mar 9, 2018

NBlog March 9 - word cloud creativity

Yesterday I wrote about mind mapping. The tick-shaped word cloud above illustrates another creative technique we use to both explore and express the awareness topic.

To generate a word cloud, we start by compiling a list of words relating in some way to the area. Two key sources of inspiration are: 
  1. The background research we've been doing over the past couple of months - lots of Googling, reading and contemplating; and 
  2. Our extensive information risk and security glossary, a working document of 300-odd pages, systematically reviewed and updated every month and included in the NoticeBored awareness modules.
Two specific terms in that word cloud amuse me: "Man-sure" and "Lady-sure" hint at the different ways people think about things. When a lay person (man or woman!) says "I'm sure", they may be quite uncertain in fact. They are usually expressing a subjective opinion, an interpretation or belief with little substance and no objective, factual evidence. It can easily be wrong and misleading. When a male or female expert or scientist, on the other hand, says "I'm sure", their opinion typically stems from experience, and carries more weight. It is less likely to be wrong, and hence provides greater assurance. This relates to integrity, a core part of information security. It's not literally about sex.

Aside from integrity and assurance, we have defined more than 2,000 terms-of-art in the glossary, with key words in the definitions hyperlinked to the corresponding glossary entries. I use it like a thesaurus, following a train of thought that meanders through the document, sometimes spinning off at a tangent but always triggering fresh ideas. Updating the glossary is painstaking yet creative at the same time.

Getting back to the word cloud, we squeeze extra value from the list of words by generating puzzles for the modules. Our word-searches are grids of letters that spell out the words in various directions. Finding the words 'hidden' in the grid is an interesting, fun challenge in itself, and also a learning process since the words all relate to the chosen topic.
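A word-search generator is straightforward to script. The sketch below is a minimal, hypothetical version (not our actual tool): it places words only horizontally or vertically, allows crossings where letters match, and uses a fixed random seed so the same puzzle can be regenerated:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def build_wordsearch(words, size=10, seed=1):
    """Place each word horizontally or vertically in a size x size grid,
    allowing crossings where letters match, then fill the gaps randomly."""
    rng = random.Random(seed)
    grid = [[None] * size for _ in range(size)]
    for word in sorted(words, key=len, reverse=True):   # longest words first
        word = word.upper()
        for _ in range(500):                            # bounded retries
            horizontal = rng.choice([True, False])
            row = rng.randrange(size)
            col = rng.randrange(size - len(word) + 1)
            if not horizontal:
                row, col = col, row
            cells = [(row, col + i) if horizontal else (row + i, col)
                     for i in range(len(word))]
            # A cell is usable if empty or already holding the same letter
            if all(grid[r][c] in (None, word[i])
                   for i, (r, c) in enumerate(cells)):
                for i, (r, c) in enumerate(cells):
                    grid[r][c] = word[i]
                break
        else:
            raise ValueError(f"could not place {word!r}")
    # Fill the remaining cells with random 'noise' letters
    return [[ch if ch else rng.choice(ALPHABET) for ch in row] for row in grid]

puzzle = build_wordsearch(["assurance", "audit", "trust", "integrity"])
for row in puzzle:
    print(" ".join(row))
```

A production version would also place words diagonally and in reverse, and emit the answer key alongside the grid.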

There are other aspects to the word cloud graphic:
  • All the words are relevant to the topic, to some extent;
  • More significant words are emphasized by size and colour;
  • Insignificant words are tiny and intentionally quite hard to read, fading into the distance and hinting that there are yet more just out of sight;
  • The graphical shape of the cloud (the mask) relates to the topic. It is meant to be a tick in this particular example, although it also resembles an old boot! An accompanying assurance word cloud in the shape of a cross hopefully clarifies the intention;
  • The graphic is visually appealing or intriguing. It catches the eye and stimulates people to think about the topic - an awareness win in its own right. We use word clouds, diagrams and other graphics to illustrate other awareness materials and break up the text.
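The size-for-significance emphasis in the second bullet is easy to reproduce: word-cloud layout engines typically interpolate each word's point size from its weight. A minimal sketch, using made-up relevance weights rather than our real data:

```python
# Hypothetical topic-relevance weights; a real cloud might instead use
# word frequencies taken from the background research notes.
weights = {"assurance": 10, "audit": 8, "trust": 6, "integrity": 5, "tag": 1}

MIN_PT, MAX_PT = 8, 48          # smallest and largest font sizes in points
lo, hi = min(weights.values()), max(weights.values())

def font_size(weight):
    """Linearly interpolate a font size between MIN_PT and MAX_PT."""
    return MIN_PT + (MAX_PT - MIN_PT) * (weight - lo) / (hi - lo)

for word, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{word:10} {font_size(w):5.1f}pt")
```

The deliberate consequence, as noted above, is that the least significant words become tiny and hard to read, fading into the background of the cloud.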