Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Mar 30, 2018

NBlog March 30 - quality assurance

Our own assurance measures kick into top gear about now with the impending completion of the next awareness module - specifically proofreading and final corrections on the awareness materials before they are packaged up for delivery.

Like any craftsman, we take pride in our work. It's what we do, our specialism. We strive to make our output as good as we possibly can, a perfectionist streak that probably goes beyond what's strictly necessary. It flows from our deep-set belief in the value of integrity, both as individuals and as a business.  It matters.

Quality assurance is integral to our production process. Checking our finished work (quality control) is the final stage and an opportunity for me to take stock. Having had my head inside the topic all month, it's good to step back for a look at the whole package of awareness goodies as it comes together. Provided the proofreading reveals few issues, I'm reassured that we did a good job, bringing the month's activity to a satisfying close. Hearing that there were "No errors found, no changes needed" always raises a smile.

As an awareness specialist and information security professional, it worries me when I hear people recommending awareness materials freely available on the Web because I know what that means. Sure, there is stuff out there, plenty of volume and some variety, but what about the quality? I'm naturally critical thanks to that perfectionist streak I mentioned. I see everything from technical flaws, biases and glaring omissions, down to grammatical errors and speling misteaks - things that will surely confuse, distract and mislead readers if the materials are used.

I see a curious reluctance to invest in awareness, given that the substantial investment in antivirus software, firewalls, security guards and all the rest is enabled and enhanced by awareness and training.  Does penny-pinching on awareness content reflect a lack of understanding and appreciation by management of the business value of awareness (due, I guess, to their own lack of awareness)? And what does it say about organizational commitment to information risk, security, privacy, compliance etc.? 

While there are some gems, among the free materials I often spot logical errors, bad advice, inconsistencies, outmoded concepts and outdated examples ... and I worry about the same issues in our own materials, especially when we are pushing the boundaries by exploring new topics. We're not immune, we have our constraints and biases too. So when customers come back to renew their subscriptions, recommend us to their peers and express their gratitude for the materials, that's a real confidence-booster - the ultimate in assurance you could say.

Mar 29, 2018

NBlog March 29 - smart assurance

With just days to go to the delivery deadline, April's NoticeBored security awareness module on assurance is rounding the final corner and fast approaching the finishing line.

I've just completed updating our 300+ page hyperlinked glossary defining 2,000+ terms of art in the general area of information risk management, security, privacy, compliance and governance. Plus assurance, naturally.

As I compiled a new entry for Dieselgate, it occurred to me that since things are getting smarter all the time, our security controls and assurance measures need to smarten-up at the same rate or risk being left for particulates. Emissions and other type-testing and compliance verification for vehicles need to go up a level, while the associated safety and technical standards, requirements, laws and regulations should also be updated to reflect the new smart threats. In-service monitoring and testing becomes more important if we can no longer rely on lab tests, but that creates further issues and risks relating to the less-well-controlled environment such as problems with inconsistencies and calibration, as well as the practical issues of testing products while they are being used. Somehow I doubt in-service testing will prove cheaper and quicker than lab tests!

Product testing is a very wide field. Take medical products for instance: there are huge commercial pressures associated with accredited testing and certification, with implications for safety and profitability. Presumably smart pacemakers or prosthetics could be programmed to behave differently in the lab and in the field, in much the same way as those VW diesel engines. Same thing with smart weapons, smart locks, smart white goods and more. I'm not entirely sure what might be gained by beating the system although it's not unreasonable to assume that 'production samples' provided for approval testing and product reviews will have thicker gold plating than the stuff that makes it to market. 

The more things are software-defined, the greater the possibility of diversity and unanticipated situations in the field. The thing that passed the test may be materially different to the one on the shelf, and it could easily change again with nothing more than a software update or different mode of operation.

At the same time, testing is being smartened-up. For decades already, lab test gear has been increasingly computerized, networked and generalized, allowing more sophisticated, reliable and comprehensive tests. I guess the next logical step is for the test gear to communicate with the equipment being tested to interrogate its programming and configuration, supplementing more conventional tests ... and running straight into the assurance issue concerning the extent to which the information offered can be trusted.

The various types of assurance required by owners/investors, authorities and regulators can be made smarter too, through the use of more sophisticated data collection and analysis - with the same issue that fraudsters and other unethical players are increasingly likely to try to beat the tests and conceal their nefarious activities through smarts. Remember Enron and Barings Bank? There are significant implications here for auditors, inspectors and other forms of oversight and rule-checking.

"At what point would you like your product to comply with the regulations, sir?"

The Iraqi/US WMD fiasco is another strong hint that deadly games are being played in the defense domain, while fake news and reputational-engineering are further examples of the information/cyberwars already raging around us. Detecting and hopefully preventing election fraud gets tougher as election fraudsters become smarter. Same with bribery and corruption, plus regular crimes.

Despite being "weird" (I would say unconventional, creative or novel), assurance has turned out to be a fascinating topic for security awareness purposes, with implications that only occurred to me in the course of researching and preparing the materials. I hope they inspire at least some of our customers' people in the same way, and get them thinking more broadly about information risk ... because risk identification is what launches the risk management sequence. If you don't even recognize a risk as such, you're hardly going to analyze and treat it, except by accident - and, strangely, that does not qualify as best practice.

Mar 27, 2018

NBlog March 27 - assurance and business continuity


Business continuity management involves three distinct but complementary approaches:
  1. Resilience arrangements to maintain essential/critical information services despite incidents if at all possible, at least at a reduced, fallback or emergency service level;
  2. Disaster recovery arrangements to recover and restore services that have failed for whatever reason (including failed or overwhelmed resilience);
  3. Contingency arrangements to help the organization cope with whatever situations turn up unexpectedly (including failures in the other approaches, plus other novel incidents and crises, unfortunate coincidences and extreme/outlier risks involving Little Green Men From Mars).
Resilience is often neglected or misunderstood, yet it’s a valuable approach with benefits under normal operational conditions as well as during and following major incidents. Plenty of capacity generally means good performance, for instance. Assurance is another advantage: it is feasible to test various failure scenarios on a setup that has been professionally engineered for resilience, with low risk and little if any impact on production services – “professionally engineered” being key of course. Low risk is not zero risk … but surely that’s better than not being able to test at all for fear of failure!

DR is conventional. I'll leave it there.

Contingency is another valuable concept that revolves around the people more than the technology. When faced with a major incident, crisis or disaster, will your organization fall apart or pull together? Under extreme stress, do workers give up, dejectedly, or knuckle-down and get creative? Over-reliance on specific individuals in critical roles is a warning sign (obvious in hindsight but not too hard to spot in advance), whereas if workers are multi-skilled, broadly competent and willing to step up to any challenge, the organization is more likely to get through tricky situations. The same thing applies to over-reliance on key suppliers, partners and customers, networks, systems, data, cloud services or whatever. Knowing when reliance has become over-reliance is yet another assurance issue.

Generally speaking, it's good to have alternatives or options. If the organization has little choice, the things it relies so heavily upon had better be highly resilient and well-engineered just-in-case, touching on all three business continuity approaches. There’s also a clear link to risk management, governance and assurance.  

Business continuity management rocks!

Mar 26, 2018

NBlog March 26 - repetitititition

It is often said (repeatedly in fact) that repetition is the key to learning. Well, is that true? Is that a fact? It must be true if it is said often enough, surely?  

This blog piece is about using and misusing repetition as an awareness technique, repeatedly.

You may have come across the classic 3-step tell-em technique for classes, lectures and seminars:

  1. Tell them what you're about to tell them about.
  2. Tell them it.
  3. Tell them about what you told them about.
It's a simple, or rather simplistic, approach: a crude technique based on simple repetition. You have probably sat through repetitive classes, lectures and seminars by teachers or speakers who follow the advice slavishly, every time, some of them even pointing out what they are doing as if that helps. It's obvious, without being pointed out. You don't need to tell us that you're using the tell-em technique! 

In my experience, the tell-em technique is most often used by teachers and presenters who are not comfortable teaching and presenting: they are still practicing, repeating the same basic, tedious approach until/unless someone points out that it's not the most effective technique, if we're lucky.

Repetition is one way to teach and learn, certainly, but not the only way. There are other forms of teaching and learning apart from repetition. Learning and teaching, teaching and learning, can take place without repetition, however repetition can be a useful technique for learning. And teaching. 

Repeating things is the essence of practicing, gradually becoming familiar with whatever it is - especially by repeating physical activities such as yoga, skateboarding, teeth-cleaning, yoga or escaping a burning building. Repeating activities such as yoga makes them familiar, well-practiced. Eventually with sufficient repetition they become subconscious, autonomous or 'natural' as we master them. 

Unfortunately, subconscious or autonomous responses can be exploited by social engineers. "Hurry, click here to prevent your account being frozen!" they tell us, hoping we'll reflexively click the link and login without a moment's thought. They might even repeat it. "We warned your account would be frozen, and so it is. Click here to unfreeze it."

Unfortunately, with repetition, things can also become boring. Who isn't sick of endless repeats on TV? Are you getting as bored as me with the repetition in this very piece? Repetitive isn't it?! Over and over and over! "Get on with it!" I hear you screaming. "Stop the repeats already!  You've made your point! We get it!" 

Repetition is also used to emphasize things. It emphasizes them, makes them stand out by repeating them. Emphatically. "I told you not to do that.  I told you so!" or "I've told you a million times: don't exaggerate!".

Repetition is obvious in branding. Advertisements repeat brand images and tag lines, endlessly ... and they also repeat messages at a less obvious/more subtle level. "The real thing!" is not just a tag for a fizzy drink product, but a direct appeal to avoid the near-identical but plainly inferior fizzy drinks made by competitors. "Finger-lickin' good" curiously suggests that licking one's fingers indicates extreme oral pleasure, not just bad manners. The Dyson brand is associated with innovative technology through being used repeatedly on novel products, perhaps avoiding the association with vacuum cleaners that plagues Hoover.

We use repetition ourselves. We have brands. We say some things over and over, although we're not usually quite as blatant about it as in this blog piece. We say things in different ways, for instance, and in different contexts. We use diagrams and words to describe the same stuff, each reinforcing the other. We draw out the main messages from briefings and presentations as summaries or conclusions. We take differing perspectives, different points of view, drawing out different messages. We update the awareness content to reflect what's happening today, rather than just churning out the same old same old every time until it goes stale. We seldom resort to repetition for emphasis, preferring visual techniques such as bold, italics, enlargement or even engorgement, and color. Side-bars catch the eye.

So, there you go, a repetitive piece about the perils of excessive repetition, and some other less-repetitive approaches.

Let me close by reminding you that this was about repeti ... oh, I see, you've nodded off.

Mar 23, 2018

NBlog March 23 - assurance metrics


Today I'm writing about 'security assurance metrics' for April's NoticeBored module. 

One aspect that interests me is measuring and confirming (being assured of) the correct operation of security controls. 

Such metrics are seldom discussed and, I suspect, fairly uncommon in practice.

Generally speaking, we infosec pros just love measuring and reporting on incidents and stuff that doesn't work because that helps us focus our efforts and justify investment in the controls we believe are necessary.  It also fits our natural risk-aversion. We can't help but focus on the downside of risk.

Most of us blithely assume that, once operational, the security controls are doing their thing: that may be a dangerous assumption, especially in the case of safety-, business- or mission-critical controls plus the foundational controls on which they depend (e.g. reliable authentication is a prerequisite for access control, and physical security underpins almost all other forms of control). 

So, on the security metrics dashboard, what's our equivalent of the "bulb test" when well-designed electro-mechanical equipment is powered up? How many of us have even considered building-in self-test functions and alarms for the failure of critical controls?
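To make the 'bulb test' idea concrete, here's a minimal sketch (in Python, with entirely hypothetical control names and checks) of a self-test registry for critical controls: each control registers an active check, and anything failing the power-on test raises the alarm rather than quietly being assumed to work.

```python
# Hypothetical 'bulb test' for critical security controls: each control
# supplies a self-test that actively exercises it; failures are collected
# and alarmed, rather than assuming the control is silently doing its job.
from datetime import datetime, timedelta

SELF_TESTS = {}

def self_test(name):
    """Decorator registering a control's self-test under a readable name."""
    def register(fn):
        SELF_TESTS[name] = fn
        return fn
    return register

@self_test("antivirus signatures fresh")
def av_check(now=datetime(2018, 3, 23)):
    # Hypothetical: signatures last updated two days ago - within tolerance
    last_update = datetime(2018, 3, 21)
    return now - last_update < timedelta(days=7)

@self_test("backup restore verified")
def backup_check():
    # Hypothetical: the last trial restore failed
    return False

def run_bulb_test():
    """Run every registered self-test; return the names of failed controls."""
    return [name for name, check in SELF_TESTS.items() if not check()]

print(run_bulb_test())  # the failed 'backup restore verified' control lights up
```

The point is not the toy checks but the pattern: the dashboard actively interrogates each critical control instead of merely reporting incidents after the fact.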

I could be wrong but I feel this may be an industry-wide blind spot with the exception of safety-critical controls, perhaps, and situations where security is designed and built in from scratch as an integral part of the architecture (implying a mature, professional approach to security engineering rather than the usual bolt-on security).


Mar 21, 2018

NBlog March 21 - down to Earth

Since "assurance" is a fairly obscure concept, April's awareness materials inevitably have to explain it in simple enough terms that people can grasp it, without glossing over things to such an extent that nothing matters, nothing registers.

Tricky that!

Harder still, our purpose for raising this at all is to emphasize the relevance of assurance to information security - another conceptual area that we're trying hard to make less obscure!

The approach we've come up with is to draw parallels between assurance for information security, and assurance for safety. Safety is clearly something that matters. People 'get it' without the need to spell it out in words of one syllabub. With just a little gentle help, they understand why safety testing, for instance, is necessary, and why safety tags and certificates mean something worthwhile - valuable in fact ... and that gives us a link between assurance and business.

For awareness purposes, we'll be using bungy-jumping as a safety-, business- and assurance-related situation that catches attention and sparks imaginations. It's something risky that people can relate to, regardless of whether they have personally done it or not. You could say it is well-grounded. Aside from the emotional connection, it has the added bonus of striking images - great for seminar slides and to break up the written briefings.

We still face the challenge of linking from there across to information security, and that's what the bulk of the awareness materials address, covering assurance in the context of information risk, security, integrity, testing, auditing, trust and more - quite a swathe of relevant issues to discuss in fact. 

Mar 20, 2018

NBlog March 20½ - Facebook assures

Facebook is facing a crisis of confidence on stock markets already jittery about interest rates and over-priced tech stocks, thanks to a privacy breach with overtones of political interference:
"Facebook fell as much as 8.1 percent to $170.06 on Monday in New York, wiping out all of the year's gains so far. That marked the biggest intraday drop since August 2015. Facebook said Friday that the data mining company Cambridge Analytica improperly obtained data on some of its users, and that it had suspended Cambridge while it investigates. Facebook said the company obtained data from 270,000 people who downloaded a purported research app that was described as a personality test. The New York Times and the Guardian reported that Cambridge was able to tap the profiles of more than 50 million Facebook users without their permission. Facebook first learned of the breach more than two years ago but hadn't disclosed it. A British legislator said Facebook had misled officials while Senator Amy Klobuchar of Minnesota said Facebook CEO Mark Zuckerberg should testify before the Senate Judiciary Committee ... Daniel Ives, chief strategy officer and head of technology research for GBH Insights, said this is a crisis for Facebook, and it will have to work hard to reassure users, investors and governments."
[NZ Herald, 20th March 2018, emphasis added] 

Attempting to halt and ideally reverse the decline in the extent to which third-parties trust the organization following a major incident is tough, and expensive. Can anyone believe its claims and assurances in future? Will they inspire the same level of confidence that they might once have done? What additional hoops will they be expected to clear in future to reassure others? Will they ever rebuild their credibility and reputation, or is this incident going to haunt them in perpetuity? A lot depends on how the incident is handled.

Facebook and its management will, I guess, spend large to scrape through the crisis with the usual flurry of denials, excuses, explanations/justifications and apologies. Lawyers will profit. Heads may roll, and the suspended relationship with Cambridge Analytica will be 'strained', perhaps to breaking point.

But what of the ongoing relationship with "users, investors and governments"? I wonder if Facebook had a strategy in place to 'reassure' them following a privacy breach or some other major incident? Does it have a business continuity plan for this eventuality? We will see how it plays out over the next few days and weeks, perhaps months given the political and regulatory ramifications.

I'm looking forward to finding out, in due course, whether the controls imposed by GDPR would have helped avoid or mitigate this incident. It's an obvious line of inquiry. The first hints have already emerged with claims that it wasn't a theft of personal information since users gave their permission to share it - but was that a fully-informed free choice, or were they hoodwinked and pressured into it? 

Meanwhile I'm contemplating the lessons to be learned, and wondering if we might use this incident as well as, or instead of, dieselgate as a case study for April's assurance module.

NBlog March 20 - a critique of CIS netsec metrics


Perusing a CIS paper on metrics for their newly-updated recommended network security controls (version 7), several things strike me all at once, a veritable rash of issues.

Before reading on, please at least take a quick squint at the CIS paper. See what you see. Think what you think. You'll get more out of this blog piece if you've done your homework first. You may well disagree with me, and we can talk about that. That way, I'll get more out of this blog piece too!





[Pause while you browse the CIS paper on metrics]






[Further pause while you get your thoughts in order]





OK, here's my take on it:
  1. The recommended controls are numerous, specific and detailed cybersecurity stuff, hence the corresponding metrics are equally granular since the CIS team has evidently decided that each control should be measured individually ... whereas, in practice, I'd be more inclined to take the metrics up a level or three since my main interest in metrics is to make decisions in order to manage things, not to do them nor to 'prove' that someone is doing something. I'm not entirely sure even the network security wonks would welcome or appreciate such detailed metrics: they should already know how they are doing, pretty much, without the need to measure and prove it (to themselves!). Management, on the other hand, could do with something more than just the tech guys telling them "Oh it's all OK!  We're using the CIS guidance!  Nothing to see here - move along!" or "Of course we are terribly insecure: we've told you a million times we need more resources!". I contend that overview/status or maturity metrics would be far more useful for management. [I'll circle back to that point at the end. Skip the rest if this is all too much.]

  2. I guess if all the individual metrics were generated, it would be possible to generate an overall score simply by averaging them (taking the mean and maybe the variance too since that relates to consistency). That could be used as a crude indication of the status, and a lever to drive up implementation, but it would be better to at least group the detailed metrics into categories (perhaps relating to the categories of control) and report each category separately, providing a better indication of where the strengths and weaknesses lie. However, I'm still troubled by the first part: "If all the individual metrics were generated" implies a very tedious and potentially quite costly measurement process. Someone familiar with the organization's network security controls (a competent IT auditor, for instance, or consultant - a reasonably independent, unbiased, diligent and intelligent person anyway) ought to be able to identify the main strengths and weaknesses directly, categorize them, measure and report them, and offer some suggestions on how to address the issues, without the tedium. I figure it's better for the network security pros to secure the network than to generate reams of metrics of dubious value. [More on this below]

  3. I'm sure most of us would challenge at least some of the CIS recommended controls: they mean well but there are situations where the controls won't work out in practice, or they go too far or not far enough, or there are other approaches not considered, or the wording isn't right, or ... well, let's just say there are lots of potential issues way down there in the weeds, and that's bound to be an issue with such prescriptive, detailed, "do this to be secure" check-the-box approaches (I know, I know, I'm exaggerating for effect). Plucking but one example from my own specialism, control 17.4 says "Update Awareness Content Frequently - Ensure that the organization's security awareness program is updated frequently (at least annually) to address new technologies, threats, standards and business requirements."  Updating awareness and training program content to reflect the ever-changing information risk landscape is good practice, I agree, but annually is definitely not, especially if that also implies that it is only necessary to check for changes in the information risks on an annual basis. Hello! Wakey wakey! There is new stuff happening every hour, every day, certainly every few weeks, with potentially significant implications that ought to be identified, evaluated and appropriately responded-to, promptly. Annual updates are way too slow, a long way short of "frequent" to use their word. Furthermore, the metric for 17.4 is equally misleading: "Has the organization ensured that the organization's security awareness program is updated frequently (at least annually) to address new technologies, threats, standards and business requirements: yes/no?"  
Using their metric, any sort of 'update' to the awareness program that happens just once a year justifies answering yes - ticking the box - but to me (as an awareness specialist) that situation would be woefully inadequate, indicative of an organization that patently does not understand the purpose and value of security awareness and training. In that specific example, I would suggest that the frequency of meaningful reviews and updates to the information risk profile and the awareness and training program would be a much more useful metric - two in fact since each aspect can be measured separately and they may not align (hinting at a third metric!). 

  4. The underlying problem is that we could have much the same discussion on almost every control and metric in their list. How many are there in total?  Over 100, so that's roughly 100 discussions. Pains will be taken! Set aside a good few hours for that, easily half a day to a whole day. You could argue that we would end up with a much better appreciation of the controls and the metrics ... but I would counter that there are better ways to figure out worthwhile metrics than to assess/measure and report the implementation status of every individual control. That half a day or so could be used more productively.

  5. My suggestion to use 'frequency of risk and awareness updates' reminds me of a concern Walt raised.  Binary metrics are crude while analog metrics are more indicative of the true status, particularly in boundary cases where a straight yes or no does not tell the whole story, and can be quite misleading (e.g. as I indicated above). Binary metrics and crude checklists are especially problematic if the metrician has skin in the game (which would be true if the CIS network security metrics were being measured and reported by network security pros), and if the outcome of the measurement may reflect badly or well on them personally. The correct answer is of course "Yes" if the situation clearly and completely falls into the "Yes" criterion, but what if the situation is not quite so clear-cut? What if the appropriate, honest answer would be "Mostly yes, but slightly no - there are some issues in this area"? Guess what: if "Yes" leads to fame and fortune, then "No" doesn't even get a look-in! In extreme cases, people have been known to ignore all the "No" situations, picking out a single "Yes" example and using that exception, that outlier, to justify ticking the "Yes" box. This is of course an information risk, a measurement bias, potentially a significant concern depending on how the metrics are going to be used. The recipient and user of the metrics can counter the bias to some extent if they are aware of it and so inclined, but then we're really no better off than if they just discussed and assessed the situation without binary metrics. If they are unaware of the bias and unwisely trusting of the metric, or if they too are biased (e.g. an IT manager reporting to the Exec Team on the network security status, using the 'facts' reported up the line by the network security team as their get-out-of-jail-free card - plausible deniability if it turns out to be a tissue of lies), then all bets are off. 
There are situations where such biased metrics can be totally counterproductive - leaving us worse off than if the metrics did not exist (consider the VW emissions-testing scandal, plucking a random example out of the air, one that I brought up yesterday in relation to assurance).

  6. Furthermore, I have concerns about the CIS version of an analog metric in this document. Someone at CIS has clearly been on the 'six sigma' training, drunk the Kool-Aid, and directly transferred the concept to all the analog metrics, with no apparent effort to adapt to the situation. Every CIS analog metric in the paper has the identical form with identical criteria for the six levels:  69% or less; 31% or less; 6.7% or less; 0.62% or less; 0.023% or less; 0.00034% or less. That categorization or gradation really doesn't make a lot of sense in every case, leading to inconsistencies from one metric or one control to the next. I challenge anyone to determine and prove the distinction between the upper three values on their scale for any real-world network security measurement in the table, at least not without further measurement data (which sort of defeats the purpose) ... so despite the appearance of scientific rigour, the measurement values are at least partially arbitrary and subjective anyway. Trying to shoe-horn the measurement of a fair variety of network security control implementation statuses into the same awkward set of values is not helpful. For me, it betrays a lack of fundamental understanding of six-sigma, continuous improvement and process maturity.  Frankly, it's a mess.

  7. Returning to the idea of averaging scores to generate overall ratings, that approach is technically invalid if the individual values being averaged are not equivalent - which they aren't for the reasons given above. Seems to me The Big Thing that's missing is some appreciation and recognition of the differing importance or value of each control. If all the controls were weighted, perhaps ranked or at least categorized (e.g. vital, important, recommended, suggested, optional), there would be a better basis for generating an overall or section-by-section score. [In fact, the process of determining the weightings or ranking or categorization would itself generate valuable insight ... a bonus outcome from designing better security metrics! The CIS controls are supposedly 'prioritized' so it's a shame that approach didn't filter down to the metrics paper.] One thing we could do, for example, is ignore all except the vital controls on a first pass: getting those properly specified, fully implemented, operational, actively managed and maintained would be an excellent starting point for an organization that has no clue about what it ought to be doing in this space. Next pass, add-in the important controls. Lather-wash-rinse-repeat ...
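Incidentally, those odd-looking CIS scale values in point 6 are the textbook six-sigma defect rates, read straight off the normal distribution with the conventional 1.5-sigma shift. A quick sketch (assuming Python) reproduces them, which rather underlines how mechanically they were transplanted into a network security context:

```python
# The six-sigma defect rates with the conventional 1.5-sigma shift:
# fraction of outcomes falling beyond (sigma_level - shift) standard
# deviations on a normal distribution.
from math import erfc, sqrt

def defect_rate(sigma_level, shift=1.5):
    """Upper-tail probability of a standard normal beyond (sigma_level - shift)."""
    return 0.5 * erfc((sigma_level - shift) / sqrt(2))

for s in range(1, 7):
    print(f"{s} sigma: {defect_rate(s):.5%} defective")
# 1 sigma -> ~69%, 2 -> ~31%, 3 -> ~6.7%, 4 -> ~0.62%,
# 5 -> ~0.023%, 6 -> ~0.00034% ... matching the CIS scale values exactly
```

The thresholds make sense for measuring defects in a high-volume manufacturing process; for a yes-ish/no-ish judgement about one organization's firewall ruleset, they are spurious precision.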
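To illustrate the weighting idea in point 7 (with entirely made-up scores, categories and weights), here's a sketch of how a weighted roll-up can diverge sharply from the naive average when a vital control happens to be weak:

```python
# Hypothetical control scores (0-100) and importance categories, purely
# to show that a weighted roll-up can tell a different story from a
# naive average of all controls treated as equivalent.
WEIGHTS = {"vital": 5, "important": 3, "recommended": 1}

controls = [
    (40, "vital"),        # a weak but crucial control
    (90, "recommended"),
    (95, "recommended"),
    (70, "important"),
]

def naive_average(controls):
    """Treat every control as equivalent - the approach critiqued above."""
    return sum(score for score, _ in controls) / len(controls)

def weighted_average(controls, weights):
    """Weight each control's score by the importance of its category."""
    total_weight = sum(weights[cat] for _, cat in controls)
    return sum(score * weights[cat] for score, cat in controls) / total_weight

print(naive_average(controls))             # 73.75 - looks tolerably healthy
print(weighted_average(controls, WEIGHTS)) # 59.5  - the weak vital control drags it down
```

Reporting the weighted figure per category, rather than one grand mean, would point management straight at the weak vital control instead of burying it.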

Overall the CIS paper, and bottom-up metrics in general, generate plenty of data but precious little insight - quantity not quality.

Earlier I hinted that as well as their use for decision-making and managing stuff, metrics are sometimes valued as a way of ducking accountability and reinforcing biases. I trust anyone reading this blog regularly knows where I stand on that.  Integrity is a core value. 'Nuff said.

If I were asked to design a set of network security metrics, I would much prefer a top-down approach (e.g. the goal-question-metric method favoured by Lance Hayden, or a process/organizational maturity metric of my own invention), either instead of, or as well as, the bottom-up control implementation status approach and other information sources (e.g. there is likely to be a fast-flowing stream of measurement data from assorted network security boxes and processes). 

Perhaps these alternatives are complementary? I guess it depends on how they are used - not just how they are meant or designed to be used, but what actually happens in practice: any metric (even a good one, carefully designed, competently measured, analyzed and reported with integrity) can be plucked out of context to take on a life of its own as people clutch at data straws that reinforce their own biases and push their own agendas. See any marketing-led "survey" for clear evidence of that! 


Mar 19, 2018

NBlog March 19 - a thinking day

Today was a thinking day - time away from the office doing Other Stuff meant my reluctant separation from the keyboard and a chance to mull over the awareness materials for April, free of distractions.

I returned sufficiently refreshed to catch up with emails and press ahead with the writing, and inspired enough to come up with this little gem:


I say 'gem' because that single (albeit convoluted) statement helps us explain and focus the awareness module.  We will explain assurance in terms of confidence, integrity, trust, proof etc. and discuss the activities that get us to that happy place, or not as the case may be. 

Discovering any problems that need to be addressed is an important and obvious part of various forms of testing, but so too is giving the all-clear. Gaining assurance, either way, is the real goal, supporting information risk management: if you discover, later, that the testing was inept, inadequate, biased, skipped or otherwise lame, the whole thing is devalued, and worse still the practice of testing is undermined as an assurance measure. 

Take for example dieselgate - the diesel emissions-testing scandal involving Volkswagen vehicles: in essence, some bright spark at VW allegedly came up with a cunning scheme to defeat the emissions testing lab by switching the vehicle's computer control unit to a special mode when it detected the conditions indicating a test in progress, reverting to a less environmentally-friendly mode for normal driving. Ethics and legality aside, the scandal brought a measure of doubt onto the testing regime, and yet the trick was (eventually) discovered and the perpetrators uncloaked, bringing greater disrepute to VW. 

Hmmm, that little story might make an interesting case study scenario for the module. If it makes people think and talk animatedly about the information risk aspects arising (assurance in particular but there are other relevant issues too), that's a big awareness win right there. Job's a good 'un. Thank you and good night.

Mar 18, 2018

NBlog March 18 - building a sausage machine

We've been engaged to write a series of awareness materials on a variety of information security topics - a specific type of awareness product that we haven't produced before. So the initial part of the assignment is to clarify what the client wants, come up with and talk through our options, and draft the first one. 

That's my weekend spoken for!

Once the first one is discussed, revised and agreed, stage two will be to refine the production process so future products will be easier and quicker to generate, better for the client and better for us.

Like sausages. We're building a sausage machine. We'll plug in a topic, turn the handle and extrude a perfectly-formed sausage every time.

Sounds fine in theory but on past experience that's not quite how it will work out, for two key reasons:
  1. Since the topics vary, the content of the awareness product will vary, naturally ... but so too may the structure and perhaps the writing style. Awareness content on, say, viruses or passwords is conceptually and practically a bit different to that on, say, privacy or cybersecurity. The breadth and depth of cover affects how we write, so the machine needs some 'give'. It can't be too rigid.
  2. As the string of sausages gets ever longer, we will continually refine the machine and think up new wrinkles ... which may even mean going back and reforming some of the early products. It's possible an entirely new approach may emerge as we progress, but more likely it will evolve and mature gradually. What starts out producing a string of plain beef sausages may end up churning out Moroccan lamb and mint - still definitely sausages but different flavours. 
Knowing that now, the sausage machine has to be capable of being modified to some extent in future, within certain constraints since the client expects a reasonably consistent product. Some features being designed into the process today will remain in a month or three, while others will evaporate and be replaced, and we're cool with that. Hopefully the client will be too!

In more practical terms, the sausage machine itself consists of a document template with defined styles in MS Word. The template and styles can be tweaked as we go along. While the production process is presently undocumented, it is sufficiently close to our normal everyday activities that there's really no need to formalise it: we are well practiced at this stuff, running on auto. It helps that the template and styles are self-evident.

If you are 'doing' awareness with a planned series of awareness items or activities, I'd encourage you to adopt a similar, structured and planned sausage-machine approach, investing some effort up front into designing the production machinery and process. It's an obvious way to gain consistency and take advantage of continuous improvement. Once the production line is running sweetly, it lets you focus on the meat of the topic - the creative content - rather than on the production process. While it may need care and maintenance from time to time, the mechanistic process makes it easier to keep on going.

Mar 17, 2018

NBlog March 17 - assurance functions

Of all the typical corporate departments or functions or teams, which have an assurance role?
  • Internal Audit - audits are all about gaining and providing assurance;
  • Quality Assurance plus related functions such as Product Assurance, Quality Control, Testing and Final Inspection, Statistical Process Control and others;
  • Risk Management - because assurance reduces uncertainty and hence risk;
  • IT, Information Management, Information Risk and Security Management etc. - for example, ensuring the integrity of information increases assurance, and software quality assurance is a big issue;
  • Information Security Management - which is of course why this is an information security awareness topic;
  • Business Continuity Management - who need assurance on everything business-critical;
  • Health and Safety - who need assurance on everything safety-critical;
  • Production/Operations - who use QA, SPC and many other techniques to ensure the quality and reliability of production methods, processes and products;
  • Sales and Marketing who seek to assure and reassure prospects and customers that the organization is a quality outfit producing reliable, high-quality products, building trust in the brands and maintaining a strong reputation;
  • Procurement - who need assurance about the raw materials, goods and services offered and provided to the organization, and about the suppliers in a more general way (e.g. will they deliver orders within specification, on time, reliably? Will the relationship and transactions be worry-free?);
  • Finance - who absolutely need to ensure the integrity of financial information, and who perform numerous assurance measures to achieve and guarantee that;
  • Human Resources - who seek to reassure management that the organization is finding and recruiting the best candidates and making the best of its people; 
  • Legal/Compliance - need to be sure that the organization complies sufficiently with external obligations to avoid penalties, and that internal obligations are sufficiently fulfilled to achieve business advantage;
  • Every other department, function or team that depends on information, or that delivers important information to others ... in other words, everyone;
  • Management as a whole - for instance governance and oversight are both strongly assurance-related, and most metrics are designed to assure recipients that everything is on-track, going to plan, working well etc.;
  • The workforce as a whole - since everyone needs to know they can depend on their jobs and livelihoods.
Looking further afield, outside the organization, assurance is also of concern to third-parties such as:
  • External Audit and similar external inspection functions such as certification auditors for ISO27k, PCI;
  • Customers - who need to know the products they are buying will deliver the benefits promised and anticipated;
  • Suppliers - who need to know they will be paid and would like to rely on future business;
  • Owners of the organization, with an obvious interest in its health and prosperity;
  • Various authorities, the tax man for instance;
  • Society at large - since discovering something unexpected and untoward about any organization is generally shocking.
So it turns out that assurance is a widespread issue, stretching well beyond the obvious assurance-related functions such as Audit and QA ... which makes it a surprisingly strong candidate for security awareness purposes. Although we haven't produced an assurance awareness module before, we've covered integrity, audit, oversight and other things. This time around it's an opportunity to focus-in on and explore the assurance element in more depth, while once again reinforcing the core security awareness messages on integrity, trust, risk, control etc.

The lists of corporate functions and third parties above will make their way into the train-the-trainer guide in April's awareness module, encouraging the security awareness people to figure out who they might contact within the organization for help with their awareness efforts, and for genuine examples, incidents or business situations where assurance is crucial. The external interested parties might also be of interest: just imagine the awareness impact of an important customer representative talking honestly about the value of being able to trust in and depend upon the organization, and the negative impact of quality or other issues.

Mar 16, 2018

NBlog March 16 - word games

The assurance word-art tick (or boot?) that we created and blogged about a few days ago is still inspiring us. In particular, some assurance-related words hint at slightly different aspects of the same core concept:
  • Assure
  • Assurance
  • Assured
  • Assuredly
  • Ensure
  • Ensured
  • Insure
  • Insurance
  • Reassure
Along with the tongue-in-cheek terms 'man-sure' and 'lady-sure', they are all based on 'sure', being a statement of certainty and confidence.

Insure is interesting: in American English, I believe it means the same as ensure in the Queen's English (i.e. being certain of something), but in the Queen's English, insure only relates to the practice of insurance, when some third-party offers indemnity against particular risks.

Assured, ensured and insured are not merely the past tenses of the respective verbs, but have slightly different implications or meanings:
  • If someone is assured of something, they have somehow been convinced and accept it as true. They internalize and no longer question or doubt their belief to the same extent as if they were not assured of it. They rest assured, generally as a result of a third party providing the assurance if they don't convince themselves;
  • Someone who ensured something made certain it was so or at least made the effort to do so (they don't always succeed!). This often means passing responsibility to a third-party who they believe will do as required;
  • In the Queen's English, a company that insured something provided the indemnity (insurance cover) to whoever had it insured. In American English, the previous bullet applies, presumably.
Reassure is different again, with connotations of comfort and relief when doubt is dispelled.

The point of this ramble (finally!) is that there are some interesting subtleties to assurance that we can use in the awareness and training materials to get people thinking about it and maybe re-evaluating their own beliefs. The words aren't the intriguing bit so much as the concept, but the jumble of words is a way to get the brain cells in gear.

Mar 15, 2018

NBlog March 15 - scheduling audits

One type of assurance is audit, hence auditing, and IT auditing in particular, is very much in scope for our next security awareness module.

By coincidence, yesterday on the ISO27k Forum, the topic of 'security audit schedules' came up.

An audit schedule is a schedule of audits, in simple terms a diary sheet listing the audits you are planning to do. The usual way to prepare an audit schedule is risk-based and resource-constrained. Here's an outline (!) of the planning process to set you thinking, with a sprinkling of Hinson tips:

  1. Figure out all the things that might be worth auditing within your scope (the 'audit universe') and list them out. Brainstorm (individually and if you can with a small group of brainstormers), look at the ISMS scope, look for problem areas and concerns, look at incident records and findings from previous audits, reviews and other things. Mind map if that helps ... then write them all down into a linear list.
  2. Assess the associated information risks, at a high level, to rank the rough list of potential audits by risk - riskiest areas at the top (roughly at first - 'high/medium/low' risk categories would probably do - not least because until the audit work commences, it's hard to know what the risks really are). 
  3. Guess how much time and effort each audit would take (roughly at first - 'big/medium/small' categories would probably do - again, this will change in practice but you have to start your journey of discovery with a first step).
  4. In conjunction with other colleagues, meddle around with the wording and purposes of the potential audits, taking account of the business value (e.g. particular audits on the list that would be fantastic 'must-do' audits vs audits that would be extraordinarily difficult or pointless with little prospect of achieving real change). If it helps, split up audits that are too big to handle, and combine or blend-in tiddlers that are hardly worth running separately. Make notes on any fixed constraints (e.g. parts of the business cycle when audits would be needed, or would be problematic; and dependencies such as pre/prep-work audits to be followed by in-depth audits to explore problem areas found earlier, plus audits that are linked to IT system/service implementations, mergers, compliance deadlines etc.).
  5. Sketch out the scopes and purposes of the audits, outline the risks they address, scribble notes to be used by the auditors and auditee/clients when it comes to detailed audit planning and authorization of individual audits.
  6. Starting at the top of the list, add a column for a cumulative running total of the resources needed (e.g. with an estimated 20 man-days required for audit 1, 10 man-days for audit 2, 25 man-days for audit 3, the cumulative resource column shows 20 then 30 then 55 man-days ...).
  7. If you have an audit person or team already assigned, figure out how many man-days of audit resources you have in the year/s ahead. Hinson tip: be conservative. It's never a problem to find more work to do, but it's always a problem to try to squeeze too much out of the person/team so that tempers fray and quality suffers. Be sure to leave some unassigned resources to cope with 'special investigations' (e.g. fraud work), time for audit planning and admin, time for team-building, training and personal development, and (trust me) plenty of contingency for jobs that run over and extra must-do jobs that materialize out of nowhere during the planned period. Draw a pencil line on the list under the audits you can complete with the available resources, and those you probably cannot do. Add a grey area (above the line!) to show that there is significant uncertainty in the plan. Tidy-up the rough plan so it is not quite so rough - presentable even.
  8. Present and discuss the outline plan with senior management. Use your prep-work and notes to outline and explain/justify the audit jobs towards the top of the list, or any stand-outs of particular note. Impress on them that this is not some random noise but there has been thought put into it. Negotiate the contents (audits planned, scopes and purposes, resources needed, resources available, contingency remaining) until you reach a tentative settlement, firming-up your audit schedule. If they insist on moving your pencil line down the list to complete more audits, then insist on the additional resources necessary (more auditors - employees or contractors or secondees) ... and preferably put it down in writing (make sure it is minuted)! Hinson tip: although there will undoubtedly be pressure, stick to your guns on the man-days you estimated are required for each audit. Do not arbitrarily cut back the resources for audits unless they agree to reduce the scope of work accordingly ("minute that, please"): do not allow the quality of audit work to be compromised - together you are investing in assurance, and the reputation of the audit function is an extremely important part of that. Hinson tip: you have some leeway on the timing, title and detailed scope of each audit, but do not chop planned audits from the list without putting up a spirited defence. This is where your prep-work and notes come into play. Play hard-ball if a manager seems determined to chop out an audit in their area: why is that? Do they have something to hide? Or are there genuine business reasons that mean the planned audit would not help the organization? Under extreme pressure to chop a legitimate audit off the plan, 'take the discussion off-line' and work privately with the manager concerned, plus their manager, to evaluate the situation and reassess the risks - or perhaps ask the management team as a whole to make the decision there and then. 
As a last resort, try to convince the CEO or Chairman of the Board that, in your professional judgment, they need additional assurance in that specific area. And if the final answer is "Chop it!", get that in writing.
  9. Turn the list into a schedule that works, in theory. This step is tricky as it involves juggling audits, resources, objectives, dependencies and constraints (e.g. an internal audit to make sure your ISMS is running sweetly before a scheduled external ISMS certification or surveillance audit obviously has a fixed completion date, so work back from there ... and add slack time/contingency too). Involve the team and colleagues if you can. Hinson tip: version control or date the plan.
  10. Once firmed-up, have the finalized plan formally approved by senior management e.g. the CEO, CISO, CIO, President or Chairman of the Board. Don't neglect this simple but critical step.
  11. Build and brief the team and run the plan. Make it happen. Do and manage stuff. Deal with all the wrinkles that come up In Real Life. Remind auditors and auditees that senior management agreed and formally approved the plan and the resources (that's why step 10 is crucial). Motivate, lead, encourage. Jiggle resources and scopes to make the best of it. Adjust the plan and audits as necessary ... and keep notes for the next round of planning or re-planning. Do your level best not to have to go back to senior management with a request for more resources or an explanation about why you cannot possibly complete the approved plan. Hinson tip: use your contingency sparingly throughout the entire period and monitor it carefully. If a quarter of the plan is complete but you've used half your contingency already, we have a problem Houston.
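Steps 2, 3, 6 and 7 above boil down to a risk-ranked list, rough effort estimates, a cumulative running total and a pencil line where the budget runs out - a trivial spreadsheet, or, as a sketch with entirely invented audits and estimates, a few lines of Python:

```python
from itertools import accumulate

# Hypothetical risk-ranked audit list with rough effort estimates (man-days).
audits = [
    ("ISMS scope review",        20),
    ("Firewall ruleset audit",   10),
    ("Incident response audit",  25),
    ("Supplier security audit",  15),
]

available = 40  # man-days left after contingency, admin, training etc.

# Cumulative running total down the list: 20, 30, 55, 70
running = list(accumulate(days for _, days in audits))

for (name, days), total in zip(audits, running):
    marker = "DO  " if total <= available else "----"  # the 'pencil line'
    print(f"{marker} {name}: {days} man-days (cumulative {total})")
```

With 40 man-days available, the pencil line falls after the second audit; negotiating more resources moves the line down the list, exactly as described in step 8.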

If that's all too much for you and way over the top, then a much simpler starting point is to map-out the audits you think you will be doing on a wall-planner or the year-to-a-view page in your desk diary. Hinson tip: use dry-wipe erasable markers or pencil!

It gets easier and better with practice, like anything really. Except finding things in the fridge: that's always impossible, for men.

[We will turn that into some sort of pro briefing, procedure or checklist for the awareness module, with a process diagram, a succinct summary and careful layout/formatting to make it more readable - e.g. isolating the tips as side notes in text boxes in a contrasting color. Easy when you know how! We're already working on similar guidance for other types of assurance work, such as testing.]

Mar 13, 2018

NBlog March 13 - normal service ...


... will be resumed, soon. We've been slaving away on a side project, putting things in place, setting things up, trying things out. It's not quite ready to release yet - more tweaking required, more polishing, lots more standing back and admiring from a distance - but it's close.

Mar 9, 2018

NBlog March 9 - word cloud creativity

Yesterday I wrote about mind mapping. The tick image above is another creative technique we use to both explore and express the awareness topic.

To generate a word cloud, we start by compiling a list of words relating in some way to the area. Two key sources of inspiration are: 
  1. The background research we've been doing over the past couple of months - lots of Googling, reading and contemplating; and 
  2. Our extensive information risk and security glossary, a working document of 300-odd pages, systematically reviewed and updated every month and included in the NoticeBored awareness modules.
Two specific terms in that word cloud amuse me: "Man-sure" and "Lady-sure" hint about the different ways people think about things. When a lay person (man or woman!) says "I'm sure", they may be quite uncertain in fact. They are usually expressing a subjective opinion, an interpretation or belief with little substance, no objective, factual evidence. It can easily be wrong and misleading. When a male or female expert or scientist, on the other hand, says "I'm sure", their opinion typically stems from experience, and carries more weight. It is less likely to be wrong, and hence provides greater assurance. This relates to integrity, a core part of information security. It's not literally about sex.

Aside from integrity and assurance, we have defined more than 2,000 terms-of-art in the glossary, with key words in the definitions hyperlinked to the corresponding glossary entries. I use it like a thesaurus, following a train of thought that meanders through the document, sometimes spinning off at a tangent but always triggering fresh ideas. Updating the glossary is painstaking yet creative at the same time.

Getting back to the word cloud, we squeeze extra value from the list of words by generating puzzles for the modules. Our word-searches are grids of letters that spell out the words in various directions. Finding the words 'hidden' in the grid is an interesting, fun challenge in itself, and also a learning process since the words all relate to the chosen topic.
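Generating a word-search grid is a nice little constraint-satisfaction exercise in its own right. This sketch (emphatically not our actual puzzle generator - the word list, grid size and placement strategy are all assumptions for illustration) places each word in a random direction, retrying until it fits, then fills the gaps with random letters:

```python
import random

def make_wordsearch(words, size=10, seed=42):
    """Tiny word-search generator sketch: place words in random directions,
    allowing crossings on matching letters, then fill gaps with noise."""
    rng = random.Random(seed)
    grid = [[None] * size for _ in range(size)]
    directions = [(0, 1), (1, 0), (1, 1), (0, -1), (-1, 0)]  # E, S, SE, W, N

    for word in words:
        word = word.upper()
        for _ in range(1000):  # retry until the word fits without conflict
            dr, dc = rng.choice(directions)
            r, c = rng.randrange(size), rng.randrange(size)
            cells = [(r + i * dr, c + i * dc) for i in range(len(word))]
            if all(0 <= x < size and 0 <= y < size and
                   grid[x][y] in (None, word[i])
                   for i, (x, y) in enumerate(cells)):
                for i, (x, y) in enumerate(cells):
                    grid[x][y] = word[i]
                break
        else:
            raise ValueError(f"could not place {word!r}")

    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return [[ch or rng.choice(letters) for ch in row] for row in grid]

grid = make_wordsearch(["assure", "trust", "proof", "audit"])
print("\n".join(" ".join(row) for row in grid))
```

Allowing words to cross on shared letters makes the grid denser and the puzzle harder; dropping the crossing condition (requiring `grid[x][y] is None`) gives an easier puzzle.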

There are other aspects to the word cloud graphic:
  • All the words are relevant to the topic, to some extent;
  • More significant words are emphasized by size and colour;
  • Insignificant words are tiny and intentionally quite hard to read, fading into the distance and hinting that there are yet more just out of sight;
  • The graphical shape of the cloud (the mask) relates to the topic. It is meant to be a tick in this particular example, although it also resembles an old boot! An accompanying assurance word cloud in the shape of a cross hopefully clarifies the intention;
  • The graphic is visually appealing or intriguing. It catches the eye and stimulates people to think about the topic - an awareness win in its own right. We use word clouds, diagrams and other graphics to illustrate other awareness materials and break up the text.
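That 'more significant words are bigger' rule is exactly what off-the-shelf word-cloud generators do with word frequencies. As a minimal sketch (the notes and font-size scaling are invented for illustration; in practice the significance weightings are an editorial judgement, not just a frequency count):

```python
from collections import Counter

# Hypothetical research notes; in practice, a month's worth of reading.
notes = "assurance trust proof confidence assurance audit trust assurance test"

freq = Counter(notes.split())

max_count = max(freq.values())
for word, count in freq.most_common():
    size = 10 + 30 * count / max_count  # font size scales with frequency
    print(f"{word}: count={count}, font size {size:.0f}pt")
```

The most frequent word gets the largest font (40pt here), the one-off words the smallest (20pt), mirroring the 'fading into the distance' effect described above.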

Mar 8, 2018

NBlog March 8 - brainstorming awareness ideas

At this early stage of the month, although we have some ideas in mind for the content of the next awareness module, they are unstructured. We need to clarify the scope and purpose of the module, developing themes to pull things together and 'tell the story'.

Mind mapping is our favourite technique for that: we sketch out the topic area on a single sheet starting from a central topic word ("Assurance" this month) and arranging a few major themes around it, connecting the words to show their relationships. 

On paper, it starts out simply like this with 3 key themes:



Then we expand on those initial themes with further details ...


... and keep going until we run short of inspiration and decide to move ahead to the next stage ...



On paper, with my handwriting, the rough diagram is quite scrappy but that's something we can work on later, normally by redrawing the mind map in Microsoft Visio. In Visio, it will be easy to amend or adjust things, for example rewording the nodes, moving and linking them, changing their sizes and using colour. The whole thing will end up looking neat and tidy - literally presentable in fact as we use mind maps in the seminar slide decks and briefing papers. At this stage, though, we are much more interested in the themes, concepts and linkages than the appearance. The roughness is strangely stimulating.

Stepping back from the page, we clearly have quite a bit of stuff to say top right concerning "Proof" with less under "Confidence" and "Trust". Maybe that's just how it is, or maybe we need to do something, perhaps splitting up "Proof" into distinct (but linked) themes, and exploring the other aspects further. The purpose or reason for gaining assurance, for instance, is implied by the main themes but it might be worth drawing out explicitly. Why is assurance necessary? What makes it valuable? What does it give us (or what issues does its absence cause)? Who needs it, when and how? 

Those thoughts and questions emerged with and from the diagram - the creative process in action.

Something equally magical happened as the mind map sprang to life on the page. Using "Analysis" to link "Proof" with "Confidence" triggered the thought of examining the process of gaining assurance, hence we scribbled "Process" in the handy space below trust ... but in fact the process relates to all parts of the diagram. It might even be a fourth key theme, or a separate layer.

In this particular example, I've only used words and lines. Sometimes it helps to draw little pictures (icons and doodles, usually) as reminders of thoughts that occurred as the diagram was being created. A simple example is to add stars and underlines to emphasize key elements. The double-headed arrow between "Certainty" and "Doubt" reminds me to think about degrees and confidence limits, and metrics. Oh and the philosophers in the Hitchhiker's Guide to the Galaxy.

Regardless of when it gets translated into Visio, I'll keep the original rough diagram beside me on the desk for the hours and days ahead in case I think of something else, or to reorient my mind if (when!) I meander too far or get lost in the weeds.