Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Jan 23, 2019

NBlog Jan 23 - infosec policies rarer than breaches

I'm in shock. While studying a security survey report, my eye was caught by three bullet points on the title page.

Specifically, the last bullet point is shocking: the survey found that less than a third of UK organizations have "a formal cyber security policy or policies". That seems very strange given the preceding two bullet points: firstly, more than a third have suffered "a cyber security breach or attack in the last 12 months" (so they can hardly deny that the risk is genuine); and secondly, a majority claim that "cyber security is a high priority for their organisation's senior management" (and yet they don't even bother setting policies??).

Even without those preceding bullets, the third one seems very strange - so strange in fact that I'm left wondering whether there was a mistake in the survey report (e.g. a data, analytical or printing error), or in the associated questions (e.g. the questions may have been badly phrased), or in my understanding of the finding as presented. In my limited first-hand experience - covering far fewer than the ~2,000 UK organizations surveyed - most have information security-related policies in place today ... but perhaps that's exactly the point: they may have 'infosec policies' but not 'cybersec policies' as such. Were the survey questions in this area worded too explicitly or interpreted too precisely? Was 'cyber security' even defined for respondents, or 'policy' for that matter? Or is it that, being an infosec professional, I'm more likely to interact with organizations that have a clue about infosec, hence my sample is biased?

Thankfully, a little digging led me to the excellent technical annex with very useful details about the sampling and survey methods. Aside from some doubt about the way different sizes of organization were sampled, the approach looks good to me - writing as a former research scientist, latterly an infosec pro, neither a statistician nor a surveyor by profession.

Interviewers had access to a glossary defining a few potentially confusing terms, including cyber security:
"Cyber security includes any processes, practices or technologies that organisations have in place to secure their networks, computers, programs or the data they hold from damage, attack or unauthorised access." 
Nice! That's one of the most lucid definitions I've seen, worthy of inclusion in the NoticeBored glossary. It is only concerned with "damage, attack or unauthorised access" to "networks, computers, programs or the data they hold" rather than information risk and security as a whole, but still it is quite wide in scope. It is not just about hacks via the Internet by outsiders, one of several narrow interpretations in circulation. Nor is it purely about technical or technological security controls.

"Breach" was not defined though. Several survey questions used the phrase "breach or attack", implying that a breach is not an attack, so what is it? Your guess is as good as mine, or the interviewers' and the interviewees'!

Overall, the survey was well designed, competently conducted by trustworthy organizations, and hence the results are sound. Shocking, but sound.

I surmise that my shock relates to a mistake on my part. I assumed that most organizations had policies in this area. As to why roughly two thirds of them don't, one can only guess since the survey didn't explore that aspect, at least not directly. Given my patent lack of expertise in this area, I won't even hazard a guess. Maybe you are willing to give it a go?

Blog comments are open. Feedback is always welcome. 

Jan 21, 2019

NBlog Jan 21 - computer errors

Whereas "computer error" implies that the computer has made a mistake, that is hardly ever true. In reality, almost always it is us - the humans - who are mistaken:
  • Flaws are fundamental mistakes in the specification and design of systems such as 'the Internet' (a massive, distributed information system with seemingly no end of security and other flaws!). The specifiers and architects are in the frame, plus the people who hired them, directed them and accepted their work. Systems that are not sufficiently resilient for their intended purposes are an example of this: the issue is not that the computers fail to perform, but that they were designed to fail due to mistakes in the requirements specification;
  • Bugs are coding mistakes e.g. the Pentium FDIV bug, a handful of missing entries in a division lookup table deep within the chip's floating-point unit. Fingers point towards the developers but again various others are implicated;
  • Config and management errors are mistakes in the configuration and management of a system e.g. disabling controls such as antivirus, backups and firewalls, or neglecting to patch systems to fix known issues;
  • Typos are mistakes in the data entered by users including those who program and administer the systems;
  • Further errors are associated with the use of computers, computer data and outputs e.g. misinterpreting reports, inappropriately disclosing, releasing or allowing access to sensitive data, misusing computers that are unsuited for the particular purposes, and failing to control IT changes.

Set against that broad backdrop, do computers as such ever make mistakes? Here are some possible examples of true "computer errors":
  • Physical phenomena such as noise on communications links and power supplies frequently cause errors, the vast majority of which are automatically guarded against (e.g. detected using Cyclic Redundancy Checks and corrected through retransmission or error-correcting codes) ... but some slip through due to limitations in the controls - see the sketch after this list. These could also be categorized as physical incidents and inherent limitations of information theory, while the limited controls are, again, largely the result of human errors;
  • Just like people, computers are subject to rounding errors, and the mathematical principles that underpin statistics apply equally to computers, calculators and people. Fully half of all computers make more than the median number of errors!;
  • Artificial intelligence systems can be misled by available information. They are almost as vulnerable to learning inappropriate rules and drawing false conclusions as we humans are. It could be argued that these are not even mistakes, however, since there are complex but mechanistic relationships between their inputs and outputs;
  • Computers are almost as vulnerable as us to errors in ill-defined areas such as language and subjectivity in general - but again it could be argued that these aren't even errors. Personally, I think people are wrong to use SMS/TXT shortcuts and homonyms in email, and by implication email systems are wrong in neither expanding nor correcting them for me. I no U may nt accpt tht.
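
To make the first two of those bullets concrete, here's a minimal Python sketch - purely illustrative, using CRC32 as a convenient stand-in for the error-detecting codes real links employ:

    import zlib

    # Rounding: 0.1 and 0.2 have no exact binary representation, so the
    # arithmetic is mechanistically correct yet the result "looks wrong".
    print(0.1 + 0.2)           # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)    # False

    # Noise: a checksum computed before transmission no longer matches
    # after a single bit flips, so the receiver detects (though cannot
    # correct) the error and can request retransmission.
    frame = bytearray(b"pay the sum of $100.00")
    checksum = zlib.crc32(frame)
    frame[0] ^= 0x01           # simulate line noise flipping one bit
    print(zlib.crc32(frame) != checksum)   # True - corruption detected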

Jan 20, 2019

NBlog Jan 20 - human error stats

Within our next awareness module on "Mistakes", we would quite like to use some headline statistics to emphasize the importance of human error in information security, both illustrating and informing.

So what numbers should we use? 

Finding numbers is the easy part - all it takes is a simple Google search. However, it soon becomes apparent that many of the numbers in circulation are worthless. So far, I've seen figures ranging from 30 to 90% for the proportion of incidents caused by human error, and I've little reason to trust those limits!

Not surprisingly, the approach favored by marketers is to pick the most dramatic figure supporting whatever it is they are promoting. Many such figures appear either to have been plucked out of thin air (with little if any detail about the survey methods) or generated by nonscientific studies deliberately constructed to support a foregone conclusion. I imagine "What do you want us to prove?" is one of the most important questions some market survey companies ask of their clients.

To make matters worse, there is a further systemic bias towards large numbers. I hinted at this above when I mentioned 'emphasize the importance' using 'headline statistics': headlines sell, hence eye candy is the name of the game. If a survey finds 51% of something, it doesn't take much for that to become "more than half" then "a majority", then "most", then, well, whatever. As these little nuggets of information pass through the Net, the language becomes ever more dramatic and eye-catching at each step. It's a ratchet effect that quite often ends up in "infographics": not only are the numbers themselves dubious but they are deliberately visually overemphasized. Impact trumps fact. 

So long as there is or was once (allegedly) a grain of fact in there, proponents claim to be speaking The Truth which brings up another factor: the credibility of the information sources. Through bitter experience over several years, I am so cynical about one particular highly self-promotional market survey company that I simply distrust and ignore anything they claim: that simple filter (OK prejudice!) knocks out about one third of the statistics in circulation. Tightening my filter (narrowing my blinkers) further to discount other commercial/vendor-sponsored surveyors discounts another third. At a stroke, I've substantially reduced the number of figures under consideration.

Focusing now on the remainder, it takes effort to evaluate the statistics. Comparing and contrasting different studies, for instance, is tricky since they use different methods and samples (usually hard to determine), and often ambiguous wording. "Cyber" and "breach" are common examples. What exactly is "cybersecurity" or a "cyber threat"? You tell me! To some, "breach" implies "privacy breach" or "breach of the defensive controls" or "breach of the defensive perimeter", while to others it implies "incidents with a deliberate cause" ... which would exclude errors.

For example, the Cyber Security Breaches Survey 2018 tells us:
"It is important to note that the survey specifically covers breaches or attacks, so figures reported here also include cyber security attacks that did not necessarily get past an organisation’s defences (but attempted to do so)."
Some hours after setting out to locate a few credible statistics for awareness purposes, I'm on the point of giving up on my quest, choosing between the several remaining options (perhaps the 'least bad'), lamely offering a range of values (hopefully not as broad as 30 to 90%!) ... or taking a different route to our goal.

It occurs to me that the situation I'm describing illustrates the very issue of human error quite nicely. I could so easily have gone with that 90% figure, perhaps inflating it to "almost all" or even "all". I'm not joking: there is a strong case to argue that human failings are the root cause of all our incidents. But to misuse the statistics in that way, without explanation, would have been a mistake.

Jan 17, 2019

NBlog Jan 17 - another day, another breach

https://haveibeenpwned.com/ kindly emailed me today with the news that my email credentials are among the 773 million disclosed in “Collection #1”.  Thanks Troy Hunt!

My email address, name and a whole bunch of other stuff about me is public knowledge so disclosure of that is no issue for me. I hope the password is an old one no longer in use. Unfortunately, though for good reasons, haveibeenpwned won’t disclose the passwords so I can’t tell directly which password was compromised … but I can easily enough change my password now so I have done, just in case.
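
As an aside, anyone can check whether a given password appears in the leaked corpus without sending the password itself anywhere: haveibeenpwned provides a k-anonymity 'range' API that only ever sees the first five characters of the password's SHA-1 hash. A rough Python sketch (the example password at the end is obviously hypothetical):

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        # Only the first 5 hex characters of the SHA-1 hash leave this
        # machine; the matching hash suffixes are compared locally.
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = urllib.request.Request(
            "https://api.pwnedpasswords.com/range/" + prefix,
            headers={"User-Agent": "password-check-sketch"},
        )
        with urllib.request.urlopen(req) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(pwned_count("correct horse battery staple"))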

I went through the tedious exercise of double-checking that all my hundreds of passwords are long, complex and unique some time ago – not too hard thanks to using a good password manager. [And, yes, I do appreciate that I am vulnerable to flaws, bugs, config errors and inept use of the password manager but I'm happy that it is relatively, not absolutely, secure. There are other information risks that give me more concern.]
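
Incidentally, generating long, complex, unique passwords is the easy part. Here's roughly what a password manager's generator boils down to, sketched with Python's cryptographically strong secrets module (the length and character set are arbitrary choices):

    import secrets
    import string

    def generate_password(length: int = 24) -> str:
        # Draw each character from a CSPRNG, never random.random()
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())   # different every time, hard to guess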

If you haven’t done that yet, take this latest incident as a prompt. Don't wait for the next one. 

Email compromises are pernicious. Aside from whatever salacious content there might be on my email account, most sites and apps now use email for password changes (and it’s often a fallback if multifactor authentication fails) so an email compromise may lead on to others, even if we use strong, unique passwords everywhere.

Jan 15, 2019

NBlog Jan 14 - mistaken awareness

Our next security awareness and training module for February concerns human error. "Mistakes" is its catchy title but what will it actually cover? What is its purpose? Where is it heading? 

[Scratches head, gazes vacantly into the distance]

Scoping any module draws on:
  • The preliminary planning, thinking, research and pre-announcements that led us to give it a title and a few vague words of description on the website;
  • Other modules, especially recent ones that are relevant to this topic or touched on it with an eye to its coverage in February;
  • Preliminary planning for future topics that we might introduce or mention briefly in this one but need not cover in any depth - not so much a grand master plan covering all the awareness topics as a reasonably coherent overview, the picture-on-the-box showing the whole jigsaw;
  • Customer suggestions and feedback, plus conjecture about aspects or concerns that seem likely to be relevant to our customers given their business situations and industries e.g. compliance drivers;
  • General knowledge and experience in this area, including our understanding of good practices ... which reminds me to check the ISO27k and other standards for guidance and of course Google, an excellent way to dig out potentially helpful advice, current thinking in this area plus news of recent, public incidents involving human error;
  • Shallow and deep thought, day and night-dreaming, doodling, occasional caffeine-fueled bouts of mind-mapping, magic crystals and witchcraft a.k.a. creative thinking.

Scoping the module is not a discrete one-off event; rather, we spiral in on the final scope during the course of researching, designing, developing and finalizing the materials. Astute readers might have noticed this happen before, past modules sometimes changing direction and titles in the course of production. Maybe the planned scope turned out to be too ambitious or for that matter too limiting, too dull and boring for our demanding audiences, or indeed for us. Some topics are more inspiring than others.

So, back to "Mistakes": what will the NoticeBored module cover? What we have roughly in mind at this point is: human error, computer error, bugs and flaws, data-entry errors and GIGO, forced and unforced accidents, errors of commission and omission. Little, medium and massive errors, plus those that change. Errors that are immediately and painfully obvious to all concerned, plus those that lurk quietly in the shadows, perhaps forever unrecognized as such. Error prevention, detection and correction. Failures of all sizes and kinds, including failures of controls to prevent, mitigate, detect and recover from incidents. Conceptual and practical errors. Strategic, tactical and operational errors, particularly mistaken assumptions, poor judgement and inept decision making (the perils of management foresight given incomplete knowledge and imperfect information). Mistakes by various third parties (customers, suppliers, partners, authorities, regulators, advisers, investors, other stakeholders, journalists, social media wags, the Great Unwashed ...) as well as by management and staff. Cascading effects due to clusters and dependencies, some of which are unappreciated until little mistakes lead to serious incidents.

Hmmm, that's more than enough already, if an unsightly jumble!

Talking of incidents, we've started work on a brand new awareness module due for April about incident detection, hence we won't delve far into incident management in February, merely titillating our audiences (including you, dear blog reader) with brief tasters of what's to come, sweet little aperitifs to whet the appetite.  

Q: is an undetected incident an incident?  

A: yes. The fact that it hasn't (yet) been detected may itself constitute a further incident, especially if it turns out to be serious and late/non-detection makes matters even worse.

Jan 8, 2019

NBlog Jan 8 - audit questions (braindump)

"What questions should an auditor ask?" is an FAQ that's tricky to answer since "It depends" is true but unhelpful.  

To illustrate my point, here are some typical audit questions or inquiries:
  • What do you do in the area of X
  • Tell me about X
  • Show me the policies and procedures relating to X
  • Show me the documentation arising from or relating to X
  • Show me the X system from the perspectives of a user, manager and administrator
  • Who are the users, managers and admins for X
  • Who else can access or interact or change X
  • Who supports X and how good are they
  • Show me what happens if X
  • What might happen if X
  • What else might cause X
  • Who might benefit or be harmed if X
  • What else might happen, or has ever happened, after X
  • Show me how X works
  • Show me what’s broken with X
  • Show me how to break X
  • What stops X from breaking
  • Explain the controls relating to X
  • What are the most important controls relating to X, and why is that
  • Talk me through your training in X
  • Does X matter
  • In the grand scheme of things, is X important relative to, say, Y and Z
  • Is X an issue for the business, or could it be
  • Could X become an issue for the business if Y
  • Under what circumstances might X be a major problem
  • When might X be most problematic, and why
  • How big is X - how wide, how heavy, how numerous, how often ... 
  • Is X right, in your opinion
  • Is X sufficient and appropriate, in your opinion
  • What else can you tell me about X
  • Talk me through X
  • Pretend I am clueless: how would you explain X
  • What causes X
  • What are the drivers for X
  • What are the objectives and constraints relating to X
  • What are the obligations, requirements and goals for X
  • What should or must X not do
  • What has X achieved to date
  • What could or should X have achieved to date
  • What led to the situation involving X
  • What’s the best/worst thing about X
  • What’s the most/least successful or effective thing within, about or without X
  • Walk or talk me through the information/business risks relating to X
  • What are X’s strengths and weaknesses, opportunities and threats
  • What are the most concerning vulnerabilities in X
  • Who or what might threaten X
  • How many changes have been made in X
  • Why and how is X changed
  • What is the most important thing about X
  • What is the most valuable information in X
  • What is the most voluminous information in X
  • How accurate is X …
  • How complete is X …
  • How up-to-date is X …
    • … and how do you know that (show me)
  • Under exceptional or emergency conditions, what are the workarounds for X
  • Over the past X months/years, how many Ys have happened … how and why
  • If X was compromised in some way, or failed, or didn’t perform as expected etc., what would/might happen
  • Who might benefit from or be harmed by X 
  • What has happened in the past when X failed, or didn’t perform as expected etc.
  • Why hasn’t X been addressed already
  • Why didn’t previous efforts fix X
  • Why does X keep coming up
  • What might be done to improve X
  • What have you personally tried to address X
  • What about your team, department or business unit: what have they done about X
  • If you were the Chief Exec, Managing Director or god, what would you do about X
  • Have there been any incidents caused by or involving X and how serious were they
  • What was done in response – what changed and why
  • Who was involved in the incidents
  • Who knew about the incidents
  • How would we cope without X
  • If X was to be replaced, what would be on your wishlist for the replacement
  • Who designed/built/tested/approved/owns X
  • What is X made of: what are the components, platforms, prerequisites etc.
  • What versions of X are in use
  • Show me the configuration parameters for X
  • Show me the logs, alarms and alerts for X
  • What does X depend on
  • What depends on X
  • If X was preceded by W or followed by Y, what would happen to Z
  • Who told you to do ... and why do you think they did that
  • How could X be done more efficiently/effectively
  • What would be the likely or possible consequences of X
  • What would happen if X wasn’t done at all, or not properly
  • Can I have a read-only account on system X to conduct some enquiries
  • Can I have a full-access account on test system X to do some audit tests
  • Can I see your test plans, cases, data and results
  • Can someone please restore the X backup from last Tuesday 
  • Please retrieve tape X from the store, show me the label and lend me a test system on which I can explore the data content
  • If X was so inclined, how could he/she cause chaos, or benefit from his/her access, or commit fraud/theft, or otherwise exploit things
  • If someone was utterly determined to exploit, compromise or harm X, highly capable and well resourced, what might happen, and how might we prevent them succeeding
  • If someone did exploit X, how might they cover their tracks and hide their shenanigans
  • If X had been exploited, how would we find out about it
  • How can you prove to me that X is working properly
  • Would you say X is top quality or perfect, and if not why not
  • What else is relevant to X
  • What has happened recently in X
  • What else is going on now in X
  • What are you thinking about or planning for the mid to long term in relation to X
  • How could X be linked or integrated with other things
  • Are there any other business processes, links, network connections, data sources etc. relating to X
  • Who else should I contact about X
  • Who else ought to know about the issues with X
  • A moment ago you/someone else told me about X: so what about Y
  • I heard a rumour that Y might be a concern: what can you tell me about Y
  • If you were me, what aspects of X would concern you the most
  • If you were me, what else would you ask, explore or conclude about X
  • What is odd or stands out about X
  • Is X good practice
  • What is it about X that makes you most uncomfortable
  • What is it about this audit that makes you most uncomfortable
  • What is it about me that makes you most uncomfortable
  • What is it about this situation that makes you most uncomfortable
  • What is it about you that makes me most uncomfortable
  • Is there anything else you’d like to say
I could go on all day but that is more than enough already and I really ought to be earning a crust! If I had more time, stronger coffee and thought it would help, I might try sorting and structuring that braindump ... but in many ways it would be better still if you did so, considering and revising the list to suit your purposes if you are planning an audit. 

Alternatively, think about the questions you should avoid or not ask. Are there any difficult areas? What does that tell you?

It's one of those situations where the journey trumps the destination. Developing a set of audit concerns and questions is a creative process. It's fun.

I’m deliberately not specifying “X” because that is the vital context. The best way I know of determining X and the nature of the questions/enquiries arising is risk analysis. The auditor looks at the subject area, considers the possibilities, evaluates the risks and picks out the ones that are of most concern, does the research and fieldwork, examines the findings … and re-evaluates the situation (possibly leading to further investigation – it’s an iterative process, hence all the wiggly arrows and loops on the process diagram). 

Auditing is not simply a case of picking up and completing a questionnaire or checklist, although that might be part of the audit preparation. Competent, experienced auditors feed on lists, books, standards and Google as inputs and thought-provokers for the audit work, not definitive or restrictive descriptions of what to do. On top of all that, the stuff they discover often prompts or leads to further enquiries, sometimes revealing additional issues or risks or concerns almost by accident. The real trick to auditing is to go in with eyes, ears and minds wide open – curious, observant, naïve, doubtful (perhaps even cynical) yet willing to consider and maybe be persuaded.

[For yet more Hinson tips along these lines, try the computer audit FAQ.]

Jan 1, 2019

NBlog Jan 1 - putting resilience on the agenda

Resilience means bending not breaking: surviving issues or incidents that might otherwise be disastrous. Resilient things aren't necessarily invulnerable but they are definitely not fragile. Under extreme stress, their performance degrades gracefully but, mostly, they just keep on going ... like we do.
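
In computing terms, 'degrading gracefully' might look something like the following Python sketch - retry a flaky call with exponential backoff, then fall back to a cached answer rather than failing outright (the function and values are hypothetical, of course):

    import time

    def fetch_exchange_rate() -> float:
        # Hypothetical stand-in for a flaky upstream service
        raise TimeoutError("upstream not responding")

    def resilient_fetch(retries: int = 3, delay: float = 0.5) -> float:
        # Bend, don't break: retry with backoff, then degrade gracefully
        for attempt in range(retries):
            try:
                return fetch_exchange_rate()
            except TimeoutError:
                time.sleep(delay * 2 ** attempt)   # exponential backoff
        return 1.52   # last known good value from cache: degraded, not dead

    print(resilient_fetch())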

In the 15 years since we launched the NoticeBored service, we've survived emigrating to New Zealand, the usual ups and downs of business, plus the Global Financial Crisis. Lately we're seeing an upturn in sales as customers release the strangleholds on their awareness and training budgets ... and invest in becoming more resilient to survive future challenges.

The following snippet from the Financial Conduct Authority's new report "Cyber and Technology Resilience" caught my beady eye in the course of researching and writing January's materials:

[Excerpt from the FCA report not reproduced here; among its findings, around 10% of the financial firms surveyed lack a security awareness program.]

The NoticeBored service supplies top-quality creative content for security awareness and training programs on a market-leading range of topics, with parallel streams of material aimed at the workforce in general plus managers and professionals specifically. Getting all three audiences onto the same page and encouraging interaction is a key part of socializing information risk and security, promoting the security culture.

'Resilience' is the 189th NoticeBored awareness and training module and the 67th topic in our bulging portfolio. If your security awareness and training program is limited to basic topics such as viruses and passwords, with a loose assortment of materials forming an unsightly heap, it's no wonder your employees are bored stiff. A dull and uninspiring program achieves little value, essentially a waste of money. Furthermore, if it only addresses "end users" and "cybersecurity" i.e. just IT security, again you're missing a trick. Resilience, for example, has profound implications for the entire business and beyond, with supply chains and stakeholders to consider. Resilient computing is just part of it.

For a highly cost-effective approach, read about January's NoticeBored security awareness and training module on resilience and get in touch to subscribe. I'm not just talking about the 'disappointing' 10% of financial companies apparently lacking an awareness program (!) but about all organizations, including those of you who already get it and have something running. As we plummet towards 2020, seize the opportunity and earmark a tiny sliver of budget to energize your organization's approach to security awareness and training with NoticeBored. We're keen to help you toughen up, making 2019 a happy, resilient and secure year ahead. Make security awareness your new year's resolution.

Happy new year!

Dec 29, 2018

NBlog Dec 29 - awareness case study

The drone incident at Gatwick airport makes a good backdrop for a security awareness case study discussion around resilience.  

It's a big story globally, all over the news, hence most participants will have heard something about it. Even if a few haven't, the situation is simple enough for them to pick up on and engage in the conversation.

The awareness objective is for participants to draw out, consider, discuss and learn about the information risk, information or cybersecurity aspects, in particular the resilience angle ... but actually, that's just part of it. It would be better if participants were able to generalize from the Gatwick drone incident, seeing parallels in their own lives (at work and at home) and ultimately respond appropriately. The response we're after involves workers changing their attitudes, decisions and behaviors e.g.:
  • Considering society's dependence on various activities, services, facilities, technologies etc., as well as the organization and their own dependencies, and ideally reducing dependence on vulnerable aspects;
  • Becoming more resilient i.e. stronger, more willing and able to cope with incidents and challenges of all kinds;
  • Identifying and reacting appropriately to various circumstances that are short on resilience e.g. avoiding placing undue reliance on relatively fragile or unreliable systems, comms, processes and relationships;
  • Perhaps even actively exploiting situations, gaining business advantage by persuading competitors or adversaries to rely unduly on their resilience arrangements (!).
Assorted journalists, authorities and bloggers are keen to point out that the Gatwick drone incident is 'a wake-up call' and that 'something must be done'. Most imply that they are concerned about other airports and, fair enough, the lessons are crystal clear in that context ... but we have deliberately expanded across other areas where resilience is just as important, along with risk, security, safety, reliability, technology and more.

That's a lot of awareness mileage from a public news story but, as with the awareness challenge, putting the concept into practice is where we earn our trivial fees!

Visit the website or contact me to find out more about the NoticeBored service and to get a quote - a price so trivial, in fact, that avoiding a single relatively minor incident should more than justify the annual running costs of your entire security awareness and training program.

By the way, we set our sights much higher than that!