Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Feb 14, 2019

NBlog Feb 14 - online lovers, offline scammers

Social engineering scams are all the rage, a point worth noting today of all days.

A Kiwi farmer literally lost the farm to a scammer he met and fell for online. 

Judging by the news report, this was evidently a classic advance fee fraud or 419 scam, one that cost him a stunning $1.25m. 

This is not the first time I've heard of victims being drawn in by scammers to the extent that they refuse to accept they have been duped, even when it is pointed out to them. There's probably something in the biology of our brains that leads us astray - some sort of emotional hijack going on, bypassing the normal rational thought processes.

On a more positive note, the risks associated with online dating are reasonably well known and relatively straightforward to counter. And old-school offline dating is not risk free either. 

Relationships generally are a minefield ... but tread carefully and amazing things can happen. Be careful, be lucky.

Feb 9, 2019

NBlog Feb 9 - inform and motivate

The malware encyclopedia destined for inclusion in our next awareness module is coming along nicely ...

It's interesting to research and fun to write in an informative but more informal style than the glossary, with several decidedly tongue-in-cheek entries so far and a few graphics to break up the text.

I guess it will end up at about 20 pages, longer than usual for a general security awareness briefing but 100% on-topic. There's a lot to say about malware, being such a complex and constantly evolving threat. I hope the relaxed style draws readers in and makes them think more carefully about what they are doing without being too do-goody, too finger-wagging. Prompting changes of attitudes and behaviors is our aim, not just lecturing the troops. Awareness and training is pointless if it's not sufficiently motivational.

Feb 8, 2019

NBlog Feb 8 - creative awareness

We're slaving away on the 'malware update' security awareness and training module for March. Malware is such a common and widespread issue that we cover it every year, making it potentially tedious and dull. People soon get bored by the same old notices - not exactly ideal for awareness and training purposes. 

Simply tarting up and repackaging malware awareness materials we have delivered previously would be relatively easy for us, but it would not be sufficient. Our subscribers deserve more! Aside from needing to reflect today's malware threats and current security approaches, we must find new angles and inject new content each time in order to spark imaginations and engage the audiences, again and again. 

Luckily (in a way), malware is a writhing vipers' pit, constantly morphing as the VXers and antivirus pros do battle on a daily basis. So what's new this year?

The rapid evolution of malware risks is a story worth telling, but how can we actually do that in practice? We favor a strongly visual approach using an animated sequence of Probability Impact Graphs to explain, year-by-year, how specific malware risks have emerged, grown and then mostly faded away as the world gets on top of them. 
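
For anyone curious how such a graph might be generated, here is a minimal matplotlib sketch of a single, static PIG frame; the animation is essentially a sequence of these, one frame per year. The risk names and scores are made-up placeholders for illustration, not NoticeBored's actual assessments.

```python
# One static probability-impact graph (PIG) frame.
# The scores below are illustrative placeholders, NOT real assessments.
import matplotlib.pyplot as plt

# hypothetical (probability, impact) scores on simple 0-10 scales
malware_risks_2019 = {
    "Ransomware": (7, 8),
    "Cryptominers": (6, 3),
    "Macro malware": (4, 5),
    "Boot-sector viruses": (1, 2),  # largely faded away by now
}

fig, ax = plt.subplots()
for name, (prob, impact) in malware_risks_2019.items():
    ax.scatter(prob, impact)
    ax.annotate(name, (prob, impact),
                xytext=(5, 5), textcoords="offset points")

ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_xlabel("Probability")
ax.set_ylabel("Impact")
ax.set_title("Malware risks, 2019 (illustrative)")
plt.show()
```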

It would be great to have the foresight to predict next year's malware PIG, projecting forward from today's, but that's tricky, even for malware experts (which I'm not). The best I can do is pick out a few trends that illustrate the kinds of things we might be facing over the remainder of 2019 ... and perhaps make the point that uncertainty is the very essence of 'risk'. If we knew exactly what to expect, we could of course prepare for it and, better yet, avoid or prevent it happening: we don't, hence we can't, hence we need to be ready for anything - a point that links neatly back to January's awareness topic of resilience and business continuity, and forward to April's on incident detection. 

And so our cunning strategic plan continues to bear fruit. Although NoticeBored covers different topics every month, they are all part of information security, all in and around the same core area. The approach is quite deliberate: we're poking at the same blob from different directions, exposing and exploring different aspects in order to help our audiences appreciate the whole thing, whilst at the same time avoiding information overload (trying to cover it all at once) and boredom (the blinkered view). Sometimes we take a step back for more of an overview, occasionally we dive deeper into some particular aspect that catches our attention and hopefully intrigues our customers, especially those with relatively mature awareness and training programs. Advanced topics tend to be quite narrow in scope, but even with those we make a conscious effort to link them into the broader context. 

Key words such as 'information', 'risk', 'security', 'control', 'governance' and 'compliance' inevitably crop up in almost every module. Talking of which, we've come up with a new style of awareness material for March, a malware encyclopedia derived from the NoticeBored information security glossary. The full glossary is a substantial piece of work, over 300 pages long, a whole book's worth of content. It's a fantastic reference source for professionals and specialists working in the field, so good in fact that we use it ourselves since remembering all the fine details on more than 2,000 information security terms is beyond us.

I'll have more to say about the encyclopedia tomorrow. For now, must press on, lots to do.

Feb 7, 2019

NBlog Feb 7 - risks and opportunities defined


In the ISO27k context, 'risks and opportunities' has at least four meanings or interpretations:
  1. Information risks and information opportunities are the possibilities of information being exploited in a negative and positive sense, respectively. The negative sense is the normal/default meaning of risk in our field, in other words the possibility of harmful consequences arising from incidents involving information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. This blog piece is an example of positively exploiting information: I am deliberately sharing information in order to inform, stimulate and educate people, for the benefit of the wider ISO27k user community (at least, that's my aim!). 
  2. Business risks and business opportunities arise from the use of information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. to harm or further the organization’s business objectives, respectively. The kind of manipulative social engineering known as ‘marketing’ and ‘advertising’ is an example of the beneficial use of information for business purposes. The need for the organization to address its information-related compliance obligations is an example that could be a risk (e.g. being caught out and penalized for noncompliance) or an opportunity (e.g. not being caught and dodging the penalties) depending on circumstances.
  3. The ISMS itself is subject to risks and opportunities. Risks here include sub-optimal approaches and failure to gain sufficient support from management, leading to lack of resources and insufficient implementation, severely curtailing the capability and effectiveness of the ISMS, meaning that information risks are greater and information opportunities are lower than would otherwise have been achieved. Opportunities include fostering a corporate security culture through the ISMS leading to strong and growing support for information risk management, information security, information exploitation and more.
  4. There are further risks and opportunities in a more general sense. The possibility of gaining an ISO/IEC 27001 compliance certificate that will enhance the organization’s reputation and lead to more business, along with the increased competence and capabilities arising from having a compliant ISMS, is an example of an opportunity that spans the three perspectives above. ‘Opportunities for improvement’ involve possible changes to the ISMS, the information security policies and procedures, other controls, security metrics etc. in order to make the ISMS work better, where ‘work better’ is highly context-dependent. This is the concept of continuous improvement, gradual evolution, maturity, proactive governance and systematic management of any management system. Risks here involve anything that might prevent or slow down the ongoing adaptation and maturity processes, for example if the ISMS metrics are so poor (e.g. irrelevant, unconvincing, badly conceived and designed, or the measurement results are so utterly disappointing) that management loses confidence in the ISMS and decides on a different approach, or simply gives up on the whole thing as a bad job. Again, the opportunities go beyond the ISMS to include the business, its information, its objectives and constraints etc.
Unfortunately in my opinion, ISO/IEC JTC1/SC27 utterly confused interpretation (1) with (3) in 27001 clause 6. As I understand it, the ISO boilerplate text for all management systems standards concerns sense (3), specifically. Clause 6 should therefore have focused on the planning required by an organization to ensure that its ISMS meets its needs both initially and in perpetuity, gradually integrating the ISMS as a routine, integral and beneficial part of the organization’s overall governance and management arrangements. Instead, ‘27001 clause 6 babbles on about information security objectives rather than the governance, management and planning needed to define and satisfy the organization’s objectives for its ISMS. The committee lost the plot - at least, that’s what I think, as a member of SC27: others probably disagree! 

Feb 1, 2019

NBlog Feb 1st - awareness module on mistakes

Security awareness and training programs are primarily concerned with incidents involving deliberate or intentional threats such as hackers and malware. In February, we take a look at mistakes, errors, accidents and other situations that inadvertently cause problems with the integrity of information, such as:
  • Typos;
  • Using inaccurate data, often without realizing it;
  • Having to make decisions based on incomplete and/or out-of-date information;
  • Mistakes when designing, developing, using and administering IT systems, including those that create or expose vulnerabilities to further incidents (such as hacks and malware);
  • Misunderstandings, untrustworthiness, unreliability etc. harming the organization’s reputation and its business relationships.
Mistakes are far more numerous than hacks and malware infections but thankfully most are trivial or inconsequential, and many are spotted and corrected before any damage is done. However, serious incidents involving inaccurate or incomplete information do occur occasionally, reminding us (after the fact!) to be more careful about what we are doing. 
The NoticeBored awareness and training materials take a more proactive angle, encouraging workers to take more care with information, especially when handling (providing, communicating, processing or using) particularly important business- or safety-critical information - when the information risks are greater.
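
Check digits are a neat everyday example of such a control: payment card numbers carry a Luhn check digit precisely so that most single-digit typos and adjacent transpositions are trapped at data entry, before any damage is done. Here is a generic sketch (not an extract from the NoticeBored materials):

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum,
    the check-digit scheme used on payment card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # double every second digit from the right; subtract 9 if over 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True - the canonical test number
print(luhn_valid("79927398714"))  # False - one digit mistyped
```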

Learning objectives

  • Introduces the topic, describing the context and relevance of 'mistakes' to information risk and security;
  • Expands on the associated information risks and typical information security controls to cut down on mistakes involving information;
  • Offers straightforward information and pragmatic advice, motivating people to think - and most of all act – so as to reduce the number and severity of mistakes involving information;
  • Fosters a corporate culture of error-intolerance through greater awareness, accountability and a focus on information quality and integrity.
NoticeBored subscribers are encouraged to customize the content supplied, adapting both the look-and-feel (the logo, style, formatting etc.) to suit their awareness program’s branding, and the content to fit their information risk, security and business situations. Subscribers are free to incorporate additional content from other sources, or to cut-and-paste selections from the NoticeBored materials into staff newsletters, internal company magazines, management reports etc. making the best possible use of the awareness content supplied.

So what about your learning objectives in relation to mistakes, errors etc.? Does your organization have persistent problems in this area? Is this an issue that deserves greater attention from staff and management, perhaps in one or more departments, sites/business units or teams? Have mistakes with information ever led to significant incidents? What have you actually done to address the risk?

HINT: Don't be surprised if the same methods lead to the same results. "The successful man will profit from his mistakes ... and try again in a different way" [Dale Carnegie]

Jan 31, 2019

NBlog Jan 31 - why so many IT mistakes?


Well, here we are on the brink of another month-end, scrabbling around to finalize and deliver February's awareness module in time for, errr, February.  

This week we've completed the staff and management security awareness and training materials on "Mistakes", leaving just the professional stream to polish off today ... and I'm having some last-minute fun finding notable IT mistakes to spice up the professionals' briefings. 

No shortage there!

Being 'notable' implies we don't need to explain the incidents in any detail - a brief reminder will suffice with a few words of wisdom to highlight some relevant aspect of the awareness topic. Link them into a coherent story and the job's a good 'un.

The sheer number of significant IT mistakes constitutes an awareness message in its own right: how come the IT field appears so extraordinarily error-prone? Although we don't intend to explore that question in depth through the awareness materials, our cunning plan is that it should emerge from the content and leave the audience pondering, hopefully chatting about it. Is IT more complex than other fields, making it harder to get right? Are IT pros unusually inept, slapdash and careless? What are the real root causes underlying IT's poor record? Does the blame lie elsewhere? Or is the assertion that IT has a poor record false - a mistake? 

The point of this ramble is that we've teased out something interesting and thought-provoking, directly relevant to the topic, contentious and hence stimulating. In awareness terms, that's a big win. Our job is nearly done. Just a few short hours to go now before the module is packaged and delivered, and the fun begins for our customers. 

Jan 28, 2019

NBlog Jan 28 - creative technical writing


"On Writing and Reviewing ..." is a fairly lengthy piece written for EDPACS (the EDP Audit, Control, and Security Newsletter) by Endre Bihari. 

Endre discusses the creative process of writing and reviewing articles, academic papers in particular although the same principles apply more widely - security awareness briefings, for example, or training course notes. Articles for industry journals too. Even scripts for webcasts and seminars etc. Perhaps even blogs.

Although Endre's style is verbose and the language quite complex in places, I find his succinct bullet-point advice to reviewers more accessible; for example, on the conclusion section he recommends asking:
  • Are there surprises? Is new material produced?
  • How do the results the writer arrived at tie back to the purpose of the paper?
  • Is there a logical flow from the body of the paper to the conclusion?
  • What are the implications for further study and practice?
  • Are there limitations in the paper the reader might want to investigate? Are they pointed at sufficiently?
  • Does the writing feel “finished” at the end of the conclusion?
  • Is the reader engaged until the end?
  • How does the writer prompt the reader to continue the creative process?
I particularly like the way Endre emphasizes the creative side of communicating effectively. Even formal academic papers can be treated as creative writing. In fact, most would benefit from a more approachable, readable style. 

Interestingly, Endre points out that the author, reviewer and reader are key parties to the communication, with a brief mention of the editor responsible for managing the overall creative process. Good point!

Had I been asked to review Endre's paper, I might have suggested consolidating the bullet points into a checklist, perhaps as an appendix or a distinct version of his paper. Outside academia, the world is increasingly operating on Internet time due, largely, to the tsunami of information assaulting us all. Some of us want to get straight to the point first, then, if our interest has been piqued, perhaps explore in more detail from there - which suggests the idea of layering the writing: succinct and direct at first, with successive layers expanding on the depth. [Endre does discuss the abstract (or summary, executive summary, precis, outline or whatever), but I'm talking here about layering the entire article.]

Another suggestion I'd have made is to incorporate diagrams and figures, in other words using graphic images to supplement or replace the words. A key reason is that many of us 'think in pictures': we find it easier to grasp concepts that are literally drawn out for us rather than (just) written about. There is an art to designing and producing good graphics, though, requiring a set of competencies or aptitudes distinct from writing. 

Graphics are especially beneficial for technical documentation including security awareness materials, such as the NoticeBored seminar presentations and accompanying briefing papers. We incorporate a lot of graphics such as:
  • Screen-shots showing web pages or application screens such as security configuration options;
  • Graphs - pie-charts, bar-charts, line-charts, spider or radar diagrams etc. depending on the nature of the data;
  • Mind-maps separating the topic into key areas, sometimes pointing out key aspects, conceptual links and common factors;
  • Process flow charts;
  • Informational and motivational messages with eye-catching photographic images;
  • Conceptual diagrams, often mistakenly called 'models' [the models are what the diagrams attempt to portray: the diagrams are simply representational];
  • Other diagrams and images, sometimes annotated and often presented carefully to emphasize certain aspects.
Also, by the way, we use buttons, text boxes, colors and various other graphic devices to pep-up our pieces, for example turning plain (= dull!) bullet point lists into structured figures like this slide plucked from next month's management-level security awareness and training seminar on "Mistakes":

[Image: the "Mistakes" seminar slide referred to above]

So, depending on its intended purpose and audience, a graphical version of Endre's paper might have been better for some readers, supplementing the published version. At least, that's my take on it, as a reviewer and tech author by day. YMMV


Jan 27, 2019

NBlog Jan 27 - streaming awareness content

As the materials fall into place for "Mistakes", our next security awareness module, it's interesting to see how the three content streams have diverged:

  • For workers in general, the materials emphasize making efforts to avoid or at least reduce the number of mistakes involving information such as spotting and self-correcting typos and other simple errors.
  • For managers, there are strategic, governance and information risk management aspects to this topic, with policies and metrics etc.
  • For professionals and specialists, error-trapping, error-correction and similar controls are of particular interest (see the sketch at the end of this post).
The 'workers' audience includes the other two, since managers and pros also work (quite hard, usually!), while professional/specialist managers (such as Information Risk and Security Managers) belong to all three audiences. In other words, according to someone's position or role in the organization, there are several potentially relevant aspects to the topic.

That's what we mean by 'streaming'. It's not (just) about delivering content via streaming media: the audiences matter.
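
To give a flavour of the professionals' stream, here is a classic error-correction control in miniature: a Hamming(7,4) code, which not only detects a single flipped bit but repairs it automatically. A textbook illustration rather than an extract from the module:

```python
def hamming74_encode(data):
    """Encode 4 data bits as 7 bits; positions 1, 2 and 4 hold parity."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4  # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck parity 1
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck parity 2
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck parity 4
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-based position of any error
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[3] ^= 1                   # simulate a one-bit corruption
print(hamming74_decode(codeword))  # [1, 0, 1, 1] - silently repaired
```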

Jan 25, 2019

NBlog Jan 25 - cyber risks in context

The World Economic Forum's latest Global Risks Report includes the following Probability Impact Graphic (yellow highlighting added):

[Figure: the report's Global Risks Landscape probability-impact graphic, with the cyber-related risks highlighted in yellow]

So "cyber-attacks" are ranked in the the high-risk zone similar to "natural disasters", while "data fraud or theft" and "critical information infrastructure breakdown" are close-by. I find that quite remarkable: according to the survey, people are almost as concerned about information or IT security failures as they are about the increasingly extreme 'weather bombs' and natural disasters precipitated by climate change.   

The report also includes a forward-looking view of changing risks, including this level-headed assessment of the potential impact of quantum computing on present-day cryptography:
"When the huge resources being devoted to quantum research lead to large-scale quantum computing, many of the tools that form the basis of current digital cryptography will be rendered obsolete. Public key algorithms, in particular, will be effortlessly crackable. Quantum also promises new modes of encryption, but by the time new protections have been put in place many secrets may already have been lost to prying criminals, states and competitors. A collapse of cryptography would take with it much of the scaffolding of digital life. These technologies are at the root of online authentication, trust and even personal identity. They keep secrets—from sensitive personal information to confidential corporate and state data—safe. And they keep fundamental services running, from email communication to banking and commerce. If all this breaks down, the disruption and the cost could be massive. As the prospect of quantum code-breaking looms closer, a transition to new alternatives— such as lattice-based and hash-based cryptography—will gather pace. Some may even revert to low-tech solutions, taking sensitive information offline and relying on in-person exchanges. But historical data will be vulnerable too. If I steal your conventionally encrypted data now, I can bide my time until quantum advances help me to access it, regardless of any stronger precautions you subsequently put in place."
I distinctly remember raising this in a bank's risk workshop thirteen years ago. At the time, the risk was considered high impact but low probability: as the technology advances, the probability is increasing while, at the same time, so is the potential impact since we increasingly depend on cryptography. I wonder if the bank did anything about it, or merely dismissed it as 'Just another paranoid consultant's ramblings'?
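
If 'hash-based cryptography' sounds exotic, the core idea is surprisingly simple. Below is a toy Lamport one-time signature in Python - purely an educational sketch, not a production scheme (real post-quantum candidates are far more sophisticated, and a Lamport key must only ever sign one message). Its security rests on the hash function's preimage resistance, which quantum computers are expected to weaken far less than they weaken today's public-key algorithms:

```python
import hashlib
import secrets

def keygen():
    """Private key: 256 pairs of random secrets. Public key: their hashes."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def message_bits(message):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    """Reveal one secret from each pair, chosen by the digest bits."""
    return [sk[i][b] for i, b in enumerate(message_bits(message))]

def verify(message, sig, pk):
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(message_bits(message)))

sk, pk = keygen()
sig = sign(b"transfer approved", sk)
print(verify(b"transfer approved", sig, pk))  # True
print(verify(b"transfer denied", sig, pk))    # False - signature fails
```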

Jan 23, 2019

NBlog Jan 23 - infosec policies rarer than breaches

I'm in shock.  While studying a security survey report, my eye was caught by the title page:

[Image: the survey report's title page, listing its three headline findings]

Specifically, the last bullet point is shocking: the survey found that less than a third of UK organizations have "a formal cyber security policy or policies". That seems very strange given the preceding two bullet points: firstly, that more than a third have suffered "a cyber security breach or attack in the last 12 months" (so they can hardly deny that the risk is genuine), and secondly, that a majority claim "cyber security is a high priority for their organisation's senior management" (and yet they don't even bother setting policies??).

Even without those preceding bullets, the third one seems very strange - so strange, in fact, that I'm left wondering whether there was a mistake in the survey report (e.g. a data, analytical or printing error), or in the associated questions (e.g. the questions may have been badly phrased), or in my understanding of the finding as presented. In my limited first-hand experience (with rather fewer than the ~2,000 UK organizations surveyed), most have information security-related policies in place today ... but perhaps that's exactly the point: they may have 'infosec policies' but not 'cybersec policies' as such. Were the survey questions in this area worded too explicitly or interpreted too precisely? Was 'cyber security' even defined for respondents, or 'policy' for that matter? Or is it that, being an infosec professional, I'm more likely to interact with organizations that have a clue about infosec, hence my sample is biased?

Thankfully, a little digging led me to the excellent technical annex with very useful details about the sampling and survey methods. Aside from some doubt about the way different sizes of organizations were sampled, the approach looks good to me - writing as a former research scientist, latterly an infosec pro, neither a statistician nor a surveyor by profession. 
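
As a quick sanity check on headline findings like these, the statistical margin of error is easy to estimate. Here is a sketch taking the ~2,000 organizations mentioned above as an approximate sample size - an assumption for illustration; the annex has the real figures:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the ~95% confidence interval for a survey proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.32  # e.g. 'less than a third have a formal policy'
n = 2000  # ASSUMED sample size - see the technical annex
print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # about 32% +/- 2.0%
```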

Interviewers had access to a glossary defining a few potentially confusing terms, including cyber security:
"Cyber security includes any processes, practices or technologies that organisations have in place to secure their networks, computers, programs or the data they hold from damage, attack or unauthorised access." 
Nice! That's one of the most lucid definitions I've seen, worthy of inclusion in the NoticeBored glossary. It is only concerned with "damage, attack or unauthorised access" to "networks, computers, programs or the data they hold" rather than information risk and security as a whole, but still it is quite wide in scope. It is not just about hacks via the Internet by outsiders, one of several narrow interpretations in circulation. Nor is it purely about technical or technological security controls.

"Breach" was not defined though. Several survey questions used the phrase "breach or attack", implying that a breach is not an attack, so what is it? Your guess is as good as mine, or the interviewers' and the interviewees'!

Overall, the survey was well designed, competently conducted by trustworthy organizations, and hence the results are sound. Shocking, but sound.

I surmise that my shock relates to a mistake on my part. I assumed that most organizations had policies in this area. As to why roughly two thirds of them don't, one can only guess since the survey didn't explore that aspect, at least not directly. Given my patent lack of expertise in this area, I won't even hazard a guess. Maybe you are willing to give it a go?

Blog comments are open. Feedback is always welcome.