Welcome to NBlog, the NoticeBored blog

Bored of the same old same old? Here's something a bit different.

Mar 21, 2019

NBlog March 21 - overcoming inertia

Yesterday I wrote about a five-part strategy to increase the number and quality of incident reports. The fifth part involves making both staff and management vigilant - on the alert for trouble.

There is an obvious link here to the ongoing security awareness and training activities, pointing out and explaining the wide variety of threats that people should know about. Thanks to this month's NoticeBored content on malware, for instance, workers should be in a better position to spot suspicious emails and other situations in which they are at high risk of picking up malware infections. Furthermore, they ought to know what to do when they spot threats - avoiding risky activities (e.g. not opening dodgy email attachments or links) and reporting them.

In April we have the opportunity to take that a step further. What could or should the organization do to empower (facilitate and encourage) alert workers to report the malware threats and other concerns that they spot? What's the best way to overcome the natural reluctance to speak up, making 'Keep calm and carry on' seem like the easy option?

There's more to that issue than meets the eye ... making it an excellent open-ended poser to raise and discuss as a group during April's awareness seminars. It brings up issues such as:
  • Trust and respect - reporters believing that their incident reports will be taken seriously and in good faith, and recipients trusting that the reporters have a genuine basis for reporting;
  • Reasonable expectations concerning the activities to investigate and address reported incidents, following established processes;
  • Barriers - the need to overcome inertia and actively encourage, not just facilitate, incident reporting.
In the speaker notes for April's management seminar and in the accompanying management briefing, we will be raising a few issues along those lines but our aim is to prompt or kick-start the discussion in the particular context of a specific customer organization, not to spoon-feed them with the whole nine yards. Each of our lovely customers is unique in terms of their business situations - their industries, locations, cultures, maturity levels, objectives, risks and so on. They got wherever they are today by their own special route, and where they are heading tomorrow is down to them. We believe incident reporting is probably a valuable part of their journey but exactly what part it plays we can't say: they need to figure that out for themselves.

Providing valuable, informative and stimulating information security awareness and training content for a wide range of customers is an 'interesting' challenge. It's the reason we deliver fully-customizable content (mostly MS Office files that customers can adapt to suit their circumstances) and try hard not to impose solutions (e.g. our awareness posters are designed to intrigue rather than tell). That said, information risk and security is clearly our passion and we make no bones about it. We are evangelical about this stuff, keen to spread the word and fire people up. It's what we do.

Mar 20, 2019

NBlog March 20 - a big win for security awareness

Working on the management seminar slide-deck over the past couple of days, we've developed and documented a coherent five-part strategy for improving both the speed and the accuracy of incident reporting.

The strategy mostly involves changing the motivations and behaviors of both staff and management, possibly with some IT systems and metrics changes where appropriate to support the objectives.

Elaborating on the background and those objectives explains what the strategy is intended to achieve: the slides and notes justify the approach in business terms, in effect outlining a business case. It's generic, of course, but providing it in the form of a management seminar plus supporting notes and briefings encourages NoticeBored subscribers to engage their managers in a discussion around the proposal, hopefully leading to consensus and agreement to proceed, one way or another.

The nice thing about this is that it can't really fail: the very act of management considering and discussing the proposal itself drives the improvements we are suggesting in a general manner, even if the decision is made not to proceed with the specific changes proposed. If the response from management is more favorable, the outcome will no doubt be some version of the strategy customized to suit the specific organizational context and needs, plus management's commitment to see it through.

Either way, that's a win for security awareness!

Mar 17, 2019

NBlog March 17 - cat-skinning

Incident reporting is a key objective of April's NoticeBored module. More specifically, we'd like workers to report information security matters promptly. 

So how might we achieve that through the awareness and training materials? Possible approaches include:
  1. Tell them to report incidents. Instruct them. Give them a direct order.
  2. Warn them about not doing it. Perhaps threaten some form of penalty if they don't.
  3. Convince them that it is in the organization's interests for workers to report stuff. Persuade them of the value.
  4. Convince workers that it is in their own best interest to report stuff. Persuade them.
  5. Explain the reporting requirement (e.g. what kinds of things should they report, and how?) and encourage them to do so.
  6. Make reporting incidents 'the easy option'.
  7. Reward people for reporting incidents.
  8. Something else? Trick them? Goad them? Follow up on those who did not report stuff promptly, asking about their reasons?
Having considered all of them, we'll combine a selection of these approaches in the awareness content and the train-the-trainer guide.

In the staff seminar and staff briefing, for instance, the line we're taking is to describe everyday situations where reporting incidents directly benefits the reporter (approach #4 in the list). Having seeded the idea in the personal context, we'll make the connection to the business context (#3) and expand a little on what ought to be reported (#5) ... and that's pretty much it for the general audience. 

For managers, there is mileage in #1 (policies and procedures) and #7 (an incentive scheme?) ... and #8 in the sense that we are only suggesting approaches, leaving NoticeBored subscribers to interpret or adapt them as they wish. Even #2 might be necessary in some organizations, although it is rather negative compared to the alternatives. 

For professionals, #6 hints at designing reporting systems and processes for ease of use, encouraging people to report stuff ... and, where appropriate, automatic reporting if specific criteria are met, which takes the awareness materials into another potentially interesting area. If the professionals are prompted at least to think about the issue, our job is done.
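
To make #6 and the automation idea concrete, here's a minimal sketch of rule-driven automatic reporting - the report gets filed the moment defined criteria are met, so nobody has to notice and speak up first. The event type, threshold and report_incident() hook are invented for illustration, not features of any particular product:

    # Hypothetical sketch only: auto-file an incident report when
    # predefined criteria are met.
    FAILED_LOGIN_THRESHOLD = 5   # assumed tolerance before auto-reporting

    def report_incident(summary: str, details: dict) -> None:
        # Stand-in for whatever reporting channel the organization uses
        # (ticketing system, SIEM alert, email to the security team ...)
        print(f"INCIDENT REPORT: {summary} :: {details}")

    def check_login_events(failed_logins_by_user: dict) -> None:
        # Report any account exceeding the failed-login tolerance
        for user, failures in failed_logins_by_user.items():
            if failures >= FAILED_LOGIN_THRESHOLD:
                report_incident("Possible brute-force attempt",
                                {"user": user, "failed_logins": failures})

    check_login_events({"alice": 2, "bob": 7})   # bob triggers a report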

Mandatory reporting of incidents to third parties is a distinct but important issue, especially for management. The privacy breach reporting deadline under GDPR (a topical example) is a very tough challenge for some organizations, requiring substantial changes in their approach to internal incident reporting, escalation and external reporting - and, more generally, in the attitudes of those involved, making this a cultural issue. 
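
For a sense of how tight such deadlines are: GDPR Article 33 requires notifying the supervisory authority within 72 hours of the organization becoming aware of a personal data breach. The escalation arithmetic is trivial - it's the internal reporting that feeds it that has to be quick. A throwaway sketch, with an invented timestamp:

    from datetime import datetime, timedelta

    GDPR_WINDOW = timedelta(hours=72)   # Article 33 notification window

    def notify_by(became_aware: datetime) -> datetime:
        return became_aware + GDPR_WINDOW

    aware = datetime(2019, 3, 15, 9, 30)   # example value only
    print("Regulator must be notified by", notify_by(aware))
    # Regulator must be notified by 2019-03-18 09:30:00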

Mar 16, 2019

NBlog March 16 - terrorism in NZ

Last evening I turned on the TV to veg-out at the end of a busy week. Instead of my favourite NZ comedy quiz show, both main national channels were looping endlessly with news of the terrorist incident in Christchurch. Well I say 'news': mostly it was lame interviews with people tenuously connected to Christchurch or the Muslim community in NZ, and fumbling interviewers seemingly trying to fill air-time. Ticker-tape banners across the bottom of the screen, ALL IN CAPS, kept repeating the same few messages about the PM mentioning terrorism, yet neglected to say what had actually happened. I managed to piece together a sketchy outline of the incident before eventually giving up. Too much effort for a Friday night.

I gather around 50 people died in yesterday's event. Also yesterday, about 90 other people died; another ~90 will die today, and every day, on average, according to the official government statistics:

[chart: official government statistics on daily deaths in New Zealand]

This year, ~6,000 Kiwis will die of heart disease, and between 300 and 400 of us will die on the roads.  
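
A back-of-the-envelope check, using only the rough figures quoted in this post (not official statistics):

    # Rough arithmetic with the approximate numbers above
    daily_deaths = 90                     # ~90 deaths per day, all causes
    annual_deaths = daily_deaths * 365    # ~32,850 per year
    attack_deaths = 50

    print(f"~{annual_deaths:,} deaths per year from all causes")
    print(f"Attack as a share of one year's deaths: {attack_deaths / annual_deaths:.2%}")
    # ~32,850 deaths per year from all causes
    # Attack as a share of one year's deaths: 0.15%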

Against that backdrop, deaths due to terrorism do not even feature in the stats, so here I'll give you a very rough idea of where we stand:

[chart: deaths due to terrorism relative to other leading causes]

Don't get me wrong, it is tragic that ~50 people died in the incident yesterday and of course I regret that anyone died. But get real. The media have, as usual, blown it out of all proportion, and turned a relatively minor incident into an enormous drop-everything disaster. 

So what is it about 'terrorism' that sends the media - and it seems the entire population - into such a frenzy? Why is 'terrorism' so newsworthy? Why is it reported so badly? Who benefits from scaring the general population in this way?

Oh, hang on, the clue is in the name. Terrorism only works if we are terrified.

This looks to me like yet another example of 'outrage', a fascinating social phenomenon involving an emotional rather than rational response, amplified by the news and social media with positive feedback leading to a runaway situation. Here I am providing a little negative feedback to redress the balance but I'm sure I will be criticised for having the temerity to even express this. And that, to me, is terrorism of a different kind - information terrorism.

Mar 14, 2019

NBlog March 14 - carving up the policy pie


Today being Pi day 2019, think of the organization's suite of policies as a delicious pie with numerous ingredients, maybe a crunchy crust and toppings. Whether it's an award winning blue cheese and steak pie from my local baker, or a pecan pie with whipped cream and honey, the issue I'm circling around is how to slice up the pie. Are we going for symmetric segments, chords or layers? OK, enough of the pi-puns already, today I'm heading off at a tangent, prompted by an ongoing discussion around policies on the ISO27k Forum - specifically a thread about policy compliance.

Last month I blogged about policy management. Today I'll explore the policy management process and governance in more depth in the context of information risk and security or cybersecurity if you will.

In my experience, managers who are reluctant or unable to understand the [scary cyber] policy content stick to the bits they can do i.e. the formalities of 'policy approval' ... and that's about it. They leave the experts to write the guts of the policy, and even take their lead on whether there ought to be a policy at all, plus what the actual policy position should be. I rather suspect some don't even properly read and understand the policies they are asked to approve, not that they'd ever admit it!

The experts, in turn, naturally concentrate on the bits they are most comfortable with, namely writing that [cyber] content. Competent and experienced policy authors are well aware of the potential implications of [cyber] policies in their areas of specialty, so a lot of their effort goes into the fine details, crafting the specific wording to achieve [their view of] the intended effect with the least amount of collateral damage: they are busy down in the weeds of the standards and procedures, thinking especially about implementation issues and practicalities rather than true policies. For some of them anyway, everything else is dismissed as 'mere formalities'. 

Incompetent and inexperienced policy authors - well, they just kind of have a go at it in the hope that either it's good enough or maybe someone else will sort it out. Mostly they don't even appreciate the issues I'm discussing. Those dreadful policies written in pseudo-legal language are a bit of a giveaway, plus the ones that are literally unworkable, half-baked, sometimes unreadable and usually unhelpful. Occasionally worse than useless. 

Many experts and managers address each policy independently as if it exists in a vacuum, potentially leading to serious issues down the road such as direct conflicts with other policies and directives, perhaps even laws, regulations, strategies, contractual commitments, statements of intent, corporate values and so forth. Pity the poor worker instructed to comply with everything! The underlying issue is that the policies, procedures, directives, laws etc. form a complex and dynamic multidimensional matrix including but stretching far beyond the specific subject area of any one: they should all support and complement each other with few overlaps and no conflicts or gaps but good luck to anyone trying to achieve that in practice! Simply locating and mapping them all would be a job in itself, let alone consistently managing the entire suite as a coherent whole. 

So, in practice, organizations normally structure their policies into clusters around business departments such as finance, IT and HR. If we're lucky, the policies use templates making them reasonably consistent in style and tone, look and feel, across all areas, and hopefully consistent in content within each area ... but that enterprise-wide consistency and integration of the entire suite is almost as rare as trustworthy politicians. 

That, to me, smells very much like a governance issue. Where is the high-level oversight, vision and direction? What kind of pie is it and how should it be sliced? Should 'cyber' policies (whatever that means) be part of the IT domain, or risk, or [information or IT] security, or assurance ... or should they form another distinct cluster? Who is going to deal with all those boundaries and interfaces, potential conflicts and overlaps? And how, in fact? 

But wait, there's more! Re the process, have you ever seen one of those, in practice - an actual, designed, documented and operational Policy Management Process? They do exist but I suspect only in mature, strongly ISO 9000-driven quality assurance cultures such as aerospace, or compliance-driven cultures such as finance, or highly bureaucratic organizations such as governments. Most organizations just seem to muddle through, essentially making things up as they go along. As auditors, we consider ourselves fortunate to find the basics such as identified policy owners and issue/approval status with a date! Refinements such as version numbers, defined review cycles, and systematic review processes, are sheer luxuries. As to proactively managing the entirety of the policy lifecycle from concept through to retirement, nah forgeddabahtit! 
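
For illustration, here's roughly what those basics and refinements might look like as lifecycle metadata tracked for each policy. This is a sketch only - the field names are my assumptions, not drawn from any standard or product:

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum
    from typing import Optional

    class Status(Enum):
        DRAFT = "draft"
        APPROVED = "approved"
        RETIRED = "retired"   # explicitly retiring superseded policies

    @dataclass
    class Policy:
        title: str
        owner: str                    # identified policy owner
        version: str                  # version number
        status: Status                # issue/approval status ...
        approved_on: Optional[date]   # ... with a date
        next_review: Optional[date]   # defined review cycle

    aup = Policy("Acceptable Use Policy", "CISO", "2.1",
                 Status.APPROVED, date(2019, 1, 15), date(2020, 1, 15))
    print(aup.title, aup.version, aup.status.value, aup.next_review)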

Compliance is an example of something that ought to be addressed in the policy management process, ideally leading to the compliance aspects being designed and documented in the policies themselves, then supported at implementation time by associated awareness and training, metrics and activities to both enforce and reinforce compliance. Again, in practice, we're lucky if there is any real effort to 'implement' new policies: it's often an afterthought.

Finally, there's the time dimension: I just mentioned process maturity and policy lifecycle, but that's not all. The requirements and the organizational context are also dynamic. Laws, regs, contractual terms, standards and societal norms frequently change, sometimes quite sharply and dramatically (GDPR for a recent example) but usually more subtly. Statutes are relatively stable but the way they are interpreted and used in practice ('case law') evolves, especially early and late in their lifecycles - a bathtub curve. Various implementation challenges and incidents within the organization quite often lead to calls to 'update the policies and procedures', whether that's amending or drafting (seldom explicitly withdrawing or retiring failed or superseded policies!), plus there's the constant ebb and flow of new/amended policies (and strategies and objectives and ...) throughout the business - a version of the butterfly effect from chaos theory. And of course the people change. We come and go. We each have our interests and concerns, our blind spots and hot buttons. 

Bottom line: it's a mess because of those complications and dynamics. You may feel I'm over-complicating matters and yes maybe I am for the purposes of drawing attention to the issues ... but then I've been doing this stuff for decades, often stumbling across and trying to deal with similar issues in various organizations along the way. I see patterns. YMMV. 

I'm not sure these issues are even solvable but I believe that, as professionals, we could and should do better. This is the kind of thing that ISO27k could get further into, providing succinct, generic advice based on (I guess) ISO 9000 and governance practices. 

There's still more to say on this - another time. Meanwhile, I must press on with the awareness and training materials on 'spotting incidents'.

Mar 12, 2019

NBlog March 12 - pragmatic information risk management

Over the past three or four decades, the information risk and security management profession has moved slowly from absolute security (also known as "best practices") to relative security (aka "good practices" or "generally-accepted security") such as ISO27k.

Now as we totter into the next phase we find ourselves navigating our way through pragmatic security (aka "good enough"). The idea, in a nutshell, is to satisfy local information risk management requirements (mostly internal organizational/business-related, some externally imposed including social/societal norms) using a practicable, workable assortment of security controls where appropriate and necessary, plus other risk treatments including risk acceptance. 

The very notion of accepting risks is a struggle for those of us in the field with high standards of integrity and professionalism. Seeing the dangers in even the smallest chinks in our armor, we expect and often demand more. It could be argued that we are expected to push for high ideals but, in practice, at some point we have no choice but to acknowledge reality and make the best of the situation before us - or resign, which achieves little except to lamely register our extreme displeasure.

Speaking personally, my strategy for backing-off the pressure and accepting "good enough" security involves Business Continuity Management: I'll endorse incomplete, flawed and (to me) shoddy information security as being "good enough" IF management is willing to pay enough attention and invest sufficiently in BCM just in case unmitigated risks eventuate. 

That little bargain with management has two nice bonuses:
  1. Determining the relative criticality of various business processes, IT systems, business units, departments, teams, relationships, projects, initiatives etc. to the organization involves understanding the business in some depth, leading to a better appreciation of the associated information risks. Provided it is done well, the Business Impact Assessment part of BCM is sheer gold: it forces management to clarify, rationalize and prioritize ... which gives me a much tighter steer on where to push harder or back off the pressure. If we all agree that situation A is more valuable or important or critical to the organization than B, then I can readily justify (both to myself and to management, the auditors and other stakeholders) mitigating the risks in situation B to a lesser extent than for A. That's relative security in a form that makes sense and works for me. It gives me the rationale to accept imperfections.
  2. BCM (as I do it!) involves investing in appropriate resilience, recovery and contingency measures. The resilience part supports information security in a very general yet valuable way: it means not compromising too far on the preventive controls, ensuring they are sufficiently robust not to fall over like dominoes at the first whiff of trouble. The recovery part similarly involves detecting and responding reasonably effectively to incidents, hence I still have the mandate to maintain those areas too. Contingency adds a further element of preparing to deal with the unexpected, including information risks that weren't even foreseen, plus those that were in fact wrongly evaluated and only partially mitigated. Contingency thinking leads to flexible arrangements such as empowerment, multi-skilling, team working and broad capability development with numerous business benefits, adding to those from security, resilience and recovery.

My personal career-survival strategy also involves passing the buck, quite deliberately and explicitly. I value the whole information ownership thing, in particular the notion that whoever has the most to lose (or indeed gain) if information risks eventuate and incidents occur should be the one to determine and allocate resources for the risk treatments required. For me, it comes back to the oft-misunderstood distinction between accountability (being held to account for decisions, actions and inactions by some authority) and responsibility (being tasked with something, making the best of available resources). If an information owner - typically a senior manager for the department or business unit that most clearly has an interest in the information - is willing to live with greater information risks than I personally would feel comfortable accepting, and hence is unwilling to invest in even stronger information security, then fine: I'll help firstly in the identification and evaluation of information risks, and secondly by squeezing the most value I can from the available resources. 

At the end of the day, if it turns out that's not enough to avoid incidents, well too bad. Sorry it all turned to custard but my hands were tied. I'm only accountable for my part in the mess. Most of the grief falls to senior management, specifically the information owners. Now, let's learn the lessons here and make sure it doesn't happen again, eh?

So that's where we are at the moment but where next? Hmm, that's something interesting to mull over while I feed the animals and get my head in gear for the work-day ahead, writing security awareness and training content on incident detection.

I'd love to hear your thoughts on where we've come from, where we are now and especially where we're heading. There's no rush though: on past performance we have, oooh, about 10 or 20 years to get to grips with pragmatic security!

Meanwhile, here are two stimulating backgrounders to read and contemplate: The Ware Report from Rand, and a very topical piece by Andrew Odlyzko.

Mar 8, 2019

NBlog March 8 - proofreading vs reading vs studying


In the course of sorting out the license formalities for a new customer, it occurred to me that there are several different ways of reading stuff:

  • Skimming or speed-reading barely gives your brain a chance to keep up with your eye as you quickly glance over or through something, getting the gist of it if you're lucky;
  • Proof-reading involves more or less ignoring the content or meaning of a piece, concentrating mostly on the spelling, grammar etc. with a keen eye for misteaks, specificaly
  • Studying is a more careful, thorough and in-depth process of reading and re-reading, contemplating the meaning, considering things and mulling-over the messages at various levels. In an academic setting, it involves considering the piece in relation to the broader field of study, taking account of concepts and considerations from other academics plus the reader's own experience that both support and counter the piece, the credibility of the author and his/her team and institution, the techniques and methods used, the implications and so forth. To an extent, it involves filling-in missing pieces, considering the things left unstated by the author and trying to fathom whether there is meaning in both the gaps and the fillings;
  • Plain reading could involve the other forms shown here, or it may refer to any of a still wider range of activities including personal variants - for example, I like to doodle while reading complex pieces in some depth, typically sketching a mind map of the key and subsidiary points to help fathom and navigate the structure. I add icons or scribble cryptic notes to myself about things that catch my beady eye, or stuff I ought to explore further, or anything surprising/counterintuitive (to me). I link related issues using lines or asterisks. I highlight important points. Sometimes I just make mental notes, and maybe blog about them when my thoughts crystallize...

On top of all that, there are many different forms of information to 'read', such as:


  • The written, typed or printed word on paper (of various kinds) and/or on screen (in various formats);
  • Diagrams, pictures and figures including those mind maps, sketches and icons I mentioned plus more formalized diagrammatic representations, mathematical graphs, graphics and infographics, conceptual diagrams or 'models', videos and animations, artistic representations etc.;
  • The spoken word - presentations, seminars, lectures, conversations and many more, often supported by written content with words and diagrams plus (just as important) body language and visual cues from the people involved and the vicinity (e.g. a formal job interview situation in a stark office is rather different to a coffee-time chin-wag in a busy cafe);
  • 'Situations' - it is possible to read situations in a much more general hand-waving sense, taking account of the broader context, history and implications, even if there are no words, diagrams or even expressed language;
  • Language styles: stilted, formal language, especially that containing obscure words, terms of art and narrowly-defined meanings, is clearly different to everyday language ... or tabloid journalism ... or songs ... or casual street chat ... or ...


... which (finally!) brings me to my point. Security awareness and training content can support any or all of the above - in fact, the NoticeBored materials do, quite deliberately. The reason is that our awareness and training content is not addressing an individual but a diverse group, a loose and mysterious collection of people in all sorts of situations. Although we identify three specific audiences (staff, management and professionals), that's really just for convenience to make sure we cover key perspectives: those are not exclusive groups (e.g. a professional manager is also 'just another employee', hence all three streams may be relevant), nor are they totally comprehensive (you, dear blog reader, are probably not yet a customer, maybe not even employed in the traditional sense, just a random person who stumbled across this piece or NBlog).

Stirring the pot still further, an individual reader may have a preferred way of reading stuff but the details will vary according to circumstances. We expect different things when reading a contract, a newspaper or a blog, and we read them differently. We might skim-read a heading on a piece and move on, or continue reading in more depth, or make a mental note to come back to it later when we have more time and are less tired and emotional. Some of us gravitate towards the index or contents listing, the headings and subheadings, the diagrams and figures, the summary ... or flick from chunk to chunk perhaps following hyperlinks ... or simply start at the very top and work our way systematically to the bitter end. 

Bottom line: it pays to consider the readers when composing and writing stuff, especially in respect of awareness content since reading is almost entirely optional. If we don't provide value and interest to our diverse audience, and on occasions evoke an emotional or visceral response as much as a change of heart or behavior, we're going nowhere. We've lost the plot.

Oh yes, the plot ... must dash: work to do on the 'detectability' security awareness materials for April.

Mar 6, 2019

NBlog March 6 - new topic: detectability

On the SecurityMetrics.org discussion forum, Walt Williams posed a question about the value of 'time and distance' measures in information security, leading to someone suggesting that 'speed of response' might be a useful metric. However, it's a bit tricky to define and measure: exactly when does an incident occur? What about the response? Assuming we can define them, do we time the start, the end, or some intermediate point, or perhaps even measure the ranges?

Next month in the NoticeBored security awareness and training program, we're exploring a new topic: 'incident detectability' concerning the ease and hence likelihood of detection of information security incidents. 

Incidents that are highly visible and obvious to all (e.g. a ransomware attack, at the point service is denied and the ransom demanded) are materially different from those that remain unrecognized for a long period, perhaps forever (e.g. a spyware attack), even if the two are otherwise similar (both using much the same remote-control Trojans). Detectability therefore might be a valuable third dimension to the classic Probability Impact Graphs for assessing and comparing risks. 
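
FMEA practitioners already do something similar: their risk priority number multiplies severity, occurrence and detection scores. A toy sketch along those lines, with invented 1-to-5 scales and scores, shows why the ransomware/spyware pair above would come out quite differently:

    # Toy sketch: detectability as a third scoring dimension alongside
    # probability and impact (cf. FMEA's risk priority number).
    def risk_score(probability: int, impact: int, detectability: int) -> int:
        # detectability: 1 = obvious to all ... 5 = may never be noticed
        return probability * impact * detectability

    ransomware = risk_score(probability=3, impact=4, detectability=1)
    spyware    = risk_score(probability=3, impact=4, detectability=5)
    print(ransomware, spyware)   # 12 vs 60: same P and I, very different priority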

However, that still leaves the question of how one might measure detectability. 

As is my wont, I'm leaning towards a subjective measure using a continuous scale along these lines:

[sketch: a subjective detectability scale running from 0% to 100%]

For the awareness module, we'll be defining four or five waypoints, indicators or scoring norms for each of several relevant criteria, helping users of the metric assess, compare and score whatever information risks or incidents they have in mind. 

You may have noticed the implicit 'detection time' element to detectability, ranging from infinity down to zero. That's a fairly simple concept and parameter to explain and discuss, but not so easy to determine or measure in, say, a risk workshop situation. In practice we prefer subjective or relative scales, reducing the measurement issue from "What is the probable detection time for incidents of type X?" to "Would type X incidents generally be detected before or after types Y and Z?" - in other words a classic bubble-sort or prioritization approach, with which managers generally are comfortable. The absolute value of a given point on the measurement scale is almost incidental, an optional outcome of the discussion and prioritization decisions made rather than an input or driver. What matters more is the overall pattern and spread of values, and even more important is the process of considering and discussing these matters in some depth. The journey trumps the destination.
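
Here's a toy illustration of that reduction, ranking incident types from pairwise 'before or after?' judgements rather than absolute detection times. The incident types and canned answers are invented; in a workshop, the comparisons come from the discussion itself:

    from functools import cmp_to_key

    # Invented pairwise judgements: -1 means the first type would
    # generally be detected before the second.
    judgements = {("ransomware", "insider fraud"): -1,
                  ("insider fraud", "spyware"): -1,
                  ("ransomware", "spyware"): -1}

    def detected_before(x: str, y: str) -> int:
        if (x, y) in judgements:
            return judgements[(x, y)]
        if (y, x) in judgements:
            return -judgements[(y, x)]
        return 0   # no opinion either way

    incidents = ["spyware", "insider fraud", "ransomware"]
    print(sorted(incidents, key=cmp_to_key(detected_before)))
    # ['ransomware', 'insider fraud', 'spyware'] - soonest-detected first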

To those who claim "It's not a metric if it doesn't have a unit of measurement!", I say "So what?  It's still a useful way to understand, compare and contrast risks ... which is more important in practice than satisfying some academic and frankly arbitrary and unhelpful definition!" As shown on the sketch, we normally do assign a range of values (percentages) to the scale for convenience (e.g. to facilitate the discussion and for recording outcomes) but the numeric values are only ever meant to be indicative and approximate. Scale linearity and scientific/mathematical precision don’t particularly matter in the risk context, especially as uncertainty is an inherent factor anyway. It's good enough for government work, as they say.

Finally, circling back, 'speed of response' could add yet another dimension to the risk assessment process, or more accurately the risk treatment part of risk management. I envisage a response-speed percentage scale (naturally), ranging from 'tectonic or never' up to 'instantaneous', with an implied pressure to speed up responses, especially to certain types of incident ... sparking an interesting and perhaps enlightening discussion about those types. "Regardless of what we are actually capable of doing at present, which kinds of incidents should we respond to most or least urgently, and why is that?" ... a discussion point that we'll be bringing out in the management materials for April. 

Mar 1, 2019

NBlog March 1 - malware awareness update 2019

Malware (malicious software) has been a concern for nearly five – yes five – decades. It’s an awareness topic worth updating annually for three key reasons:
  1. Malware is ubiquitous – it’s a threat we all face to some extent (even those of us who don’t own or use IT equipment rely on organizations that depend on it);
  2. Malware-related risks are changing – new malware is being actively developed and exploited all the time, while technical security controls inevitably lag behind;
  3. Security awareness is vital to prevent or avoid malware infections, and to recognize and respond promptly and effectively to those that almost inevitably occur.
Last year, we focused on crypto-currency-mining Trojans, and it was ransomware the year before that. Both remain of concern today. That’s the thing with malware: new forms expand the threat horizon. Much like the universe, it never seems to shrink.

Developing engaging and accessible awareness and training content on the current state of malware is quite a challenge. Malware is a complicated and dynamic field, a seething mass of issues that are hard to pin down in the first place, and awkward to describe in relatively simple and straightforward terms. 

However, so long as malware risks remain significant, we can’t afford to ignore them. Luckily, generic control measures such as workers’ vigilance, patching, backups, incident management and business continuity management are appropriate regardless of the particular incident scenarios that may unfold.  

Antivirus software is part of the solution – a major part, admittedly, necessary but not sufficient. That’s one of several awareness messages this year.

I'm especially pleased with the 12-page 'Malware encyclopedia' in the March materials. It turned out nicely, injecting a little humor into what might otherwise have been a desperately dull and depressing module.


Read more about the latest NoticeBored module and subscribe to the service, unless you already have this security awareness and training lark all sewn up, that is - everything fully up-to-date, employees enthralled, informed and entertained, highly vigilant, extremely supportive and generally free of malware issues ...  

Feb 26, 2019

NBlog Feb 25 - TL;DR vs More details please


A substantial part of our effort goes into generating worthwhile and engaging awareness and training materials for a wide range of people, some of whom are too busy, too uninterested or simply can't be bothered with lengthy pieces, whereas others enjoy and in some cases need the details. 

Focusing on a single infosec topic each month gives us the chance to address both ends of the scale. Both require useful information: the shorter stuff isn't simply a cut-down summary of the long. Each has to reflect the different needs of the intended audiences, which changes the focus and style as well as the length.

Personally, my preferred approach is to delve deep and work on the detailed stuff and conceptual diagrams/models etc. first, then pull out, gradually preparing the more succinct higher-level pieces ... but in practice we usually end up spiraling. Producing the more strategic stuff involves reviewing the models and reassessing perspectives ... or something. Anyway, as I draw out the key messages, I end up revisiting and revising the detailed stuff, and back around I go.

It is a spiral, though, not a circle because the monthly delivery deadline means eventually we have to call a halt. Often there are still loose ends, things we simply don't have the time to get into right now ... but it's not hard to park them for the next time we cover the same or a related topic - which hints at another part of our approach, namely creating a completely new awareness and training module, focused on one or more loose ends left dangling from previous topics.

Talking of which, next month we'll be working on "Spotting incidents" - the detection and initial notification part of incident management, specifically. Although we've covered incidents many times before, that will be a new angle. It was prompted by the thought that the probability and impacts of incidents do not fully describe the risks: incidents that remain undetected for long periods (perhaps indefinitely) are a particularly insidious concern. 'Detectability' is therefore another factor to take into account when assessing or evaluating information risks.


Feb 24, 2019

NBlog Feb 24 - how to challenge an audit finding

Although I wrote this in the context of ISO/IEC 27001 certification audits, it applies in other situations where there is a problem with something the auditors are reporting such as a misguided, out of scope or simply wrong audit finding.

Here are some possible strategies to consider:
  • Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
  • Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are definitely committing to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
  • Gather your own evidence to strengthen your case. For example:
    • If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
    • If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
    • If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
    • Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
  • Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
  • Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
  • Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
  • Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
  • Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
  • Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
  • Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
  • Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
  • Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
  • Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
  • Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
Although that's a long shopping list, I'm sure there are other possibilities including some combination of the above. The fact is that you have choices in how to handle such challenges: your knee-jerk response may not be ideal.

For bonus marks, you might even raise an incident report concerning the issue at hand, then handle it in the conventional manner through the incident management part of your ISMS. An adverse audit finding is, after all, a concern that needs to be addressed and resolved just like other information incidents. It is an information risk that has eventuated. You will probably need to fix whatever is broken, but first you need to assess and evaluate the incident report, then decide what (if anything) needs to be done about it. The process offers a more sensible, planned and rational response than jerking your knee. It's more business-like, more professional. I commend it to the house.

Feb 22, 2019

NBlog Feb 22 - classification versus tagging



I'm not happy with the idea of 'levels' in many contexts, including information classification schemes. The term 'level' implies a stepped progression in one dimension. Information risk and security is more nuanced or fine-grained than that, and multidimensional too.
The problems with 'levels' include:
  • Boundary/borderline cases, when decisions about which level is appropriate are arbitrary but the implications can be significant; 
  • Dynamics - something that is a medium level right now may turn into a high or a low at some future point, perhaps when a certain event occurs; 
  • Context e.g. determining the sensitivity of information for deliberate internal distribution is not the same as for unauthorized access, especially external leakage and legal discovery (think: internal email); 
  • Dependencies and linkages e.g. an individual data point has more value as part of a time sequence or data set ... 
  • ... and aggregation e.g. a structured and systematic compilation of public information aggregated from various sources can be sensitive; 
  • Differing perspectives, biases and prejudices, plus limited knowledge, misunderstandings, plain mistakes and secret agendas of those who classify stuff, almost inevitably bringing an element of subjectivity to the process despite the appearance of objectivity; 
  • And the implicit "We've classified it and [maybe] done something about securing it ... so we're done here. Next!". It's dismissive. 
The complexities are pretty obvious if you think about it, especially if you have been through the pain of developing and implementing a practical classification scheme. Take a blood pressure reading, for instance, or an annual report or a system security log. How would you classify them? Whatever your answer, I'm sure I can think of situations where those classifications are inappropriate. We might agree on the classification for a particular situation, hence a specific level or label might be appropriate right there, but information and situations are constantly changing, in general, hence in the real world the classification can be misleading and unhelpful. And if you insist on narrowing the classification criteria, we're moving away from the main advantage of classification which is to apply broadly similar risk treatments to each level. Ultimately, every item needs its own unique classification, so why bother?

Another issue with classification schemes is that they over-emphasize one aspect or feature of information - almost always that's confidentiality. What about integrity, availability, utility, value and so forth? I prefer a conceptually different approach using several tags or parameters rather than single classification 'levels'. A given item of information, or perhaps a collection of related items, might usefully be measured and tagged according to several parameters such as:
  • Sensitivity, confidentiality or privacy expectations; 
  • Source e.g. was it generated internally, found on the web, or supplied by a third party?; 
  • Trustworthiness, credibility and authenticity - could it have been faked?; 
  • Accuracy and precision which matters for some applications, quite a lot really; 
  • Criticality for the business, safety, stakeholders, the world ...; 
  • Timeliness or freshness, age and history, hinting at the information lifecycle; 
  • Extent of distribution, whether known and authorized or not; 
  • Utility and value to various parties - not just the current or authorized possessors; 
  • Probability and impact of various incidents i.e. the information risks; 
  • Etc. 
The tags or parameters required depend on what needs to be done. If we're determining access rights, for instance, access-related tags are more relevant than the others. If we're worried about fraud and deception, those integrity aspects are of interest. In other words, there's no need to attempt to fully assess and tag or measure everything, right now: a more pragmatic approach (measuring and tagging whatever is needed for the job in hand) works fine.
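
As a sketch of how that might work, here's the blood pressure example again. The parameter names echo the list above; the values and tasks are invented:

    # Multi-parameter tagging instead of a single classification level
    bp_reading = {"sensitivity": "personal medical data",
                  "source": "generated internally",
                  "accuracy": "instrument-limited",
                  "criticality": "high for patient safety",
                  "freshness": "point-in-time, ages quickly"}

    TASK_TAGS = {"access decision": ["sensitivity", "criticality"],
                 "fraud check": ["source", "accuracy"]}

    def tags_for(task: str, info: dict) -> dict:
        # Pragmatic: assess only the parameters the job in hand needs,
        # rather than fully classifying everything up front
        return {k: info[k] for k in TASK_TAGS[task]}

    print(tags_for("access decision", bp_reading))
    # {'sensitivity': 'personal medical data', 'criticality': 'high for patient safety'}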

Within each parameter, you might consider the different tags or labels to represent levels but I'm more concerned with the broader concept of taking into account a number of relevant parameters in parallel, not just sensitivity or whatever. 

All that complexity can be hidden within Gary's Little World, handled internally within the information risk and security function and related colleagues. Beyond that in the wider organization, things get messy in practice but, generally speaking, people working routinely with information "just know" how important/valuable it is, what's important about it, and so on. They may express it in all sorts of ways (not just in words!), and that's fine. They may need a little guidance here and there but I'm not keen on classification as a method for managing information risk. It's too crude for me, except perhaps as a basic starting point. More useful is the process of getting people to think about this stuff and do whatever is appropriate under the circumstances. It's one of those situations where the journey is more valuable than the destination. The analysis generates understanding and insight which are more important than the 'level'.

Feb 21, 2019

NBlog Feb 21 - victimization as a policy matter


An interesting example of warped thinking from Amos Shapir in the latest RISKS-List newsletter:

"A common tactic of authoritarian regimes is to make laws which are next to impossible to abide by, then not enforce them. This creates a culture where it's perfectly acceptable to ignore such laws, yet the regime may use selective enforcement to punish dissenters -- since legally, everyone is delinquent."
Amos is talking (I believe) about national governments and laws but the same approach could be applied by authoritarian managers through corporate rules, including policies. Imagine, for instance, a security policy stating that all employees must use a secret password of at least 35 random characters: it would be unworkable in practice but potentially it could be used by management as an excuse to single-out, discipline and fire a particularly troublesome employee, while at the same time ignoring noncompliance by everyone else (including themselves, of course).
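
A quick entropy calculation shows just how unworkable that hypothetical rule would be:

    import math

    # 35 truly random characters drawn from the ~95 printable ASCII
    # characters - far beyond anything a human could memorize reliably
    length, alphabet = 35, 95
    print(f"{length * math.log2(alphabet):.0f} bits of entropy")   # ~230 bits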

It's not quite as straightforward as I've implied, though, since organizations have to work within the laws of the land, particularly employment laws designed to protect individual workers from rampant exploitation by authoritarian bosses. There may be a valid legal defense for workers sacked in such circumstances due to the general lack of enforcement of the policy and the reasonable assumption that the policy is not in force, regardless of any stated mandate or obligations to comply ... which in turn has implications for all corporate policies and other rules (procedures, work instructions, contracts and agreements): if they are not substantially and fairly enforced, they may not have a legal standing. 

[IANAL. This piece is probably wrong and/or inapplicable. It's a thought-provoker, not legal advice.]

Feb 20, 2019

NBlog Feb 20 - policy governance

Kaspersky blogged about security policies in the context of human factors making organizations vulnerable to malware:
"In many cases, policies are written in such a difficult way that they simply cannot be effectively absorbed by employees. Instead of communicating risks, dangers and good practices in clear and comprehensive instructions, businesses often give employees multipage documents that everyone signs but very few read – and even less understand."
That is just the tip of an iceberg. Lack of readability is just one of at least six reasons why corporate security policies are so often found lacking in practice:
  • Lack of scope: ‘security policies’ are typically restricted to IT/cyber security matters, leaving substantial gaps, especially in the wider aspects of information risk and security such as human factors, fraud, privacy, intellectual property and business continuity.
  • Lack of consistency: policies that were drafted by various people at various times for various reasons, and may have been updated later by others, tend to drift apart and become disjointed. It is not uncommon to find bald contradictions, gross discrepancies or conflicts. Security-related obligations or expectations are often scattered liberally across the organization, partly on the corporate intranet, partly embedded in employment contracts, employee handbooks, union rulebooks, printed on the back of staff/visitor passes and so on. 
  • Lack of awareness: policies are passive, formal and hence rather boring written documents - dust-magnets. They take some effort to find, read and understand. Unless they are accompanied by suitable standards, procedures, guidelines and other awareness materials, and supported by structured training, awareness and compliance activities to promote and bring them to life, employees can legitimately claim that they didn’t even know of their existence - which indeed they often do when facing disciplinary action. 
  • Lack of accountability: if it is unclear who owns the policies and to whom they apply, noncompliance is the almost inevitable outcome. This, in turn, makes it risky for the organization to discipline, sack or prosecute people for noncompliance, even if the awareness, compliance and enforcement mechanisms are in place. Do your policies have specific owners and explicit responsibilities, including their promotion through awareness and training? Are people - including managers - actually held to account for compliance failures and incidents?
  • Lack of compliance: policy compliance and enforcement activities tend to be minimalist, often little more than sporadic reviews and the occasional ticking-off. Circulating a curt reminder to staff shortly before the auditors arrive, or shortly after a security incident, is not uncommon. Policies that are simply not enforced for some reason are merely worthless, whereas those that are literally unenforceable (including those where strict compliance would be physically impossible or illegal) can be a liability: management believes they have the information risks covered while in reality they do not. Badly-written, disjointed and inconsistent security policies are literally worse than useless.
Many of these issues can be traced back to absent or inconsistent policy management processes. Policy ownership and purpose are often unclear. Even simple housekeeping activities such as version control and reviews are beyond many organizations, while policies generally lag well behind emerging issues.

That litany of issues and dysfunctional organizational practices stems from poor governance ... which intrigues me to the extent that I'm planning to write an article about it in conjunction with a colleague. He has similar views to me but brings a different perspective from working in the US healthcare industry. I'm looking forward to it.

Feb 14, 2019

NBlog Feb 14 - online lovers, offline scammers

Social engineering scams are all the rage, a point worth noting today of all days.

A Kiwi farmer literally lost the farm to a scammer he met and fell for online. 

Reading the news report, this was evidently a classic advance fee fraud or 419 scam that cost him a stunning $1.25m. 

This is not the first time I've heard about victims being drawn in by the scammers to the extent that they refuse to accept that they have been duped when it is pointed out to them. There's probably something in the biology of our brains that leads us astray - some sort of emotional hijack going on, bypassing the normal rational thought processes.

On a more positive note, the risks associated with online dating are reasonably well known and relatively straightforward to counter. And old-school offline dating is not risk free either. 

Relationships generally are a minefield ... but tread carefully and amazing things can happen. Be careful, be lucky.

Feb 9, 2019

NBlog Feb 9 - inform and motivate

The malware encyclopedia destined for inclusion in our next awareness module is coming along nicely ...

[screenshots: draft pages from the malware encyclopedia]

It's interesting to research and fun to write in an informative but more informal style than the glossary, with several decidedly tongue-in-cheek entries so far and a few graphics to break up the text.

I guess it will end up at about 20 pages, longer than usual for a general security awareness briefing but 100% on-topic. There's a lot to say about malware, being such a complex and constantly evolving threat. I hope the relaxed style draws readers in and makes them think more carefully about what they are doing without being too do-goody, too finger-wagging. Prompting changes of attitudes and behaviors is our aim, not just lecturing the troops. Awareness and training is pointless if it's not sufficiently motivational.

PS After trimming out the more obscure entries, it worked out at 11 pages plus the cover page.