Welcome to NBlog, the NoticeBored blog

The blogging will continue until morale improves

Oct 8, 2019

NBlog Oct 8 - 2020 vision

Over the weekend, I wrote about CISOs and ISMs preparing cunning strategies and requesting budgets/proposing investments.

During the remainder of 2019, we will be treated/subjected to a number of predictions about what's in store for information security in the year ahead, thanks to a preponderance of Mystic Megs with unsupervised access to the Interweb, gazing wistfully into their crystal balls and pontificating. 

As with horoscopes in the tabloid rags, some of their predix will be right on the button by sheer chance in the sense that, given an ample sufficiency of poo to throw at the wall, some of it will stick. A few more informed pundits, however, will be chucking stickier poo thanks to their experience and insight. 

Trouble is, how are we to distinguish the insightful few with sticky poo from the manifold plain or polished poo propellants?

Years ago, the solution involved tracking or looking back at prior predictions to assess how accurate the pundits were ... although, as with investments, past performance is not necessarily an accurate guide to the future. It's an indicator at best.

These days, the situation is trickier still thanks to the Intarweb, social media and the global information melting-pot that turns pretty much everything into a brown sticky malodorous mess. Independent, honest, experienced, reasonably accurate soothsayers find themselves swimming in an ocean inhabited by marketing whales, a few great whites and vast shoals of me-toos who grasp desperately at any passing thought like a drowning man clutches at a log, only to wring all the life out of it.

So, for what it's worth (almost every penny!), my advice is to consider the credentials of anyone claiming to know what's ahead. Do they know what they speak of? Do they have a clue? Are they usually about right? Do they follow the latest fads, spouting clouds of meaningless drivel from their blow-holes, or are they brave enough to buck the obvious trends, say-it-like-it-is and explain themselves straightforwardly?

And then temper everything with a large dose of good ol' common sense. If your organization is taking its first baby steps into the cloud, guess what: it lacks cloud experience, hence the more extreme cloudiness is likely to be riskier for you than, say, a company that is and has been cloud-first or cloud-everything for years already and knows what it's getting itself into. In other words, choose your battles. Build on your strengths, consider and address your weaknesses. By all means get creative and explore the cutting edge stuff ... but be wary of exposing your jugular to that glinting slicey-slicey sharpness.

Don't neglect your inner-circle of trustworthy advisors, the colleagues and contacts who have proven insightful or at least good listeners in the past ... which hints at a possible strategy for 2020: work hard on bolstering and extending your personal network, ready for your 2021 strategies, proposals and budget requests. The flip side of that ocean of pundits is that it's easier than ever to find potential partners and build relationships. Perhaps even the odd blogger making sense of this turbulent world.

Oct 6, 2019

NBlog Oct 6 - a dozen infosec strategies

This Sunday morning, further to my tips on planning for 2020, prompted by "5 disruptive trends transforming cybersecurity" and fueled by some fine Colombian (coffee not coke!), I've been contemplating information risk and security strategies. Here are a dozen strategic approaches to consider:
  1. Use risk to drive security. Instead of vainly hoping to address every risk, hammer the biggest ones, tap at the middling ones and let the little'uns fend for themselves (relying on general purpose controls such as incident and business continuity management, resilience etc.). 'Hammer the biggest' means going the extra mile for 'key' or 'critical' controls addressing 'key' or 'major' or 'bet the farm' risks, and implies substantial effort to identify and evaluate the risks, as well as actually dealing with them (see the sketch after this list).
  2. Make security processes as slick as possible, using automation, simplicity, repeatability etc. DevSecOps is an example of automating security to keep up/catch up with speeding cyclists. SecDevOps could be security attempting to lead the pack (good luck with that!).
  3. Develop security architectures - comprehensive, coherent, all-encompassing approaches, with solid foundations and building blocks that slot into place as the blueprint comes to life. Requires long term planning and coordination with other architectures and strategies for business, information, IT, risk, compliance, governance etc.
  4. Be business-driven. Let management govern, direct and control things, including cybersecurity, information security, risk and security, or whatever, to enable and deliver business objectives. Encourage and enable management to manage change both reactively and proactively. This strategy requires that management has a decent understanding of the risks and opportunities relating to information security, or at least is well-advised in that area (i.e. manage your managers!).
  5. Make do but improve systematically, in other words take a cold hard look at where you are now, identify the most urgent or serious issues and improvement opportunities, address them. Lather rinse repeat. This may be the only viable approach if management is not interested in being proactive in this area (which might be one of those issues worth tackling!).
  6. Use metrics - specifically, business- and risk-driven metrics - to identify and respond to pain points, trends, imbalances etc., ideally before they become issues. Requires a decent suite of relevant, trustworthy metrics, which implies clarity around the measurement objectives and methods. Also requires enough time to accumulate the data for trends analysis, and sound analysis (e.g. appropriate use of statistics). And beware surrogation.
  7. Employ 'good practices', such as ISO27k, NIST SP800, COBIT, CSA, OWASP and so on ... hinting at the practical issue of deciding which one/s to follow, and to what extent. Standards are reactive in nature, out of date by the time they are published but they generally provide a sound basis, and if used sensibly can be a useful shortcut to get basic frameworks (at least) in place. Not so useful, though, if compliance drives the organization rather than the business - another type of surrogation.
  8. Collaborate. Find and work with internal and external resources to get stuff done (implies shared goals). Maybe cloud-first or cloud-only makes perfect sense after all, for your organization - a current-day version of the old 'best of breed', 'best in class' or 'buy blue' mantras - so be sure information risk and security considerations are an integral part of the cloud adoption process. Exploit cloud security services: push security into the cloud.
  9. Focus and simplify. Stop expanding willy-nilly into the cloud without proper planning and preparation, including risk management. Develop an actual strategy, a clear map of the destination/s and routes. Prioritize resources. Find and employ the best people, methods, systems, standards, tools etc. for the most important jobs. Assemble high-performance teams, give them clear goals, motivate them and give them the space to do their thing (possibly within defined boundaries, possibly not).
  10. Fail small and often. Don't just anticipate failure, expect it. Recover. Learn. Improve. Try harder. Be experimental. Take (appropriate) risks. Invest unwisely. Default to "yes" rather than "no", ask "why not?" instead of "why?". Practice hard to become excellent at identifying and reacting to risks and opportunities of all kinds. Set things up to spot, flag and react to failures effectively and efficiently. Better still, learn from others' failures: gain without pain.
  11. Figure out and do whatever's best for your organization - perhaps some version or combination of the above or other things unique to your organization, its situation, resources, constraints and objectives. Innovate. Think much further into the future. Imagine! Master the topic. Come up with more creative/unconventional strategies, and evaluate them. Write better lists than this one. Share your thoughts through the comments.
  12. Accept defeat. Follow lamely rather than lead, or get by without a strategy. Pass the buck, exploit scapegoats. Let other suckers path-find. Scrabble desperately to implement the current so-called strategy. Hold the fort. Duck the issues. Keep your head down until your watch is over. Preserve the status quo. Do the least amount possible. Summon and wait for reinforcements. Retire or find another career. Use what little remains of your motivation and self-esteem to apply for jobs at more enlightened organizations. Up-skill. Retrain. Read more than just blogs. Think on. Good luck.
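
To make strategy 1 a little more concrete, here's a minimal sketch of risk-driven prioritization, assuming a simple likelihood-times-impact scoring model. The risks, scales and tier thresholds are entirely illustrative, not recommendations:

```python
# A minimal sketch of risk-driven prioritization: score each risk as
# likelihood x impact (both on illustrative 1-5 scales), then triage into
# hammer/tap/general-controls tiers. All names and thresholds are made up.

risks = [
    {"name": "Ransomware outbreak",      "likelihood": 4, "impact": 5},
    {"name": "Insider data theft",       "likelihood": 2, "impact": 4},
    {"name": "Laptop left on the train", "likelihood": 3, "impact": 2},
    {"name": "Website defacement",       "likelihood": 2, "impact": 1},
]

def tier(score: int) -> str:
    """Map a raw risk score onto a treatment tier."""
    if score >= 15:
        return "HAMMER: key/critical controls, substantial effort"
    if score >= 6:
        return "TAP: standard controls, routine attention"
    return "GENERAL: rely on incident management, continuity, resilience"

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = r["likelihood"] * r["impact"]
    print(f"{r['name']:28} score={score:2}  {tier(score)}")
```

Real risk evaluation is messier than this, of course, but the hammer/tap/leave-to-the-general-controls triage is the essence of it.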

Oct 4, 2019

NBlog Oct 4 - tips on planning for 2020

The Security Executive Council is a consultancy specializing in physical security for commercial organizations. Their latest newsletter led me to a nice little piece about business cases, including this: 
Brad Brekke, SEC emeritus faculty and former Vice President of Assets Protection and Corporate Security for Target Corporation, emphasizes that the business case must be built upon a deep understanding of the business and security's role and strategy within it. "I'd recommend you conduct this exercise: Study your business. Know how it operates, how it makes money, how it's set up, what its strategy is – for instance, is it a growth strategy, an expense-driven strategy, a service-driven strategy. Know the culture and risk tolerance of your organization and know the voice of its customer," says Brekke.
That approach makes sense for any substantial strategy, change or investment proposal. All organizations exist to achieve [business] objectives, so being clear about how a proposal supports or enables those [business] objectives is a no-brainer, right?

How to do that in practice, however, may not be entirely obvious, especially to specialists/professionals deeply immersed in particular fields such as information risk and security. Our worldview naturally revolves around our own little world. We perceive things in our own terms. We are inevitably biased towards certain aspects that interest and concern us, hence we inevitably emphasize them while downplaying, ignoring or failing even to notice others. 

That's true regardless of the specialism. For instance, HR pros naturally focus on people, sociology, human behaviour and so on. Finance pros focus on dollars and financial risks. IT pros focus on computing and tech. And, guess what, CISOs and ISMs have their focal points and blind-spots too.

The same is also true of other people with whom we interact at work, including those execs who will ultimately make the big decisions about our big proposals, plus assorted managers and [other] specialists beneath who advise and influence them. We all have our interests and prejudices, our personal agendas, hot-buttons and fear-factors. Despite the title, even "general managers" didn't mysteriously parachute-in to the role out of a clear blue sky: they worked their way through the educational system, the ranks and the University of Life, picking up the skills and experiences that shape their personalities today.

So, when proposing something, awareness of our own biases plus those of our audiences (for there are several) presents the opportunity to counteract them on both sides. 

The SEC piece, for instance, offers this advice:
Brekke also cautions security leaders not to undervalue the importance of storytelling. Each organization has a language that resonates with management. Consider the language of the brand and the language of the organization's business as you develop the story you will tell and as you make your business case. You may find it helpful to reframe some security language to better reflect business value. For instance, because one of Target's foundational goals was to focus on the experience of the customer, conversations about shoplifting became conversations about enabling the guest experience.
That's the no-brainer business-focused approach I mentioned earlier, and fair enough: it's not unreasonable to expect everyone in an organization to share a common interest in furthering the organization's business aims. At an overall level, being business-focused makes perfect sense. However, there's more to it in that 'the organization' is, in reality, an assortment of individuals with distinct personalities. 

So, I recommend a more granular, more mature approach. Rather than simply preparing and submitting a business-like proposal then expecting 'the organization' or 'management' to approve it, consider the individual people who will make the decisions, plus those who advise and influence them. Ideally, spend quality time with them during the drafting process, explaining what you are hoping to achieve and finding out what they want or expect or fear from it. Explain things in their terms, if you can. As Brad suggests, use pertinent examples that resonate with them. Tease out their concerns, and emphasize the benefits for them and their areas of interest, plus others (it's perfectly OK to bring up the wider perspective, including opinions and concerns raised by various colleagues). Try not to leave things hanging in mid-air: where relevant, revise your proposals to take account of the feedback and let them know you have done so. Reassure them that you have genuinely responded to their suggestions - even if that means compromising or, on occasions, rejecting them due to competing pressures. This is a negotiation process, so negotiate towards agreement. If it helps, you can even quote those feedback comments, partly because of what they say and partly to demonstrate that you have both listened and reacted.

For bonus marks, collaborate with your colleagues from the outset. Develop joint proposals with other departments. Drive out extra value by optimising your approaches, addressing multiple objectives simultaneously. Work as a team.

Now is an excellent time of year to put this approach into practice as most organizations head rapidly towards the new financial year, hence strategies, initiatives, priorities and budgets are all up for discussion. If your normal approach is head-down, focused on building what you believe to be the best possible business cases and proposals in isolation, then lift your head from the page for once. Consider who your proposals will affect, and go see them for a chat - now, well before the ink is dry. I promise you, it's time well spent. You'll markedly improve your chances of success.

It works both ways too. If, say, Marketing is lining-up for a substantial investment, initiative or change of approach, get actively engaged with the formulation of their proposal concerning the information risk and security aspects. Find out what they are on about. Consider the implications. Where appropriate, push for changes and make concessions to them in return for their support for your objectives and proposals, and vice versa, all the while circling around those common business objectives. 'What's best for the business' is a particularly compelling perspective, hard to argue against. Plotting the best route is easier if everyone is heading for the same destination.

Sep 30, 2019

NBlog October - digital (cyber) forensics module released


IT systems, devices and networks can be the targets of crime, as in hacking, ransomware and computer fraud. They are also tools that criminals use to research, plan and coordinate their crimes. Furthermore, criminals use technology routinely to manage and conduct their business, financial and personal affairs, just like the rest of us.
Hence digital devices can contain a wealth of evidence concerning crimes committed and the criminals behind them.
Since most IT systems and devices store security-related information digitally, digital forensics techniques are also used to investigate other kinds of incidents, figuring out exactly what happened, in what sequence, and what went wrong ... giving clues about what ought to be fixed in order to prevent them occurring again.  
It’s not as simple as you might think for investigators to gain access to digital data, then analyze it for information relevant to an incident. For a start, there can be a lot of it, distributed among various devices scattered across various locations (some mobile and others abroad), owned and controlled by various people or organizations. Some of it is volatile and doesn’t exist for long (network traffic, for instance, or the contents of RAM). Some is unreliable and might even be fake, a smoke-screen deliberately concealing the juicy bits.
A far bigger issue arises, though, if there is any prospect of using digital data for a formal investigation that might culminate in a disciplinary hearing or court case. There are explicit requirements for all kinds of forensic evidence, including digital evidence, that must be satisfied simply to use it within an investigation or present it in court. Ensuring, and being able to prove, the integrity of forensic evidence implies numerous complications and controls within and around the associated processes (illustrated by the sketch after the list below). They are the focus of October’s NoticeBored security awareness materials which:
  • Describe the structured, systematic process of gathering digital forensic evidence and investigating cyber-crime and other incidents involving IT;
  • Address information risks associated with the digital forensics process;
  • Prompt management to prepare or review policies and procedures in this area, training workers or contracting with forensics specialists as appropriate;
  • Encourage professionals with an interest in this area to seek and share information;
  • Discourage workers in general from interfering with and perhaps destroying forensic evidence.
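
By way of illustration (mine, not part of the module), the bedrock control for evidence integrity is a cryptographic hash recorded at acquisition time: re-compute it later and compare. A minimal Python sketch, with a hypothetical file path:

```python
# A minimal sketch of evidence integrity via hashing: record the SHA-256
# of an evidence file at acquisition, re-compute it later and compare.
# Real practice adds chain-of-custody records, write-blockers, timestamps
# and witnesses. The file path is hypothetical.

import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

acquired = sha256_of("evidence/disk_image.dd")  # recorded at acquisition
later = sha256_of("evidence/disk_image.dd")     # re-checked before the hearing
assert acquired == later, "evidence has been altered or corrupted!"
```
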
Read more about the module here.  Purchase it here.

Sep 29, 2019

NBlog Sept 29 - awareness and training program design

The first task when preparing any awareness content is to determine the objectives. What are you hoping to achieve here? What is the point and purpose? What's the scope? What would success or failure even look like?

There are several possible approaches. 

You might for instance set out to raise security awareness 'in general', with no particular focus. That's a naive objective given the variety of things that fall within or touch on the realm of 'security'. Surely some aspects are more pertinent than others, more likely to benefit the workforce and hence the organization? Trying to raise awareness of everything all at once spreads your awareness, training and learning resources very thin, not least the attention spans of your audiences. It risks bamboozling people with far too much information to take in, perhaps confusing them and turning them off the whole subject. 

It's not an effective educational strategy. We know it doesn't work and yet, strangely, there are still people talking in terms of an "annual security awareness training session" as if that solves the problem. 

[Shakes head in despair, muttering incoherently]

Instead, you might identify a few topic areas that are more deserving of effort, 'just the basics' you might say. OK, that's better but now there's the issue of deciding what constitutes 'the basics'. One of the complicating, challenging and fascinating aspects of information risk and security is the mesh of overlapping and interlocking concerns. Security isn't achieved by doing just a few things well. We need to do a lot of things adequately and simultaneously.

Take 'passwords' for example, one of the security controls that most organizations would consider basic. You could simply instruct workers on choosing passwords that meet your organization's password-related policies or standards ... but wouldn't it be better to explain why those policies and standards exist, as well as what they require? Why do we have passwords anyway? What are they for? Addressing those supplementary issues is more likely to lead to understanding and acceptance of the password rules. As you scratch beneath the surface, you'll encounter several important things relating to passwords such as:
  • access control;
  • accountability and responsibility;
  • biometrics and multi-factor authentication;
  • identification and authentication;
  • malware and hacking attacks;
  • password length and complexity;
  • password memorability and recall;
  • password sharing and disclosure;
  • password vaults;
  • phishing and other social engineering attacks;
  • the password change process ...
... and more. Similar considerations apply to any other form of 'basic' security: I challenge you to name any 'basic' security topic so narrowly-scoped that it doesn't touch or depend on related matters. 
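
Sticking with the password example, here's a minimal sketch of just the 'length and complexity' bullet, assuming a hypothetical policy of at least 12 characters drawn from three of the four character classes. Your policy will differ, and this deliberately says nothing about memorability, sharing, vaults, MFA or the other related topics:

```python
# A minimal sketch of a password complexity check against a hypothetical
# policy: 12+ characters using at least three of the four character
# classes. Purely illustrative, not a recommendation.

import string

def meets_policy(password: str) -> bool:
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= 12 and sum(classes) >= 3

print(meets_policy("Tr0ub4dor&3"))             # False: four classes but only 11 characters
print(meets_policy("Correct-Horse-Battery-9")) # True: 23 characters, all four classes
```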

A third approach, then, is to acknowledge those touch points and the mesh of interrelated topics, planning a sensible sequence of awareness topics that meander through the entire field. Maybe cover accountability first, then passwords, then access control ... and so on. Now you're starting to get somewhere! 

Oh but hang on, at this level of analysis there is such a variety of potential topics that the sequence takes some thought, especially as there are only so many awareness and training opportunities in the year. Planning is like plate-spinning: in order to raise awareness, you need to re-cover each topic periodically, reminding people before they forget, each awareness and training episode building on previous ones (especially the most recent and/or the most memorable). That's all very well, provided you don't let the plates fall. If your security awareness people move on, listen for the clatter of broken crockery.
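
Here's a minimal sketch of that plate-spinning plan, cycling a hypothetical topic list through the months so each plate gets another spin before it stops wobbling:

```python
# A minimal sketch of the plate-spinning schedule: cycle through a topic
# list month by month so every topic is re-covered periodically, before
# it is forgotten. The topics and 12-month horizon are illustrative.

from itertools import cycle

topics = ["accountability", "passwords", "access control",
          "malware", "phishing", "incident reporting"]
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

for month, topic in zip(months, cycle(topics)):
    print(f"{month}: {topic}")  # each topic comes around again every six months
```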

A fourth approach is the NoticeBored way. Every month since 2003, we've picked a topic and gone into some depth on it. We've brought up other relevant topics but only briefly, since they are all explored in depth when their time comes. We've picked up on new topics as they emerged (making the content fresh and topical - literally), sometimes combining topics or deliberately taking different perspectives in successive passes. As we plummet towards the 200th NoticeBored module in December, we've steadily accumulated a security awareness and training portfolio covering ~70 topics, all of them designed and prepared to a consistently high standard by a small team of experts. On average, every module has passed three times through the mill, meaning they are all quite stable and mature.

Aside from the topic-based monthly deliveries, there's another innovation in that the NoticeBored materials address three parallel audiences: general employees, managers and professionals. Complementing the breadth and depth of the awareness content, the three streams lead to cultural changes across the entire organization. We think of this as socializing security within the corporation, informing the three audience groups about matters that concern them in terms they can understand, while encouraging them to interact and communicate both among and between themselves. 

With the NoticeBored monthly subscription service drawing to a close in just a few months, we're thinking about how best to continue maintaining and updating the portfolio of materials, tracking the ever-evolving field of information risk and security. We'll probably make fewer, irregular updates just a few times a year.

Meanwhile, we're gradually loading-up the SecAware eStore with additional awareness modules and ramping-up the marketing. If you need top-notch content for an effective security awareness and training program, please browse SecAware's virtual shelves and grab yourself a bargain. There's something strangely motivating about sales!

Sep 26, 2019

NBlog Sept 26 - audit strategies


I recommend treating any audit as a negotiation process with risks and opportunities* for both parties i.e. auditees and auditors. Here's why.

In respect of ISO/IEC 27001 compliance, the certification auditors are supposed to be formally checking that an ISMS complies with the standard’s formal requirements, plus information security requirements that the organization determines for its own purposes**. They are not supposed to conjure-up additional requirements out of thin air, then complain about noncompliance. However, auditors are human and make mistakes. So auditees are fully entitled to ask auditors to identify any requirements in the standard or in their corporate requirements that they say are not being fulfilled, if necessary down to the individual clause numbers and specific words from ‘27001, their policies etc. By all means discuss the wording and intent/meaning of those requirements, as well as reviewing the evidence and details of the alleged noncompliance. 

So far, that's conventional, an expected, routine part of the normal interaction between auditor and auditee. From that point, however, the process can proceed along various paths. 

The auditee could take a very hard line, focusing myopically and deliberately on strict compliance with the explicit requirements of the standard, being really tough on the auditors about that … but beware as the auditors can take just as hard a line in response, perhaps even pointing out additional minor noncompliance issues that they might otherwise have ignored. Bringing out the big compliance sticks is a viable but risky strategy. It can be tricky to back down once either party starts down this path. It tends to make the relationship between auditors and auditees highly adversarial and tough-nosed, each party treating the other as the enemy to be beaten. It’s stressful for all concerned, adding to the usual stresses of audits and certification. [Speaking as a former/reformed auditor, this may be a sign of either a naïve/scared or, paradoxically, a highly experienced/assertive auditee. Identifying and responding proactively to the situation as it develops is part of the auditor’s social skill set, which varies with the auditor’s experience level plus their own personality. If things escalate, it draws-in management on both sides, so each party really needs their management behind them. It’s also something that experienced auditors will have dealt-with many times (stress and challenge is very much part of the job), hence they tend to be well-practiced at it and on the front-foot, whereas auditees tend to be less well prepared and on the back-foot.]

Alternatively, the auditee could make more of an effort to understand and deal with the issues the auditor claims to have found, setting aside the pure compliance aspects (at least for now). Discuss and negotiate with the auditors, aiming towards finding mutually-acceptable solutions. Be “reasonable” about things (whatever that means!). Consider the business implications of what the auditors are saying, in particular consider whether they might just have put their finger on genuine information risks that the organization probably ought to address in some way. Focus on addressing those risks and reaching agreement on suitable responses, rather than compliance. Make and seek little concessions, respond positively and home-in on a resolution that both moves the business forward and leads to certification. Work with the auditors, each party treating the other as collaborators or colleagues with shared objectives. At the end of the day, either party can still reach for the big compliance stick if the negotiation stalls and the other party becomes stubborn, but that’s best left as a last resort option since it can lead to the same souring of the relationship. [This is generally a less stressful, less risky approach provided both parties are willing to play the game and move things forward. It helps if both parties have negotiation skills, or can get support from their managers/colleagues who do. It may take longer, though, which can be an issue if there are deadlines such as other audits or business demands. And there is inevitably some formality around this that needs to be respected. The auditors must meet their own obligations or risk losing their accreditation.]

But wait, there’s more.

The audit report, in particular the precise phrasing and wording of any adverse findings/noncompliance statements, is potentially another opportunity to clash or collaborate. Although the auditors own their report and have the final say (part of their formal independence), the auditee should have opportunities to review and discuss/respond to drafts, if appropriate challenging and ‘insisting’ that the details are factually correct. In general, the issue comes down to the facts and hence the audit evidence, which should be non-negotiable if the auditor has done a good job. The way those facts are documented, explained and interpreted is where the discussion tends to revolve. Again, both parties have their objectives/requirements, and it’s best if they negotiate a mutually satisfactory outcome and move ahead. Both parties being clear about priorities and overall objectives helps immensely.

And one last thing.

The relationship between auditor and auditee generally extends beyond an individual audit since audits are periodic. As well as the stage 1 and 2 certification audits, there are surveillance and re-certification audits to look forward to. So, the way the audit itself goes, the manner in which issues are raised, discussed and addressed, and the way audit findings and reports are resolved, is all part of the background for, and hence to some extent affects, future audits. Auditors who personally experienced or have been briefed about an intensely adversarial auditee in a previous audit are likely to anticipate a similar strategy and more aggravation on the next audit. Audit management might even consciously pre-select tough auditors who are strong in that situation for future audits, and likewise auditees might choose hard-nosed compliance specialists and negotiators to front-up their team, escalating matters. This can be the sting in the tail for auditors and auditees who have taken an unreasonably hard line in the past: it takes effort on both sides to turn things around and re-focus on more productive matters (namely the organization’s management of its information risks and security in support of business objectives), rather than the audit/certification process itself. 

--------------------

* Experienced negotiators appreciate the game-playing aspect to the typical negotiation process. Clued-up players enter the arena well-prepared, with goals and bottom-lines clarified and various game-playing strategies not just in mind but ideally refined through previous events. Each game plays out within the rules (mostly!), the players attacking and defending, trying various approaches, each pushing towards their own goals and exploiting weaknesses in the other, while gradually establishing and reaching agreement on neutral ground (hopefully!). At the end, the players depart with yet more experience under their belts, ready for another encounter. Every negotiation is a rehearsal for the next. Same thing with audits.

** ISO/IEC 27006:2015 says:

  • "Certification procedures shall focus on establishing that a client’s ISMS meets the requirements specified in ISO/IEC 27001 and the policies and objectives of the client." (clause 9.1.3.2);
  • "The audit objectives shall include the determination of the effectiveness of the management system to ensure that the client, based on the risk assessment, has implemented applicable controls and achieved the established information security objectives." (clause 9.2.1.1);
  • "In addition to evaluating the effective implementation of the ISMS, the objectives of stage 2 are to confirm that the client adheres to its own policies, objectives and procedures." (clause 9.3.1.2.1) ...
... and more. Auditees who are unclear about this, want to develop a sound, proactive strategy in preparation for their audits, or find themselves heading into a battle royale with the auditors, can study '27006 and ISO/IEC 17021-1:2015 (Conformity assessment — Requirements for bodies providing audit and certification of management systems — Part 1: Requirements) for additional insight into the certification audit objectives, process and constraints. 

Sep 17, 2019

NBlog Sept 17 - a fraudulent fraud report?



Our next awareness module on digital forensics is coming along nicely. Today, in the course of researching forensics practices within organizations, I came across an interesting report from the Association of Certified Fraud Examiners. As is my wont, I started out by evaluating the validity of the survey on which it is based, and found this:
"The 2018 Report to the Nations is based on the results of the 2017 Global Fraud Survey, an online survey opened to 41,573 Certified Fraud Examiners (CFEs) from July 2017 to October 2017. As part of the survey, respondents were asked to provide a narrative description of the single largest fraud case they had investigated since January 2016. Additionally, after completing the survey the first time, respondents were provided the option to submit information about a second case that they investigated.
Respondents were then presented with 76 questions to answer regarding the particular details of the fraud case, including information about the perpetrator, the victim organization, and the methods of fraud employed, as well as fraud trends in general. (Respondents were not asked to identify the perpetrator or the victim.) We received 7,232 total responses to the survey, 2,690 of which were usable for purposes of this report. The data contained herein is based solely on the information provided in these 2,690 survey responses."
"2018 Report to the Nations", ACFE (2018)
OK, so more than half the submitted responses were deemed unusable. That's a lot more rejects than I would normally expect for a survey. The exclusions could be good, bad or indifferent: 

  • It's good if they were excluded for legitimate reasons such as being patently incomplete, inaccurate, out of scope or late - like spoiled votes in an election; 
  • It's bad (surprising and disappointing) if they were excluded illegitimately such as because they failed to support or refute some working hypothesis or prejudice;
  • It's indifferent if they were excluded for purely practical reasons e.g. they ran out of time to complete the analysis. Hopefully they used an unbiased sampling technique to trim down the data though (see the sketch after this list). Perhaps the unusable responses were simply lost or corrupted for some reason.
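
That sampling point is simple enough to sketch. Assuming (purely hypothetically) that the team had to trim 7,232 responses to 2,690, a simple random sample keeps the subset unbiased:

```python
# A minimal sketch of unbiased down-sampling, using the report's numbers
# with stand-in data: trimming 7,232 responses to 2,690 via a simple
# random sample gives every response the same chance of being kept.

import random

all_responses = list(range(7232))            # stand-ins for the raw responses
usable = random.sample(all_responses, 2690)  # unbiased selection
print(len(usable))                           # 2690
```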

Unfortunately, the reasons for exclusion aren't stated in the report, which to me is an unnecessary and avoidable flaw. We're reduced to guesswork. That they excluded so many responses could, for instance, indicate that the survey team was unusually cautious, excluding potentially as well as patently dubious submissions. It could be that the survey method was changed for some reason part-way through, and the team decided to exclude responses received before and/or after the chosen method was in use (begging further questions about what changed and how they chose the method/s).

The fact that this report comes from the ACFE strongly suggests that both the analytical methods and the team are trustworthy. Personal integrity is essential to be a professional fraud examiner, a fundamental requirement. Furthermore, they have at least disclosed the number of responses used and provide additional details in the report about the respondents. So, on balance, I'm willing to trust the report: to be clear, I do NOT think it is fraudulent! In fact, with 2,690 responses, the findings carry more weight than most vendor-sponsored "surveys" (advertisements) that I've criticized several times before.

Moving forward, I'm exploring the findings for tidbits relevant to security awareness programs, doing my level best to discount the ridiculous "infographics" they've used in the report - another unnecessary and avoidable source of bias, in my jaundiced opinion. Yes, the way metrics are reported does influence their interpretation and hence value. And no, I don't think it's necessary to resort to gaudy crayons to put key points across. Some of us aren't scared by lists, tables and graphs.

Sep 13, 2019

NBlog Sept 13 - ISO27k ambiguities


ISO/IEC 27001 concerns at least* two distinct classes of risk - ISMS risks and information risks** - causing confusion. With hindsight, the ISO/IEC JTC1 mandate to require a main-body section ambiguously titled "Risks and opportunities" in all the certifiable management system standards was partly to blame for the confusion, although the underlying issue pre-dates that decision: you could say the decision forced the U-boat to the surface.

That is certainly not the only issue with '27001. Confusion around the committee's and the standard's true intent with respect to Annex A remains to this day: some committee members, users and auditors believe Annex A is a definitive if minimalist list of infosec controls, hence the requirement to justify Annex A exclusions ... rather than justify Annex A inclusions. It is strongly implied that Annex A is the default set. In the absence of documented and reasonable statements to the contrary, the Annex A controls are presumed appropriate and necessary ... but the standard’s wording is quite ambiguous, both in the main body clauses and in Annex A itself.

In ISO-speak, the use of ‘shall’ in "Normative" Annex A indicates mandatory requirements; also, main body clause 6.1.3(c) refers to “necessary controls” in Annex A – is that ‘necessary for the organization to mitigate its information risks’ or ‘necessary for compliance with this standard and hence certification’?

Another issue with '27001 concerns policies: policies are mandated in the main body and recommended in Annex A. I believe the main body is referring to policies concerning the ISMS itself (e.g. a high-level policy - or perhaps a strategy - stating that the organization needs an ISMS for business reasons) whereas Annex A concerns lower-level information security-related policies … but again the wording is somewhat ambiguous, hence interpretations vary (and yes, mine may well be wrong!). There are other issues and ambiguities within ISO27k, and more broadly within the field of information risk and security management.

Way down in the weeds of Annex A, “asset register” is an ambiguous term comprised of two ambiguous words. Having tied itself in knots over the meaning of “information asset” for some years, the committee eventually reached a truce by replacing the definition of “information asset” with a curious and unhelpful definition of “asset”: the dictionary does a far better job of it! In this context, "register" is generally understood to mean some sort of list or database ... but what are the fields and how much granularity is appropriate? Annex A doesn't specify.

But wait, there’s more! The issues extend beyond '27001. The '27006 and '27007 standards are (I think!) intended to distinguish formal compliance audits for certification purposes from audits and reviews of the organization’s information security arrangements for information risk management purposes. Aside from the same issue about the mandatory/optional status of Annex A, there are further ambiguities tucked away in the wording of those standards, not helped by some committee members’ use of the term “technical” to refer to information security controls, leading some to open the massive can-o-worms labelled “cyber”!

Having said all that, we are where we are. The ISO27k standards are published, warts and all. The committee is doing its best both to address such ambiguities and to maintain the standards as up-to-date as possible, given the practical constraints of reaching consensus among a fairly diverse global membership using ISO’s regimented and formal processes, and the ongoing evolution of this field. Those ambiguities can be treated as opportunities for both users and auditors to make the best of the standards in various contexts, and in my experience rational negotiation (a ‘full and frank discussion’) will normally resolve any differences of opinion between them. I’d like to think everyone is ultimately aligned on reaching the best possible outcome for the organization, meaning an ISMS that fulfills various business objectives relating to the systematic management of information risks. 


* I say ‘at least’ because a typical ISMS touches on other classes of risk too (e.g. compliance risks, business continuity risks, project/programme management risks, privacy risks, health and safety risks, plus general commercial/business risks), depending on how precisely it is scoped and how those risk classes are defined/understood. 

** I’ve been bleating on for years about replacing the term “information security risk”, as currently used but not defined as such in the ISO27k standards, with the simpler and more accurate “information risk”.  To me, that would be a small but significant change of emphasis, reminding all concerned that what we are trying to protect - the asset - is, of course, information. I’m delighted to see more people using “information risk”. One day, maybe we’ll convince SC27 to go the same way!

Sep 12, 2019

NBlog Sept 12 - metrics lifecycle management


This week, I'm thinking about management activities throughout the metrics lifecycle.

Most metrics have a finite lifetime. They are conceived, used, hopefully reviewed and maybe changed, and eventually dropped or replaced by something better. 

Presumably weak/bad metrics don't live as long as strong/good ones - at least that's a testable hypothesis provided we have a way to measure and compare the quality of different metrics (oh look, here's one!).

Ideally every stage of a metric's existence is proactively managed (see the sketch after this list) i.e.:
  • New metrics should arise through a systematic, structured process involving analysis, elaboration and creative thinking on how to satisfy a defined measurement need: that comes first. Often, though, the process is more mysterious. Someone somehow decides that a particular metric will be somewhat useful for an unstated, ill-defined and barely understood purpose;
  • Potential metrics should be evaluated, refined, and perhaps piloted before going ahead with their implementation. There are often many different ways to measure something, with loads of variations in how they are analyzed and presented, hence it takes time and effort to rationalize metrics down to a workable shortlist leading to final selection. This step should take into account the way that new or changed metrics will complement and support or replace others, taking a 'measurement system' view. Usually, however, this step is either skipped entirely or superficial. In my jaundiced opinion, this is the second most egregious failure in metrics management, after the previous lack of specification;
  • Various automated and manual measurement activities operate routinely during the working life of a metric. These ought to be specified, designed, documented, monitored, controlled and directed (in other words managed) in the conventional manner but rarely are. No big deal in the case of run-of-the-mill metrics which are simple, self-evident and of little consequence, but potentially a major issue (an information risk, no less) for "key" metrics supporting vital decisions with significant implications for the organization;
  • The value of a metric should be monitored and periodically reviewed and evaluated in terms of its utility, cost-effectiveness etc. That in turn may lead to adjustments, perhaps fine-tuning the metric or else a more substantial change such as supplementing or dropping it. More often (in my experience) nobody takes much interest in a metric until/unless something patently fails. I have yet to come across any organization undertaking 'preventive maintenance' on its information risk and security metrics, or for that matter any metrics whatsoever - at least, not explicitly and openly. 
  • If a metric is to be dropped (retired, stopped), that decision should be made by relevant management (the metric's owner/s especially), taking account of the effect on management information and any decision-making that previously relied upon it ... which implies knowing what those effects are likely to be. In practice, many metrics circulate without anyone being clear about who owns or uses them, how and what for. It's a mess.
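
What might 'proactively managed' look like in practice? Here's a minimal sketch: an explicit record per metric naming its owner, purpose, status and next review date. The field names and lifecycle stages are my own invention, purely illustrative:

```python
# A minimal sketch of explicit metric lifecycle management: each metric
# carries an owner, a stated purpose, a lifecycle status and a review
# date, and moves through defined stages rather than drifting along.

from dataclasses import dataclass
from datetime import date
from typing import Optional

STAGES = ["proposed", "piloted", "live", "under review", "retired"]

@dataclass
class Metric:
    name: str
    owner: str                        # who decides the metric's fate
    purpose: str                      # the measurement need it satisfies
    status: str = "proposed"
    next_review: Optional[date] = None

    def advance(self, new_status: str, review_due: date) -> None:
        assert new_status in STAGES, f"unknown stage: {new_status}"
        self.status, self.next_review = new_status, review_due

m = Metric("phishing click rate", owner="CISO",
           purpose="track workforce susceptibility to social engineering")
m.advance("live", review_due=date(2020, 3, 31))
print(m)
```
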
Come on, this is hardly rocket surgery. Information risk and security metrics are relatively recent additions to the metrics portfolio so it's not even a novel issue, and yet I feel like I'm breaking new ground here. Oh oh.

I should probably research fields such as finance and engineering with mature metrics, for clues about good metrics management practices that may be valuable for the information risk and security field.

Sep 11, 2019

NBlog Sept 11 - what it means to be risk-driven


Since ISO27k is [information] risk-driven, poor quality risk management is a practical as well as a theoretical problem. 

In practical terms, misunderstanding the nature of [information] risk, particularly the ‘vulnerability’ aspect, leads to errors and omissions in the identification, analysis and hence treatment of [information] risks. The most common issue I see is people equating ‘lack of a control’ with ‘vulnerability’. To me, the presence or absence of a control is quite distinct from the vulnerability: a vulnerability is an inherent weakness or flaw in something (e.g. an IT system, an app, a process, a relationship, a contract or whatever). Even a control has vulnerabilities, yet we tend to forget, discount or simply ignore the fact that controls aren’t perfect: they can and do fail in practice, with several information risk management implications. Think about it: when was the last time you seriously considered the possibility that a control might fail? Did you identify, evaluate and treat that secondary risk in a systematic and formal manner … or did you simply get on with things informally? Have you ever done a risk analysis on your “key controls”? Do you actually know which of your organization’s controls are “key”, and why? That's a bigger ask than you may think. Try it and you'll soon find out, especially if you ask your colleagues for their inputs.

In theoretical terms, risk is all about possibilities and uncertainties i.e. probability. Using simplified models with defined values, it may be technically possible to calculate a precise probability for a given situation under laboratory conditions, but that doesn’t work so well in the real world which is more complex and variable, involving factors that are partially unknown and uncontrolled. We have the capability to model groups of events, populations of threat actors, types of incident etc. but accurately predicting specific events and individual items is much harder, verging on impossible in practice. So even extremely careful, painstaking risk analysis still doesn’t generate absolute certainty. It reduces the problem space to a smaller area (which is good!), but not to a pinpoint dot (such precision that we would know what we are dealing with, hence we can do precisely the right things). What’s more, ‘extremely careful’ and ‘painstaking’ implies slow and costly, hence the approach is generally infeasible for the kinds of real-world situations that concern us. Our risk management resources are finite, while the problem space is large and unbounded. The sky is awash with risk clouds, and they are all moving!
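
A quick simulation illustrates the point, assuming (purely for illustration) that incidents arrive as a Poisson process averaging eight per year: the long-run aggregate is reasonably predictable, while any individual year, let alone any individual incident, is not:

```python
# A minimal sketch of populations-versus-events: simulate ten years of
# incident counts as Poisson-distributed (mean 8/year, an arbitrary
# figure). The ten-year average is fairly stable; single years are not.

import math
import random

random.seed(42)

def poisson(lam: float) -> int:
    """Draw a Poisson-distributed count using Knuth's algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

years = [poisson(8) for _ in range(10)]
print("incidents per year:", years)
print("ten-year average:", sum(years) / len(years))  # close to 8; single years vary widely
```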

Complicating things still further, we are generally talking about ‘systems’ involving human beings (individuals and organizations, teams, gangs, cabals and so on), not [just] robots and deterministic machines. Worse, some of the humans are actively looking to find and exploit vulnerabilities, to break or bypass our lovely controls, to increase rather than decrease our risk. The real-world environment or situation within which information risks exist is not just inherently uncertain but, in part, hostile. 

So, in the face of all that complexity, there is obviously a desire/need to simplify things, to take short cuts, to make assumptions and guesses, to do the best we can with the information, time, tools and other resources at our disposal. We are forced to deal with priorities and pressures, some self-imposed and some imposed upon us. ISO27k attempts to deal with that by offering ‘good practices’ and ‘suggested controls’. One of the ‘good practices’ is to identify, evaluate and treat [information] risks systematically within the real-world context of an organization that has business objectives, priorities and constraints. We do the best we can, measure how well we’re doing, and seek to improve over time.

At the same time, despite the flaws, I believe risk management is better than specified lists of controls. The idea of a [cut down] list of information security controls for SMEs is not new e.g. “key controls” were specifically identified with little key icons in the initial version of BS7799 I think, or possibly the code of practice that preceded it. That approach was soon dropped because what is key to one organization may not be key to another, so instead today’s ISO27k standards promote the idea of each organization managing its own [information] risks. The same concerns apply to other lists of ‘recommended’ controls such as those produced by CIS, SANS, CSA and others, plus those required by PCI-DSS, privacy laws and other laws, regs and rulesets including various contracts and agreements. They are all (including ISO27k) well-meaning but inherently flawed. Better than nothing, but imperfect. Good practice, not best practice.

The difference is that ISO27k provides a sound governance framework to address the flaws systematically. It’s context-dependent, an adaptive rather than fixed model. I value that flexibility.