Welcome to NBlog, the NoticeBored blog

Like the finer things in life, quality trumps quantity.

Oct 22, 2016

A little something for the weekend, sir?

The following bullet-points were inspired by another stimulating thread on the ISO27k Forum, this one stemming from a discussion about whether or not people qualify as "information assets", hence ought to be included in the information asset inventory and information risk management activities of an ISO27k ISMS. It's a crude list of people-related information risks:

  • Phishing, spear-phishing and whaling, and other social engineering attacks targeting trusted and privileged insiders;
  • ‘Insider threats’ of all sorts – bad apples on the payroll or at least on the premises, people who exploit information gained at work, and other opportunities, for personal or other reasons to the detriment of the organization;
  • ‘Victims’ – workers who are weak, withdrawn and easily (mis)led or coerced and exploited by other workers or outsiders;
  • Reliance on and loss of key people (especially “knowledge workers”, creatives and lynch-pins such as founders and execs) through various causes (resignation/retirement, accidents, sickness and disease, poaching by competitors, demotivation, redundancy, the sack, whatever);
  • Fraud, misappropriation etc., including malicious collaboration between groups of people (breaking divisions of responsibility);
  • Insufficient creativity, motivation, dynamism and buzz relative to competitors including start-ups (important for online businesses);
  • Excessive stress, fragility and lack of resilience, with people, teams, business units and organizations operating “on a knife edge”, suboptimally and at times irrationally;
  • Misinformation, propaganda etc. used to mislead and manipulate workers into behaving inappropriately, making bad decisions etc.;
  • Conservatism and (unreasonable) resistance to change, including stubbornness, political interference, lack of vision/foresight, unwillingness to learn and improve, and excessive/inappropriate risk-aversion;
  • Conversely, gung-ho attitudes, lack of stability, inability to focus and complete important things, lack of strategic thinking and planning, short-term-ism and excessive risk-taking;
  • Bad/unethical/oppressive/coercive/aggressive/dysfunctional corporate cultures, usually where the tone from the top is off-key;
  • Political players, Machiavellian types with secret agendas who scheme and manipulate systems and people to their personal advantage and engage in turf wars, regardless of the organization as a whole or other people;
  • Incompetence, ignorance, laziness, misguidedness and the like – people not earning their keep, including those who assume false identities, fabricate qualifications and conceal criminality etc., and incompetent managers making bad decisions;
  • Moles, sleepers, plants, industrial spies – people deliberately placed within the organization by an adversary for various nefarious purposes, or insiders ‘turned’ through bribery, coercion, radical idealism or whatever;
  • People whose personal objectives and values do not align with corporate objectives and values, especially if they are diametrically opposed;
  • Workers with “personal problems” including addictions, debts, mental illness, relationship issues and other interests or pressures besides work;
  • Other ‘outsider threats’ including, these days, the offensive exploitation of social media and social networks to malign, manipulate or blackmail an organization.

It's just a brain-dump, a creative outpouring with minimal structure. Some of the risks overlap and could probably be combined (e.g. there are several risks associated with the corporate culture) and the wording is a bit cryptic or ambiguous in places. I'm quite sure I've missed some. Maybe one day I will return to update and sort it out. Meanwhile, I'm publishing it here in its rough and ready form to inspire you, dear blog reader, to contemplate your organization's people-related information risks this weekend, and maybe post a comment below with your thoughts.

For the record, I believe it is worthwhile counting workers as information assets and explicitly addressing the associated information risks such as those listed above. You may or may not agree - your choice - but if you don't, that's maybe another people-related risk to add to my list: "Naivete, unawareness, potentially unrealistic or dismissive attitudes and unfounded confidence in the organization's capability to address information risks relating to people"!

Have a good weekend,

Oct 13, 2016

There must be 50 ways ...

Over on the ISO27k Forum today, a discussion on terminology such as 'virus', 'malware', 'antivirus', 'advanced threat prevention' and 'cyber' took an unexpected turn into the realm of security control failures.

Inspired by a tangential comment from Anton Aylward, I've been thinking about the variety of ways that controls can fail:
  1. To detect, prevent, respond to and/or mitigate incidents, attacks or indeed failures elsewhere (a very broad overarching category!);
  2. To address the identified risks at all, or adequately (antimalware is generally failing us);
  3. To be considered, or at least taken seriously (a very common failing I'm sure - e.g. physical and procedural control options are often neglected, disregarded or denigrated by the IT 'cybersecurity' techno crowd);
  4. To do their thing cost-effectively, without unduly affecting achievement of the organization's many other objectives ("Please change your password again, only this time choose a unique, memorable, 32 character non-word with an upside-down purple pictogram in position 22 and something fishy towards the end, while placing your right thumb on the blood sampling pricker for your DNA fingerprint to be revalidated");
  5. To comply with formally stated requirements and obligations, and/or with implied or assumed requirements and expectations (e.g. 'privacy' is more than the seven principles);
  6. Prior to service (flawed in design or development), in service (while being used, maintained, updated, managed and changed, even while being tested) or when retired from service (e.g. if  they are so poorly designed, so tricky to use/manage or inadequately documented that they are deprecated, even though a little extra attention and investment might have made all the difference, and especially if not being replaced by something better);
  7. As a result of direct, malicious action against the controls themselves (e.g. DDoS attacks intended to overwhelm network defenses and distract the analysts, enabling other attacks to slip past, and many kinds of fraud);
  8. When deliberately or accidentally taken out of service for some more or less legitimate reason;
  9. When forgotten, when inconvenient, or when nobody's watching (!);
  10. As an accidental, unintentional and often unrecognized side-effect of other things (e.g. changes elsewhere that negate something vital or bypass/undermine the controls);
  11. Due to natural causes (bad weather, bad air, bad hair - many of us have bad hair days!);
  12. At the worst possible moment, or not;
  13. Due to accidents (inherently weak or fragile controls are more likely to break/fall apart or be broken);
  14. To respond adequately to material changes in the nature of the threats, vulnerabilities and/or business impacts that have occurred since the risk identification/analysis and their design (e.g. new modes or tools for attack, different, more determined and competent attackers, previously unrecognized bugs and flaws, better control options ...);
  15. Due to human errors, mistakes, carelessness, ignorance, misguided action, efforts or guidance/advice etc. (another broad category);
  16. Gradually (obsolescence, 'wearing out', performance/capacity degradation);
  17. Individually or as a set or sequence (like dominoes);
  18. Due to being neglected, ignored and generally unloved (they wither away like aging popstars);
  19. Suddenly and/or unexpectedly (independent characteristics!);
  20. By design or intent (e.g. fundamentally flawed crypto 'approved' by government agencies for non-government and foreign use);
  21. Hard or soft, open or closed, secure or insecure;
  22. Partially or completely;
  23. Temporarily or permanently (just the once, sporadically, randomly, intermittently, occasionally, repeatedly, frequently, 'all the time' or forever);
  24. Obviously, sometimes catastrophically or spectacularly so when major incidents occur ... but sometimes ...
  25. Silently without the failure even being noticed, at least not immediately.

That's clearly quite a diverse list and, despite its length, I'm afraid it's not complete! 

The last bullet - silent or unrecognized control failures - I find particularly fascinating. It seems to me critical information risks are usually mitigated with critical information security controls, hence any failures of those controls (any from that   l o n g   list above) are also critical.  Generally speaking, we put extra effort into understanding such risks, designing/selecting what we believe to be strong controls, implementing and testing them carefully, thoroughly etc., but strangely we often appear to lose interest at the point of implementation when something else shiny catches our beady eye. The operational monitoring of critical controls is quite often weak to nonexistent (perhaps the occasional control test). 

I would argue, for instance, that some security metrics qualify as critical controls, controls that can fail just like any other. How often do we bother to evaluate and rank our metrics according to criticality, let alone explicitly design them for resilience to reduce the failure risks?

I appreciate I'm generalizing here: some critical controls and metrics are intensely monitored. It's the more prevalent fire-and-forget kind that worries me, especially if nobody had the foresight to design-in failure checks, warning signs and the corresponding response procedures, whether as part of critical controls or more generally as a security management and contingency approach.
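
To make the point about designed-in failure checks a bit more concrete, here's a minimal sketch in Python of the sort of 'heartbeat' watchdog that could flag a critical control which has silently stopped reporting. The control names, timestamps and tolerances are entirely invented for illustration, and alert() is a stand-in for whatever the real response procedure would be:

    # Minimal sketch: flag critical controls that have gone quiet (hypothetical data throughout)
    from datetime import datetime, timedelta

    # When each critical control last 'reported in' (log entry, test result, metric feed ...)
    last_heartbeat = {
        "antimalware-signature-update": datetime(2016, 10, 12, 2, 15),
        "backup-job-completion":        datetime(2016, 10, 11, 23, 50),
        "firewall-rule-review":         datetime(2016, 7, 1, 9, 0),
    }

    # Maximum tolerable silence for each control, decided when the control is designed
    max_silence = {
        "antimalware-signature-update": timedelta(hours=24),
        "backup-job-completion":        timedelta(hours=26),
        "firewall-rule-review":         timedelta(days=95),
    }

    def alert(control, overdue):
        # Stand-in for the real response procedure (raise a ticket, page the on-call analyst ...)
        print(f"WARNING: {control} has been silent for {overdue} beyond its limit - investigate")

    def check_for_silent_failures(now=None):
        now = now or datetime.now()
        for control, last_seen in last_heartbeat.items():
            silence = now - last_seen
            if silence > max_silence[control]:
                alert(control, silence - max_silence[control])

    check_for_silent_failures()

Nothing clever there, but the point is that someone has to decide, up front, what 'silent for too long' means for each critical control and what happens when it is.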

Good luck finding any of this in ISO27k, by the way, or indeed in other information security standards and advisories. There are a few vague hints here and there, a few hooks that could perhaps be interpreted along these lines if the reader was so inclined, but hardly anything concrete or explicit ... which itself qualifies as a control failure, I reckon!  It's a blind-spot.


PS  There's a germ of an idea here for a journal article, perhaps even a suggestion to SC 27 for inclusion in the ISO27k standards, although the structure clearly needs attention. More thought required. Comments very welcome. Can we maybe think up another 25 bullets in homage to Paul Simon's "Fifty ways to leave your lover"?

Oct 8, 2016

Marketing or social engineering?

Electronics supplier RS Online sent me an unsolicited promotional mailing in the post this week, consisting of a slimline USB stick mounted in a professionally printed cut-out card:

Well, it looks like something from RS' marketing machine.  It has their branding, images of the kinds of tools they sell and a printed URL to the RS website.  But the envelope has been modified ...

The printed stamp/sticker at top right has been crudely redacted with a black marker pen plus two further sticky labels, while 'postage paid' has been printed at lower left, allegedly by the Hong Kong post office.  [I put the blue rectangle over my address.]

A week ago, we released a security awareness module on human factors in information security, including social engineering. Among other things, we discussed the risk of malware distributed on infectious USB sticks, and modified USB hardware that zaps the computer's USB port. The notes to a slide in the awareness seminar for management said this:
What would YOU do if you found a USB stick in your mailbox (at home or at work), or in the street, in the parking lot, in a corridor or sitting on your desk? 
In tests, roughly 50% of people plug found USB sticks into their computers.  A few of them may not care about the security risks (such as virus infections or physical damage that can be caused by rogue USB sticks), but most probably don’t even think about it – security doesn’t even occur to them. Maybe they simply don’t know that USB sticks can be dangerous.
Providing information about the dangers is straightforward: we can (and do!) tell people about this stuff through the awareness program.  But convincing them to take the risks seriously and behave more responsibly and securely is a different matter.  The awareness program needs to motivate as well as inform.  

It is possible that the USB stick carries malware, whether it truly originates from RS Online's marketing department in Hong Kong, or was intercepted and infected en route to me, or is a total fabrication, a fake made to look like a fancy piece of marketing collateral. I didn't request it from RS; in fact I've done no business with them for ages. The risk of loading the USB stick may be small ... but the benefit of being marketed-at is even less, negligible or even negative, so on balance it will be put through the office shredder.  It's a risk I'm happy to avoid.

Gary (Gary@isect.com)

PS  The title of this piece is ironic.  Marketing IS social engineering.

Oct 2, 2016

People protecting people ... against people

We've just delivered the next block of security awareness materials to NoticeBored subscribers, some 210 MB of freshly-minted MS Office content on the human side of information security.

The module covers the dual aspects of people-as-threats and people-as-controls. It's all about people.

The threats in this domain include social engineers, phishers, scammers and fraudsters, while controls include security awareness and training, effective incident response procedures and various other manual and administrative activities, supplemented with a few specific cybersecurity or technical and physical controls.

Whereas the awareness program has covered phishing and spear-phishing several times before, our research led us to emphasize "whaling" this time around. Whalers use social engineering techniques to dupe financial controllers and procurement professionals into making massive multi-million-dollar payments from fat corporate bank accounts into the criminals' money laundering machinery, where it promptly disappears as if by magic - not the entertaining kind of stage show magic where the lady we've just seen being sawn in half emerges totally unscathed from the box, more the distinctly sinister tones of black magic involving chicken body parts and copious blood.  

In comparison to ordinary phishing, whaling attacks capture fewer but much bigger phish for a comparable amount of investment, effort and risk by the fraudsters. We are convinced it is a growing trend. Luckily, there are practical things that security-conscious organizations can do to reduce the risk, with strong security awareness being top of the list. As with all forms of information security, we accept that widespread security awareness (a 'security culture') is an imperfect control but it sure beats the alternative. What's more, awareness is much more cost-effective than most technological controls, especially in respect of social engineering and fraud. Artificial intelligence systems capable of spotting and responding to incidents in progress are under development or in the fairly early stages of adoption by those few organizations which can afford the technology, the support and the risks that inevitably accompany such complex, cutting-edge systems. In time, the technology will advance, and so will the threat. Security awareness will remain an essential complement, whatever happens.

If building your security culture is something you'd love to do, if only you had the time and skills to do it, get in touch. Our people are keen to help your people.


Sep 28, 2016

ISO27k Conference, San Francisco

I'm at the 27k Summit for the Americas ISO27k conference at the South San Francisco Conference Center near San Francisco airport this week, hoping to meet you!

The conference has several parallel themes and streams, including:
  • Getting started with ISO27k - for people who want to get into this stuff
  • Metrics - for people who need to measure and improve this stuff
  • Cloud security and IoT - hot topics
  • Compliance - a meta-theme since laws, regs and standards compliance is a strong driver for all the above
If I have time I'll update this post with info as the conference proceeds ....
  • Jim Reavis from the Cloud Security Alliance gave a keynote about the proliferating cloud and IoT systems, globally expanding. CSA's CCM compliance/controls mapping is well worth looking at, while the CSA STAR program is a popular certification scheme for cloud providers.
  • Dan Timko from Cirrity explained the ISO27k ISMS implementation and certification process, including the pre-certification followed a few months later by the stage 1 audit and just 5 weeks later the 'real' stage 2 certification audit. Most of the implementation effort went into documentation - documenting their policies and existing processes. For example, informal meetings 'didn't happen' if there was no record to prove it to the auditors, so meeting minutes etc. are much more common now.
  • Richard Wilshire from Zygma gave a brief introduction to the forthcoming thoroughly revised version of ISO/IEC 27004 on metrics (called 'measures' or 'measurements' in ISO-speak: 'metrics' is a forbidden word!) supporting the ISMS specified in ISO/IEC 27001. He covered the basic questions about metrics e.g. why measure (for accountability and to drive ISMS performance in the right direction, and for compliance with 27001 clause 9.1 of course), what to measure (mostly the status of systems, controls and processes), when to measure (periodic data generation, analysis and reporting, plus ad hoc or event-driven metrics with analysis and reporting triggered by events or threshold values), who measures (several part-time roles suggested in the standard). The new version of 27004 should hopefully fall off the ISO glacier some time next year.
  • Walt Williams from Lattice explained about developing metrics for business needs, not just for ISO27k compliance reasons. Setting goals helps e.g. a commonplace goal such as having zero privacy incidents directly suggests a simple metric. Reviewing goals and metrics drives improvement in your metrics.
  • Gary Hinson from IsecT (me!) spoke about using GQM and PRAGMATIC to select/design, improve and get value from security metrics in the ISO27k context, meaning information security for the business's sake. It seems to me that 'security metrics' are too often based around the availability of data generated automatically by technical security controls such as antivirus systems and firewalls, with little obvious relevance to the business. Tech-driven security metrics are not valued by general managers, whereas business-driven security metrics are right on-topic. (There's a rough sketch of the PRAGMATIC scoring idea at the end of this list.)
  • Michael Fuller from Coalfire talked about ISO/IEC 27018, a standard about adapting/using the controls from ISO/IEC 27002 to ensure privacy for public cloud services. CSA STAR got another mention as a structured way of not just putting appropriate controls in place, but in an assured/certifiable form (with 3 levels, the lowest of which I believe is 'just' an assertion of compliance).
  • Jorge Lozano from PwC addressed the design of metrics concerning performance of an ISO27k ISMS, based on the measurement requirement specified in 27001 and the metrics advice in 27004. He outlined a few example metrics similar to those appended to 27004, in a tabular format describing, on one screen per metric, its purpose, the way it is measured, and defined objectives or goals (target values and timescales). He then showed how the example metrics might be reported. Jorge recommended using risk-driven metrics because management understands risk. [I would argue that metrics should be business driven for the same reason, but in practice these are similar and complementary approaches.]
  • Sumit Kalra from bpmcpa spoke about using ISO27k for compliance with multiple obligations, from the perspective of a compliance auditor. Sumit argued that all today's [information security related] compliance requirements are fundamentally the same, with relatively minor differences in the details but 'a structured approach' in common, hence it doesn't particularly matter which way you approach the process.
  • Amit Sharma from Amazon Web Services briefly introduced AWS but mostly spoke about AWS security. Issues include: visibility (clouds are, well, cloudy!); security controls (e.g. customers should use data encryption, AWS or customers can manage private keys); auditability and monitoring (of manual and automated activities behind the scenes, and security status); tech complexity and 'polymorphism' (ongoing infrastructural changes are challenging for customers, especially for agile e.g. DevOps companies making frequent releases); compliance and regulatory interest (e.g. ISO/IEC 27001, PCI, HIPAA & other certifications); planning and coordinating stuff involves collaboration between multiple teams and takes time and management. Customers who don't use all the automated tools for reprovisioning etc. but do stuff manually can cause problems for AWS [they lose some control - the struggle between AWS and customers to control the IT environment resembles that between traditional IT departments and 'the business']. Standardization helps (e.g. sensible defaults, templates) plus automation.
  • David Cannon from CertTest spoke about a cookie-cutter approach to quickly rolling out secure platforms and apps, which he called "an ISMS" with a very narrow scope (the narrower the better, it seems ... if your goal is to hoodwink management or business buyers, that is).
  • Alan Calder from IT Governance spoke on using ISO27k for GDPR and NIS compliance i.e. privacy/data protection (for the EU, including service providers serving EU clients) coming into effect in May 2018. Alan gave a good background briefing about how the EU as a whole governs privacy for EU citizens, and on the forthcoming regs ... with citizen rights, compensation and fines up to 20 million Euros or 4% of global turnover (!), and fundamental privacy principles (as opposed to mandating explicit controls and tick-box compliance). The principles include informed consent, data protection, the right to be forgotten, data breach reporting within 72 hours etc. Alan mentioned the lapsed Safe Harbor and forthcoming Privacy Shield agreements between the EU and USA.
  • Rob Giffin from Avalution Consulting presented on business continuity, using ISO 22301 (and other standards in the series) along with other management systems including an ISO27k ISMS (hence synergies mean collaboration is mutually beneficial). The implementation activities are similar e.g. clarify the goals of BCM (in business activity terms), the scope, the resources, the business contacts, the plans, the support tools ... 
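
As promised above, here is a minimal sketch of the PRAGMATIC scoring idea in Python. It assumes each candidate metric is rated 0-100 against the nine PRAGMATIC criteria and that the ratings are simply averaged into an overall score; the metrics and ratings below are invented purely for illustration, not recommendations:

    # Minimal sketch: rank candidate security metrics by a simple PRAGMATIC-style score
    # (each criterion rated 0-100 and averaged; all metrics and ratings are made up)
    CRITERIA = ["Predictive", "Relevant", "Actionable", "Genuine", "Meaningful",
                "Accurate", "Timely", "Independent", "Cost"]

    candidate_metrics = {
        # metric name: ratings against each criterion, in the order above
        "% of business units with tested continuity plans": [75, 85, 80, 70, 80, 65, 60, 70, 55],
        "Days since last antivirus signature update":       [30, 40, 55, 80, 35, 90, 85, 60, 90],
        "Firewall rule changes per month":                  [20, 30, 35, 75, 25, 85, 80, 55, 85],
    }

    def pragmatic_score(ratings):
        return sum(ratings) / len(ratings)   # unweighted average, 0-100

    ranked = sorted(candidate_metrics.items(),
                    key=lambda item: pragmatic_score(item[1]), reverse=True)

    for name, ratings in ranked:
        worst = min(zip(ratings, CRITERIA))   # the weakest criterion is often the talking point
        print(f"{pragmatic_score(ratings):5.1f}  {name}  (weakest: {worst[1]} at {worst[0]})")

The scores themselves matter less than the discussion they provoke about which metrics genuinely serve the business.
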
Overall, the conference was a melting pot for ISO27k-related topics and professionals in the field, both greybeards and newbies. It was good to see so much interest in the standards, and so much free exchange of information. As with other conferences, the presentations were valuable and so were the off-line discussions and contacts with peers and friends, old and new.


Sep 21, 2016

Socializing information security

In researching and preparing October's NoticeBored security awareness module, we've wandered away from the well-beaten track into what is, for us at least, previously uncharted territory. You could say we're going off-piste.

Our topic concerns the human aspects of information security - a core area for any decent security awareness program and one that we bring up frequently, including a dedicated awareness module refreshed annually. We've always deliberately taken a broad perspective, exploring social engineering, social media, social networking and so on. 

This year, along with the conventional awareness stuff on phishing (of course) plus other scams, cons and frauds, we'll be lifting the covers on how the criminal black hats and other adversaries exploit both their own and our social networks. 

That train of thought leads naturally into counteracting the power of criminal organizations through leveraging various white hat equivalents, both within our organizations (e.g. the idea of proactively recruiting everyone to the information security team, through creative security awareness outreach - an approach we call 'socializing information security') and without (e.g. leveraging professional membership bodies such as ISSA and ISACA, plus local peer groups, plus industry special interest groups, plus all manner of online communities ... and blogs not unlike this one).

I hope you're making good use of myriad opportunities to share information, discuss things and learn new stuff from others in this field. Living in rural New Zealand - almost literally in a field, surrounded by far more sheep than people - I'd be lost without access to the global infosec communities into which I plug myself on a daily basis. 

The thing is, information security without information isn't security.


Sep 20, 2016

CIS Critical Security Controls [LONG]

Today I've been nosing through the latest 6.1 version of the CIS Critical Security Controls for Effective Cyber Defense, described as "a concise, prioritized set of cyber practices created to stop today’s most pervasive and dangerous cyber attacks".

In reality, far from being concise, it is a long shopping list of mostly IT/technical security controls, about 100 pages of them, loosely arranged under 20 headings. There are literally hundreds of controls, way more than the '20 critical controls' mentioned although obviously 'Implement the 20 critical controls' sounds a lot more feasible than 'Implement hundreds of tech controls, some of which we believe are critical for cyber defense (whatever that is)'!

The selection of controls is evidently driven by a desire to focus on what someone believes to be the key issues:
The CIS Controls embrace the Pareto 80/20 Principle, the idea that taking just a small portion of all the security actions you could possibly take, yields a very large percentage of the benefit of taking all those possible actions
There is no backing or evidence behind that bald assertion in the document, nor on the introductory page on the CIS website - but, hey, it's a nice idea, isn't it? "We only need to do these 20 things, possibly only the first 5, to be cyber-secure!". 

Yeah, right. Welcome to cloud-cuckoo land. Security doesn't work that way. Assuming that the bad guys are going to give up and go away if at first they don't succeed is almost unbelievably naive. They are persistent buggers. Some see it as an intellectual challenge to find and exploit the chinks in our armor. If anything, closing the gaping holes makes it more fun to spot the remaining vulnerabilities ... and if all you have done is to implement 20 'critical controls', you are asking for trouble.

As to inferring that CIS has identified precisely the 'small proportion of all the security actions' which will generate 'a very large percentage of the benefit', well I leave you to ponder the meaning of the Pareto principle, and whether we are being duped into thinking 20 is a magic number. Personally, I doubt it's even remotely similar to the true value.

Naturally I looked to see what they advise in the way of security awareness, and duly found Critical Security Control 17:
CSC 17: Security Skills Assessment and Appropriate Training to Fill Gaps
For all functional roles in the organization (prioritizing those mission-critical to the business and its security), identify the specific knowledge, skills, and abilities needed to support defense of the enterprise; develop and execute an integrated plan to assess, identify gaps, and remediate through policy, organizational planning, training, and awareness programs.
The recommendation is for training to fill gaps in knowledge, skills and abilities, implying specific/targeted training of specific individuals addressing particular technical security weaknesses. That, to me, is appropriate for those relatively few workers with designated security responsibilities, but does not work well for the majority who have many other responsibilities besides information security, let alone "cyber security".

Yes, I'm ranting about "cyber", again. Here we have yet another product from the US cyber defense collective that fails to clarify what it actually means by "cyber", unless you class this as definitive:
We are at a fascinating point in the evolution of what we now call cyber defense. Massive data losses, theft of intellectual property, credit card breaches, identity theft, threats to our privacy, denial of service – these have become a way of life for all of us in cyberspace. 
It's about as vague and hand-waving as "cloud". The CIS describes itself in similarly vague terms, again liberally sprinkled with cyber fairy-dust:
The Center for Internet Security, Inc. (CIS) is a 501c3 nonprofit organization whose mission is to identify, develop, validate, promote, and sustain best practices in cyber security; deliver world-class cyber security solutions to prevent and rapidly respond to cyber incidents; and build and lead communities to enable an environment of trust in cyberspace.
Anyway, back to CSC 17. The control is broken down into 5 parts:
[17.1] Perform gap analysis to see which skills employees need to implement the other Controls, and which behaviors employees are not adhering to, using this information to build a baseline training and awareness roadmap for all employees.
Hmmm, OK, a gap analysis is one way to identify weak or missing skills (and knowledge and competencies) that would benefit from additional training, but I'm not clear what they mean in reference to behaviors that employees are 'not adhering to'. In what sense do we 'adhere to' behaviors? I guess that might mean habits??
[17.2] Deliver training to fill the skills gap. If possible, use more senior staff to deliver the training. A second option is to have outside teachers provide training onsite so the examples used will be directly relevant. If you have small numbers of people to train, use training conferences or online training to fill the gaps. 
This part is explicitly about training for skills. There is no explanation for recommending 'more senior staff' or 'outside teachers' providing 'training onsite': there are of course many different ways to train people, many different forms of training. I see no reason to be so specific: surely what is best depends on the context, the trainees, the subjects, the costs and other factors? 

I'm hinting at what I feel is a significant issue with the entire CIS approach: it is prescriptive with little recognition or accounting for the huge variety of organizations and situations out there. Information risks differ markedly between industries and in different sizes or types of organizations, while their ability and appetite to address the risks also vary. A one-size-fits-all approach is very unlikely to suit them all ... which means the advice needs to be tempered and adapted ... which begs questions about who would do that, and how/on what basis. [I'll hold my hand up here. I much prefer the ISO27k approach which supplements its lists of controls with advice on identifying and analyzing information risks, explicitly introducing a strong business imperative for the security.] 
[17.3] Implement a security awareness program that (1) focuses on the methods commonly used in intrusions that can be blocked through individual action, (2) is delivered in short online modules convenient for employees (3) is updated frequently (at least annually) to represent the latest attack techniques, (4) is mandated for completion by all employees at least annually, (5) is reliably monitored for employee completion, and 6) includes the senior leadership team’s personal messaging, involvement in training, and accountability through performance metrics.
Oh dear. I would quarrel with every one of those six points:
  1. Why would you 'focus on the methods commonly used in intrusions' (specifically) rather than, say, protecting intellectual property, or spotting and correcting mistakes? We know of well over 50 topics within information risk and security that benefit from heightened awareness. This point betrays the prejudices of CIS and the authors of the document: they are myopically obsessed with Internet hackers, neglecting the myriad other threats and kinds of incident causing problems.

  2. Why 'short online modules'? It is implied that convenience rules, whereas effectiveness is at least as important. 'Online modules' only suit some of the workers who use IT systems, and we all know just how useless some online training and awareness programs can be in practice. If all you want is to be able to tick the box on some compliance checklist, then fine: force workers to click next next next while skimming as rapidly as possible through some mind-numbingly dull and boring, not to say cheap-and-nasty cartoon-style or bullet-point drivel, and answer some banal question to "prove" that they have completed the "training", and Bob's yer uncle! If you actually want them to learn anything, to think differently and most of all to change the way they behave, you are sadly deluded if 'short online modules' are your entire approach. Would you teach someone to drive using 'short online modules'? Can we replace the entire educational system with 'short online modules'? Of course not, don't be daft. 

  3. I agree that a security training and awareness program needs to be 'updated frequently', or more accurately I would say that it needs to reflect current and emerging information risks, plus the ever-changing business environment, plus recent incidents, plus business priorities, plus learning from past issues, incidents and near-misses (including those experienced by comparable organizations). Updating those 'short online modules' on 'the methods commonly used in intrusions' and 'the latest attack techniques' misses the point, however, if it all comes down to a cursory review and a bit of titivation - worse still if the materials are only updated annually. The mere suggestion that annual updates might be sufficient is misleading in the extreme, bordering on negligent: things are moving fast in this domain, hence the security awareness and training program needs to be much more responsive and timely to be effective. [Again, if all you want is that compliance tick, then fine, suit yourself. Ignore the business benefits a culture of security would bring you. Do the least amount possible and pretend that's enough - like Sony and Target might have done ...] 

  4. 'Mandated for completion' harks back to the bad old days when we were all forced to attend some tedious annual lecture on health and safety, dental hygiene or whatever. We know that is a badly broken model, so why push it? Modern approaches to education, training and awareness are much more inclusive and responsive to student needs. The process caters for differing styles and preferences, uses a range of materials and techniques, and most of all seeks to hook people with content that is useful, interesting and engaging, so there should be no need to force anyone through the sausage machine. Wake up CIS! The world has moved on! How about the crazy notion, for instance, of rewarding people for being aware, demonstrating their understanding by doing whatever it is you want them to do? If your awareness and training materials are not sufficiently creative and engaging to drive demand, if your people need to be dragged kicking and screaming into the process then you might as well break out the cat-o-nine-tails. "The floggings will continue until morale improves"!

  5. Mere 'completion' of those 'short online modules' is trivial to determine as I mentioned above: simply count the clicks and (for bonus marks) set an arbitrary passmark on that final 'assessment' - albeit allowing students to try as many times as they can be bothered to keep on guessing, just to escape the tedium and get back to work. Do you honestly think that has any value whatsoever, other than (once again) ticking the compliance box like a good little boy? The same can be said for attendance at awareness sessions, courses, events or whatever. It's easy to count page views on the intranet Security Zone, for instance, and from there to claim that x% of employees have participated, but how many of them have taken the slightest bit of interest or actually changed their behaviors in any meaningful and positive way? You won't find that out by measuring 'completion' of anything. In short, 'completion' metrics are not PRAGMATIC.

  6. Part 6 is a vague mish-mash of concepts, depending on how one interprets the weasel-words. 'Personal messaging' from the 'senior leadership team' is all too often an excuse for a few (as few as possible!) carefully-chosen words on those dreadfully trite corporate motivational posters: "Make It So" says the boss. "Do it ... or else!" Likewise, 'getting involved in training' might be restated as "Turn up at the odd event" or "Make a guest appearance, say a few words, press-the-flesh". What's completely missing from the CIS advice is the revolutionary idea that managers - at all levels from top to toe - should actively participate in the security awareness and training program as students, not just whip-crackers and budget-approvers. Managers need to be well aware of information risks, security, compliance, governance, control And All That, just as staff need to know how to avoid becoming cyber-victims. How do you expect managers to know and care about that stuff if they are not participating in the security awareness program? What kinds of strategies are they likely to support if they lack much of a clue? [Hint: "Implement the 20 controls" has a certain ring to it.]

    The final clause about 'accountability through performance metrics' once again needs careful interpretation. Along with responsibility, duty, obligation and so on, accountability is a crucially important concept in this field, yet one that is more often misinterpreted than correctly understood. We like to sum it up in the hackneyed phrase "The buck stops here" which works in two ways: first, we are all personally accountable for our actions and inactions, our decisions and indecisions, our good and bad choices in life. The person who clicks the phishing link and submits their password (the very same crappy password they use on all the places they can get away with it) leading to a major incident can and should be held to account for that obvious lapse of judgment or carelessness. At the same time, the person's managers, colleagues, support network and - yes - their security awareness and training people all share part of the blame because information security is a team game. I would also single out for special attention those who put the person in the situation in the first place. There are almost always several immediate issues and a few root causes behind security incidents: teasing out and addressing those root causes is the second angle to stopping-the-buck. Why did the person ignore or misread the signs? Why didn't the systems identify and block the phishing attack? Why wasn't this kind of incident foreseen and avoided or mitigated? ... leading ultimately to "What are we going to do about this?" and sometimes "Who will swing for it?"! Performance metrics are of tangential relevance in the sense that we are accountable for meeting defined and measurable performance targets, but holding people to account for information security is much more involved than counting how many widgets they have processed today. Performance is arguably the most difficult aspect to measure in information security, or cyber-security for that matter. It's all very well to measure the number and consequences of incidents experienced, but how many others were avoided or mitigated?
OK, moving along, let's take a squint at the remaining parts of CSC 17:
[17.4] Validate and improve awareness levels through periodic tests to see whether employees will click on a link from suspicious email or provide sensitive information on the telephone without following appropriate procedures for authenticating a caller; targeted training should be provided to those who fall victim to the exercise. 
Of the 50+ topics in information security awareness and training, why pick on email and phone phishing, specifically? Is nothing else important? I appreciate that phishing is a current concern but so too are ransomware, privacy, human errors, industrial or national espionage, piracy and many many others, ALL of which benefit from targeted awareness and training. What's more, the situation is dynamic and differs between organizations, hence it is distinctly misleading to pick out any one topic for special attention unless it is phrased merely as an example (it wasn't). Oh dear. It gets even worse at the end with the suggestion that 'targeted training' should be administered to victims: is that punishment? It sounds like punishment to me. We're back to flogging again. How about, instead, rewarding those who did not fall for the exercise, the ones who spotted, resisted and reported the mock attack? Hey, imagine that!
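
For what it's worth, here's a minimal sketch of a phishing-exercise summary that counts the reporters as well as the clickers, so the people who spot and report the mock attack can be recognised and rewarded rather than only the victims being singled out. The figures are invented:

    # Minimal sketch: summarise a mock-phishing exercise (all numbers invented)
    def phishing_exercise_summary(sent, clicked, reported):
        return {
            "emails sent":            sent,
            "click rate %":           round(clicked / sent * 100, 1),
            "report rate %":          round(reported / sent * 100, 1),
            "reporters to recognise": reported,   # candidates for thanks/rewards
            "clickers to follow up":  clicked,    # follow-up support, not punishment
        }

    print(phishing_exercise_summary(sent=500, clicked=85, reported=140))
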
[17.5] Use security skills assessments for each of the mission critical roles to identify skills gaps. Use hands-on, real world examples to measure mastery. If you do not have such assessments, use one of the available online competitions that simulate real-world scenarios for each of the identified jobs in order to measure mastery of skills mastery.
'One of the online competitions'?? Well I suppose that is an approach, but somehow I doubt its effectiveness - and (for good measure) it definitely raises security concerns. Instead of being tacked on the bottom like the donkey's tail, 17.5 should probably have been included in 17.1 since it relates back to the identification of 'gaps'. As to measuring 'mastery of skills mastery', let's assume that is just a typo, a human error, one of those 50+ other things that best practice broad-spectrum information security awareness and training programs cover besides phishing or cyber-wotsits. 

Bottom line: sorry, CIS, but if control #17 is representative of the remaining 19, I'm disappointed. There are too many flaws, errors and omissions, and it is very biased towards IT and hacking. It is prescriptive and too far from good practice, let alone best practice, to recommend. 


PS  Remember these distinctly cynical comments the next time you read or hear someone extolling the virtues of the CIS 20 critical controls. If they think the CIS advice is wonderful, what does that tell you about their standards and expectations? 

PPS  And if you disagree with me, the floor's yours. I'm happy to discuss. Put me right if you will.

Sep 14, 2016

Resilience good, assured resilience better, proven and optimized resilience best

One of several excellent heads-ups in the latest issue of RISKS concerns an IEEE report on Facebook's live testing of their data center resilience arrangements.

Facebook's SWAT team, business continuity pros, tech crew and management all deserve congratulations for not just wanting to be resilient, but making it so, proving that it works, and systematically improving it so that it works well.

However, I am dismayed that such an approach is still considered high-risk and extraordinary enough to merit both an eye-catching piece in the IEEE journal and a mention in RISKS. Almost all organizations (ours included*) should be sufficiently resilient to cope with events, incidents and disasters - the whole spectrum, not just the easy stuff.  If nobody is willing to conduct failover and recovery testing in prime time, they are admitting that they are not convinced the arrangements will work properly - in other words, they lack assurance and consequently face high risks.

About a decade ago, I remember leading management in a mainstream bank through the same journey. We had an under-resourced business continuity function, plus some disaster recovery arrangements, and some of our IT and comms systems were allegedly resilient, but every time I said "OK so let's prove it: let's run a test" I was firmly rebuffed by a nervous management. It took several months, consistent pressure, heavyweight support from clued-up executive managers and a huge amount of work to get past a number of aborted exercises to the point that we were able to conduct a specific disaster simulation under strictly controlled conditions, in the dead of night over a bank holiday weekend. The simulation threw up issues, as expected, but on the whole it was a success. The issues were resolved, the processes and systems improved, and assurance increased. 

At that point, management was fairly confident that the bank would survive ... a specific incident in the dead of night over a bank holiday weekend, provided the incident happened in a certain way and the team was around to pick up the pieces and plaster over the cracks. I would rate the confidence level at about 30%, some way short of the 50% "just about good enough" point, and well shy of the 100% "we KNOW it works, and we've proven it repeatedly under a broad range of scenarios with no issues" ultimate goal ...

My strategy presentations to management envisaged us being in the position that someone (such as the CEO, an auditor or a careless janitor) could wander into a computer suite on an average Wednesday morning and casually 'hit the big red button', COMPLETELY CONFIDENT that the bank would not instantly drop offline or turn up in the news headlines the next day. Building that level of confidence meant two things: (1) systematically reducing the risks to the point that the residual risks were tolerable to the business; and (2) increasing assurance that the risks were being managed effectively. The strategy laid out a structured series of activities leading up to that point - stages or steps on the way towards the 100% utopia.

The sting in the tail was that the bank operated in a known high-risk earthquake zone, so a substantial physical disaster that could devastate the very cities where the main IT facilities are located is a realistic and entirely credible scenario, not the demented ramblings of a paranoid CISO. Indeed, not long after I left the bank, a small earthquake occurred, a brand new IT building was damaged and parts of the business were disrupted, causing significant additional costs well in excess of the investment I had proposed to make the new building more earthquake-proof through the use of base isolators. Management chose to accept the risk rather than invest in resilience, and are accountable for that decision. 


* Ours may only be a tiny business but we have business-critical activities, systems, people and so forth, and we face events, incidents and disasters just like any other organization. Today we are testing and proving our standby power capability by actually using it: it's not fancy - essentially just a portable generator, some extension cables and two UPSs supporting the essential office technology (PCs, network links, phones, coffee machine ...), plus a secondary Internet connection - but the very fact that I am composing and publishing this piece proves that it works since the power company has taken us offline to replace a power pole nearby. This is very much a live exercise in business continuity through resilience, concerning a specific category of incident. And yes there are risks (e.g. the brand new UPSs might fail), but the alternative of not testing and proving the resilience arrangements is riskier. 

That said, we still need to review our earthquake kit, check our health insurance, test our cybersecurity arrangements and so forth, and surprise surprise we have a strategy laying out a structured series of activities leading up to the 100% resilience goal. We eat our own dogfood.

UPDATE: today (15th Sept) we had an 'unplanned' power cut for several hours, quite possibly a continuation or consequence of the planned engineering work that caused the outage yesterday. Perhaps the work hadn't been completed on time so the engineers installed a temporary fix to get us going overnight and returned today to complete the work ... or maybe whatever they 'fixed' failed unexpectedly ... or maybe the engineers disturbed something in the course of the work ... or maybe this was simply a coincidence. Whatever. Today the UPSs and generator worked flawlessly again, along with our backup Internet and phone services. Our business continued, with relatively little stress thanks to yesterday's activities.

Sep 12, 2016

Security metrics for business or business metrics?

At first glance, "How To Talk About Security With Every C-Suite Member" by Andrew Storms dispenses good advice. The author emphasizes that there's not much point talking tech to the execs.
"Communicating with C-suite leaders about the ongoing security threats your company faces can easily turn into an exercise in futility. Their eyes glaze over as you present metrics and charts that illustrate the current state of the business’s IT infrastructure, and your attempts to justify investments in additional security tools and systems end up being unsuccessful."
Mmm, well, if you are indeed trying to justify investments in [IT] security tools and [IT] systems using [IT] metrics and charts concerning the IT infrastructure, then yes you are patently focused on IT.  Or, as Mr Storms put it, you are "failing to contextualize your data into terms that resonate with leaders who work outside of IT."

"When speaking with leaders from across the business, it’s important to remember the common goal you share: enablement. In your case, by assessing the risks your company faces, balancing them with the potential costs of a breach, and making security investments accordingly, you’re enabling every department to function and thrive on a day-to-day basis. You need to make it clear to your audience—in terms they can relate to—how your team is directly contributing to this universal goal. Rather than presenting industry-standard metrics without further explanation, contextualize your findings by showing their net value."
I welcome the business enablement angle even more than the [information] risk part but there's more to this than investing in controls to prevent 'breaches', and that final sentence jars with me. 'Rather than presenting industry-standard metrics' is a curious turn of phrase: why would anyone be presenting 'industry-standard metrics', and if so what are they? What does that even mean? It's a false dichotomy.

It gets worse ...
"Explain exactly why you’ve chosen to present this metric, and describe exactly how addressing hosts with a 5-or-higher CVSS score directly enables the whole company."
To put that another way, "Say why your geeky tech metric is on the table and how wonderfully it shines". The implication is that the execs are not clever enough to understand IT security metrics, so dumb them down (speak slowly and loudly, wave arms wildly!).
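
To illustrate the gulf between the geeky number and something an exec might actually care about, here's a minimal sketch that takes the same vulnerability data but weights each affected host by the criticality of the business service it supports. The hosts, scores and weights are invented for illustration only:

    # Minimal sketch: the raw technical count versus a crude business-weighted view (invented data)
    hosts = [
        # (hostname, worst CVSS score, business service supported, criticality weight 1-5)
        ("web-01",  7.5, "online ordering",   5),
        ("hr-app",  5.2, "payroll",           4),
        ("dev-box", 9.1, "internal test lab", 1),
    ]

    technical_view = sum(1 for _, cvss, _, _ in hosts if cvss >= 5)
    print(f"Hosts with CVSS >= 5: {technical_view}")        # the metric from the article

    business_view = {}
    for host, cvss, service, weight in hosts:
        if cvss >= 5:
            business_view[service] = business_view.get(service, 0) + cvss * weight

    for service, exposure in sorted(business_view.items(), key=lambda kv: -kv[1]):
        print(f"Weighted exposure affecting '{service}': {exposure:.1f}")

Even that crude weighting only papers over the cracks, though: the better conversation starts from the business objectives and works back to the metrics, not the other way around.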

The possibility of the execs having driven the selection of information security metrics to suit business objectives in the first place doesn't seem to have occurred to the author.

I would turn this whole thing on its head. Instead of 'talking about security', the discussion should instead be about the business, or rather what concerns the execs in relation to achieving the organization's business and other objectives. Instead of focusing rather negatively on [information] risks, how about turning the discussion towards something much more positive such as the business opportunities opened up by secure access to high quality information?

The point is that investing in security is not a goal in itself but a means to an end. If the end is obvious, and it is clear how information security supports or enables reaching it, investing or not investing is no longer a major issue. It's not exactly a foregone conclusion, however, because there may be other even more valuable opportunities and various constraints. It's a strategic issue, exactly the kind of thing that execs enjoy. With this in mind, the particular metrics are incidental, almost irrelevant to a much bigger and more significant business discussion.

Gary (Gary@isect.com)

Aug 31, 2016

Hot off the press!

The NoticeBored security awareness topic for September is communications security.

It is just as important to protect information while it is being communicated as when it is stored and processed, and yet communications mechanisms are numerous, widespread, complex, dynamic and hence tricky to control. 

Communications security is a substantial challenge for every organization, even the very best.

We have covered various aspects of communications from different angles many times before in the awareness program, mostly emphasizing ICT (information and communications technologies) but also the human aspects such as social engineering and fraud. This time around we supplement the usual fare with something new: body language.

Aside from the actual words we use in conversation or in writing, the way we express stuff is often just as revealing - in fact in information security terms, body language qualifies as a communications side-channel. 

The TV is awash with examples, such as the US presidential candidates currently making numerous appearances. Provided they stick to the script, the politicians' carefully-prepared and well-rehearsed speeches are intended, of course, to follow specific lines and communicate largely pre-determined messages. In practice, their gestures, facial expressions, nods and shakes of the head, smiles and grimaces, demeanor, even the          dramatic        pauses       supplement and frame what they are saying, affecting the way they are understood by the audience and (for that matter) the journalists and news media. The specific choice of words, the phrasing and intonation, even the speaker's volume and cadence, also influence the communication. In addition there's the broader context including factors such as the lead-up, time of day, location, props, formality, clothing, audience reactions and participation, and more.

With all that in mind, it's obvious that the words alone don't paint the whole picture, hence controlling the communications involves much more than simply writing the script.  Most politicians, presenters, celebrities and performers are presumably coached in how to communicate well, or at least they are experienced and well-practiced at it. They don't all have the same abilities, however, and lapses of concentration or emotional outbursts can trip anyone up. If you are observant, there are other more subtle cues, many of which the speaker is unaware of (gently shaking the head in disagreement while saying "yes" is a classic and surprisingly common example). Controlling our subconscious, reflexive or innate behaviors is hard, especially under the full glare of the global media presence.

Translating over into the corporate context, there are information security implications for situations such as business meetings, phone calls, video-conferences, negotiations, sales pitches, seminars and presentations - including, for that matter, security awareness and training events. Whenever we converse or interact with other people, there are bound to be both intended and unintended communications. Being aware of this is the first step on the way to taking charge and controlling - or securing - the comms. It's also an important part of responding to the audience since communications are almost invariably bidirectional.

On that note, please comment on this item or email me with your thoughts. I'd love to hear back from you. 

Hello! Is there anyone out there?  Tap once for yes, twice for no.

Gary (Gary@isect.com)

PS  I guess that's two taps then ...