Welcome to NBlog, the NoticeBored blog

The blogging will continue until morale improves

Aug 21, 2019

NBlog Aug 21 - Please be inform that our service to you will be terminated in a shortly time


Friends, Romans, customers, lend me your screens. I come to bury NoticeBored, not to praise it.

Sadly, the time has come to draw a lengthy chapter in our lives to a close.

The monthly NoticeBored security awareness and training subscription service will cease to be early next year. As of April 2020, NoticeBored will be no more. It will be pushing up the daisies. We'll be nailing it to the perch and sending it off to the choir invisible.


The final straw and inspiration for the title of this piece was yet another exasperating phisher:




... and the realisation that suckers will inevitably fall for scams as ridiculous as that, no matter what we do. There will always be victims in this world. Some people are simply beyond help ... and so too, it seems, are organizations that evidently don't understand how much they need security awareness and training. "It's OK, we have technology" they say, or "Our IT people run a seminar once a year!" and sure enough the results are plain for all to see. Don't say we didn't warn them.

We tried, believe me we tried, to establish a viable market for top-quality, professionally written, creative awareness and training content. Along the way we've had the pleasure of helping our fabulous customers deliver world-class programs with minimal cost and effort. But in the end we were exhausted by the overwhelming apathy of the majority.

As we begin the research for our 200th security awareness module, it's time to move on, refocusing our resources and energies on more productive areas - consulting and auditing on information risk and security, ISO27k, security metrics and suchlike.

We're determined that the gigabytes of creative security awareness and training content we've created since 2003 will not end up on some virtual landfill so we'll continue to offer and occasionally update the security policies and other materials through SecAware.com. The regular monthly updates will have to go though as there simply aren't enough hours in the day. "She cannae take it, Cap'n!"

Meanwhile these bloggings will continue. We're still just as passionate as ever about this stuff (including the value of security awareness, despite everything). We've got articles, books and courses to write and deliver, standards to contribute to, global user communities to support, proposals to prepare. 

Must go, things to do.

Aug 20, 2019

NBlog Aug 20 - cyber-insurance standard published


We are delighted to announce the birth of another ISO27k standard:

ISO/IEC 27102:2019 — Information security management —

Guidelines for cyber-insurance

The newest, shiniest member of the ISO27k family nearly didn't make it into this world. Some in the insurance industry are concerned about this standard muscling in on their territory. Apparently, no other ISO/IEC standard seeks to define a category of insurance, especially one as volatile as this. Despite some pressure not to publish, this standard flew through the drafting process in record time, thanks mostly to starting with an excellent ‘donor’ document and a project team tightly focused on producing a standard to support and guide this emerging business market. Well done I say! Blaze that trail! This is what standards are all about.

‘Cyber’ is not yet a clearly, formally and explicitly defined prefix, despite being bandied about willy-nilly as a solid-gold buzzword. It is scattered like confetti throughout this standard but, unfortunately, not defined in it, although some cyber-prefixed versions of conventional common-or-garden information risk and security terms are defined by reference to “cyberspace”, which is - of course - the “interconnected digital environment of networks, services, systems, and processes”. Ah, OK then. Got yer.

We each have our own interpretations and understandings of the meaning of cyber, some of which differ markedly. The information risks associated with cyberwarfare and critical national and international infrastructures (such as the Internet), for example, are much more substantial than those associated with the activities of hackers, VXers and script kiddies generally. Even a ‘massive’ privacy breach or ransomware incident is trivial compared to, say, all-out global cyberwar. The range is huge ... and yet people (including ISO/IEC JTC1/SC27) are using 'cyber' without clarifying which part or parts of the range they mean. Worse still, some (even within the profession) evidently don’t appreciate that there are materially different uses of the same term. It’s a recipe for confusion and misunderstanding.

The standard concerns what I would call everyday [cyber] incidents, not the kinds of incident we can expect to see in a cyberwar or state-sponsored full-on balls-out all-knobs-to-eleven cyber attack. I believe [some? most? all?] policies explicitly exclude cyberwarfare ... but defining that may be tricky for all concerned! No doubt the loss adjusters and lawyers will be heavily involved, especially in major claims. At the same time, the insurance industry as a whole is well aware that its business model depends on its integrity and credibility, as well as its ability to pay out on rare but severe events: if clients are dubious about being compensated for losses, why would they pay for insurance? Hopefully this standard provides the basis for mutual understanding and a full and frank discussion between cyber-insurers and their clients leading to contracts (confusingly termed “policies”!) that meet everyone’s needs and expectations.

There are legal and regulatory aspects to this too e.g. compensation for ransomware payments may be legally prohibited in some countries. Competent professional advice is highly recommended, if not essential.

Depending on how the term is (a) defined and (b) interpreted, ‘cyber incidents’ covers a subset of information security incidents. Incidents such as frauds, intellectual property theft and business interruption can also be covered by various types of insurance, and some such as loss of critical people may or may not be insurable. Whether these are included or excluded from cyber-insurance is uncertain and would again depend on the policy wording and interpretation. 

Likewise the standard offers sage advice on the categories or types of costs that may or may not be covered, depending on the policy wording. I heartily recommend breaking out the magnifying glasses and poring over the small-print carefully. Do it during the negotiation and agreement phase prior to signing on the dotted line, or argue it out later in court - your choice.

Personally, I’d like to see the business case for using cyber-insurance as a risk treatment option expanded further (beyond what the standard already covers), laying out the pros and cons, the costs and benefits of so doing, in business terms. It is a classic example of the risk treatment now known as ‘sharing’, formerly ‘transferral’. Maybe I will write a paper on that very topic. Watch this space.

Aug 19, 2019

NBlog Aug 19 - Vote for your favorite security blogs

Purely by chance, I discovered today that this blog has been nominated in the "Most entertaining security blog" 2019 category at Security Boulevard.

What a nice surprise! 

Regardless of the eventual outcome of the voting, it's humbling to make it onto the nominations list alongside several excellent blogs that I enjoy reading. Please visit the voting page to see what I mean, browse the nominated blogs and vote for your favorites [you can suggest blogs in addition to those nominated].

Meanwhile, the bloggings will continue ...



PS  If you're on the lookout for infosec blogs worthy of your attention, take a look at this excellent shortlist from VPNmentor.

NBlog Aug 19 - extending the CIS security controls

The Center for Internet Security has long provided helpful free advice on information (or cyber) security, including a "prioritized list of 20 best practice security controls" addressing commonplace risks.

In the 'organizational controls' group, best practice control 17 recommends "Implement a security awareness and training program". Sounds good, especially when we read what CIS actually means by that:
"It is tempting to think of cyber defense primarily as a technical challenge, but the actions of people also play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of system design, implementation, operation, use, and oversight. Examples include: system developers and programmers (who may not understand the opportunity to resolve root cause vulnerabilities early in the system life cycle); IT operations professionals (who may not recognize the security implications of IT artifacts and logs); end users (who may be susceptible to social engineering schemes such as phishing); security analysts (who struggle to keep up with an explosion of new information); and executives and system owners (who struggle to quantify the role that cybersecurity plays in overall operational/mission risk, and have no reasonable way to make relevant investment decisions)."
Recognising that security awareness and training programs should not merely address "end users" (meaning staff or workers in general who use IT) is one of the things that differentiates basic approaches from primitive ones, in effect extending the program into a broader organizational culture-development effort. Well done CIS for pointing that out, although personally I would have offered more explicit guidance rather than emphasizing a "skills gap analysis". For example, having distinguished several audiences, I suggest preparing awareness and training materials on subjects and in formats that suit their respective perspectives and needs. Also, make the awareness and training activities ongoing, close to continuous rather than infrequent or occasional. Those two suggestions, taken together, lift basic security awareness and training programs to the next level - good practice at least, if not best practice.

Anyway, that's just 1 of 20. Similar considerations apply to the other 19 controls: no doubt they can all be embellished, refined or expanded upon by subject matter experts ... which hints at a 21st control: "Actively seek out and consider the advice of experts, ideally experts familiar with your situation", implying the use of consultants or, better still, employing your own information security specialists full-time or part-time as appropriate.

While I'm at it, I'd like to suggest four further controls that are not immediately obvious among the present 20, all relating to management:
22. Information risk management - comprising a suite of activities, strategies, policies, skills, metrics etc. to identify, evaluate and address risks to information systematically and professionally;
23. Management system - a governance arrangement that envelops all aspects of information risk and security management under a coherent structure, ideally covering information risk, information security, governance, compliance, incident management, business continuity and more (e.g. health and safety, since "Our people are our greatest assets"!). Although I'm thinking of ISO27k here, there are in fact several such frameworks. Depending on the organizational or business context, any one of them might be perfect, or it may be better to draw on elements from several in order to assemble a custom arrangement with the help of those experts I mentioned a moment ago;
24. Information risk and security metrics - by focusing attention on and measuring key factors, metrics enable rational management, facilitate continuous improvement and help align information risk and security with business objectives. The advice might usefully expand on how to identify those key factors and how best to measure them, perhaps in the form of a 'measurement system' (there's a toy sketch of the idea just after this list);
25. Information risk and security management strategy - I find it remarkable that strategy features so rarely in this field, given its relevance and importance to the organization. I guess this blind spot stems partly from weaknesses in other areas, such as awareness, management systems and metrics: if management doesn't really understand this stuff, and lacks the tools to take charge and demonstrate leadership, it's left to flounder about on its own with predictable results. If information risk and security managers, CISOs etc. aren't competent or aware of the value of strategy, maybe it never occurs to them to get into this, especially as standards such as ISO/IEC 27001 barely even hint at it, if at all.
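As a tiny illustration of the 'measurement system' idea in item 24, here's a hedged sketch in Python; the metric, the target and every figure in it are invented purely for illustration - the point is simply that a useful metric has a management-agreed target and a visible trend:

```python
# Hypothetical illustration only: a toy 'measurement system' for one security
# awareness metric (monthly phishing-simulation click rate). All figures and
# the 10% target are invented for the example.

monthly_click_rates = {          # month -> % of recipients who clicked
    "2019-03": 28.0,
    "2019-04": 24.5,
    "2019-05": 21.0,
    "2019-06": 19.5,
    "2019-07": 17.0,
}

TARGET = 10.0                    # management-agreed target click rate (%)

def trend(series):
    """Average month-on-month change; negative means improving."""
    values = list(series.values())
    deltas = [later - earlier for earlier, later in zip(values, values[1:])]
    return sum(deltas) / len(deltas)

latest_month, latest_rate = max(monthly_click_rates.items())
print(f"Latest click rate ({latest_month}): {latest_rate:.1f}% (target {TARGET:.1f}%)")
print(f"Average monthly change: {trend(monthly_click_rates):+.1f} percentage points")
print("On track" if latest_rate <= TARGET or trend(monthly_click_rates) < 0 else "Needs attention")
```

The arithmetic is trivial, of course; the value lies in the key factor, the target and the trend all being explicit, agreed with management and reviewed regularly.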
Maybe I should suggest these 5 additional controls to CIS? Their website doesn't exactly call out for suggestions so you, dear blog reader, are in the privileged position of advance notice. Take as long as you like to think this over and by all means comment below, email me or prompt CIS to get in touch. Let's talk!


PS  Seems I'm not alone in recommending the strategic route. I just spotted this in Ernst & Young's Global Information Security Survey 2018-19:
"More than half of the organizations don’t make the protection of the organization an integral part of their strategy and execution plans ... Cybersecurity needs to be in the DNA of the organization; start by making it an integral part of the business strategy ... Strategic oversight is on the rise. The executive management in 7 of 10 organizations has a comprehensive understanding of cybersecurity or has taken measures to make improvements. This is a huge step forward; put cybersecurity at the heart of corporate strategy ... Cybersecurity must be an ongoing agenda item for all executive and non-executive boards. Look to find ways to encourage the board to be more actively involved in cybersecurity."
Whether information risk and security is an integral part of business strategy, or business strategy is an integral part of information risk and security, is a moot point. Either way, they should be closely aligned, each driving and supporting the other. Strong information risk and security is both a business imperative and a business enabler. 

As to putting this on the board's agenda, we've been doing precisely that since, oooh, let me see, 2003 ...



Aug 18, 2019

NBlog Aug 18 - about information assets ... and liabilities


Information security revolves around reducing unacceptable risks to information, in particular significant or serious risks which generally involve especially valuable, sensitive, critical, vital or irreplaceable information. Those are the ‘information assets’ most worth identifying, risk-assessing and securing. 

That seems straightforward but it is more complicated than it sounds for many reasons e.g.:
  • Information exists in many forms, often simultaneously e.g. computer data and metadata (information about information), knowledge, paperwork, hardware designs, molds, recipes, concepts and ideas, strategies, policies, understandings and agreements, experience and expertise, working practices, contacts, software, data structures, intellectual property (whether legally registered and protected or not) … any of which may need to be secured;
  • Information is generally dynamic, hence there is a timeliness aspect to its value (e.g. breaking vs old news, forthcoming vs published company accounts);
  • Information is usually context-dependent – its meaning and value arise partly from relationships to other supporting or related information (e.g. ’42’ may mean many things, even the product of six times nine);
  • Information is often diffuse and hard to identify, evaluate, contain/pin-down and secure – it’s cloudy and “it wants to be free”;
  • Information that is too tightly secured loses its value, since its value comes from its legitimate exploitation or use, timeliness, expression and communication/sharing i.e. its availability;
  • Some information has negative value (e.g. fake news, subterfuge, malware), which makes integrity important – and that’s another complex concept;
  • Severe threats, vulnerabilities or impacts increase the probability or impact of serious incidents, even if the information itself does not seem particularly special (e.g. a faulty 10 cent rivet can bring down a plane);
  • Some information risks are significant because of impacts primarily to third parties if the information is compromised. This includes valuable information belonging to third parties and entrusted to the organization (such as personal information and proprietary information/trade secrets/intellectual property) and various incidents with environmental or societal impacts (e.g. intelligence info about weapons capabilities). If incidents occur, there may be secondary impacts to the organization (such as noncompliance penalties and breakdowns in business relationships or brands) which can be hard to value (partly it depends on the third parties’ and other external reactions to incidents, partly on the accountability aspect).
There’s a lot there to take into account, and that’s not even an exhaustive list! In practice, though, there are some obvious shortcuts (e.g. a hospital is bound to need to address risks involving its health and business information, and “good practice” controls are applicable to most organizations) and the Keep It Simple, Stupid approach makes an excellent starting point – way better than putting all available resources into risk identification and analysis, leaving too little for risk treatment and management.

Aug 16, 2019

NBlog Aug 16 - the brilliance of control objectives


Way back in the 1990s, BS7799 introduced to the world a brilliant yet deceptively simple concept: the "control objective".

Control objectives are short, generic statements of the essential purpose or goal of various information security controls. At a high level, information security controls are intended to 'secure information' but what does that actually mean? The control objectives explain.

Here's an example:

Section 7. System access control
7.1 Business requirement for system access
Objective: To control access to business information.
Access to computer services and data should be controlled on the basis of business requirements.
This should take account of policies for information dissemination and entitlement.

At first glance, this control objective is self-evident in that the objective of an access control is obviously to control access but look again: the objective explicitly refers to 'business information' and the following notes emphasize business requirements and policies in this area. In other words, this security control has a business purpose. The reason for controlling access to IT systems is to secure business information for business reasons.

The standard didn't elaborate much on those business reasons, partly because they vary markedly between organizations. A bank, for instance, has different information facing different information risks than, say, a mining company or government department. They all have valuable information facing risks that need to be addressed, and system access control is likely to be applicable to each of them, but in different ways. There are subtleties here that the standard deftly sidestepped, leaving it to intelligent readers to interpret the standard according to their circumstances.

The standard went on to describe controls that would satisfy the objective, forming a strong link between the security measures employed and the business reasons for doing so. I've always treated the controls themselves as examples that illustrate possible approaches, reminders or hints of the kinds of things that might be useful to satisfy the control objectives. There are loads of different ways to secure access to IT systems, and as an experienced infosec pro I don't need a standard to list them all out for me in great detail, especially as those details depend on the situation and the business context (although the Germans have made a valiant attempt to do that!). Furthermore, there is a near-infinite set of possible controls if you consider all the combinations and permutations, parameters and variants, hence it is unrealistic to expect a standard to identify the one best way to do this. There isn't a unique solution to this puzzle.

So instead the succinct control objectives set us thinking about what we're trying to achieve for the business in each of the 30-odd areas covered. Brilliant!

The control objectives in BS 7799:1995 and the BSI/DTI Code of Practice that preceded it were well-written and remain relevant today. Unfortunately, they have been diluted over the years since BS 7799 became ISO/IEC 17799 then ISO/IEC 27002. I am disappointed to learn that the next release of '27002 may drop them altogether, severing a valuable link between business and information security ... but that doesn't mean they are gone altogether. Maybe I'll launch a collaborative project on the ISO27k Forum to elaborate on an updated set of control objectives, or maybe I'll just do it myself in my copious free time [not]. We'll see how it goes.

Aug 11, 2019

NBlog Aug 11 - loop back security


This is a classical step-wise view of the conventional ISO27k approach to managing information risks:
  1. Identify your information risks;
  2. Assess/analyze them and decide how to treat them (avoid, share, mitigate or accept);
  3. Treat them - apply the chosen forms of risk treatment;
  4. Monitor and manage, reviewing and taking account of changes as necessary.
As an example, most organizations have some form of user registration process to set up network computer accounts (login IDs) for workers. The controls outlined in ISO/IEC 27001 Annex A section 9.2.1, described in more detail in ISO/IEC 27002 section 9.2.1, are part of the suggested means of mitigating the risks associated with inappropriate user access to information and information systems - mitigation being one of the four forms of risk treatment applied at step 3 of the risk management process.

Ah but what happened to steps 1 and 2? Oh oh.

Working backwards from step 3, management appear to have decided, back at step 2, that the A.9.2.1 controls are required. So how did they reach that decision? Is there any evidence of that decision ever being taken, other than the fact that the controls are now in place? What drove the decision? What were they hoping to achieve? Were the alternative risk treatment options considered and rejected?

Prior to that, someone presumably identified the information risks in step 1. Great! Let's see them, then. Go ahead, make my day, show me the risks that are addressed by, say, your controls under A.9.2.1. Let's talk them over. What is the organization hoping to achieve with these controls? What would be the predicted business consequences if the controls weren’t in place, didn’t work as planned, or failed for some reason so the risks eventuated? How drastic would such an incident be to the organization: would it be terminal, very costly and disruptive, somewhat costly and disruptive, or merely annoying and of little real consequence? Relative to other information risks, are these risks high, medium or low? Are these risks a major concern for the organization, a clear problem area that has maybe led to a string of nasty incidents or near-misses in the recent past, or are they just theoretical concerns, perhaps things that might conceivably be a problem at some future point?

Posing such questions is not simply a matter of me being an awkward bugger, a stickler for process, trying to prove a hypothesis that you didn't get to where you are by following the route prescribed by ISO27k but instead assumed that “Of course we need access controls! Everyone needs access controls!”. The real reason is to explore your organization's understanding of its information risks, since that understanding implies a level of care over designing, documenting, implementing, operating and managing the controls, relative to all the other controls and risk treatments in scope of the ISMS - and not just the user registration and access controls: there are loads of controls relating to loads of risks. Do you have a good grasp of them, or have you jumped directly to The Answer without understanding the Question? Show me your workings!
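To make 'show me your workings' a little more concrete, here's a minimal and purely hypothetical sketch in Python (the field names and values are invented, not drawn from any real ISMS) of the kind of risk register record that would let someone trace an implemented control such as A.9.2.1 back through the treatment decision and assessment to the identified risk:

```python
# Hypothetical sketch of a single risk register entry, showing the traceability
# ("the workings") from identified risk, through assessment and treatment
# decision, to the Annex A control references. All names and values are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str                 # step 1: the identified information risk
    likelihood: str                  # step 2: assessment/analysis ...
    impact: str
    rating: str
    treatment: str                   # ... and decision: avoid / share / mitigate / accept
    rationale: str                   # why that treatment was chosen
    controls: List[str] = field(default_factory=list)  # step 3: e.g. Annex A references
    owner: str = ""
    review_date: str = ""            # step 4: monitor, manage and review

example = RiskRegisterEntry(
    risk_id="IR-042",
    description="Inappropriate user access to business systems via stale or excessive accounts",
    likelihood="Possible",
    impact="Major (fraud, privacy breach, business disruption)",
    rating="High",
    treatment="mitigate",
    rationale="Access underpins several critical business processes; avoidance and "
              "sharing were considered and rejected as impractical",
    controls=["A.9.2.1 User registration and de-registration",
              "A.9.2.5 Review of user access rights"],
    owner="Head of IT Operations",
    review_date="2020-02-01",
)

print(f"{example.risk_id}: {example.rating} risk, treated by '{example.treatment}' using "
      f"{', '.join(example.controls)} (next review {example.review_date})")
```

Whether such records live in a spreadsheet, a GRC tool or a script is beside the point: what matters is that the decision trail exists and can be produced on demand.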

There are implications for step 4 as well. If the A.9.2.1 controls are absolutely vital for sound business reasons relating to the associated risks, clearly management needs to be certain they are strong, which implies a lot of care and assurance, close monitoring and urgent action if they look likely to, or do, fail. If they are merely necessary, somewhat less care may be sufficient, with some assurance. If they are nice to have, then the amount of effort and assurance may be minimal … saving the resources for other more important matters. [Evidently, since they exist, someone has already decided they are not unnecessary!].

The approach I'm waffling on about here is an illustration of a far more general point about rational, systematic or even scientific management. We often do things 'just because'. We follow convention. We adopt 'good practices' and prefer not to 'buck the trend' or 'stand out' ... but that's not always the best approach, and with a bit of thought we may be able to do things better.

Looping back and forth through any sequential process gives us the opportunity to review and revise our understanding, deepening and extending it on each pass. Identifying and challenging our assumptions can lead to valuable insight.

Of course there is a near infinity of loops to loop and it's neither practical nor advisable to attempt to review everything, implying the need for a process to decide which loops to loop, addressing the why, when, how and who questions. I'll tackle that aspect another time.

Aug 10, 2019

NBlog Aug 10 - the formalities of certification

ISO/IEC JTC1/SC27 is currently getting itself all hot-under-the-collar about cloud security certificates, certifying compliance with standards that were neither intended nor written for certification purposes. 

The ISO27k cloud security standards ISO/IEC 27017 and ISO/IEC 27018 are not written as formally as certifiable standards such as ISO/IEC 27001 ... and yet I gather at least one accredited certification body has been issuing compliance certificates anyway, implying that the auditors must have used their discretion in interpreting the standards and deciding whether the organizations fulfilled the requirements sufficiently well to 'deserve' certificates. The trustworthiness of those certificates, then, depends in part on the competence and judgement of the certification auditors, not just on the precise wording of the standards. In other words, there's an element of subjectivity about it.

The key issue is that, in this context, compliance certification is a formal process designed to ensure that every duly-issued certificate says something meaningful, trustworthy and hence valuable about the certified organization's status - specifically, that it has been independently and competently verified that they fulfill all the mandatory requirements of the respective standard. Certification is meant to be an objective test.

That's why certifiable standards such as '27001 are so precisely worded and narrowly interpreted, for example distinguishing "shall" (= mandatory) from "should" (= discretionary) despite those being different tenses of the exact same English verb. Standards that are not intended to be used for certification processes are not so precisely and narrowly worded, allowing more discretion on how they are applied. To avoid confusion (!), they are not supposed to use the word "shall", and there are drafting rules about similar words (e.g. "must", "may" and "can") laid down in the formal ISO Directives.

The idea, obviously enough, is to leave as little room for subjective interpretation as possible in the certifiable standards, a tricky objective in a field as diverse and dynamic as information risk and security management, especially given the huge variety of applicable organizations. The context is markedly different to, say, the specification of nuts and bolts.

ISO 9000 was, I think, the first certifiable ISO standard to address this issue: rather than attempt to specify and certify an organization's product quality practices directly, the standard formally specifies an overarching "quality management system", which in turn should ensure that the products are of an appropriate quality. The "certifiable management system" approach has since spread to information security, environmental protection and so on.

It is more of an issue for the accreditation and certification bodies or ISO than for SC27 but, hey, SC27 has plenty of passionately-held opinions and, to be fair, it is an integrity issue. I suspect there will be a crack-down on non-management system certifications, or at least a rewording to distance them from the management systems compliance certificates. That in turn will increase pressure to develop certifiable [management system] standards for cloud security and other domains (such as IoT security) where there is clearly market demand.

Aug 8, 2019

NBlog Aug 8 - loopy intros


Normally in an awareness seminar or training course, we display a static title slide on the screen as people wander into the room, sipping coffee and chatting among themselves then settling down for the show. The title slide tells them they are in the right place at the right time but it's a boring notice - and the audience soon gets NoticeBored.

So, how about instead showing something more interesting to catch their eyes (and ears?) as they arrive?

It's not too hard to set up a looping mini-presentation by following these instructions. Essentially, you add the loopy slides to the start of your conventional slide deck, set them to advance automatically every few seconds and 'repeat until escape'. The 'escape' can be achieved by adding an action button to the loopy slides that, when clicked, launches the main part of the presentation.

An alternative approach is to separate the loopy from main presentations. Run the loopy presentation as people arrive. When everyone is settled down, terminate it and launch the main presentation instead. 

Although this involves a couple more clicks for the changeover, it has some advantages:

  • The main presentation is totally unchanged. You can add a loopy intro to any slide deck without changing those slide decks at all. The slide numbering is unchanged. You can still print the speaker-notes pages as handouts without worrying about or wasting paper on those loopy slides.
  • The loopy intro can be something generic, ideally eye-catching and perhaps amusing, perhaps customized to show the title of the main presentation (or if you can't even be bothered to do that, simply write the title on a sign!).
  • You might like to play some background muzak quietly as people arrive, partly to let people know that the show is on, partly to help things settle. Video clips are generally better with sound too.
  • The loopy intro can be re-used across numerous presentations, becoming part of your awareness and training program's branding. Before long, the audience will learn to recognize the style and content of the intro, as well as the main presentation ... hence it's worth investing a little of your valuable resources into preparing something appropriate and impressive. Please make it professional: remember you have an adult audience not a bunch of pre-schoolers. Minions are probably not the best role models.
  • The loopy intro can be updated over time - for example, you might use it to promote your upcoming awareness and training activities, planned sessions, topics, current issues, new stuff, team members, policy snippets, major incidents, news headlines or whatever. Proudly display your best security metrics. Display embarrassing photos from past infosec events, or physical security incidents. Get creative! This is also part of your branding, and fits very nicely with the ongoing/rolling approach to security awareness and training that we heartily recommend.
Maybe we should prepare a generic loopy intro for the InfoSec 101 module, something customers can adapt to their needs? We're planning to update that module early next year so we have time to get our thinking caps on and try out the idea in the NoticeBored slide decks between now and then.
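For what it's worth, the loopy slides themselves could even be generated rather than hand-crafted. Here's a rough sketch using the python-pptx library - an assumption on my part, any scripting route would do - that churns out one plain slide per awareness snippet; the 'advance every few seconds' and 'loop until Esc' settings still need to be applied by hand in PowerPoint (Transitions tab, and Slide Show > Set Up Slide Show):

```python
# Hypothetical sketch: generate a simple 'loopy intro' deck from a list of
# awareness snippets using python-pptx. Auto-advance timing and the
# 'loop continuously until Esc' option are then set manually in PowerPoint.

from pptx import Presentation
from pptx.util import Inches, Pt

snippets = [                         # invented examples - substitute your own content
    "Welcome! Today's topic: social engineering",
    "Last quarter's phishing click rate: down again - thank you!",
    "Policy snippet: report suspected incidents within the hour",
    "Coming soon: a security awareness module on forensics",
]

prs = Presentation()
blank_layout = prs.slide_layouts[6]              # the blank layout in the default template

for text in snippets:
    slide = prs.slides.add_slide(blank_layout)
    box = slide.shapes.add_textbox(Inches(1), Inches(2.5), Inches(8), Inches(2))
    para = box.text_frame.paragraphs[0]
    para.text = text
    para.font.size = Pt(40)
    para.font.bold = True

prs.save("loopy_intro.pptx")
print(f"Wrote loopy_intro.pptx with {len(snippets)} slides")
```

Swap the snippets for your own branding, metrics or upcoming topics and regenerate the deck whenever the content goes stale.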

Aug 3, 2019

NBlog Aug 3 - business objectives


A question on the ISO27k Forum today, from someone convinced that a network diagram is what he needs to scope his Information Security Management System, coincided with some strategy and metrics work I'm doing for a client. In both cases, it helps to elaborate on and clarify the business reasons why it’s important for the organization to both protect and exploit information. What is it that makes information so valuable that it is worth protecting?

In short, what's the point?

Understanding and elaborating on those business objectives is very useful for several reasons:
  • They form a direct link between ‘the business’ through ‘valuable information’ to ‘information risk’ to ‘information security’ (and other risk treatments since information security controls are not the whole deal).  Hence we information security pros are not just promoting what we claim to be good security practices for the sake of it, but pushing certain things because we care about helping the organization achieve its goals, and believe those things are worthwhile for the business. In my experience, the business-first approach makes it harder for anyone to push back. It knocks the wind out of their sails if the ISMS is clearly aligned with corporate strategies. Resistance is futile.
  • Details such as what kinds of information are essential/most valuable, and why, are useful when it comes to identifying and evaluating the associated information risks. It’s good to know what kinds of business impact are of most concern, what information assets are therefore most in need of protection … and also, implicitly at least, what information can safely remain at risk. This helps prioritize the risk management and focus on the Stuff That Really Matters (to the business, not just to us in infosec). It’s also handy to know what degree of confidentiality, integrity and availability are needed, and which of those aspects are the most important. This all supports maintaining a sense of perspective.
  • The business objectives and priorities are needed to identify meaningful security metrics. Rather than attempt to measure everything/anything and obsess about random trivia, focus on measuring the Stuff That Really Matters and make sure that (at least) is on track and under control. Assurance – being able to demonstrate, convincingly, that we’re on top of things - is one of the least obvious and yet most important benefits of an effective ISO27k ISMS. I’m not just talking about the certificate of compliance to ISO/IEC 27001 from an accredited certification body, but the confidence that assurance gives management, allowing the business to do what it needs to do (including exploiting its own information) safe in the knowledge that its valuable yet vulnerable information is sufficiently protected. [This is the old saw that racing cars need bloody good brakes: without strong, reliable, proven brakes, drivers would be far less confident and able to press hard into the corners.  Good brakes let cars go faster!]
  • The objectives probably include compliance with various externally-imposed obligations, particularly … but is the intention to ‘do the least amount we can reasonably get away with in this area’ or ‘satisfy and go beyond the compliance obligations because there is business advantage in doing so’? Are imposed compliance obligations simply constraints, or are there opportunities as well as risks in this area? I’m hinting here at aspects such as the organization’s branding and image, plus corporate social responsibility, plus grand strategic aims with long-term consequences. For example, compare a healthcare company that struggles to fulfill its legal obligations on privacy against one that easily surpasses those requirements and is able to reassure customers/patients as well as other stakeholders that it takes privacy seriously.
  • “Key” objectives are like milestones. They are aiming points, opportunities to make demonstrable progress by pushing in a certain direction, avoiding diversions and swamps by plotting a sensible route. Achieving an objective can be a major cause for celebration providing positive feedback that makes it a little easier to press on to the next one. So, if lower level objectives for business units, departments, teams etc., or for the ISMS, are linked to the organization's grand strategic objectives, it gives purpose and meaning to what we’re doing … and achieving objectives is a good excuse for a party!

Aug 1, 2019

NBlog Aug 1 - forensic mythbusters



We're currently researching for a future awareness module on forensics - a topic that has absolutely fascinated me since I was a kid through to my 20s as a geneticist (a "DNA scientist"). Naturally, for security awareness purposes, we'll be focusing on the use of forensics within the context of information risk and security ... but forensic science is all about information, including its availability and integrity, so our brief might yet widen.

Today I stumbled across The Innocence Network, a growing global movement to re-investigate dubious convictions, exonerate wrongly convicted people and press for improvements to criminal justice systems as appropriate. 

Wrongful convictions are a treble tragedy:
  1. An innocent person is punished for something they didn't do. This is unjust and harmful to the individual, plus their families and social networks.

  2. A guilty person often goes free. This typically flows from point 1. I say 'often' and 'typically' because sometimes a guilty person is convicted or punished in other ways, and occasionally there is no 'guilty person' in fact (e.g. where a genuine accident or mistake wrongly appears to have been deliberately caused by someone's actions or negligence). This is unjust and can be harmful to subsequent victims that might not have been harmed if the guilty person had been correctly convicted and effectively punished.

  3. The criminal justice system is materially harmed by every wrongful conviction, particularly but not only those that come to light. This is a serious societal issue that leads to a loss of trust in the system, occasionally even social disorder, vigilantism, revolt etc. Any civilized society has a low to zero tolerance for miscarriages of justice, especially in the most serious cases leading to severe punishments. The legal test "beyond reasonable doubt" hangs on the premise that the system should err on the side of caution: point 1 trumps point 2 above, in order both to achieve justice and to maintain a healthy social order.
From there, I bumped into some excellent videos about forensic techniques, including one debunking the myth of microexpressions and other pseudo-scientific ways allegedly capable of discerning liars from truthsayers - just some of the nonsense touted by egocentric social engineers, particularly those selling dodgy SE methods to a naive yet eager market. 

Who'd a thunk it, eh? Social engineers manipulating people. Imagine that.

PS  Those 'microexpressions' and other body-language signals that allegedly allow observant [highly trained and hence very expensive] 'specialists' to detect when someone is telling lies, in fact don't. The claims are misleading, causing more harm than good. Worse still, thanks to the meme, even some ordinary members of the general public believe they have superpowers:
"For many years, post-trial, I would ask jurors in federal cases, what made them think a particular witness was lying? They would reply that they knew the witness was lying because the witness touched their nose, looked away or up to the right, their skin flushed, touched their lips before answering, rubbed their thumbs together, licked their lips, scratched their ears, or shifted their jaw. Incredible, right? Imagine if that were your life on the line"  Joe Navarro
It is not merely a shame. It's a tragedy when innocent people are incarcerated as a result of this baloney. This is fake news writ large.