Welcome to NBlog, the NoticeBored blog

The blogging will continue until morale improves

Jul 13, 2019

NBlog July 13 - corporate infosec policy

[Figure: the information security policy pyramid]

At the peak of the typical policy pyramid sits a ‘corporate information security policy’. Clause 5.2 of ISO/IEC 27001 explicitly requires such a high-level policy, specifying related aspects such as demonstrable management commitment. Ours comprises:

  • The usual boilerplate for any formal policy e.g. summary, applicability, version and date up front, plus responsibilities and references at the back;
  • A short introduction, using the pyramid diagram to outline the entire information security policy structure;
  • A set of seven principles (objectives) driving information risk and security, e.g. “Information is a valuable business asset that must be protected against inappropriate activities or harm, yet exploited appropriately for the benefit of the organization. This includes our own information and that made available to us or placed in our care by third parties.”;
  • A set of 35 policy axioms (key policy statements) derived from the control objectives in ISO/IEC 27002 with some modifications and extensions to the wording to suit this purpose.
The principles fascinate me. They aren’t (yet!) stated in any of the ISO27k standards, even though fundamental concepts such as 'least privilege' and 'personal accountability' underpin the entire field. In researching and preparing our corporate infosec policy, I dug out a bunch of principles from various places and rationalized them down to the present set. I’d like to revisit that sometime, maybe even prepare a paper about the principles and then propose either a new ISO27k standard or an appendix to, say, the information security governance standard ISO/IEC 27014.

Jul 11, 2019

NBlog July 11 - not playing by the rules

According to the BBC, British Airways has been fined £183m for last year's breach of the General Data Protection Regulation, dwarfing the record fines of £½m possible under the former Data Protection Act.

Ouch. Privacy compliance is now A Thing - A Very Big Scary Thing with Sharp Teeth, Claws and a Bad Attitude.

The prosecution and fine broadcast a clear message that organizations are going to be held to account under GDPR for failing to prevent privacy breaches. I guess privacy officers, information risk and security managers, CISOs, CROs, CCOs and execs generally are now scrambling to gain assurance that their organizations are not going to end up in the same mess. And management at organizations which have suffered privacy breaches since GDPR came into effect, especially if they are currently under investigation or being prosecuted, must be quaking in their hand-made Italian leather boots.

At 366 times the previous record, the BA fine is deliberately shocking. No wonder BA is talking about appealing the decision ... but it could have been even worse. Reportedly the fine was 1.5% of BA's global turnover, while the maximum penalty under GDPR is 4% of turnover: that would have been an eye-watering £488m, or about US$600m.
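
As a rough sanity-check of those figures (the £183m fine and the 1.5% come from the news reports; the rest is simple arithmetic):

```python
# Back-of-envelope check of the reported BA/GDPR figures (£ millions).
fine = 183.0             # reported ICO penalty
share = 0.015            # reportedly 1.5% of BA's global turnover
turnover = fine / share  # implied global turnover
max_fine = 0.04 * turnover  # GDPR's 4%-of-turnover ceiling
print(f"Implied turnover: £{turnover:,.0f}m")  # ~£12,200m
print(f"4% maximum fine:  £{max_fine:,.0f}m")  # ~£488m, as quoted above
```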

Gulp.

Airline profits are unusually volatile thanks to intense competition and factors largely outside management's control, such as fuel prices and significant incidents that affect the global travel industry. BA might conceivably need to call on its parent company or the banks for assistance to settle the bill without taking a corporate nose-dive. Even cancelling executive bonuses seems unlikely to be enough.

Having said that, any well-run organization will have identified, evaluated and treated their privacy and other information risks, including making contingency and other business continuity arrangements just in case serious incidents such as this occur. Compliance is a good reason to manage information risks professionally, on top of the many good business and social reasons for taking it seriously.

Jul 9, 2019

NBlog July - playing by the rules

This month's NoticeBored information security awareness module concerns compliance - not just complying with laws and regulations, but with rules as a whole including corporate policies, contracts, agreements and ethical codes.

The materials explore the different types of obligation or expectation, and degrees of compliance. It's not a purely binary issue, despite what some may think: complying with the very letter of the law is different from willingly fulfilling the spirit of the regulations.

ISO27k audit planning

A thread on the ISO27k Forum about how to go about auditing an organization's Public Key Infrastructure set me thinking this morning.

The thread started with a question from PS:
"Could you please share some tips for auditing TLS/SSL arrangements within organisation?  Nessus will help us to identify weakness around configuration of crypto but if  I want to audit how sysadmins are creating self-signed certs and applying key management principles, how would I do that?"
In response, Ahmed provided some background information about PKI, followed by a fairly detailed and specific list of 15 auditable items, describing them as 'essential points':
  1. Audit the root certificate: how is it managed? It should be stored in a FIPS 140-2 hardware security module (HSM); if not, how is it secured?
  2. Assess the CA signing certificate (the CA private key): how it is managed and secured, its validity and key length.
  3. Audit the system documentation (mentioned above), especially the key management policy and procedures.
  4. Audit roles and segregation of duties.
  5. Audit certificate templates: are compliant certificates being issued, and are separate templates used for SSL/TLS and digital identity, or SSL/TLS only?
  6. Audit key usage for individual certificates (mail signing, authentication, encryption etc.).
  7. Audit access control to the CA (which should be subject to dual control).
  8. Audit distribution of the CA public key and intermediate certificates, to ensure they are trusted across all systems.
  9. Audit clock synchronization across the systems in use.
  10. Audit system database security.
  11. Audit system backup and restore (CA server, configurations, HSM, root certificate, database).
  12. Audit CRL publishing and caching across systems.
  13. Audit the validity of issued certificates and how renewals are managed, to avoid the human error of forgetting to renew a certificate, which may cause a system malfunction.
  14. Audit the process for certificate issuance, renewal and revocation (which should be subject to dual control, maker-checker).
  15. Audit certificate formats and extensions (PKCS formats and extensions).
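
On the technical side of PS's original question, here is a minimal sketch of how an auditor might spot-check deployed certificates for weak keys, expiry and self-signed status (touching items 1, 2 and 13 above). It assumes Python 3 with the third-party 'cryptography' package; the host name is purely illustrative:

```python
# Auditor's spot-check for TLS certificates: key length, expiry and
# self-signed status. A sketch, not a substitute for a proper PKI audit.
import ssl
from datetime import datetime
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def check_certificate(host: str, port: int = 443) -> None:
    # Fetch the server certificate without chain validation, so that
    # even self-signed certificates can be retrieved and inspected.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # Key length: flag RSA keys shorter than 2048 bits.
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
        print(f"{host}: weak RSA key ({key.key_size} bits)")

    # Validity: flag expired certificates (not_valid_after is UTC).
    if cert.not_valid_after < datetime.utcnow():
        print(f"{host}: certificate expired {cert.not_valid_after}")

    # Self-signed heuristic: issuer name equals subject name.
    if cert.issuer == cert.subject:
        print(f"{host}: certificate appears to be self-signed")

if __name__ == "__main__":
    check_certificate("intranet.example.org")  # hypothetical host
```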


Ahmed's 15 items may be a useful prompt or reminder, but they may not be appropriate in any given situation. ‘The essential points’ for a given audit are best determined in practice by the auditor(s) using risk analysis, followed by detailed planning and prioritization of the audit work given the available resources (audit timescale plus auditor hours and skills).


An audit must reflect the audit objectives and scope, usually determined up-front by discussion between the auditors and client management when the assignment is initiated and agreed. So, for instance, if the primary objective is to audit the compliance of an ISMS with ISO/IEC 27001, the PKI is probably just a small part of that. However, if the prime objective is to audit the PKI specifically, then a list of items similar to the 15 suggested by Ahmed may flow out of the audit risk analysis – or not: it all depends on the information risks, just as the ISMS and the PKI are driven by the risks.

As a general rule, relative risks are a good basis for prioritization: in essence, the idea is to tackle the most significant risks first and deepest, leaving lower risk matters for later, shallower review. That way, if the priority stuff turns out to be more problematic or to take longer than anticipated and resources are exhausted, the assignment can end knowing that the high priority areas have been done.

With a nearly infinite amount of potential audit work for finite resources, there are things that simply can’t be done right now. So, it pays to prioritize ‘the essentials’ and de-prioritize or park the remainder for another time. Keep notes in the audit file for use in planning future audits, along with previous audit reports, fieldwork notes, an updated risk analysis and other information sources (e.g. management reviews, incident reports etc.). This is continuous improvement for auditing.

An alternative or complementary audit planning approach is to come up with a small number of 'areas of concern', then invest an appropriate amount of audit resources into each one. Determining those 'areas' again depends on circumstances: one approach to a PKI audit might distinguish technical/cybersecurity stuff from physical and procedural aspects, for instance. Another might follow the lifecycle of a digital certificate, or concentrate on the individual departments and teams associated with the PKI, or pick up on incidents and known troublespots as routes in to the analysis, or ... whatever. There's even something to be said for deliberately planning each successive audit on a different basis, in order to avoid covering the same ground from the same perspective and hence missing the same issues (blindspots).

It's always worth reserving some time to explore interesting or concerning stuff that comes up in the course of the audit. For example, if the audit fieldwork uncovers issues with, say, key management, it might be worth delving more deeply into that area, both to find out whether there is anything substantial and reportable there, and as a worked example for more general aspects such as policies, procedures or technical controls. The key management focus may not have been apparent during the original audit planning, although sometimes there are nonspecific clues about potential problem areas that feed into the risk analysis and planning (the auditor’s nose sniffing out trouble spots!).

This is an example of contingency management: the audit work that needs to be performed partly depends on the circumstances or situation that unfolds in the course of the assignment. It can't all be pre-planned.

It cuts both ways too. If the initial audit work goes better than planned, that leaves more time for other, lower priority matters, and might even result in concluding the audit early with a glowing audit report. 

Yes, it does happen!  Been there, done that!

Jun 20, 2019

NBlog June 20 - conspicuous consumption

A short article set me thinking this morning about the interplay between rights, compliance, personal freedoms, ethics and culture. The article is about tax authorities picking up on conspicuous consumption by citizens, suggesting that they are 'living beyond their means' - a classic fraud indicator.

Although the article specifically concerns disclosures through social media, that's just one of many ways of voluntarily disclosing information. Furthermore, some disclosures are involuntary: the authorities can demand information from and about us, for example, and we inadvertently or incidentally disclose information about ourselves in the course of living our lives.

The tax authorities have to address tax fraud, of course, using relevant information legitimately obtained from anywhere ... but in this situation the information was not disclosed for that specific purpose. Tax fraudsters would happily prohibit the authorities from accessing and using the information if they could. So is it ethical for the authorities to use it? Hmmm, tricky! 

I would argue that, in choosing to consume so conspicuously and publicly, tax fraudsters have made the information available to third parties and, by implication, third parties are free to use it legitimately. Preventing tax fraud is a legitimate purpose, so that's that.

Even if tax fraudsters explicitly prohibited the authorities from using the information disclosed through social media, I believe the laws about investigating crime take precedence (although I'm not a lawyer). Small print on the fraudsters' Facebook pages or blogs along the lines of "The tax authorities are expressly prohibited from using this information" would also be a bit of a giveaway!

Jun 18, 2019

NBlog June 18 - craftsmanship

Currently I'm getting things ready for the next consultancy gig. Figuratively speaking, having cleared a space on the workbench, I'm stocking up on raw materials and selecting tools for my toolbox. Literally, that's simply a new directory for the assignment, a few potentially useful templates and public information from the client, and a bunch of methods and techniques in mind.

My favourite tools are pre-loved and well-honed. They are familiar, comfortable and trustworthy. Some of them (such as the ISO27k standards) are off-the-shelf products. Others are either homebrewed or customized for particular purposes. They all have their advantages and disadvantages and, like any craftsman, I much prefer to use the right tool for the job, hence some specialist items are rarely used but invaluable for specific tasks. I make the effort to check and maintain my tools, from time to time investing in new ones or "improving" (well OK, refurbishing and adapting) old ones. Very rarely is a tool discarded, except for those that are plain worn out and are replaced, often by something shinier. My workshop is bulging, placing a premium on small/simple/multipurpose tools.

In many areas, ‘pragmatic’ approaches are the only tools available. It’s down to me to apply them to the tasks at hand with skill and passion, although it's hard to keep in mind their limitations. There's a tendency to press on regardless, leading to uncertain results and occasional accidents. I hate bodging things and yet that's an inevitable part of practicing and improving. 

A valuable routine at the end of any assignment is to look back and draw out the learning points. Those templates I mentioned are an example: having drafted, say, a standard form for describing security metrics, I use and gradually refine it on successive metrics until it stabilises. Every field on the form has a purpose. The structure, layout and sequence make sense and work ... so it's worth turning into a template, an MS Word template in fact. The next time I'm describing a security metric, I can simply grab the template and start filling it in, avoiding the time and effort of starting from scratch.
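
To illustrate the idea, the fields on such a metric-description form might look something like this (a hypothetical sketch in code form, not the actual template):

```python
# Hypothetical fields for a security-metric description form.
from dataclasses import dataclass

@dataclass
class SecurityMetric:
    name: str         # short, meaningful title
    objective: str    # the decision or question the metric supports
    formula: str      # how the value is calculated
    data_source: str  # where the raw numbers come from
    frequency: str    # how often it is measured and reported
    audience: str     # who receives it and acts on it
    target: str       # the threshold or goal against which it is judged
```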

Alternatively, if the client already has a structured way of describing security metrics, I can probably use that instead, perhaps proposing changes based on my experience but more likely adapting my approach to suit the client. Who knows, I might even learn some new tricks along the way, leading to an updated template. That's what I mean by investing in the tools of my trade, best practices you could say - well good-enough practices anyway. The quest for perfection is never ending.

Jun 14, 2019

NBlog June 14 - the compliance burden

I've spent an enjoyable day exploring, thinking and writing about the enormous breadth of "compliance", our security awareness topic for July. You might be forgiven for thinking that compliance in this context is just about hacking and privacy laws, but no, oh no: important though those are, there's much more to it.

We have thus far compiled a list of 15 categories of law relevant in some way to information security - not just 15 actual laws but 15 types. There's a similar range of regulations, plus contracts and agreements. No wonder corporate lawyers, compliance teams and management as a whole complain about the compliance burden on businesses!

On top of that there are myriad internal corporate rules in the form of infosec-related policies, procedures and all that jazz - again, quite a variety when you think about it. 

And we all have self-imposed rules of behavior - our habits and conventions, codes of ethics, belief systems, rules for what's right and what's wrong.

Figuring out and talking about the different kinds of 'rules' will make an interesting awareness challenge for July. We've come up with more than 60 so far, and we're not done yet!

Aside from that, I've also been exploring the conceptual angle: what are rules for anyway? Why do we have rules, in general, let alone infosec, privacy and all those other rules I've alluded to? Why is compliance necessary? What's wrong with noncompliance? That led me along a tangent into creativity, again relevant to information if marginal to information security. 

Jun 12, 2019

NBlog June 12 - lack of control is not a vulnerability

Another of those apparently simple but quite profound questions came up on ISO27k Forum this morning. Juan from Peru said:
"Well, I am pretty confused about how to correctly describe a vulnerability. I´ve seen many sheets/registers (even a topic in this group) where a vulnerability is described as a "LACK OF A CONTROL" For example if I say that a VIRUS is a threat agent, my vulnerability would be a "LACK OF A VACCINE FOR THAT SPECIFIC VIRUS", this is quite redundant, I think but in a certain way, has sense. Now, I´ve also read that a Vulnerability CAN NOT be described as "LACK OF A CONTROL" because a Vulnerability is AN INHERENT WEAKNESS OF THE ASSET, which I think has more sense than the vaccine´s example. But, there is a problem, I could not find any official literature (I mean and ISO 27k) that supports that definition. I searched in ISO 27000 and 27005, and those Standards just say that a vulnerability is a weakness that can be exploit (Nothing about INHERENT). Also in ISO 27005 I found many examples of vulnerabilities (In the annex, I think) and they are described as "LACK OF CONTROLS". This is really confusing for me."
“Inherent weakness” is my succinct working definition of vulnerability. I use the word “inherent” to refer to issues within or integral to the system of concern (not necessarily an "asset"), in contrast to threats which (again, as I use the term in practice) are outside the system and impinge upon it. Furthermore, by ‘system’ I mean a coherent collection of things acting in concert - not just, say, an IT system (the computer hardware, firmware and software plus the data) but also the associated processes involved in using and administering it, and the users and administrators, the managers overseeing it, its owners/stakeholders … and so forth. I use 'system' in as broad a sense as “information security management system".

So why does my working definition of 'vulnerability' differ from that in ISO/IEC 27000:2018? Why don't I just use the formal definition? Good point ... but my reasoning is complicated to explain. Bear with me.

I'll start with the formalities. Among other terms, ISO/IEC 27000:2018 defines:

  • Control as “measure that is modifying risk (3.61)” plus 2 notes: "controls include any process (3.54), policy (3.53), device, practice, or other actions which modify risk (3.61); it is possible that controls not always exert the intended or assumed modifying effect."
  • Vulnerability as “weakness of an asset or control (3.14) that can be exploited by one or more threats (3.74)”. That definition is a little ambiguous (more on that below) but I understand it to mean that a weakness of a control would constitute a vulnerability if it might be exploited;
  • Threat as "potential cause of an unwanted incident, which can result in harm to a system or organization (3.50)"; and
  • Risk as "effect of uncertainty on objectives (3.49)" plus 6 notes: "an effect is a deviation from the expected — positive or negative; uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood; risk is often characterized by reference to potential “events” (as defined in ISO Guide 73:2009, 3.5.1.3) and “consequences” (as defined in ISO Guide 73:2009, 3.6.1.3), or a combination of these; risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated “likelihood” (as defined in ISO Guide 73:2009, 3.6.1.1) of occurrence; in the context of information security management systems, information security risks can be expressed as effect of uncertainty on information security objectives; information security risk is associated with the potential that threats will exploit vulnerabilities of an information asset or group of information assets and thereby cause harm to an organization."
Unfortunately, there are numerous issues and ambiguities in those four definitions: 
  • Only a few of those words are explicitly defined in the standard - the ones that are used in a particular way within the context of the ISO27k standards ('terms of art' you could say). I believe we are meant to refer to the Oxford Dictionary definitions for the rest although this is not actually stated anywhere in the published standard: it is merely a convention, possibly stemming from ISO's directives to the drafting committees;
  • As formally defined, controls ‘modify’ risks and hence (I would argue) are not part of them. You could consider them optional extras, add-ons that you may or may not want to use – at least that’s how I think of them, although the definition doesn't actually say so;
  • The definition of vulnerability is ambiguously worded in the first clause. Does it mean "weakness of an asset, or weakness of a control" or "a weakness of an asset, or a control,"? I believe it is the former but that's just my interpretation - and ideally there should be little to no room for interpretation in a formal definition;
  • Threat seems quite straightforwardly defined (aside from referring to "system or organization", implying that they are distinct ... but one could argue that an organization is one type of system, a social system, often with a legal basis; 'system' is undefined). However, as Juan noted, even ISO/IEC 27005 misinterprets the term. A lack of control may modify the risk compared to its presence, but does not actually cause an incident: it is simply an omission. Incidents are caused by circumstances or acts - the commission of something, not omissions;
  • The definition of risk is particularly awkward and unsatisfactory. It is the product of a committee of people holding differing views, hence its extraordinary length. The definition part is so vague as to be almost meaningless, while the notes compound matters by mashing up several separate concepts. What a mess! It might not be so bad if 'risk' were peripheral to ISO27k but in fact quite the opposite. Risk (or rather, as I would prefer to put it, "information risk") is absolutely central. 
I personally do not consider lack of a control to be a vulnerability for several reasons:
  1. It makes it easier to consider and evaluate the underlying risks in a situation, deliberately ignoring any current or proposed controls during the analysis. It helps us distinguish risks from controls (as per the definition), and simplifies the risk analysis.
  2. Some existing or proposed controls may be unnecessary and unhelpful, but we are less likely to notice that if we always take them for granted and assume they are present and working as intended in our risk analyses. Periodic password changes, for example, are a costly control incorporated into many systems for years without good reason other than habit or convention. In any given system, there may well be other controls that serve little to no purpose, perhaps even some that are counterproductive (they actually weaken rather than strengthen the system, perhaps opening up new avenues for attack or failure: antivirus software and automated software updates are two possible examples).
  3. The list of potential controls is unbounded. Aside from the large variety of possible types of controls, each control has many variants, and there is a huge (possibly infinite) variety of possible combinations and sequences of controls. So how are you going to determine which controls to add to, or exclude from, the list of missing/ineffective/inadequate controls? Answering that question presupposes that you understand the risks, in other words it is a circular or self-referential issue. 
  4. It allows/encourages us to figure out which controls we require according to the risks we have identified and evaluated. It also suggests a natural priority or ranking of the controls, since those controls mitigating the most significant risks are clearly important (‘key’) controls. This has substantial implications that are not widely considered at present, e.g. resilience, effectiveness and assurance are likely to be strong requirements for key controls.
  5. Controls are not 100% reliable – in other words, there are risks associated with the controls themselves, as implied by the second note to the '27000 definition of control. This again complicates the risk analysis and (in my experience) is usually ignored … but that’s a mistake, particularly in the case of key controls. The possibility of key controls failing to operate as intended or as required means significant risks might be insufficiently mitigated in practice. Now you might say that this means 'control reliability' therefore ought to be part of the risk analysis, in addition perhaps to 'control suitability', 'control value' and maybe other considerations. Personally, I prefer to address this separately in the risk management process, particularly in the phase following the decisions about how to treat the identified and evaluated risks, plus in the ongoing management, measurement and assurance activities once the controls are in use.
Another way to look at this is that a missing, weak, inadequate, failing or inappropriate control exposes or fails to compensate for a vulnerability ... but 'exposure' and 'compensating control' are ambiguous and confusing concepts too. Maybe I'll come back to that another day.

So, that's it for today. Sorry to be so anal about the words and definitions, but Juan is certainly not the only confused soul - even ISO/IEC JTC1/SC27 has trouble with this stuff! 

Jun 11, 2019

NBlog June 11 - resistance is futile

Generally speaking, there's no point in complaining about applicable laws and regulations: like it or not, compliance is obligatory. That's not the end of the matter though: it's not as simple as that. For starters, there are questions about precisely what the obligations are, their applicability, and the potential consequences of noncompliance.

Those questions are all the more interesting in respect of other kinds of rules, especially those that are not written formally by highly trained lawyers following strict drafting practices finely honed over hundreds of years - corporate security policies for instance. 

Positioning compliance as a business or risk management issue puts a different spin on things. One particularly worthwhile approach is to elaborate on and explore the objectives behind the wording of the rules. Why is it considered necessary to protect someone's privacy, for example? What might happen if personal information was unrestricted, freely available, a commodity that could be freely shared or traded? Such questions are trickier to answer than they might appear.

Consider the actual real-world effects of "major" privacy breaches such as the Target incident in 2013. Aside from the public outcry or outrage, the enforcement penalties and various other costs relating to the clean-up, the organizations concerned are mostly still operating ... but are they the same, or have the incidents changed things? And what if any are the effects on the rest of us?

One difference stems directly from the media coverage of major incidents: headline news raises awareness of the related issues among the general population and management, right up to executive level. But once the furor has died down, awareness tends to subside gradually back towards pre-incident levels - maybe a little higher due to the residual memories and reminders such as this very piece! 'A little more awareness', then, is the net long-term effect of incidents on those not directly affected, and perhaps also on the individual and corporate victims who were involved.

'A little more awareness' is the least we can reasonably expect to achieve through security awareness and training activities - hopefully more than just 'a little', of course! Repeatedly topping-up on awareness levels is the approach we have taken for decades: regular refreshers work for us, in the same way that each subsequent privacy breach reminds us, yet again, that there are compliance obligations in that area. It's a ratchet or cumulative effect, each episode raising the level by some amount. 

Jun 10, 2019

NBlog June 10 - playing by the rules

Compliance is our security awareness and training topic for July.  As usual, we'll be taking a deliberately broad perspective, finding angles of interest to staff, management and professionals.

'Playing by the rules' hints at how we're planning to address the staff awareness stream. People who enjoy playing all sorts of quizzes, competitions, games and sports appreciate that the rules are there to level the playing field, keeping things reasonably fair to all concerned. That leads on to the concept of rule-bending and breaking i.e. cheating to gain an unfair advantage over other players. 'The rules of the road' suggest another possible avenue to explore around safety and security, picking up on this month's awareness topic (physical infosec).

The management stream will also dip into rule-making, the process of defining rules, plus enforcement and reinforcement of the rules. In the information security context, the rules include laws, regulations, policies, directives, instructions, contractual terms and more, some very narrowly scoped and others much more general in nature. We might even take a tangent into actively exploiting lax rules for business advantage, raising ethical and risk questions worth pondering.

The pro stream will get into technological rules such as cybersecurity standards, tech protocols and firewall rulesets ...

... at least, that's our cunning plan at this point. Part of the fun of providing the NoticeBored security awareness and training service is to get creative with the messages, picking up on topical issues. We're on the lookout for interesting compliance-related news during June - incidents, changes, and different approaches to the age-old problems in this area. 

Jun 4, 2019

NBlog June - physical information security

June’s security awareness and training topic from NoticeBored is an interesting blend of traditional physical/site security and cybersecurity, with just a touch of health and safety to spice things up.
Hot on the heels of May’s module about working off-site, this month we’re exploring the risks and controls applicable to physical information assets such as:
  • ICT devices e.g. servers, laptops, phones, network cables, microwave dishes;
  • Hardware security devices and controls e.g. keys, staff passes, cryptographic key-fobs, walls, fences/barriers, turnstiles, locks/padlocks, smoke detectors, fire and flood alarms …;
  • Information storage media e.g. hard drives, USB sticks, tapes, papers;
  • Information communication and display devices e.g. screens, management panels, annunciators, modems;
  • People – particularly “knowledge workers” employed for their intellectual capacity, expertise and skills, implying a business need to ensure their health and safety.
Physically securing information assets is just as important as the logical security controls (cybersecurity) normally considered. Adversaries with physical access to ICT devices may be able to defeat/reset the logical security controls, power down or damage them, substitute or simply make off with them. 
Card skimmers on bank ATMs are an example of a physical threat to information - namely the card data and PIN codes used to authenticate card holders.
Crime investigators sometimes employ physical techniques to obtain forensic evidence from devices and media recovered from the scenes of crime, so it’s not all bad news!
The physical harm that can impact information includes:
  • Theft or loss by insiders, intruders/burglars, thieves, industrial spies, vandals and saboteurs;
  • Tailgating or physical intrusion, allowing intruders to observe, copy, steal, replace or damage information assets (both physical and digital) on-site;
  • Damage - criminal or accidental such as fires, floods, storms, lightning, static electricity, voltage surges and power cuts, electromagnetic disturbances and radio interference, mold;
  • Mechanical/electronic failure or obsolescence, ICT equipment prematurely becoming unreliable, intermittent or failing completely, especially if it has been stored or used under adverse physical conditions such as high temperatures, vibration or corrosive atmospheres;
  • Subversive hardware e.g. covert surveillance using microphones and cameras built into many IT devices, installation of bugs and wireless network taps;
  • Interception, compromise and failure of both wired and wireless networks;
  • Compromise of technological security controls e.g. reset device to factory defaults, replace firmware or hack the hardware, disable security controls, and copying/cloning/counterfeiting of inadequately secured authentication devices (such as credit cards and passports);
  • Illness, accident, death, coercion, bribery and corruption etc. of workers, including injuries and stress, depression and other potentially devastating forms of mental ill-health.
Physically securing information involves:
  • Physical access controls;
  • Fire, smoke and flood protection;
  • Redundant/spare equipment, supplies, communications routes and people;
  • UPSs, generators and spare batteries;
  • Lightning conductors, surge arrestors etc.;
  • Health, safety and welfare arrangements for workers;
  • Laws, policies, agreements and other rules and regulations;
  • Physical security-related processes and activities ... including security awareness of course!
Don't bother contacting us if your people are all fully up-to-speed on the physical side of information security as outlined here. If your management already understands the need and willingly invests in physical security controls, good on you. If you and your professional colleagues actively encourage and enable the implementation of physical controls, excellent! Otherwise, we're keen to help.

May 31, 2019

NBlog May 31 - stresses and strains

Well, that's another deadline hit: June's NoticeBored module was completed, checked and delivered today as planned. 

The stress built to a crescendo mid-morning before rapidly subsiding as the final proofreading was completed, the last bit of polish applied and everything came together nicely, albeit just in the nick of time. We cut it fine this time!

It's our version of a just-in-time production process. The product is as fresh and topical as it could possibly be, short of near-real-time delivery to customers as events unfold anyway, a service that is already available from a plethora of news sites, aggregators, search engines and blogs just like this one. That's fine except that infosec incidents don't happen in a nice tidy sequence, one topic at a time!

I'm not expecting sympathy, really. The end-of-month deadline and monthly cycle are our choices. We have a degree of control although NoticeBored subscribers have signed up for the regular monthly service as described, so naturally we are compelled to do what we promised. To be honest, though, it suits us just fine: after a month-long slog on any topic, we're over it. After an appropriate break to de-stress (avoiding di-stress!), we're looking forward to moving on to the next topic, the next thrilling installment.

So, there will now be a short interlude before I blog about June's completed module. We have a long weekend ahead with the Queen's birthday on Monday. I must just check the post to see if we've been invited to the party ...

May 29, 2019

NBlog May 29 - physical security culture

The corporate security culture is something we absorb gradually through various encounters or interactions with an organization and its people. Specifically regarding the physical aspects of an organization's security culture, hundreds of installation security audits have taught me to open my eyes wide whenever I approach an organization's premises for the first time, starting well before I reach the visitor parking area, guard house, foyer or reception.

Some organizations' buildings are proudly lit up with the company name in neon. Some are simply so large that everyone for miles around knows exactly who they are and has a pretty good idea what they are doing. I used to work for an electricity generating company: most - but not all - of the power stations are landmarks, some would say blots on the landscape.

In contrast, some organizational premises are more discreet, perhaps hard to find without an address and maybe a glance at Google's satellite images (hmmm, now there's a vulnerability).

A few seem to have done their very best to disappear, with no signs, sometimes not even windows. As I write these words, I have in mind a particularly forbidding concrete building in a city commercial area that screams "Sensitive!". Paradoxically, it attempts to be so discreet relative to all the ordinary commercial buildings in the area that it stands out a mile. I'm intrigued but it's not my business to find out who they are and what they do, nor to point them out so I'm not going to say any more about them.

The advice from the NZ PSR on this is pertinent, especially its emphasis on security culture. In fact, if workers are sufficiently clued-up to be vigilant and responsive, then even a tailgater, a wandering maintenance engineer or a "lost" interviewee allegedly searching for the toilets is likely to catch someone's attention ... at some point. The duration of an intrusion might make a good security metric if it could be measured, although there are many complicating factors so the data would be noisy. On top of that, the most successful intrusions will never be identified as such, let alone timed!
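
By way of illustration, here is a minimal sketch of how such a dwell-time metric might be computed, assuming (hypothetically) that incident records captured both entry and detection timestamps; the figures are made-up placeholders:

```python
# Sketch of an intrusion dwell-time metric from hypothetical incident
# records, each holding an (entry_time, detection_time) pair.
from datetime import datetime
from statistics import median

incidents = [  # illustrative placeholder data only
    (datetime(2019, 5, 2, 9, 15), datetime(2019, 5, 2, 9, 40)),
    (datetime(2019, 5, 14, 13, 0), datetime(2019, 5, 14, 16, 5)),
    (datetime(2019, 5, 20, 8, 30), datetime(2019, 5, 20, 9, 10)),
]
dwell_minutes = [(found - entered).total_seconds() / 60
                 for entered, found in incidents]
print(f"Median intrusion duration: {median(dwell_minutes):.0f} minutes")
```

Undetected intrusions never enter the data set, of course, which is one reason the metric would understate the true picture.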

Those physical installation audits I mentioned are a more reliable and less risky way to generate useful metrics, provided the auditor has sufficient expertise and experience anyway. It's one of many areas where the auditor's independence comes to the fore: a good, vigilant auditor will spot and point out issues that most workers either fail to notice at all, or simply ignore just as they and everyone else has always done. Repeated exposure makes them blind to stuff, especially if the prevailing opinion is "It's nothing" or "Don't worry about it: it's not even your responsibility". 

We'll be tackling those ignorant and dismissive attitudes head-on in June's NoticeBored module, as we always do. Compared to the cybersecurity topics, it's relatively easy to explain physical security issues, persuading workers to see stuff and react accordingly. Hopefully promoting vigilance, responsiveness and resilience on physical security will have benefits across the board in terms of the corporate security culture overall. It's a good start anyway.

May 27, 2019

NBlog May 27 - physical infosec

As we plummet rapidly towards our usual end of month deadline to deliver the next NoticeBored security awareness and training module, the scope is finally stabilizing. June's module will cover these four aspects:
  1. Physical information assets meaning the hardware processing, communicating and storing information in all forms;
  2. Physical information risks involving tangible, real-world threats, vulnerabilities and/or impacts;
  3. Physical information security controls protecting various information assets;
  4. Management of the above physical issues within the broader context of managing information risk and security, business management, compliance, corporate governance and so on.

Balanced delicately on the edge of our scope is a fifth aspect: health and safety. It is our contention that workers, especially 'knowledge workers', qualify as valuable yet vulnerable information assets just as much as, say, databases. Workers receive, process and output information, in some cases generating and expressing new information (e.g. intellectual property such as creative concepts and designs). As such, protecting workers' health and safety is an information security issue, not merely a matter of ethics, compliance, productivity or whatever. 

In particular, workers' mental health is, we feel, directly relevant and well worth addressing. In practice, it's generally an issue for the workers themselves, plus corporate functions such as HR and/or Health And Safety, plus 'management' as a whole. 

Our intent in raising health and safety within the NoticeBored materials is not to trigger corporate turf wars but to raise awareness, set people thinking and encourage collaboration. There are information risks here, so let's take a closer look to see what, if anything, we ought to be doing to understand, evaluate, treat and manage them, or to help/guide those who are responsible.

May 26, 2019

NBlog May 26 - management == risk management

I'm intrigued by the idea that management is risk management, hence today's blog. 

Management primarily involves dealing with possibilities and uncertainties, determining objectives and influencing or guiding things in the preferred directions, driving things along unclear paths towards uncertain goals.

Man-management (or, to be politically-correct, personnel or human resources management) is about herding the organization's cats, guiding and motivating people in order to get the best out of them, gaining their loyalty, productivity and creativity - lots of risks and uncertainties there! 

Despite the intent of clear management instructions, policies, rules and directives ("Do this" and "Don't do that", or "Make it so!"), there's a degree of vagueness and complexity in how things actually turn out in practice. In particular, the future is inherently uncertain. Things don't always go to plan, but planning is essential.

On that basis, therefore, managers should be familiar with the concepts underpinning risk and risk management. It's not unreasonable to assume they grasp the basics anyway, hence 'information risk management' should not be entirely alien to any manager. 

It’s not all plain sailing, though, as there are differences in the terminology, emphasis and approach in different fields:
  • Financial risk concerns the upside as well as downside, opportunities to profit as well as the possibility of loss. The common unit of analysis is currency, hence everything gets reduced to dollars and cents. Volatility is an important aspect, along with systemic risk and dependencies;
  • Health and safety risk and environmental risk both concern ‘hazards’ i.e. dangerous situations that may cause physical harm, injury or death, either to individual workers or more broadly to the biosphere;
  • Strategic risk concerns big-picture stuff affecting the organization’s overall objectives and survival – existential ‘bet the farm’ risks in some cases – over the medium to long term. Again, it's about risk and reward, taking calculated risks, playing with a stacked deck;
  • Commercial risk takes a wider perspective on supply chains/networks including the relationships with competitors, peers, partners, suppliers, customers etc. and their predicted future actions including responses to our moves and vice versa. It involves collaboration as well as competition and factors such as branding and positioning, products and markets, pricing and profitability, quality and price, creativity and innovation, reliability and dependability ...;
  • Compliance risk takes account of the probability of being caught out, hence downplaying or even deliberately concealing incidents is a legitimate (if unethical) approach: it's not just about being "fully compliant"!;
  • Privacy risk is myopically focused on preventing the inappropriate disclosure or corruption of personal information, while at the same time using (exploiting!) it for business purposes, a delicate balance;
  • Engineering risk is mostly about the laws of physics e.g. how far can we go in terms of reducing material strength, weight, thickness, resilience etc. without breaching safety margins or commercial objectives. It includes comparing and contrasting different approaches such as production methods and techniques, evaluating and choosing between designs, optimization of various parameters for continuous improvement and quality assurance.

Q. Are those differences risks or opportunities? 

A.  No, they are both - risks and opportunities. 

Information risk management can benefit from the different emphases, broadening the scope of analysis. Assembling a diverse team of managers to explore information risks, for example, may lead to additional insights and novel approaches beyond what the information risk and security management professionals alone might achieve - identifying additional risks, perhaps, grouping risks in different ways, or altering the priorities or plans for risk treatment. A significant advantage derives simply from involving managers from across the organization, with first-hand knowledge of business situations, pressures and concerns, constraints and objectives. At the same time, the various interpretations of and approaches to managing risk may be disconcerting for participants with narrow perspectives based on years of experience in their own fields ... suggesting the value of first raising awareness across the board, clarifying expectations and spending a little time discussing these aspects when organizing information risk workshops.