Welcome to NBlog, the NoticeBored blog

The blogging will continue until morale improves

Sep 17, 2019

NBlog Sept 17 - a fraudulent fraud report?



Our next awareness module on digital forensics is coming along nicely. Today, in the course of researching forensics practices within organizations, I came across an interesting report from the Association of Certified Fraud Examiners. As is my wont, I started out by evaluating the validity of the survey on which it is based, and found this:
"The 2018 Report to the Nations is based on the results of the 2017 Global Fraud Survey, an online survey opened to 41,573 Certified Fraud Examiners (CFEs) from July 2017 to October 2017. As part of the survey, respondents were asked to provide a narrative description of the single largest fraud case they had investigated since January 2016. Additionally, after completing the survey the first time, respondents were provided the option to submit information about a second case that they investigated.
Respondents were then presented with 76 questions to answer regarding the particular details of the fraud case, including information about the perpetrator, the victim organization, and the methods of fraud employed, as well as fraud trends in general. (Respondents were not asked to identify the perpetrator or the victim.) We received 7,232 total responses to the survey, 2,690 of which were usable for purposes of this report. The data contained herein is based solely on the information provided in these 2,690 survey responses."
"2018 Report to the Nations", ACFE (2018)
OK, so more than half of the submitted responses were deemed unusable - a lot more rejects than I would normally expect for a survey (a quick check of the arithmetic follows the list below). The exclusions could be good, bad or indifferent: 

  • It's good if they were excluded for legitimate reasons such as being patently incomplete, inaccurate, out of scope or late - like spoiled votes in an election; 
  • It's bad (surprising and disappointing) if they were excluded illegitimately such as because they failed to support or refute some working hypothesis or prejudice;
  • It's indifferent if they were excluded for purely practical reasons e.g. they ran out of time to complete the analysis. Hopefully they used an unbiased sampling technique to trim down the data though. Perhaps the unusable responses were simply lost or corrupted for some reason.
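
For what it's worth, the report's own numbers bear out the 'more than half' claim - a trivial back-of-the-envelope check (plain Python, nothing clever):

total_responses  = 7232    # responses received, per the ACFE report
usable_responses = 2690    # responses actually used in the analysis

excluded = total_responses - usable_responses
print(f"Excluded {excluded} of {total_responses} responses "
      f"({excluded / total_responses:.0%})")
# Excluded 4542 of 7232 responses (63%)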

Unfortunately, the reasons for exclusion aren't stated in the report, which to me is an unnecessary and avoidable flaw. We're reduced to guesswork. Excluding so many responses could, for instance, indicate that the survey team was unusually cautious, rejecting potentially as well as patently dubious submissions. It could be that the survey method was changed for some reason part-way through, and the team decided to exclude responses gathered before and/or after the chosen method was used (raising further questions about what changed and how the method/s were chosen).

The fact that this report comes from the ACFE strongly suggests that both the analytical methods and the team are trustworthy: personal integrity is a fundamental requirement for a professional fraud examiner. Furthermore, they have at least disclosed the number of responses used and provided additional details in the report about the respondents. So, on balance, I'm willing to trust the report: to be clear, I do NOT think it is fraudulent! In fact, with 2,690 responses, the findings carry more weight than most vendor-sponsored "surveys" (advertisements) that I've criticized several times before.

Moving forward, I'm exploring the findings for tidbits relevant to security awareness programs, doing my level best to discount the ridiculous "infographics" they've used in the report - another unnecessary and avoidable source of bias, in my jaundiced opinion. Yes, the way metrics are reported does influence their interpretation and hence value. And no, I don't think it's necessary to resort to gaudy crayons to put key points across. Some of us aren't scared by lists, tables and graphs.

Sep 13, 2019

NBlog Sept 13 - ISO27k ambiguities


ISO/IEC 27001 concerns at least* two distinct classes of risk - ISMS risks and information risks** - which causes confusion. With hindsight, the ISO/IEC JTC1 mandate for a main-body section ambiguously titled "Risks and opportunities" in all the certifiable management system standards is partly to blame, although the underlying issue pre-dates that decision: you could say the decision forced the U-boat to the surface.

That is certainly not the only issue with '27001. Confusion around the committee's and the standard's true intent with respect to Annex A remains to this day: some committee members, users and auditors believe Annex A is a definitive if minimalist list of infosec controls, hence the requirement to justify Annex A exclusions ... rather than justify Annex A inclusions. It is strongly implied that Annex A is the default set. In the absence of documented and reasonable statements to the contrary, the Annex A controls are presumed appropriate and necessary ... but the standard’s wording is quite ambiguous, both in the main body clauses and in Annex A itself.

In ISO-speak, the use of ‘shall’ in "Normative" Annex A indicates mandatory requirements; also, main body clause 6.1.3(c) refers to “necessary controls” in Annex A – is that ‘necessary for the organization to mitigate its information risks’ or ‘necessary for compliance with this standard and hence certification’?

Another issue with '27001 concerns policies: policies are mandated in the main body and recommended in Annex A. I believe the main body is referring to policies concerning the ISMS itself (e.g. a high-level policy - or perhaps a strategy - stating that the organization needs an ISMS for business reasons) whereas Annex A concerns lower-level information security-related policies … but again the wording is somewhat ambiguous, hence interpretations vary (and yes, mine may well be wrong!). There are other issues and ambiguities within ISO27k, and more broadly within the field of information risk and security management.

Way down in the weeds of Annex A, “asset register” is an ambiguous term composed of two ambiguous words. Having tied itself in knots over the meaning of “information asset” for some years, the committee eventually reached a truce by replacing the definition of “information asset” with a curious and unhelpful definition of “asset”: the dictionary does a far better job of it! In this context, "register" is generally understood to mean some sort of list or database ... but what are the fields and how much granularity is appropriate? Annex A doesn't specify.

But wait, there’s more! The issues extend beyond '27001. The '27006 and '27007 standards are (I think!) intended to distinguish formal compliance audits for certification purposes from audits and reviews of the organization’s information security arrangements for information risk management purposes. Aside from the same issue about the mandatory/optional status of Annex A, there are further ambiguities tucked away in the wording of those standards, not helped by some committee members’ use of the term “technical” to refer to information security controls, leading some to open the massive can-o-worms labelled “cyber”!

Having said all that, we are where we are. The ISO27k standards are published, warts and all. The committee is doing its best both to address such ambiguities and to keep the standards as up to date as possible, given the practical constraints of reaching consensus among a fairly diverse global membership using ISO’s regimented and formal processes, and the ongoing evolution of this field. Those ambiguities can be treated as opportunities for both users and auditors to make the best of the standards in various contexts, and in my experience rational negotiation (a ‘full and frank discussion’) will normally resolve any differences of opinion between them. I’d like to think everyone is ultimately aligned on reaching the best possible outcome for the organization, meaning an ISMS that fulfills various business objectives relating to the systematic management of information risks.


* I say ‘at least’ because a typical ISMS touches on other classes of risk too (e.g. compliance risks, business continuity risks, project/programme management risks, privacy risks, health and safety risks, plus general commercial/business risks), depending on how precisely it is scoped and how those risk classes are defined/understood. 

** I’ve been bleating on for years about replacing the term “information security risk”, as currently used but not defined as such in the ISO27k standards, with the simpler and more accurate “information risk”.  To me, that would be a small but significant change of emphasis, reminding all concerned that what we are trying to protect - the asset - is, of course, information. I’m delighted to see more people using “information risk”. One day, maybe we’ll convince SC27 to go the same way!

Sep 12, 2019

NBlog Sept 12 - metrics lifecycle management


This week, I'm thinking about management activities throughout the metrics lifecycle.

Most metrics have a finite lifetime. They are conceived, used, hopefully reviewed and maybe changed, and eventually dropped or replaced by something better. 

Presumably weak/bad metrics don't live as long as strong/good ones - at least that's a testable hypothesis provided we have a way to measure and compare the quality of different metrics (oh look, here's one!).
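
If we did have such a quality measure, testing the hypothesis would be straightforward in principle. A minimal sketch in Python, using entirely made-up data (a quality score per metric, however derived, plus the months each metric survived before being retired):

# Sketch only: do better-quality metrics survive longer? All data invented.
from scipy.stats import spearmanr

quality_score   = [82, 45, 67, 90, 30, 55, 74]   # hypothetical quality ratings
lifetime_months = [48, 12, 30, 60,  6, 18, 36]   # hypothetical survival times

rho, p = spearmanr(quality_score, lifetime_months)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A significantly positive rho would support the hypothesis;
# no correlation, or a negative one, would count against it.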

Ideally, every stage of a metric's existence is proactively managed (a rough sketch of what that might look like in practice follows this list), i.e.:
  • New metrics should arise through a systematic, structured process involving analysis, elaboration and creative thinking on how to satisfy a defined measurement need: that comes first. Often, though, the process is more mysterious. Someone somehow decides that a particular metric will be somewhat useful for an unstated, ill-defined and barely understood purpose;
  • Potential metrics should be evaluated, refined, and perhaps piloted before going ahead with their implementation. There are often many different ways to measure something, with loads of variations in how they are analyzed and presented, hence it takes time and effort to rationalize metrics down to a workable shortlist leading to final selection. This step should take into account the way that new or changed metrics will complement and support or replace others, taking a 'measurement system' view. Usually, however, this step is either skipped entirely or done superficially. In my jaundiced opinion, this is the second most egregious failure in metrics management, after the lack of specification at the previous stage;
  • Various automated and manual measurement activities operate routinely during the working life of a metric. These ought to be specified, designed, documented, monitored, controlled and directed (in other words managed) in the conventional manner but rarely are. No big deal in the case of run-of-the-mill metrics which are simple, self-evident and of little consequence, but potentially a major issue (an information risk, no less) for "key" metrics supporting vital decisions with significant implications for the organization;
  • The value of a metric should be monitored and periodically reviewed and evaluated in terms of its utility, cost-effectiveness etc. That in turn may lead to adjustments, perhaps fine-tuning the metric or else a more substantial change such as supplementing or dropping it. More often (in my experience) nobody takes much interest in a metric until/unless something patently fails. I have yet to come across any organization undertaking 'preventive maintenance' on its information risk and security metrics, or for that matter any metrics whatsoever - at least, not explicitly and openly. 
  • If a metric is to be dropped (retired, stopped), that decision should be made by relevant management (the metric's owner/s especially), taking account of the effect on management information and any decision-making that previously relied upon it ... which implies knowing what those effects are likely to be. In practice, many metrics circulate without anyone being clear about who owns or uses them, how and what for. It's a mess.
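
Pulling those stages together, here's a rough and purely illustrative sketch of what a managed metrics register might record - the field names, lifecycle states and example values are my own assumptions, not drawn from any standard:

# Illustrative metrics register supporting lifecycle management (assumptions only).
from dataclasses import dataclass, field
from datetime import date

LIFECYCLE_STATES = ("proposed", "piloting", "live", "under review", "retired")

@dataclass
class Metric:
    name: str
    owner: str                       # who is accountable for the metric
    purpose: str                     # the measurement need it satisfies
    audience: str                    # who uses it, and for which decisions
    state: str = "proposed"
    history: list = field(default_factory=list)

    def transition(self, new_state: str, reason: str) -> None:
        assert new_state in LIFECYCLE_STATES, f"unknown state: {new_state}"
        self.history.append((date.today(), self.state, new_state, reason))
        self.state = new_state

# Retiring a metric becomes a recorded, owned decision rather than a quiet death
m = Metric("Patch latency (days)", owner="CISO",
           purpose="track remediation speed", audience="IT operations management")
m.transition("live", "piloted successfully")
m.transition("retired", "superseded by a risk-weighted version")
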
Come on, this is hardly rocket surgery. Information risk and security metrics are relatively recent additions to the metrics portfolio so it's not even a novel issue, and yet I feel like I'm breaking new ground here. Oh oh.

I should probably research fields with mature metrics, such as finance and engineering, for clues about good metrics management practices that may be valuable to the information risk and security field.

Sep 11, 2019

NBlog Sept 11 - what it means to be risk-driven


Since ISO27k is [information] risk-driven, poor quality risk management is a practical as well as a theoretical problem. 

In practical terms, misunderstanding the nature of [information] risk, particularly the ‘vulnerability’ aspect, leads to errors and omissions in the identification, analysis and hence treatment of [information] risks. The most common issue I see is people equating ‘lack of a control’ with ‘vulnerability’. To me, the presence or absence of a control is quite distinct from the vulnerability: a vulnerability is an inherent weakness or flaw in something (e.g. an IT system, an app, a process, a relationship, a contract or whatever). Even a control has vulnerabilities, yet we tend to forget, discount or simply ignore the fact that controls aren’t perfect: they can and do fail in practice, with several information risk management implications. Think about it: when was the last time you seriously considered the possibility that a control might fail? Did you identify, evaluate and treat that secondary risk, in a systematic and formal manner … or did you simply get on with things informally? Have you ever done a risk analysis on your “key controls”? Do you actually know which of your organization’s controls are “key”, and why? That's a bigger ask than you may think. Try it and you'll soon find out, especially if you ask your colleagues for their inputs.
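
To make the control-failure point a little more concrete, here's a deliberately over-simplified, hypothetical calculation (all figures invented; failures assumed independent, which is itself questionable in practice):

# Hypothetical numbers: the secondary risk behind a 'key control' never vanishes.
p_threat_event  = 0.30   # assumed chance per year that the threat materialises
p_control_fails = 0.05   # assumed chance the key control fails when it is needed

p_incident_one_control  = p_threat_event * p_control_fails        # 0.015
p_incident_two_controls = p_threat_event * p_control_fails ** 2   # 0.00075, if independent

print(f"One control : {p_incident_one_control:.4f} incidents/year")
print(f"Two controls: {p_incident_two_controls:.5f} incidents/year")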

In theoretical terms, risk is all about possibilities and uncertainties i.e. probability. Using simplified models with defined values, it may be technically possible to calculate a precise probability for a given situation under laboratory conditions, but that doesn’t work so well in the real world which is more complex and variable, involving factors that are partially unknown and uncontrolled. We have the capability to model groups of events, populations of threat actors, types of incident etc. but accurately predicting specific events and individual items is much harder, verging on impossible in practice. So even extremely careful, painstaking risk analysis still doesn’t generate absolute certainty. It reduces the problem space to a smaller area (which is good!), but not to a pinpoint dot (such precision that we would know what we are dealing with, hence we can do precisely the right things). What’s more, ‘extremely careful’ and ‘painstaking’ implies slow and costly, hence the approach is generally infeasible for the kinds of real-world situations that concern us. Our risk management resources are finite, while the problem space is large and unbounded. The sky is awash with risk clouds, and they are all moving!
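
The group-versus-individual point lends itself to a throwaway simulation. The parameters below are invented purely for illustration:

# Aggregate incident counts are fairly predictable; individual events are not.
import random

random.seed(1)
n_systems = 1000
p_incident = 0.02   # assumed annual incident probability per system

yearly_totals = [sum(random.random() < p_incident for _ in range(n_systems))
                 for _ in range(20)]
print(yearly_totals)                            # clusters usefully around 20 per year
print(sum(yearly_totals) / len(yearly_totals))  # close to the expected value
# The model says plenty about the population and next to nothing about which
# individual systems will be hit, or when - the pinpoint dot stays out of reach.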

Complicating things still further, we are generally talking about ‘systems’ involving human beings (individuals and organizations, teams, gangs, cabals and so on), not [just] robots and deterministic machines. Worse, some of the humans are actively looking to find and exploit vulnerabilities, to break or bypass our lovely controls, to increase rather than decrease our risk. The real-world environment or situation within which information risks exist is not just inherently uncertain but, in part, hostile. 

So, in the face of all that complexity, there is obviously a desire/need to simplify things, to take short cuts, to make assumptions and guesses, to do the best we can with the information, time, tools and other resources at our disposal. We are forced to deal with priorities and pressures, some self-imposed and some imposed upon us. ISO27k attempts to deal with that by offering ‘good practices’ and ‘suggested controls’. One of the ‘good practices’ is to identify, evaluate and treat [information] risks systematically within the real-world context of an organization that has business objectives, priorities and constraints. We do the best we can, measure how well we’re doing, and seek to improve over time.

At the same time, despite the flaws, I believe risk management is better than specified lists of controls. The idea of a [cut down] list of information security controls for SMEs is not new e.g. “key controls” were specifically identified with little key icons in the initial version of BS7799 I think, or possibly the code of practice that preceded it. That approach was soon dropped because what is key to one organization may not be key to another, so instead today’s ISO27k standards promote the idea of each organization managing its own [information] risks. The same concerns apply to other lists of ‘recommended’ controls such as those produced by CIS, SANS, CSA and others, plus those required by PCI-DSS, privacy laws and other laws, regs and rulesets including various contracts and agreements. They are all (including ISO27k) well-meaning but inherently flawed. Better than nothing, but imperfect. Good practice, not best practice.

The difference is that ISO27k provides a sound governance framework to address the flaws systematically. It’s context-dependent, an adaptive rather than fixed model. I value that flexibility.

Sep 6, 2019

NBlog Sept 6 - the CIA triad revisited

I've swapped a couple of emails this week with a colleague concerning the principles and axioms behind information risk and security, including the infamous CIA triad.

According to some, information security is all about ensuring the Confidentiality, Integrity and Availability of information ... but for others, CIA is not enough, too simplistic maybe.


If we ensure the CIA of information, does that mean it is secure?


Towards the end of the last century, Donn Parker proposed a hexad, extending the CIA triad with three (or is it four?) further concepts, namely:
  • Possession or control;
  • Authenticity; and 
  • Utility. 
An example illustrating Donn's 'possession or control' concept/s would be a policeman seizing someone's computer device intending to search it for forensic evidence, then finding that the data are strongly encrypted. The police physically possess the data but, without the decryption key, are denied access to the information. So far, that's simply a case of the owner using encryption to prevent access and hence availability of the information to the police, thereby keeping it confidential. However, the police might yet succeed in guessing or brute-forcing the key, or exploiting a vulnerability in the encryption system (a technical integrity failure), hence the owner is currently less assured of its confidentiality than if the police did not possess the device. Assurance is another aspect of integrity.
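
That scenario is easy to reproduce in miniature. A hedged sketch using Python's third-party 'cryptography' package (the scenario details here are mine, purely for illustration):

# Possessing the ciphertext is not the same as having access to the information.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

owner_key  = Fernet.generate_key()
ciphertext = Fernet(owner_key).encrypt(b"the incriminating spreadsheet")

seized_data = ciphertext          # the 'police' now possess the data...
try:
    Fernet(Fernet.generate_key()).decrypt(seized_data)   # ...but a guessed key fails
except InvalidToken:
    print("Possession without the key: confidentiality holds, for now.")
# The owner's assurance, though, now rests on the strength of the cryptosystem.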

Another example concerns intellectual property: although I own and have full access to a physical book, I do not generally have full rights over the information printed within. I possess the physical expression, the storage medium, but don't have full control over the intangible intellectual property. The information is not confidential, but its availability is limited by legal and ethical controls, which I uphold because I have strong personal integrity. QED

Personally, I feel that Donn's 'authenticity' is simply an integrity property. It is one of many terms I've listed below. If something is authentic, it is true, genuine, trustworthy and not a fake or counterfeit. It can be assuredly linked to its source. These aspects all relate directly to integrity.

Similarly, Donn's 'utility' property is so close as to be practically indistinguishable from availability. In the evidence seizure example, the police currently possess the encrypted data but, lacking the key or the tools and ability to decrypt it, they cannot get at the information: it remains unavailable to them. There are differences between the data physically stored on the storage medium and the intangible information content, sure, but I don't consider 'utility' a distinct or even useful property.

Overall, the Parkerian hexad is an interesting perspective, a worthwhile challenge that doesn't quite make the grade, for me. That it takes very specific, carefully-worded, somewhat unrealistic scenarios to illustrate and explain the 3 additional concepts, scenarios that can be readily rephrased in CIA terms, implies that the original triad is adequate. Sorry Donn, no cigar.

In its definition of information security, ISO/IEC 27000 lays out the CIA triad then notes that "In addition, other properties, such as authenticity, accountability, non-repudiation, and reliability can also be involved". As far as I'm concerned, authenticity, accountability and non-repudiation are all straightforward integrity issues (e.g. repudiation breaks the integrity of a contract, agreement, transaction, obligation or commitment), while reliability is a mix of availability and integrity. So there's no need to mention them, or imply that they are somehow more special than all the other concepts that could have been called out but aren't even mentioned ....

Integrity is a fascinatingly rich and complex concept, given that it has a bearing on aspects such as:
  • Trust and trustworthiness;
  • Dependability, reliability, confidence, 'true grit' and determination; 
  • Honesty, truthfulness, openness; 
  • Authenticity, cheating, fraud, fakery, deception, concealment …; 
  • Accuracy and precision, plus corruption and so forth; 
  • Timeliness, topicality, relevance and change; 
  • Rules and obligations, prescriptions, expectations and desires, as well as limitations and constraints; 
  • Certainty and doubt, risk, probability and consequences; 
  • Accidents, mistakes, misinterpretations and misunderstandings; 
  • Compliance and assurance, checks and balances; 
  • Consistency, verifiability, provability and disprovability, proof, evidence and fact - including non-repudiation; 
  • Social and cultural norms, conventions and ‘understandings’; 
  • Personal/individual values, ethics and morals, plus social or societal aspects such as culture and group-think; 
  • Enforcement (through penalties) and reinforcement (through awareness and encouragement) of obligations, rules, expectations etc.; 
  • Reputation, image and credibility - very important and valuable in the case of brands, for instance. 
Confidentiality is pretty straightforward, although sometimes confused with privacy. Privacy partially overlaps confidentiality but goes further into aspects such as modesty and personal choice - for instance, a person's right to control disclosure and use of information about themselves.

Availability is another straightforward term with an interesting wrinkle. Securing information is as much about ensuring the continued availability of information for legitimate purposes as it is about restricting or preventing its availability to others. It's all too easy to over-do the security controls, locking down information so far that it is no longer accessible and exploitable for authorized and appropriate uses, thereby devaluing it. Naive, misguided attempts to eliminate information risk tend to end up in this sorry state. "Congratulations! You have secured my information so strongly that it's now useless. What a pointless exercise! Clear your desk: you're fired!"

Summing up, the CIA triad is a very simple and elegant expression of a rather complex and diffuse cloud of related issues and aspects. It has stood the test of time. It remains relevant and useful today. I commend it to the house.

Sep 5, 2019

NBlog Sept 5 - right to repair vs IPR

This week I've been contemplating the right to repair movement, which promotes the idea that consumers and third parties (such as repair shops) should be legally entitled to meddle with the stuff they have bought - to diagnose, repair and update it - rather than being forced to go back to the original manufacturer (a monopolistic constraint) or to throw it away and buy a replacement.

Along similar lines, I am leaning towards the idea that products generally ought to be repairable and modifiable rather than disposable. That is, they should be designed with ‘repairability’ as a requirement, alongside safety, functionality, standards compliance, value, reliability and what have you. I appreciate that miniaturization, surface mounting, multi-layer PCBs, flow soldering and robotic parts placement make modern-day electronic gizmos small and cheap as well as tough to repair, but obsolescence shouldn’t be built in, deliberately, by default. Gizmos can still have test points, self-testing and diagnostics, replaceable modules, diagrams, fault-finding instructions and spare parts.

The same consideration applies, by the way, to proprietary software and firmware, not just hardware. Clearly documented source code, with debugging facilities, 'instrumentation' and so on, should be available for legitimate purposes - checking and updating the information security aspects for instance.

On the other hand, there are valuable Intellectual Property Rights to protect, and in some cases 'security by obscurity' is a valid though fragile control. 

Perhaps it's appropriate that monopolistic companies churning out disposable, over-priced products to a captive market should consider their intellectual property equally disposable. Perhaps not. Actually, I think not, because I believe the concept of IPR as a whole trumps the greed of certain tech companies. 

The real problem with IPR, as I see it, is China, or more specifically the Chinese government ... and I guess the Chinese have a vested interest in disposability. So that's a dead end then.

Sep 4, 2019

NBlog Sept 4 - intelligent response

Among other things, the awareness seminars in September's NoticeBored module on hacking make the point that black hats are cunning, competent and determined adversaries for the white hats. In risk terms, hacking-related threats, vulnerabilities and impacts are numerous and (in some cases) substantial - a distinctly challenging combination. As if that's not enough, security controls can only reduce rather than completely eliminate the risk, so despite our best efforts, there's an element of inevitability about suffering harmful hacking-related incidents. It's not a matter of 'if' but 'when'.

All very depressing.

However, all is not lost. For starters, mitigation is not the only viable risk treatment option: some hacking-related risks can be avoided, while insurance plus incident and business continuity management can reduce the chances of things spiraling out of control and becoming critical, in some cases literally fatal.

Another approach is not just to be good at identifying and responding effectively to incidents, but to appear strong and responsive. So, if assorted alarms are properly configured and set, black hat activities that ought to trigger them should elicit timely and appropriate responses ... oh but hang on a second. The obvious, direct response is not necessarily appropriate or the best choice: it depends (= is contingent) on circumstances, implying another level of information security maturity.

'Intelligent response' is a difficult area to research since those practicing it are unlikely to disclose all the details, for obvious reasons. We catch little glimpses of it in action from time to time, such as bank fraud systems blocking 'suspicious' transactions in real time (impressive stuff, given the size and number of the haystacks in which they are hunting down needles!). We've all had trouble convincing various automated CAPTCHAs that we are, in fact, human: there the obvious response is the requirement to take another test, but what else is going on behind the scenes at that point? Are we suddenly being watched and checked more carefully than normal? Can we expect an insistent knock at the door any moment? 
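
Here's a crude sketch of what 'contingent' rather than fixed responses might look like in code - the thresholds and actions are invented, purely to illustrate escalation rather than a single obvious block:

# Illustrative only: the response escalates with suspicion instead of being fixed.
def respond(suspicion: float) -> list[str]:
    actions = []
    if suspicion > 0.2:
        actions.append("log it and quietly raise the monitoring level")
    if suspicion > 0.5:
        actions.append("challenge: another CAPTCHA or step-up authentication")
    if suspicion > 0.8:
        actions.append("block the transaction and alert the fraud team")
    return actions or ["allow - business as usual"]

for score in (0.1, 0.6, 0.9):
    print(score, respond(score))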

In the spirit of the quotation seen on the poster thumbnail above, I'm hinting at deliberately playing on the black hats' natural paranoia. They know they are doing wrong, and (to some extent) fear being caught in the act, all the more so in the case of serious incidents, the ones that we find hardest to guard against. Black hats face information risks too, some of which are definitely exploitable - otherwise, they would never end up being prosecuted or even blown to smithereens. That means they have to be cautious and alert, so a well-timed warning might be all it takes to stop them in their tracks, perhaps sending them to a softer target.

Network intrusion detection and prevention systems are another example of this kind of control. Way back when I was a nipper, crude first-generation firewalls simply blocked or dropped malicious network packets. Soon after, stateful firewalls came along that were able to track linked sequences of packets, dealing with fragmented packets, out-of-sequence packets and so on. Things have moved on a long way in the intervening decades, so I wonder just how sophisticated and effective today's artificial intelligence-based network and system security systems really are in practice, for those who can afford them anyway. Do they have 'unpredictability' options with 'randomness' or 'paranoia' settings?
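
For what it's worth, the core of the 'stateful' idea can be sketched in a few lines - a toy illustration only, glossing over almost everything real TCP inspection involves:

# Toy stateful inspection: only accept packets belonging to a connection whose
# handshake we have already seen, and drop anything out of sequence.
connections = {}   # key: (src, sport, dst, dport)

def handle(pkt: dict) -> str:
    key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    if pkt.get("syn"):                               # start of a new connection
        connections[key] = pkt["seq"] + 1            # next sequence number expected
        return "allow"
    if key not in connections:
        return "drop"                                # no known connection: drop it
    if pkt["seq"] != connections[key]:
        return "drop"                                # out-of-sequence: suspicious
    connections[key] += len(pkt.get("payload", b""))
    return "allow"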

Sep 3, 2019

NBlog Sept 3 - principles, axioms and policies


ISO/IEC 27001:2013 section 5.2 is normally interpreted as the top layer of the ‘policy pyramid’. 

As with all the main body text in ‘27001, the wording of clause 5.2 is largely determined by:
(a) ISO/IEC JTC1 insisting on commonality between all the management systems standards, hence you’ll find much the same mandated wording in ISO 9000 and the others; and
(b) the need to spell out reasonably explicit, unambiguous ‘requirements’ against which certification auditors can objectively assess compliance.

Personally, when reading and interpreting clause 5.2, I have in mind something closer to “strategy” than what information security pros would normally call “policy” - in other words a visionary grand plan for information risk and security that aligns with, supports and enables the achievement of the organization’s overall business objectives. That business drive is crucial and yet is too often overlooked by those implementing Information Security Management Systems, partly because '27001 doesn't really explain it. The phrase "internal and external context" is not exactly crystal clear ... but that's what the JTC1 directive demands.

In our generic (model, template) corporate information security policy, we lay out a set of principles and axioms for information risk and security e.g.:
Principle 1. Our Information Security Management System conforms to generally accepted good security practices as described in the ISO/IEC 27000-series information security standards.
Principle 2. Information is a valuable business asset that must be protected against inappropriate activities or harm, yet exploited appropriately for the benefit of the organization. This includes our own information and that made available to us or placed in our care by third parties.
and
Axiom 1: This policy establishes a comprehensive approach to managing information security risks. Its purpose is to communicate management’s position on the protection of information assets and to promote the consistent application of appropriate information security controls throughout the organization. [A.5.1]
Axiom 2: An Information Security Management System is necessary to direct, monitor and control the implementation, operation and management of information security as a whole within the organization, in accordance with the policies and other requirements. [A.6.1]
As you might have guessed from those [A. …] references, the axioms are based on the controls in Annex A of ‘27001. We have simply rephrased the control objectives in ‘27002 to suit the style of a corporate policy, such that the policy is strongly linked to and aligned with ISO27k. Those reading and implementing the policy are encouraged to refer to the ISO27k standards for further details and explanation if needed. 

There is a downside to this approach, however: there are 35 axioms to lay out, making the whole generic policy 5½ pages long. I'd be happier with half that length. Customers may not need all 35 axioms and might review and maybe reword, revise and combine them, hopefully without adding yet more. That's something I plan to have a go at when the generic policy is next revised.

The principles take things up closer to strategy. This could be seen as a governance layer, hence our first principle concerns structuring the ISMS around ISO27k. It could equally have referred to NIST's Cyber Security Framework, COBIT, BMIS or whatever: the point is to make use of one or more generally accepted standards, adapting them to suit the organization's needs rather than reinventing the wheel.

I find the concept of information risk and security principles fascinating. There are in fact several different sets of principles Out There, often incomplete and imprecisely stated, sometimes only vaguely implied. Different authors take different perspectives to emphasize different aspects, hence it was an interesting exercise to find and elaborate on a succinct, coherent, comprehensive set of generally-applicable principles. I'm pleased to have settled on just 7 principles, and these too will be reviewed at some point, partly because the field is moving on. 

Meanwhile, further down the policy pyramid, a set of classical security policies covers a wide range of topics in more detail, supporting and expanding on those high-level axioms in the overall context of the principles. '27001 refers to such policies in A.5.1.1:
"A set of policies for information security shall be defined, approved by management, published and communicated to employees and relevant external parties."
ISO/IEC 27002 section 5 expands on that succinct guidance with more than a page of advice. ISO/IEC 27003 is not terribly helpful in respect of the topic-specific policies but does a reasonable job of explaining how the high level/corporate security policy aligns with business objectives.