Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Nov 20, 2006

FAIR point

Alex Hutton runs a weblog for IT risk geeks focusing on the FAIR (Factor Analysis of Information Risk) risk analysis method that his employer Risk Management Insight LLC (RMI) promotes. In his blog, Alex takes me to task for my previous blog entry about the FFIEC, FAIR enough, and complains that the NoticeBored blog only accepts comments from authenticated users. Well OK, I've relaxed the restriction to encourage more feedback although I'll be moderating comments, not to censor what people say but merely to block spam. [Alex: why didn't you email me? Gary@isect.com]
Meanwhile, I took another look at FAIR. What follows is a rather harsh and cynical critique of the FAIR method as described in the undated draft FAIR white paper, partly because the paper's author, Jack Jones, invites comment towards the end of the document: "It isn’t surprising that some people react negatively, because FAIR represents a disruptive influence within our profession. My only request of those who offer criticism is that they also offer rational reasons and alternatives. In fact, I encourage hard questions and constructive criticism". So here goes.
Right away, I was intrigued by a statement at the front of the paper regarding it being a "patent pending" method that commercial users are expected to license. Unless I'm mistaken (which is entirely possible!), "patent pending" means "not patented" i.e. it is not currently protected by patent law, or else it would presumably be labelled "Patented" and give a patent number. Judging by the content of the introductory paper, FAIR appears to be a conventional albeit structured risk analysis method so I'm not clear what aspect of it would be patentable in any event. [My snake oil-o-meter starts quivering around the 5% mark at this point.]
"Be forewarned that some of the explanations and approaches within the FAIR framework will challenge long held beliefs and practices within our profession. I know this because at various times during my research I’ve been forced to confront and reconcile differences between what I’ve believed and practiced for years, and answers that were resulting from research. Bottom line – FAIR represents a paradigm shift, and paradigm shifts are never easy." [Not only is it claimed to be patentable, but it's a paradigm shift no less! The snake oil-o-meter heads towards 10%.]
The paper defines risk as "The probable frequency and probable magnitude of future loss". [Strictly speaking, risk includes an upside too, namely the potential for future gain, which FAIR evidently ignores.] FAIR considers six specific forms of loss: productivity, response, replacement, fines/judgments, competitive advantage and reputation. [Management's loss of confidence in any system of controls that fails is evidently not considered in FAIR - in other words, there is an interaction between management's risk appetite and their confidence in the control systems they manage, based on experience and, most importantly, perception or trust. These apparent omissions hint at what could potentially be a much more significant problem with the method: there is no clear scientific/academic basis for the model underpinning the method, meaning that there are probably other factors that are not accounted for. Obtuse references to complexity, and to this being an "introduction" to details that are to be described in training courses etc., further imply incompleteness in the model and hence limited credibility for the method. Snake oil-o-meter jumps to half way.]
"A word of caution: Although the risk associated with any single exposure may be relatively low, that same exposure existing in many instances across an organization may represent a higher aggregate risk. Under some conditions, the aggregate risk may increase geometrically as opposed to linearly. Furthermore, low risk issues, of the wrong types and in the wrong combinations, may create an environment where a single event can cascade into a catastrophic outcome – an avalanche effect. It’s important to keep these considerations in mind when evaluating risk and communicating the outcome to decision-makers." [I wonder whether FAIR adequately describes risk aggregation, possible geometric increase and the "avalanche effect" noted here, in scientific terms? The author accepts the need to take such things into account but does FAIR actually do so? Snake oil-o-meter swings wildly around the half way point.]
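[To make the aggregation point concrete, here is a minimal sketch - my own illustration, not anything from the FAIR paper - of how many individually "low risk" exposures combine. Assuming each exposure is an independent event with some small annual probability of loss, the chance of at least one loss across the organisation grows rapidly with the number of exposures:]

```python
# Illustrative sketch (NOT from the FAIR paper): how independent low-risk
# exposures aggregate across an organisation. Assumes each exposure is an
# independent event with annual loss probability p - a simplification, since
# real exposures are often correlated (which is what drives the "avalanche").

def aggregate_annual_probability(p: float, n: int) -> float:
    """Probability that at least one of n independent exposures,
    each with annual loss probability p, occurs in a given year."""
    return 1.0 - (1.0 - p) ** n

# A single 1% exposure looks negligible...
single = aggregate_annual_probability(0.01, 1)    # 0.01, i.e. 1%

# ...but 200 such exposures organisation-wide make a loss near-certain:
many = aggregate_annual_probability(0.01, 200)    # roughly 0.87

print(f"single exposure: {single:.2%}")
print(f"200 exposures:   {many:.2%}")
```

[Note that this simple formula only captures the linear-to-nonlinear growth in aggregate likelihood; it says nothing about the correlated, cascading failures the paper alludes to, which is precisely the harder modelling problem FAIR would need to address.]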
[FAIR looks to me like a practitioners' reductionist model, something they have thought about and documented on the basis of their experience in the field as a way to describe the things they feel are important. FAIR might *help* an experienced information security professional assess risks but I'm not even entirely sure of that: the method looks complex and hence tedious (=costly) to perform properly. I wonder whether, perhaps, the FAIR method should be applied by a consultant such as someone from, say, RMI? Snake oil-o-meter settles around two-thirds full scale.]
To give him his due, the author acknowledges some potential criticisms of FAIR, namely: the "absence of hard data" and "lack of precision" (which are simply discounted as inherent limitations as if that settles the matter); the amount of hard work involved in such a complicated method (which is tacitly accepted with "it gets easier with practice"); "taking the mystery out of the profession" and resistance to "profound" change (I know plenty of information security and IT managers who would dearly like to find an open, sound, workable and reliable method.) [FAIR does not appear to be a profound change so much as an incomplete extension of conventional risk analysis methods. Snake oil-o-meter creeps up again.]
The appendix outlines a "basic risk assessment" in 4 stages: (1) "Identify scenario components" ("identify the asset at risk" and "identify the threat community under consideration"); (2) "Evaluate Loss Event Frequency (LEF)" ("estimate the probable Threat Event Frequency (TEF)", "estimate the Threat Capability (TCap)", "estimate Control strength (CS)", "derive Vulnerability (Vuln)" and "derive Loss Event Frequency (LEF)"); (3) "Evaluate Probable Loss Magnitude (PLM)" ("estimate worst-case loss" and "estimate probable loss"); and finally (4) "Derive and articulate Risk". Many of these parts clearly involve estimation, implying subjectivity (e.g. "estimate probable loss" could range from zero to total global destruction in certain scenarios: the appendix does not say how we are meant to place a mark on the scale). The details of the method are not fully described. For example, the TEF figure seems to be obtained by assigning the situation to one of a listed set of categories: "Very High (VH): > 100 times per year"; "High (H): Between 10 and 100 times per year"; "Moderate (M): Between 1 and 10 times per year"; "Low (L): Between .1 and 1 times per year"; or "Very Low (VL): < .1 times per year (less than once every ten years)." Similarly, the category boundaries for worst case loss vary exponentially for no obvious reason ($10,000,000; $1,000,000; $100,000; $10,000; $1,000; $0). [The use of two-dimensional matrices to determine categories of 'output' value based on simple combinations of two 'input' factors is reminiscent of the two-by-two grids favored by new MBAs and management consultants everywhere. The rational basis for using these nonlinear scales, and the consequent effects on the derived risk probability, are not stated, nor is the method for determining the appropriate category under any given circumstances (how do we know, for sure, which is the correct value?).
This issue strikes at the very core of the "scientific" (theoretically sound, objective, methodical and repeatable) determination of risk. The snake oil-o-meter rises rapidly towards three-quarters.]
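[For illustration, the kind of categorical machinery the appendix describes can be sketched in a few lines. The TEF boundaries below are quoted from the paper; the two-dimensional Vulnerability lookup is entirely my own invention, since the paper does not reproduce its actual matrices - which rather underlines the point that the derivation is opaque:]

```python
# Sketch of the categorical machinery the FAIR appendix describes.
# TEF category boundaries are quoted from the paper; note the boundary
# values themselves (exactly 1, 10, 100 events/year) are ambiguous there.
# The Vulnerability matrix further down is HYPOTHETICAL - invented for
# illustration, not taken from the paper.

def tef_category(events_per_year: float) -> str:
    """Assign a Threat Event Frequency estimate to the paper's categories."""
    if events_per_year > 100:
        return "VH"   # "> 100 times per year"
    if events_per_year >= 10:
        return "H"    # "Between 10 and 100 times per year"
    if events_per_year >= 1:
        return "M"    # "Between 1 and 10 times per year"
    if events_per_year >= 0.1:
        return "L"    # "Between .1 and 1 times per year"
    return "VL"       # "< .1 times per year"

LEVELS = ["VL", "L", "M", "H", "VH"]

def vulnerability(tcap: str, cs: str) -> str:
    """Hypothetical two-factor lookup: derive Vulnerability from the gap
    between Threat Capability and Control Strength (illustrative only)."""
    gap = LEVELS.index(tcap) - LEVELS.index(cs)
    return LEVELS[max(0, min(4, 2 + gap))]

print(tef_category(50))             # "H"
print(vulnerability("VH", "VL"))    # "VH": strong threat, weak controls
```

[Even this toy version exposes the problems noted above: the category boundaries are arbitrary, the mapping from messy reality onto a single bucket is unstated, and the matrix cell values are whatever the method's author chose them to be.]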
The appendix casually includes a rather worrying statement: "This document does not include guidance in how to perform broad-spectrum (i.e., multi-threat community) analyses." [Practically all real-world applications for risk analysis methods necessarily involve complex real-life situations with multiple threats, vulnerabilities and impacts. It is not clear whether the full FAIR method can cope with anything beyond the very simplest of cause-effect scenarios. Snake oil-o-meter creeps up to 80%.]
To close this critique, I'll return to a comment at the start of the FAIR paper that information risk is just another form of risk, like investment risk, market risk, credit risk or "any of the other commonly referenced risk domains". [The author fails to state, though, that risk is not managed 'scientifically' in these domains either. Stockbrokers, traders, insurance actuaries and indeed managers as a breed use, but cannot be entirely replaced by, scientific methods and models. Their salaries pay for their ability to make sound decisions based on expertise, meaning experience combined with gut feel - clearly subjective factors. Successful ones are inherently good at gathering, assessing and analysing complex inputs in order to derive simple outputs ("Buy Esso, sell BP" or "Your premium will be $$$"). From the outset, it seems unlikely the method will meet its implied objective of developing a scientific approach. Being based on "development and research spanning four years" is another warning sign, since risk analysis in general, and information security risk in particular, have been studied academically for decades. Although this is an 'introductory' paper with some strong points, it is quite naive in places. The snake oil-o-meter peaks out at around 80%.]

More risk management links