Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Nov 20, 2006

FAIR point

Alex Hutton runs a weblog for IT risk geeks focusing on the FAIR (Factor Analysis of Information Risk) risk analysis method that his employer Risk Management Insight LLC (RMI) promotes. In his blog, Alex takes me to task for my previous blog entry about the FFIEC, FAIR enough, and complains that the NoticeBored blog only accepts comments from authenticated users. Well OK, I've relaxed the restriction to encourage more feedback although I'll be moderating comments, not to censor what people say but merely to block spam. [Alex: why didn't you email me? Gary@isect.com]
Meanwhile, I took another look at FAIR. What follows is a rather harsh and cynical critique of the FAIR method as described in the undated draft FAIR white paper, partly because the paper's author, Jack Jones, invites comment towards the end of the document: "It isn’t surprising that some people react negatively, because FAIR represents a disruptive influence within our profession. My only request of those who offer criticism is that they also offer rational reasons and alternatives. In fact, I encourage hard questions and constructive criticism". So here goes.
Right away, I was intrigued by a statement at the front of the paper regarding it being a "patent pending" method that commercial users are expected to license. Unless I'm mistaken (which is entirely possible!), "patent pending" means "not patented" i.e. it is not currently protected by patent law, or else it would presumably be labelled "Patented" and give a patent number. Judging by the content of the introductory paper, FAIR appears to be a conventional albeit structured risk analysis method so I'm not clear what aspect of it would be patentable in any event. [My snake oil-o-meter starts quivering around the 5% mark at this point.]
"Be forewarned that some of the explanations and approaches within the FAIR framework will challenge long held beliefs and practices within our profession. I know this because at various times during my research I’ve been forced to confront and reconcile differences between what I’ve believed and practiced for years, and answers that were resulting from research. Bottom line – FAIR represents a paradigm shift, and paradigm shifts are never easy." [Not only is it claimed to be patentable, but it's a paradigm shift no less! The snake oil-o-meter heads towards 10%.]
The paper defines risk as "The probable frequency and probable magnitude of future loss". [Strictly speaking, risk includes an upside too, namely the potential for future gain which FAIR evidently ignores.] FAIR considers six specific forms of loss: productivity, response, replacement, fines/judgments, competitive advantage and reputation. [Management's loss of confidence in any system of controls that fails is evidently not considered in FAIR - in other words, there is an interaction between management's risk appetite and their confidence in the control systems they manage, based on experience and, most importantly, perception or trust. These apparent omissions hint at what could potentially be a much more significant problem with the method: there is no clear scientific/academic basis for the model underpinning the method, meaning that there are probably other factors that are not accounted for. Obtuse references to complexity and this being an "introduction" to details that are to be described in training courses etc. further imply incompleteness in the model and hence limited credibility for the method. Snake oil-o-meter jumps to halfway.]
"A word of caution: Although the risk associated with any single exposure may be relatively low, that same exposure existing in many instances across an organization may represent a higher aggregate risk. Under some conditions, the aggregate risk may increase geometrically as opposed to linearly. Furthermore, low risk issues, of the wrong types and in the wrong combinations, may create an environment where a single event can cascade into a catastrophic outcome – an avalanche effect. It’s important to keep these considerations in mind when evaluating risk and communicating the outcome to decision-makers." [I wonder whether FAIR adequately describes risk aggregation, possible geometric increase and the "avalanche effect" noted here, in scientific terms? The author accepts the need to take such things into account but does FAIR actually do so? Snake oil-o-meter swings wildly around the half way point.]
[FAIR looks to me like a practitioners' reductionist model, something they have thought about and documented on the basis of their experience in the field as a way to describe the things they feel are important. FAIR might *help* an experienced information security professional assess risks but I'm not even entirely sure of that: the method looks complex and hence tedious (=costly) to perform properly. I wonder whether, perhaps, the FAIR method should be applied by a consultant such as someone from, say, RMI? Snake oil-o-meter settles around two-thirds full scale.]
To give him his due, the author acknowledges some potential criticisms of FAIR, namely: the "absence of hard data" and "lack of precision" (which are simply discounted as inherent limitations as if that settles the matter); the amount of hard work involved in such a complicated method (which is tacitly accepted with "it gets easier with practice"); "taking the mystery out of the profession" and resistance to "profound" change (I know plenty of information security and IT managers who would dearly like to find an open, sound, workable and reliable method.) [FAIR does not appear to be a profound change so much as an incomplete extension of conventional risk analysis methods. Snake oil-o-meter creeps up again.]
The appendix outlines a "basic risk assessment" in 4 stages: (1) "Identify scenario components" ("identify the asset at risk" and "identify the threat community under consideration"); (2) "Evaluate Loss Event Frequency (LEF)" ("estimate the probable Threat Event Frequency (TEF)", "estimate the Threat Capability (TCap)", "estimate Control strength (CS)", "derive Vulnerability (Vuln)" and "derive Loss Event Frequency (LEF)"); (3) "Evaluate Probable Loss Magnitude (PLM)" ("estimate worst-case loss" and "estimate probable loss"); and finally (4) "Derive and articulate Risk". Many of these parts clearly involve estimation, implying subjectivity (e.g. "estimate probable loss" could range from zero to total global destruction in certain scenarios: the appendix does not say how we are meant to place a mark on the scale). The details of the method are not fully described. For example, the TEF figure seems to be obtained by assigning the situation to one of a listed set of categories: "Very High (VH): > 100 times per year"; "High (H): Between 10 and 100 times per year"; "Moderate (M): Between 1 and 10 times per year"; "Low (L): Between .1 and 1 times per year"; or "Very Low (VL): < .1 times per year (less than once every ten years)." Similarly, the category boundaries for worst-case loss vary exponentially for no obvious reason ($10,000,000; $1,000,000; $100,000; $10,000; $1,000; $0). [The use of two-dimensional matrices to determine categories of 'output' value based on simple combinations of two 'input' factors is reminiscent of the two-by-two grids favored by new MBAs and management consultants everywhere. The rational basis for using these nonlinear scales and the consequent effects on the derived risk probability are not stated, nor is the method for determining the appropriate category under any given circumstances (how do we know, for sure, which is the correct value?).
This issue strikes at the very core of the "scientific" (theoretically sound, objective, methodical and repeatable) determination of risk. The snake oil-o-meter rises rapidly towards three-quarters.]
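The TEF banding quoted above is mechanical once a number has been chosen, which is rather the point. Here is a sketch: the category boundaries are from the white paper, but the lookup code and the treatment of borderline values are my assumptions, since the paper does not say which category a value of exactly 10 falls into.

```python
# TEF bands as quoted from the FAIR white paper; the lookup itself and the
# inclusivity at the category edges are my assumptions - the paper leaves
# boundary handling unstated.

def tef_category(events_per_year: float) -> str:
    if events_per_year > 100:
        return "VH"  # > 100 times per year
    if events_per_year >= 10:
        return "H"   # between 10 and 100 times per year
    if events_per_year >= 1:
        return "M"   # between 1 and 10 times per year
    if events_per_year >= 0.1:
        return "L"   # between 0.1 and 1 times per year
    return "VL"      # < 0.1 times per year (less than once every ten years)

print(tef_category(0.05))  # VL
print(tef_category(25))    # H
```

The mapping itself is trivial; the unanswered question in the paper is how the analyst is supposed to arrive at the events-per-year estimate in the first place.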
The appendix casually includes a rather worrying statement: "This document does not include guidance in how to perform broad-spectrum (i.e., multi-threat community) analyses." [Practically all real-world applications for risk analysis methods necessarily involve complex real-life situations with multiple threats, vulnerabilities and impacts. It is not clear whether the full FAIR method can cope with anything beyond the very simplest of cause-effect scenarios. Snake oil-o-meter creeps up to 80%.]
To close this critique, I'll return to a comment at the start of the FAIR paper that information risk is just another form of risk, like investment risk, market risk, credit risk or "any of the other commonly referenced risk domains". [The author fails to state, though, that risk is not managed 'scientifically' in these domains either. Stockbrokers, traders, insurance actuaries and indeed managers as a breed use, but cannot be entirely replaced by, scientific methods and models. Their salaries pay for their ability to make sound decisions based on expertise, meaning experience combined with gut feel - clearly subjective factors. Successful ones are inherently good at gathering, assessing and analysing complex inputs in order to derive simple outputs ("Buy Esso, sell BP" or "Your premium will be $$$"). From the outset, it seems unlikely the method will meet its implied objective of developing a scientific approach. Being based on "development and research spanning four years" is another warning sign since risk analysis in general, and information security risk in particular, have been studied academically for decades. Although this is an 'introductory' paper with some strong points, it is quite naive in places. The snake oil-o-meter peaks out at around 80%.]

More risk management links

2 comments:

  1. Gary,

    First, thanks for taking the restrictions off commenting. And thank you for generously offering your email address. It was easier at the time for me to just blog rather than go through an "info" account (or call your office in the middle of the night -grin-). In addition, your sentiment about government guidance has been echoed online, and I wanted to use the opportunity to relay my experiences helping Fortune 100 banks with the guidance, the results, and what I've seen in information sharing sessions between the banks. My experience generally contradicts your assertions, in the way the blog post mentions.

    Speaking of FAIR, I also enjoyed the critique. I really appreciate the points you bring up - "Snake-Oil" is a little cynical, but given my experience with security vendors the past 10 years, I don't blame you. Hopefully the rest of this post will help reduce the meter a bit.

    First, concerning the patent - I can't say much right now, but part of my charter is to open FAIR. Right now I've got my eyes on a Creative Commons license and an international standards body who seem to be rather enthused at the prospect of adopting FAIR as their risk management methodology. Hopefully, we'll be able to release intellectual control of FAIR before year's end, and if that doesn't come to pass, I hope you'll press me about it here on your site, at my email address (alexh-at-riskmanagementinsight-d.o.t.com), and on my weblog.

    Second, I have to agree that "paradigm shift" is a really poor choice of words - it does reek of marketing-speak. However, after using FAIR for over a year, knowing how it's helped very large F.I.'s, and subsequently re-reviewing NIST, FRAP, OCTAVE, etc... in light of how I now perceive risk, FAIR (as I know it) is pretty different. I have to admit that a large part of the reason for the difference is due to concepts not covered in the initial white paper (application of better statistical methods than the matrices, application of risk measurement throughout an object model of a specific business process, etc...), but the foundation for all of this new application is the taxonomy present in FAIR. After a year of evangelism, I can tell you that the taxonomy and understanding how the factors of risk affect each other do "challenge" traditional security pros (oddly enough - it's the daily practitioners that get it first, consultants have the toughest time coming around).

    "[Strictly speaking, risk includes an upside too, namely the potential for future gain which FAIR evidently ignores.]"

    This depends on which dictionary definition of "risk" you choose to use. We started with "exposure to loss" out of the dictionary, so gain didn't enter into it. Let me also say that if FAIR ignores potential gain, then it is because usually it's the job of the business owner and not the CISO to perform that comparison. However, it would be simple (if you wish) to compare potential gain with potential risk. For example, let's say we work at a brokerage house with a strict policy against wireless (802.11x) access. Suppose a business reason for a policy exception were to arise - we could use FAIR to measure the probable losses for a policy exception, and compare that against the expected gains for the wireless deployment.
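The loss-versus-gain comparison described here can be sketched in a few lines; every number below is invented purely for illustration and is not RMI's.

```python
# Hypothetical numbers throughout - a sketch of comparing a FAIR-style
# annualised probable loss against the expected gain of the wireless
# policy exception described above.

loss_event_frequency = 0.5        # estimated loss events per year
probable_loss_magnitude = 40_000  # estimated loss per event, $
annualised_loss = loss_event_frequency * probable_loss_magnitude

expected_annual_gain = 35_000     # business-case estimate for deployment, $

# Simple decision rule: grant the exception only if the expected gain
# outweighs the annualised probable loss.
grant_exception = expected_annual_gain > annualised_loss
print(annualised_loss, grant_exception)  # 20000.0 True
```

The arithmetic is trivial by design; the substance of the exercise lies in how defensibly the frequency and magnitude estimates were produced.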

    "Management's loss of confidence in any system of controls that fails is evidently not considered in FAIR - in other words, there is an interaction between management's risk appetite and their confidence in the control systems they manage, based on experience and, most importantly, perception or trust."

    Loss of confidence is not a monetary loss, per se, that would contribute to the magnitude of a probable loss in the present state. You could, however, account for that loss in "response" or "replacement" costs, whatever it would take to restore mgmt's confidence. It's worth noting that we do teach that management's risk tolerance is a significant factor to consider (I'm sure you can appreciate that there's only so much you can cover in a 70 page white paper). We just don't see the effect of that tolerance where you might be putting it. Either way, once FAIR is open - I encourage you to create as much in derivative works as you'd like.


    "These apparent omissions hint at what could potentially be a much more significant problem with the method: there is no clear scientific/academic basis for the model underpinning the method, meaning that there are probably other factors that are not accounted for. Obtuse references to complexity and this being an "introduction" to details that are to be descibed in training courses etc. further imply incompleteness in the model and hence limited credibility for the method."

    Oh, I wouldn't say that. It's more like we've been working like dogs to get a company off the ground without seeking VC (we made a conscious decision NOT to seek VC as we anticipated opening FAIR and didn't want our desire for free adoption to conflict with the interests of someone we owed significant money to). Thus we have been too busy to write the book that fleshes out the consequences of a proper IT risk taxonomy in a worthy manner. If you can find other risk factors that are unaccounted for, I'd really like to hear about them - and if that involves "opening the kimono" on other RMI I.P. to your blog readers, so be it. At the end of the day the framework is most important.

    If credibility is what you're seeking, then Jack's award from RSA this year, adoption by three Fortune 100 companies and two of the dozen or so US universities with InfoSec MBAs, and independent verification by Ohio State University's statistics department of how we use probability methods might help. I also hope that opening the framework and releasing it to an international standards body will help you reduce our Snake-Oil meter, as well.

    "I wonder whether FAIR adequately describes risk aggregation, possible geometric increase and the "avalanche effect" noted here, in scientific terms? The author accepts the need to take such things into account but does FAIR actually do so?"

    I think it's important to note that FAIR, as a taxonomy or framework, is somewhat agnostic to what types of calculations you want to use on it (this is one reason to open the framework, if you can make the math work better we encourage you to share). The White Paper, as an "introduction" only shows one method - considering population distributions and applying the results of comparing those estimations. Risk aggregation, defense in depth, accounting for "black swans" are all topics we're working on - but until we've got something that we can have verified independently by the scholastic community, we're not going to rush to release a half-baked approach. Maybe this will happen much faster when opened, I don't know.

    "FAIR looks to me like a practitioners' reductionist model, something they have thought about and documented on the basis of their experience in the field as a way to describe the things they feel are important."

    A reductionist model might be a good way of putting it, I'm not sure. The genesis of FAIR came to Jack in his work as CISO for Nationwide. It was only after he found out that the IP was his (and not his employer's) that he even thought about releasing it - if only because it served him so well. Turns out that the patent language is only there because Jack received advice to put it there until he properly understood what he should do. If the inventor bungles the beginning of his invention a bit, I think it can be expected and excused.

    "FAIR might *help* an experienced information security professional assess risks but I'm not even entirely sure of that: the method looks complex and hence tedious (=costly) to perform properly." I wonder whether, perhaps, the FAIR method should be applied by a consultant such as someone from, say, RMI?"

    Actually, once you get the hang of how the factors interrelate, most analysis can be done in 5 minutes or so. The devil is in the documentation. Yes, RMI does sell software to facilitate that process, but we'd like to pay for further research on the framework somehow. Re: consulting, note that the white paper and FAIR itself were developed prior to the incorporation of RMI. But yes, we do consulting. However, our business model is not to be fishers, but to teach others to fish, and sell them our fishing pole if they like it. Jack built Crowe Chizek's InfoSec consulting arm, I worked at MicroSolved from 2001-2005, but we both decided that we didn't want to build another consultancy. If we never have to take another consulting SOW again and could spend all our time on research, training, and software development, that would be fine by us.

    "FAIR does not appear to be a profound change so much as an incomplete extension of conventional risk analysis methods."

    Take a second look at Vulnerability. If how we use Vuln. doesn't launch you into a diatribe about why people shouldn't use FAIR, then hopefully it will get you to admit that it's significantly different than what you'll find in NIST 800-30, OCTAVE, FRAP, etc... and has significant benefit over how most practitioners define and measure vulnerability.

    Re: subjectivity. Well said. It was this fact that almost aborted FAIR in the first place. Interesting story as to why FAIR lived on. Jack felt very uncomfortable with the subjective aspects of FAIR. We know that we just can't get good, reliable data for the most part - so there needed to be a way to compensate. Jack spoke with many very smart folks in actuarial sciences (he was at Nationwide, after all). At one point, he got a chance to sit down with the Sr. VPs of A.S. When he expressed his concerns re: subjectivity, they laughed. To them, subjectivity and objectivity are not a binary state, but a spectrum. Their response to us went somewhat like this:

    "As long as human hands collect, record, and interpret data - there is subjectivity involved. We'll never escape that. The best you can do is try to drive as much objectivity into your models as possible."

    And that's what they really liked about FAIR.

    It's also important to note that we intentionally try to stay away from any implication that FAIR, or risk analysis/modelling generally, is a science. If you'll point out to me which pieces of the document imply that claim, I'll have them reworded immediately. FAIR is not science. IT Risk is not scientific. IMHO, at best we're para-science like economics or meteorology.

    "Being based on "development and research spanning four years" is another warning sign since risk analysis in general and information security risk in particular, have been studied academically for decades."

    And the result has been....? I encourage you to take a look at the risk assessment methods you've linked. Having studied most of them, I'd be happy to point out where they fail and where I think FAIR addresses the failure. Jack spent a year trying just about any risk analysis/assessment/management documentation he could find, and found them lacking. We sincerely believe that FAIR is significantly better, and hope you will, in practice and use, compare it to other methods.

    Thank you again for your comments - sometimes it's easy to impart snarkiness in tone and writing in a discussion like this - know that it's not my intention to be ill-natured or ill-mannered.

  2. Hi Alex,

    Well done for making such a spirited, thorough and quick response! I'm more than happy to publish that on the blog, and relieved that you took my comments so positively. I admit I was in an especially cynical frame of mind this morning but you have put my mind at ease on most points. Perhaps I've misunderstood something. I thought FAIR was being promoted essentially as a scientific or rational method: I'll have another read through the paper and see if I can support that claim, otherwise I must apologise.

    I too have used a number of methods over the years, most recently achieving some success with a purely subjective method based on quiet contemplation, mind-mapping and discussion with information security colleagues ahead of a risk workshop involving information security, risk management, SOX/compliance experts and a bunch of business people from the areas under review. Our aim was not to undertake a comprehensive risk assessment, or to put numbers on the probabilities and impacts, but rather to tease out the obvious and less-obvious information security risks that deserved to be addressed in that situation. I'll probably write it up as a case study one day: whilst the information security results were quite interesting in themselves, the risk analysis process itself was absolutely fascinating and a lot of fun to boot. Golly: "information security", "risk analysis" and "fun" in the same sentence!

    G.
