A pal put me on to the work of David Slater concerning the validity of risk matrices, heat maps and PIGs (Probability Impact Graphs).
Google found a paper by Ben Ale and David Slater, "Risk matrix basics", published in 2012 (I think) at RiskArticles.com, discussing the mathematical theory behind different kinds of PIG: whether the axes are linear or logarithmic, and whether the probability axis is cumulative or not (a cumulative probability axis gives a Complementary Cumulative Distribution Function, apparently).
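To make the cumulative-axis idea concrete, here is a minimal sketch of a Complementary Cumulative Distribution Function computed from incident data. The loss figures are invented purely for illustration; the point is that a cumulative probability axis answers "how likely is a loss of at least x?" rather than "how likely is a loss of exactly x?".

```python
# Minimal sketch: a Complementary Cumulative Distribution Function (CCDF)
# over hypothetical annual loss figures. On a PIG with a cumulative
# probability axis, each impact level x is plotted against the
# probability of a loss of AT LEAST x.

def ccdf(losses, threshold):
    """Fraction of observed losses that equal or exceed `threshold`."""
    return sum(1 for loss in losses if loss >= threshold) / len(losses)

# Hypothetical incident losses (say, in $k) from some historical record
losses = [5, 12, 12, 40, 75, 75, 200, 950]

for t in (10, 100, 500):
    print(f"P(loss >= {t}) = {ccdf(losses, t):.3f}")
```

Note how the CCDF is necessarily non-increasing as the impact threshold rises, which is what gives a cumulative PIG its characteristic shape.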
The introduction refers to financial, environmental, health and safety, project and engineering risks. In those domains, there is a wealth of risk data (incident frequencies, costs, returns and so on) collected over hundreds of years in relatively stable markets. In information risk, however, we're working with a paucity of data in a field that is rapidly evolving ... which is part of the reason I'm still dubious about mathematical/scientific approaches to information risks, especially those concerning new technologies.
The authors acknowledge the widely appreciated value of PIGs in decision-making:
"So far we have concentrated on the historical development and original intent of Probability Impact Graphs (PIGs). We have seen that they do have a legitimate mathematical basis and that their utilization without awareness of the 'rules' can be at best misleading and at worst disastrous. But the main driver for their continued use is that, as a way of assessing the relative positioning of identified risks (from the Risk Register), in terms of qualitative seriousness (notional relative imminence and scale?), it has proved useful in stimulating discussion, awareness and even action from non specialist, but crucial decision makers in an organization."
In practice, the Analog Risk Assessment method is a useful way to analyze and communicate information risks, and it works nicely as an awareness-raising and decision-support tool. The fact that the axes have no explicit scales (other than low to high) and the graph has no boxes is an advantage: it avoids those distractions, letting us focus on the risks themselves - describing them, understanding them and ranking them relative to each other on both probability and impact, then deciding how to treat them. Mathematical precision is not needed in that application ... in fact I would go further and suggest that (apart from a few areas where we do have the data) precise numbers, i.e. specific values, defined ranges or confidence limits, could materially misrepresent the risks and so mislead decision makers. The way we interpret and deal with a risk that is "about here in the amber zone" is not the same as the way we deal with one that we believe has a given probability and impact.
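The "relative only" idea above can be sketched in a few lines of code. This is not the ARA method itself, just an illustration of ranking risks against each other using ordinal ratings with no numeric values attached; the risk names and ratings are invented for the example.

```python
# Illustrative sketch: ranking risks relative to each other using only
# ordinal (low/medium/high) ratings - no point values, ranges or
# confidence limits. Risk names and ratings are hypothetical.

ORDER = {"low": 0, "medium": 1, "high": 2}

# (risk name, probability rating, impact rating)
risks = [
    ("phishing",      "high",   "medium"),
    ("insider fraud", "low",    "high"),
    ("ransomware",    "medium", "high"),
    ("lost laptop",   "high",   "low"),
]

# Sort by impact first, then probability as a tie-break. Only the
# relative order matters: "this sits above that", nothing more.
ranked = sorted(risks,
                key=lambda r: (ORDER[r[2]], ORDER[r[1]]),
                reverse=True)

for name, prob, impact in ranked:
    print(f"{name:15s} probability={prob:7s} impact={impact}")
```

Deliberately, there is no arithmetic on the ratings here (no multiplying or adding scores), since that is exactly the kind of pseudo-quantification the underlying paper warns about; the output is an ordering, not a set of numbers.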