Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Jul 9, 2012

SMotW #14: logical access reviews

Security Metric of the Week #14: days since logical access control matrices for application systems were last reviewed

The fundamental premise for this metric is that if the logical access controls within [shared] application systems are reviewed more often, they are more likely to reflect the corresponding business requirements (for example, taking account of the differing roles that people play, the comings-and-goings from the workforce, changes in business processes and IT systems over time, and compliance with the access rules defined by information asset owners). In other words, application systems are more secure when their access controls are reviewed frequently than when reviews are infrequent or nonexistent.


In relation to Acme Inc., we thought this metric might be used and reported 'per application system', measuring how effectively access rights are being managed for each system.  Let's discuss the PRAGMATIC numbers in this context:
 P    R    A    G    M    A    T    I    C    Score
 55   80   95   30   80   85   60   70   80    71%
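The overall score in the right-hand column appears to be the simple unweighted mean of the nine criterion ratings, rounded to the nearest whole percentage point. A quick sketch to check the arithmetic:

```python
# PRAGMATIC criterion ratings for this metric, in P-R-A-G-M-A-T-I-C order
ratings = [55, 80, 95, 30, 80, 85, 60, 70, 80]

# Overall PRAGMATIC score: the unweighted mean, rounded to a whole percentage
score = round(sum(ratings) / len(ratings))
print(f"PRAGMATIC score: {score}%")
```

The nine ratings sum to 635, and 635/9 rounds to the 71% shown in the table.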

The metric scores strongly in terms of being Actionable: clearly, if the access rights have not been checked for a long time, they ought to be checked soon.  But how long is 'a long time'?  It is implied that target review periods are defined for various classes or types of system, with the metric measuring the discrepancy between actual and target review dates in days: some matrices will have been reviewed on or before their target dates, others will be overdue.  We presume that the raw data - target and actual dates - may be read from some sort of inventory of the systems, populated in turn from the access matrix review and update procedures.  This metric also scores highly in terms of Accuracy, given that the number of days between target and actual dates is readily measured and checked, although there is still a practical question of determining the date on which an access matrix is declared 'fully reviewed'.  It is quite conceivable that this date might be deliberately manipulated by someone seeking to improve their score (for example by conducting sham reviews), so the metric is rated down on Genuineness, although it should be based on verifiable facts (namely dates taken from evidence of the reviews), hence its Independence or Integrity score is not too bad.
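The measurement itself is trivial to automate once that inventory exists. A minimal sketch, assuming a hypothetical inventory recording each system's last review date and its target review interval (all names and figures below are illustrative only):

```python
from datetime import date

# Hypothetical inventory of application systems: when each logical access
# control matrix was last fully reviewed, and the target review interval
# in days for that class of system (illustrative values only)
inventory = {
    "payroll": {"last_reviewed": date(2012, 1, 15), "review_interval": 90},
    "crm":     {"last_reviewed": date(2012, 6, 30), "review_interval": 180},
}

def days_overdue(system, today):
    """Days past the target review date; negative means the review is not yet due."""
    rec = inventory[system]
    elapsed = (today - rec["last_reviewed"]).days
    return elapsed - rec["review_interval"]

today = date(2012, 7, 9)  # date of this post, for illustration
for name in inventory:
    print(f"{name}: {days_overdue(name, today):+d} days relative to target")
```

A positive figure flags a matrix that is overdue for review; on-schedule systems come out at or below zero.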


On Predictability, the metric rated just over 50% because, although application access controls are quite important for information security, they are clearly insufficient to constitute a complete or comprehensive security approach.  Many other forms of control are needed to secure an organization's information.  However, we were quite generous with the score because, in our experience, organizations that are sufficiently on top of their application security to review access rights regularly tend to be generally competent and mature at information security - in other words, a high score on this specific metric is, we feel, an indicator of a reasonable overall level of information security.

The metric itself is relatively straightforward and simple to understand, hence it scores well for Meaning.  It didn't quite merit 100% because, as presently worded, it begs questions ranging from 'What are logical access control matrices?' and 'What are application systems?' to 'How many systems are there?', 'Who does the reviewing?' and 'How do they review - what is a review, anyway?'  If the metric were clarified on issues such as these, it should be possible to drive its score up to the point where it becomes a serious contender as a corporate security metric, but its ultimate fate depends on whether there are other, better metrics for the security aspects that matter most to the organization.


Whereas we have been thinking about using the metric in the context of individual application systems, it could also be used to measure, report and track days-since-last-reviewed across the entire portfolio of applications throughout the organization.  Doing so would highlight those application systems whose access rights have never been reviewed since the day they were implemented - a surprisingly common occurrence in practice.  Provided there are suitable controls in place to prevent people ignoring or discounting specific application systems simply in order to improve the metric, it will in effect pressure management into conducting reviews on the worst-scoring application systems.  The days-since-last-reviewed numbers from application systems might even be weighted to reflect the relative importance of access controls to those systems, putting yet more emphasis on the applications most in need of review, but the additional complexity, subjectivity and Costs of such an approach would not help its PRAGMATIC score.
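The weighted portfolio view described above might be sketched as follows - the systems, day counts and importance weights are all hypothetical, purely to illustrate how the weighting pushes the most critical, longest-unreviewed applications to the top of the worklist:

```python
# Hypothetical portfolio: days since each application's access control matrix
# was last reviewed, weighted by the relative importance of access controls
# to that system (all figures illustrative only)
portfolio = [
    # (system, days_since_last_review, importance_weight)
    ("payroll",  176, 3.0),  # sensitive HR/financial data: high weight
    ("crm",        9, 2.0),
    ("intranet", 400, 1.0),  # never reviewed since go-live, lower criticality
]

# Rank systems by weighted days-since-review, worst first, so management
# attention lands on the applications most in need of review
ranked = sorted(portfolio, key=lambda s: s[1] * s[2], reverse=True)
for name, days, weight in ranked:
    print(f"{name}: {days} days x weight {weight} = {days * weight:.0f}")
```

Note how the weighting reorders the list: a moderately overdue but highly sensitive system can outrank one that has gone unreviewed far longer - which is precisely the subjectivity (in choosing the weights) that would count against the metric's PRAGMATIC score.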


Once again this week, we've demonstrated how the PRAGMATIC approach leads us to delve below the superficial appearance of a candidate security metric, consider its strengths and weaknesses, and identify specific areas where the metric might be improved.  We hope you are starting to see the power of being PRAGMATIC.