Today I'm writing about 'security assurance metrics' for April's NoticeBored module.
One aspect that interests me is measuring and confirming (being assured of) the correct operation of security controls.
Such metrics are seldom discussed and, I suspect, fairly uncommon in practice.
Generally speaking, we infosec pros just love measuring and reporting on incidents and things that don't work, because that helps us focus our efforts and justify investment in the controls we believe are necessary. It also fits our natural risk aversion: we can't help but focus on the downside of risk.
Most of us blithely assume that, once operational, the security controls are doing their thing. That may be a dangerous assumption, especially in the case of safety-, business- or mission-critical controls and the foundational controls on which they depend (e.g. reliable authentication is a prerequisite for access control, and physical security underpins almost all other forms of control).
So, on the security metrics dashboard, what's our equivalent of the "bulb test" performed when well-designed electro-mechanical equipment is powered up? How many of us have even considered building in self-test functions and alarms for the failure of critical controls?
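To make the idea concrete, here is a minimal sketch of what such a "bulb test" might look like in software: periodically exercise a critical control with canary cases whose outcomes are known in advance, and raise an alarm the moment it misbehaves. All the names here (check_access, raise_alarm and the canary users) are hypothetical stand-ins for illustration, not any real product's API:

```python
def check_access(user: str, resource: str) -> bool:
    """Stand-in for the real access-control check under test.

    In production this would call the actual authorisation service;
    here it is a trivial stub so the sketch runs on its own.
    """
    return user == "authorised_user"

def raise_alarm(message: str) -> None:
    """Stand-in for whatever alerting channel feeds the metrics dashboard."""
    print(f"ALARM: {message}")

def control_self_test() -> bool:
    """Return True only if the control behaves as designed on known cases.

    The software equivalent of a bulb test: we don't wait for a real
    incident to discover the control has silently failed.
    """
    # Negative canary: a request that must ALWAYS be denied. If it is
    # granted, the control has failed open.
    if check_access("known_bad_user", "payroll"):
        raise_alarm("access control granted a known-bad request")
        return False
    # Positive canary: a request that must ALWAYS succeed. This proves the
    # control is alive, rather than failed closed and blocking everything.
    if not check_access("authorised_user", "payroll"):
        raise_alarm("access control denied a known-good request")
        return False
    return True
```

Run on a schedule (or at start-up, like the bulb test), the pass/fail result of control_self_test becomes exactly the kind of assurance metric the dashboard is missing.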
I could be wrong, but I feel this may be an industry-wide blind spot, with the exception perhaps of safety-critical controls, and of situations where security is designed and built in from scratch as an integral part of the architecture (implying a mature, professional approach to security engineering rather than the usual bolt-on security).