Many information security controls that are intended to mitigate significant business- and/or safety-critical information risks are themselves critical. If critical controls are missing, ineffective, fail in service, or are disabled (whether accidentally or deliberately), the associated risks are more likely to materialize, leading to unacceptable impacts. Therefore, relative to less- or non-critical ones, critical controls deserve additional investment and attention throughout their lifecycle.
For example, critical controls should ideally be:
- Identified as such, implying that controls should be systematically assessed for criticality and ranked or categorized accordingly, so that the most critical ones receive the additional effort;
- Carefully considered, specified and documented in detail;
- Designed, developed and tested thoroughly by experienced professionals, applying sound security principles such as defense-in-depth;
- Resilient and fail-safe or fail-secure in nature, e.g. supported by additional controls that limit the damage and raise an alert if they were to weaken or fail;
- Authorized by senior management, provided they have sufficient assurance as to their effectiveness and suitability;
- Monitored routinely or continuously for effectiveness, triggering alerts/alarms at the earliest opportunity (wherever possible before serious incidents occur);
- Used and managed properly, e.g. with extra checks to prevent unauthorized or inappropriate changes that might harm or threaten them in some way;
- Tested, checked or audited more often and more thoroughly;
- Proactively maintained;
- Understood to be, and treated as, 'special' as in highly valuable and worth protecting.
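The first bullet - systematically measuring and ranking controls by criticality - can be sketched in a few lines of code. This is purely illustrative: the control names, the 1-5 scoring scales, and the scoring formula are assumptions for the sketch, not part of any standard.

```python
# Hypothetical sketch: rank controls by a simple criticality score so the
# most critical ones can be flagged for extra attention. Each control is
# scored (illustratively) on the business impact if it fails (1-5) and
# the likelihood of it failing or being bypassed (1-5).
controls = [
    {"name": "Firewall ruleset",      "impact": 5, "failure_likelihood": 2},
    {"name": "Daily backups",         "impact": 5, "failure_likelihood": 3},
    {"name": "Clear-desk policy",     "impact": 2, "failure_likelihood": 4},
    {"name": "Privileged access MFA", "impact": 5, "failure_likelihood": 2},
]

def criticality(control):
    """Illustrative score: potential impact weighted by plausibility of failure."""
    return control["impact"] * control["failure_likelihood"]

# Rank controls, most critical first, and tag the top tier as 'critical'.
ranked = sorted(controls, key=criticality, reverse=True)
THRESHOLD = 10  # arbitrary cut-off chosen for this sketch
for c in ranked:
    tier = "CRITICAL" if criticality(c) >= THRESHOLD else "standard"
    print(f"{criticality(c):>2}  {tier:8}  {c['name']}")
```

In practice the scoring would be far richer (regulatory exposure, safety impact, dependency analysis and so on), but even a crude ranking like this makes the 'most critical' subset explicit rather than implicit.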
That's all straightforward and obvious to me, yet I'm struggling to think of any standards, guidelines, etc. in the information risk and security context that explicitly highlight the concept of control criticality.
Have I simply missed them? Or is this a blind spot for the profession?