Welcome to the SecAware blog

I spy with my beady eye ...

6 May 2014

Enterprise Security Metrics report

A new 28-page research report by George Campbell's Security Executive Council (SEC) concerns the status of physical security metrics. Enterprise Security Metrics: A Snapshot Assessment of Practices (free but registration required) "provides a snapshot of the use of metrics in corporate security management. It includes information on the current state-of-the-art of various models of benchmarking and security metrics, types of metrics, judging the maturity of security metrics programs as well as challenges and opportunities for those undertaking security metrics programs. This report specifically summarizes our learned experience from corporate security measures and metrics initiatives."

The report refers to SEC's ongoing metrics research but unfortunately does not go into details about the methods.  A note on page 7 refers to a survey of 27 companies representing "a solid cross section of industry sectors [with] mature and multi-service corporate security programs, several engaging in best practice operations". The small sample was presumably drawn from members or clients of the SEC, meaning that it was not random but self-selected from organizations with a clear interest in security metrics. Nevertheless, statistics aside, the findings and conclusions are well worth reading in more general terms - for example:
"Nearly 70 percent of respondents stated that they don’t collect security program metrics for the purposes of presenting to senior management ... This lack of engagement remains as a significant internal obstacle to metrics acceptance and development. Too many corporate security practitioners have either avoided or failed to understand the relevance of such measures. Security organizations have the data; they are willing to count events and other activity data but they apparently don’t see the need to use it to build actionable, influential metrics that can effectively influence senior management."
I like the phrase 'actionable, influential metrics'. Metrics that are neither actionable nor influential have little practical value. They are "coffee table metrics", the sorts of things one might idly skim through in a glossy magazine.  Metrics that are influential but not actionable can cause consternation: we know there is something wrong but we don't know how to fix it. Metrics that are actionable but not influential have no impact: metrics that don't influence or support decisions are essentially pointless. For what it's worth, most such metrics tend to be cheap and easy to gather so the measurement costs are quite low, although there is a hidden cost in that they can be distracting, giving the impression that someone is on top of security metrics whereas in fact they are not.

The report mentions the commonplace KRIs and KPIs (Key Risk and Performance Indicators) plus two metrics that were new to me: Key Influence Indicators ("How do our metrics influence governance policy, business unit accountability and personal behavior?") and Key Value Indicators ("How have our metrics demonstrated tangible, actionable and measurable benefit to the enterprise?"). Influence and value are two of several characteristics of metrics, or metametrics. The PRAGMATIC method uses nine specific metametrics to determine the net value of a metric.
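To make the idea of metametrics more concrete, here is a minimal sketch of scoring a candidate metric against several criteria and combining the ratings into a single net-value score. The criterion names, the 0-100 scale and the simple unweighted mean are illustrative assumptions, not a definitive rendering of the PRAGMATIC method:

```python
# Illustrative metametrics scorer: rate a candidate metric on several
# criteria (0-100 each), then combine into one overall score.
# The criteria list, scale and simple mean are assumptions for
# illustration only.

CRITERIA = [
    "Predictiveness", "Relevance", "Actionability",
    "Genuineness", "Meaningfulness", "Accuracy",
    "Timeliness", "Independence", "Cost",
]

def overall_score(ratings: dict) -> float:
    """Return the mean of the per-criterion ratings (0-100 scale)."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical candidate metric: strong on actionability, weak on cost.
candidate = {c: 70 for c in CRITERIA}
candidate["Actionability"] = 90
candidate["Cost"] = 50
print(round(overall_score(candidate), 1))  # → 70.0
```

Scoring candidate metrics this way makes it easy to rank them and to see at a glance which characteristics (such as influence or value) are dragging a metric down.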

There is a common theme underlying the report's conclusions, namely that more effort should be put into identifying baseline metrics for all aspects of security in order to enable benchmarking comparisons between organizations. Security management practices and metrics requirements vary widely in practice, largely because security risks vary widely, hence the particular security concerns that drive a given organization to select specific security metrics may not coincide with those of other organizations. However, an appendix to the SEC report offers a maturity metric measuring the status of an organization's [physical] security metrics program by assessing the anticipated parts of such a program. The metric is similar in style to those we described in PRAGMATIC Security Metrics, a form of metric that encourages us to break down and systematically assess complex situations within the organization (I found them well suited for internal audits and process improvement initiatives). Maturity metrics are also a promising approach for benchmarking comparisons of multiple organizations.
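The general shape of such a maturity metric can be sketched in a few lines: rate each anticipated component of the metrics program on a maturity scale, then summarize. The component names and the 0-5 scale below are hypothetical illustrations, not the SEC report's actual instrument:

```python
# Hypothetical maturity metric: rate each anticipated component of a
# security metrics program on a 0-5 maturity scale, then summarize.
# Component names and the scale are illustrative assumptions.

components = {
    "management sponsorship": 3,
    "defined metrics catalog": 2,
    "data collection process": 4,
    "reporting to executives": 1,
    "periodic review and improvement": 2,
}

overall = sum(components.values()) / len(components)
weakest = min(components, key=components.get)
print(f"overall maturity: {overall:.1f}/5, weakest area: {weakest}")
# → overall maturity: 2.4/5, weakest area: reporting to executives
```

Breaking the assessment down this way is what makes the metric actionable: the overall score supports benchmarking between organizations, while the per-component ratings point directly at the areas most in need of improvement.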

Another conclusion of the report is that metrics are needed for compliance assessment purposes: we discussed this point too in PRAGMATIC Security Metrics. Industry regulators and authorities (such as the other SEC!) need rational ways in which to measure and assess organizations on all sorts of criteria including governance, risk and security practices. The conventional approach is to specify and mandate certain requirements, in which case the measurement process boils down to someone (hopefully, a competent, independent and diligent third party) determining whether the stated requirements have or have not been fulfilled - fine in theory but harder to achieve in practice since there are so many variables. PCI-DSS, for instance, requires a number of specific security controls supposedly to secure cardholder data, and PCI assessments attempt to confirm that they are all in place. We know from Target and many other breaches that the PCI controls are imperfect, and that a "pass" on the PCI assessment does not necessarily mean that cardholder data are in fact adequately secured. Furthermore, cardholder data are just a fraction of most organizations' information assets, hence ticking the PCI compliance box does not necessarily mean the organization has adequate information security as a whole (it is an indicator, perhaps, but primarily concerns compliance with externally-imposed obligations). It would be practically impossible to extend the PCI-type approach to cover all of information security, physical security, risk management and governance, whereas other approaches such as maturity metrics could be used both to measure and to drive improvements.

George ends the report with a plea to collaborate with other metrics professionals. I welcome the initiative and will definitely get in touch! 

4 May 2014

Awareness module on "Surveillance"

Prompted by recent revelations about mass surveillance by the NSA, we wrote a brand new awareness module covering the information security issues relating to surveillance from two distinct perspectives: 

  1. Surveillance conducted by the organization on its employees and others;
  2. Surveillance conducted on the organization by the authorities and others.

We interpret 'surveillance' liberally to include activities such as monitoring employees' use of email, networks, applications and phones.  Surveillance is generally a side effect rather than the main purpose of ICT monitoring, but nevertheless myriad system and network managers and security professionals have the data and the tools to analyze what users are up to and often where they are.  

CCTV is an everyday example of surveillance, and again the security pros watching those TV screens inevitably see lots of ordinary people quietly going about their lives, not just criminals, intruders, vandals, shoplifters and so on. Cutting-edge image analysis and pattern recognition software, along with ongoing improvements in the cameras, is fast taking surveillance to new levels with facial recognition and tracking of individuals purely from CCTV coverage being realistic possibilities for ordinary commercial organizations, not just the security services and police.

Privacy is a key concern with surveillance.  Those conducting surveillance are expected to comply with applicable laws and regulations.  At the same time, the subjects of surveillance have rights and expectations concerning their privacy, at times placing significant trust in the watchers.  More broadly, society is making tradeoffs between the costs and benefits of surveillance, while in some places ever more intrusive and comprehensive surveillance raises the spectre of Big Brother.