Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Feb 10, 2015

63,000 data points

The 2014 Data Breach Investigations Report (DBIR) by Verizon covers roughly 63,000 incidents across 95 countries, investigated by 50 contributing organizations, including Verizon of course.

Fair enough ... but what exactly qualifies as an "incident"?  According to the report:
  • Incident: A security event that compromises the integrity, confidentiality, or availability of an information asset. 
  • Breach: An incident that results in the disclosure or potential exposure of data. 
  • Data disclosure: A breach for which it was confirmed that data was actually disclosed (not just exposed) to an unauthorized party.
Those definitions are useful, although for various reasons I suspect that the data are heavily biased towards IT (a.k.a. "cyber") incidents. 

~1,300 of the ~63,000 incidents were classified as breaches - an interesting metric in its own right: ~98% of incidents evidently did not result in the disclosure or potential exposure of data. For the beleaguered Chief Information Security Officer or Information Security Manager, that's a mixed blessing. On the one hand, it appears that the vast majority of incidents are being detected, processed, and presumably stopped in their tracks, without data exposure. Information security controls may have failed to prevent the 63,000 incidents, but it appears they did prevent 98% of them from becoming actual breaches. That's cause for celebration, isn't it?
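The back-of-the-envelope arithmetic behind that headline metric, using the rounded figures above (the report itself gives more precise counts):

```python
# Rough breach rate from the (approximate) 2014 DBIR headline figures.
incidents = 63_000   # total incidents, rounded
breaches = 1_300     # incidents classified as breaches, rounded

breach_rate = breaches / incidents
print(f"Breach rate: {breach_rate:.1%}")      # prints "Breach rate: 2.1%"
print(f"Contained:   {1 - breach_rate:.1%}")  # prints "Contained:   97.9%"
```

Rounding to whole percentages gives the ~2% / ~98% split discussed here.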

On the other hand, the 2% of incidents that actually did disclose or expose data clearly represent far more serious business impacts. Figuring out in the moment whether an incident is trivial and stoppable or likely to become a breach is difficult, hence there is little option but to respond to all incidents by default as if they are serious, resulting perhaps in a blasé attitude. 

Worse still, there is a distinct possibility that significantly more than 2% of the incidents were in fact breaches but were either not recognized or not acknowledged as such. The 2% represent abject failures of information security - hardly something that the CISO or ISM is going to admit! If they are the ones responsible for reporting the associated metrics, those figures are dubious.  [I suspect a substantial proportion of the incidents classified as breaches were so classified because of the involvement of auditors and other independent parties, including customers and other business partners who were directly impacted and 'made a fuss'. I wonder how many purely internal breaches - breaches involving confidential business information/trade secrets as opposed to credit card numbers - were simply hushed-up and don't appear in the breach or data disclosure numbers? We won't find out from the 2014 DBIR, and to be fair we probably never will.]

Turning now to figure 16 in the report, I'm fascinated by the patterns here:


Take the second and third categories, for instance. Web App Attacks led to more than a third of breaches, yet represented only 6% of incidents. That tells me we have a serious problem with vulnerabilities in web applications. Conversely, although 18% of incidents were due to insider misuse, they caused only 8% of actual breaches - in other words, their share of breaches is less than half their share of incidents. The other categories in these graphs are equally interesting. Look at "cyber-espionage" for instance: only 1% of incidents caused nearly a quarter of the breaches!  [Contrary to what I said earlier, this seems to indicate that "cyber-espionage" is in fact being reported after all. Further, it points to the difficulties of being a CISO/ISM responsible for responding to and stopping such attacks, even though they are such a tiny fraction of incidents.]
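One way to read figure 16 is as a rough 'conversion' ratio for each pattern: its share of breaches divided by its share of incidents. A ratio well above 1 means a pattern punches above its weight. A quick sketch using the approximate percentages quoted above (the report gives more precise figures):

```python
# Share of incidents vs share of breaches per attack pattern, as whole
# percentages quoted from figure 16 of the 2014 DBIR (approximate).
patterns = {
    "Web App Attacks": {"incidents": 6,  "breaches": 35},
    "Insider Misuse":  {"incidents": 18, "breaches": 8},
    "Cyber-espionage": {"incidents": 1,  "breaches": 22},
}

for name, p in patterns.items():
    ratio = p["breaches"] / p["incidents"]
    print(f"{name:16s} breach share / incident share = {ratio:.1f}x")
# Web App Attacks come out at ~5.8x, Insider Misuse at ~0.4x,
# and Cyber-espionage at a striking 22x.
```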