Welcome to NBlog, the NoticeBored blog

You don't need eyes to see: you need vision

May 26, 2017

NBlog May 26 - Insecurity of Things sit-rep

We're turning the corner into the final straight for June's awareness module on IoT security:

I'll take some time off at the weekend, recharging my built-in lithiums ready for a photo finish next week. 

This module looks like it will go to the line on Wednesday May 31st ... and we may even need to refer to UTC rather than NZ time to hit our deadline, one of the advantages of being just to the west of the International Date Line.

Must go, things to do, awareness to raise.


May 25, 2017

NBlog May 25 - peeling tiddles

Ours is not the only subject area that benefits from awareness in a corporate context. Typical organizations run several awareness programs, initiatives or activities in parallel, hopefully covering information risk and security (or security, IT security, or cybersecurity, or whatever they call it) plus:
  • IT/tech awareness;
  • Privacy awareness, and other compliance awareness concerning external legal/regulatory and internal policy/strategy obligations;
  • Health and safety awareness;
  • Project and change awareness (e.g. new business initiatives, new systems, new ways of working ...);
  • Commercial/business/corporate awareness;
  • Strategy/vision/values awareness;
  • Brand/marketing/competitor/industry awareness;
  • Risk awareness;
  • Fraud awareness;
  • Financial/accounting awareness;
  • Management awareness;
  • Human Resources awareness, including discrimination, employment practices, motivation, team working, violence in the workplace, disciplinary processes, capability development, stress management etc.

I've called them all "awareness" but in practice they may be known as "training" or "education" or "information" or "support" or "mentoring" or "competence enhancement". Aside from the obvious subject matter differences, they also vary in terms of:
  • The audiences (e.g. managers and/or staff, company-wide or specific sites, departments, teams or individuals);
  • The delivery mechanisms (e.g. courses, meetings, seminars, lectures, Intranet content, leaflets, one-on-one ...);
  • Formats and styles of material;
  • Push and/or pull (e.g. information gets disseminated out to the audience, or is available on request from audience members, or both);
  • The timing (e.g. one off, annual, quarterly, monthly, weekly, daily, ad hoc/sporadic);
  • The learning objectives (e.g. strict compliance may be a primary or secondary goal: there may be business or personal objectives too). 
So far, I've only mentioned the typical corporate environment, but awareness is a far broader concern. For example, there are many ongoing government-led public awareness activities, most but not all relating to compliance (e.g. tax, speeding, health, schooling), plus various industry, focus-group and commercial awareness activities (not least the enormously active field of marketing, advertising and promotion).

Thinking about the above, it's obvious that there are many ways to skin a cat and many cats to skin ... which hints at two approaches to advance the practice of security awareness:
  1. There are clearly loads of ideas out there on how to 'do' awareness with an enormous variety of approaches in use right now. A little research will reveal many nuances and variants, including ideas stemming from the underlying psychology of education, influence, motivation and coercion, and creative approaches (such as social media, a massive growth area for at least the past decade - this very blog for example). Would you consider exploring and maybe trying some of them out? If not, is that because you are stuck in the groove, doing the same old stuff time after time through habit or because you (or your boss and colleagues) lack imagination, or are there other reasons/excuses (such as lack of time and budget)? How about starting small with little changes, maybe experimenting with new formats or delivery processes?
  2. Many of the ongoing parallel awareness activities share common ground, hence they could usefully be aligned and coordinated to make the most of their pooled resources ... except this is very rare in practice: it's as if every awareness team or person is selfishly pursuing their own goal. Some even talk of 'competing for head space', making this a competitive rather than cooperative activity. Why is that? 

Coordinating and collaborating on awareness is something that fascinates me. In our own little way, we actively encourage customers to liaise with their professional colleagues who share an interest in the monthly topic - for example, May's email security awareness topic is of direct interest and concern to the IT department. The idea of collaborating with awareness and training colleagues on a much broader level suggests forming and exploiting social networks, and tapping into other fields of interest such as advertising and education. Innovation is an excellent way to stave off boredom and improve the effectiveness of your security awareness program.


May 24, 2017

NBlog May 24 - the risk of false attribution

News relating to the WannaCry incident is still circulating, although a lot of what I'm reading strikes me as perhaps idle speculation, naive and biased reporting, politically-motivated 'fake news' or simply advertising copy.

Take for instance this chunk quoted from a piece in Cyberscoop under the title "Mounting evidence points to North Korean group for global ransomware attack":
"In the aftermath of a global ransomware attack, which impacted more than 300,000 computers in over 150 countries, a small, select group of security researchers announced they had found evidence suggesting a group previously linked to the North Korean government was likely behind the international cyber incident. Their theory gained new found credibility Monday when U.S. cybersecurity firm Symantec said it too discovered “strong links” between WannaCry ransomware and the so-called Lazarus Group."
Cybersecurity incidents such as WannaCry are often blamed on ("attributed to") certain perpetrators according to someone’s evaluation of evidence in the malware or hacking tools used, or other clues such as the demands and claims made. However, the perpetrators of illegal acts are (for obvious reasons) keen to remain undercover, and may deliberately mislead the analysts by seeding false leads. Furthermore, attacks often involve a blend of code, tools and techniques from disparate sources, obtained through the hacking underground scene and used or adapted for the specific purpose at hand. 

It's a bit like blaming the company that made the nails used in the Manchester bombing for the attack.  No, they just made the nails.


May 22, 2017

NBlog May 22 - updating trumps writing from scratch

Ticks are rapidly infesting the contents listing as the Insecurity of Things awareness module falls into place.  

I've just updated the ICQ (Internal Controls Questionnaire - an audit-style checklist supporting a review of the organization's IoT security arrangements) that we wrote way back in August 2015 - eons ago in Internet time. On top of the issues raised then, we've come up with a few more (e.g. ownership of things plus the associated information risks and the health and safety implications in some cases). 

Updating the ICQ took about half an hour, whereas writing it from scratch in the first place must have taken several hours plus the research and prep time, neatly illustrating the value of NoticeBored. Customers are welcome, indeed actively encouraged, to customize the materials to suit their circumstances and awareness needs, saving them many hours of time in the process - hopefully freeing them up to work on the awareness activities, such as delivering seminars, interacting face-to-face with their colleagues, explaining and expanding on the content in the specific context of their organizations.

It's a similar story with the FAQ. Using the 2015 version as a starting point, updating it for 2017 was straightforward, for instance replacing a paragraph on an early IoT security incident with a recent example. Job done in about 20 minutes ... and on to the next item on the virtual conveyor belt.

It doesn't work for everything though. I usually start the seminar slide decks from scratch, building up the story of the day. If I'm lucky, I might be able to re-use a few of the original slides, or at least the graphics and notes. Also, newly introduced types/formats of awareness material (such as the word clouds and puzzles) need to be prepared afresh.

Sometimes we re-scope a module, focusing on different angles or blending topics and further complicating matters for ourselves. On the upside, I'm easily bored so new challenges are invigorating, within reason anyway. The month-end delivery deadline can be a millstone.


May 21, 2017

NBlog May 21 - lame email scam

This plopped unceremoniously into my inbox today:

It's hard to imagine anyone falling for such a lame appeal ... but then perhaps the scammer's real aim was to be blogged about, and I've been phooled.

I presume neither "Gilda Ancheta" nor uhn.ca (the University Health Network based in Toronto, Canada, apparently) have anything to do with this email, especially as the reply-to address (not shown above but embedded in the email header) is [somebody]@rcn.com

I've forwarded the message to abuse@rcn.com.  Tag!
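The tell described above - a hidden Reply-To address that doesn't match the visible From address - can be checked programmatically. Here's a minimal sketch using Python's standard-library email parser; the message text, names and addresses below are made up for illustration, not the actual scam email:

```python
# Sketch of the header check described above: compare the visible From:
# address with the hidden Reply-To: address. The raw message here is a
# fabricated example, not the real scam email.
from email import message_from_string
from email.utils import parseaddr

raw = """\
From: Example Sender <someone@example-hospital.ca>
Reply-To: scammer@example.com
Subject: Urgent request

Please respond immediately.
"""

msg = message_from_string(raw)
from_addr = parseaddr(msg["From"])[1]     # -> "someone@example-hospital.ca"
reply_to = parseaddr(msg["Reply-To"])[1]  # -> "scammer@example.com"

# A Reply-To domain that differs from the From domain is a classic scam tell
if reply_to and reply_to.split("@")[-1] != from_addr.split("@")[-1]:
    print(f"Suspicious: replies go to {reply_to}, not {from_addr}")
```

Real-world filtering needs more care (missing headers, display-name tricks, legitimate mailing lists that rewrite Reply-To), but the domain mismatch alone is a useful first flag.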


May 20, 2017

NBlog May 20 - more biometric woes

In the course of a routine eye checkup yesterday, the optician took and showed me high-definition digital images of both my retinas. Fascinating! 

This morning while in the dual-purpose creative thinking + showering cubicle, I idly wondered about the information risks. Could I trust the optician to have properly secured their systems and networks, and to have encrypted my retinal images to prevent unauthorized disclosure? If not, what impact might such disclosure cause, and what are the threats? 

I don't personally use retina-scanning biometric authentication, and I seriously doubt anyone would be desperate enough to steal and use my retinal images to clone my identity (given other much easier ways to commit identity fraud) so I'm not that fussed about it - it's a risk I'm willing to accept, not being entirely paranoid. 

I'm curious about the risk on a wider level though: are opticians and other health professionals adequately securing their systems, networks, apps and data? Do they even appreciate the issue? It's far from a trivial consideration in practice.

The risks would be different for people such as, say, Mr Trump who might actually be using retina or iris images or other biometrics for critically important authentication purposes. I wonder whether the associated biometric data security and privacy controls are any better for such important people, in reality? Do the spooks make the effort to check? What stops someone taking high-res close-up photos of Donald's iris or finger or palmprints, or high quality audio recordings of his voice, or video recordings of his gait and handwriting or typing, or picking up one of his hairs for DNA analysis, perhaps in the guise of the press corps, a doting fan or a close confidante? Inadvertent disclosure is an issue with biometrics, along with the fact that they cannot be changed (short of surgery) ... so the security focus shifts to preventing or at least detecting possible biometric forgeries and replays, taking us right back to the issue of false negatives that I brought up a few short hours ago.


May 19, 2017

NBlog May 19.1 - SHOCK! HORROR! Biometrics not foolproof!

A BBC piece about the fallibility of a bank's voice recognition system annoyed me this evening, with its insinuation that the bank is not just insecure but incompetent.

The twin journalists are either being economical with the truth in order to make a lame story more sensational, or are genuinely naive and unaware of the realities of ANY user authentication system. This is basic security stuff: authentication systems must strike a balance between false negatives and false positives. In any real-world implementation, there are bound to be errors in both directions, so the system needs to be fine-tuned to find the sweet spot between the two, which depends, in part, on whether the outcome of false negatives is better or worse than for false positives. It also depends on the technology, the costs, and the presence of various other, compensating controls which the journalists don't go into - little things such as anti-fraud systems coupled with the threat of fraudsters being prosecuted, and the access controls that lead on from authentication.
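That tuning exercise is easy to illustrate. The sketch below (nothing to do with the bank's actual system - the matcher scores and thresholds are invented for demonstration) shows how raising an acceptance threshold trades false positives for false negatives:

```python
# Illustrative sketch of authentication threshold tuning, with made-up
# matcher scores (higher score = more confident it's the genuine user).

def error_rates(genuine, impostor, threshold):
    """Accept when score >= threshold.
    False negative: a genuine user is rejected.
    False positive: an impostor is accepted."""
    fnr = sum(s < threshold for s in genuine) / len(genuine)
    fpr = sum(s >= threshold for s in impostor) / len(impostor)
    return fnr, fpr

genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # hypothetical genuine-user scores
impostor = [0.7, 0.5, 0.4, 0.3, 0.2]    # hypothetical impostor scores

# Sweeping the threshold shows the trade-off: stricter settings cut
# false positives but lock out more genuine users.
for t in (0.3, 0.5, 0.7):
    fnr, fpr = error_rates(genuine, impostor, t)
    print(f"threshold={t}: false-negative rate={fnr:.0%}, false-positive rate={fpr:.0%}")
```

With these toy numbers, a threshold of 0.3 rejects no genuine users but admits 80% of impostors, while 0.7 admits only 20% of impostors at the cost of rejecting 40% of genuine users - the 'sweet spot' depends on which error costs more.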

Authentication errors or failures are just one of many classes of risks to a bank. The implication that the bank is hopelessly incompetent is, frankly, insulting to the professionals concerned. Does it not occur to the journalists that it's the bank's business since, to a large extent, they carry the costs of fraud, plus the control costs, plus having to deal with the customer aggravation that stronger controls typically cause?  

There is no recognition of the technical capability either: voice recognition may not be cutting-edge, but it is advanced technology, particularly given the crappy audio quality of most phone networks. Now there's an issue worth reporting on!

Trotting out a few carefully selected, doubtless out-of-context and incomplete statements from security experts doesn't help matters either. I bet they are seething too.

This is cheap journalism, well below the standard I've come to expect from Auntie.  It's not fake news, but the thin end of the same wedge.


NBlog May 19 - Insecurity of [sex] Toys

The Insecurity of Things awareness module is gradually taking shape, the staff stream in particular:

I have some ideas in mind for both the management and professional streams too, so the dearth of ticks there is not alarming.

A couple of the IoT security incidents I've come across concern hackers compromising smart sex toys, which creates a conundrum for the awareness program. Do we mention them because they are relevant and eye-opening cases, or do we ignore them because they may be inappropriate for some customers? On balance, I think we will cover them, but delicately and in ways that let customers easily remove or skip them if they are deemed too contentious (politically incorrect) for corporate communications. As with the rest of the awareness content, cutting down or customizing the NoticeBored content is much easier and quicker than preparing it. 


May 18, 2017

NBlog May 18 - racing to rectify an Intel backdoor

A passing security advisory caught my beady eye this morning. It warns about a privilege escalation flaw in Intel's Active Management Technology, Small Business Technology and Intel Standard Manageability hardware subsystem incorporated into some of their CPU chips, ostensibly to facilitate low-level system management.

For convenience, I'll call it AMT.

18 days ago, Intel disclosed a design flaw in AMT that creates a severe vulnerability allowing hackers to gain privileged access to systems using the Intel “Q series” chipset, either locally or through the network depending on the particular technology.

In plain English, hackers and viruses may be able to infect and take control of your Intel-based computer through the Internet. It's similar to the WannaCry ransomware situation, only worse in that they don't need to trick you into opening an infectious email attachment or link: they can just attack your system directly.

The wisdom of allowing low-level privileged system management in this way, through hardware that evidently bypasses normal BIOS and operating system security (i.e. a kind of backdoor), is in question. In corporate environments, I appreciate the need for IT to be able to manage distributed devices, and I guess they sometimes need to handle unresponsive systems where the CPU has locked up for some reason. Fine if the remote access facility employs adequate authentication, and cannot be compromised. Coarse if not.

Anyway, moving on, evidently "Q series" chipsets installed in 2010 or later may be vulnerable. Some PCs from HP, Dell, Lenovo, Fujitsu, Acer, Asus, Panasonic and Intel are affected, plus others such as custom or home-brew systems.

Intel have kindly released a software tool to check the vulnerability of a given system ... which means downloading and installing a program from a company that has admitted to a severe security flaw in its products - a risk in itself that you might like to evaluate before pressing ahead.

If you are willing to take chances, the tool is simple to run, generating a report like this on a vulnerable system:

Intel also released a technical guide on how to mitigate the vulnerability by disabling AMT. If the following acronym-laden paragraph doesn't put you off, it's worth reading the guide:
"Intel highly recommends that the first step in all mitigation paths is to unprovision the Intel manageability SKU to address the network privilege escalation vulnerability. For provisioned systems, unprovisioning must be performed prior to disabling or removing the LMS. Pending availability of the updated Intel manageability SKU firmware, Intel highly recommends mitigation of the local privilege escalation by removing or disabling the LMS."
If that is pure Geek, you'd best contact your IT support, or the company that supplied your PC, or Intel ... but please not me. I'm struggling to understand it myself. What is "CCM" that is evidently not disabled, and should I worry about the running microLMS service?

Gary (Gary@isect.com)

May 17, 2017

NBlog May 17 - peripheral vision

Part of security awareness is situational or contextual awareness - being alert to potential concerns in any given situation or context. At its core, it is a biological capability, an inherent and natural part of being an animal. 

Think of meerkats, for instance, constantly scanning the area for predators and other potential threats.

We humans are adept at it too, particularly in relation to physical safety issues. The weird creepy feeling that makes the hairs stand up on the back of your neck as you wander down a dark alley is the result of your heightened awareness of danger triggering hormonal changes. A rush of adrenaline primes you for the possible fight or flight response. I'm talking here about reflexes acting a level below conscious thought, where speed trumps analysis in decision-making.

When 'something catches your eye', it's often something towards the edge of your visual field: peripheral light receptors coupled with the sophisticated pattern-recognition capability in your visual cortex spot changes such as sudden movement and react in an instant, before your conscious brain has had the chance to figure out what it is. 

The same innate capability is what makes it hard to swat a housefly with your hand. It sees and responds to the incoming hand by springing up and away in milliseconds. [If you use a swatter with a lattice pattern, however, its compound eye and tiny brain get confused over which way to fly - a fatal error!] 

You can probably guess where this is going. Security awareness works at both the conscious and subconscious levels. Short of radical surgery or a few million years of evolution, we can't change our biology ... but we can exploit it.

The conscious part revolves around rational thought - for example knowing that you might be sacked for causing a serious incident, or promoted for preventing one (if only!). We routinely inform, teach, instruct and warn people about stuff, encouraging them to do the right thing, behave sensibly. We hand out leaflets and briefings. We tell them to read and take note of the warning messages about dangerous links and viruses. We make them acknowledge receipt of the security policies, perhaps even test to make sure they have read and understood them. Through NoticeBored, we go a step further, prompting professionals and managers to address the information risks and implement good practice security controls. 

The subconscious part is more subtle. We don't just tell, we show - demonstrating stuff and getting people to practice their responses through exercises. We find interesting angles on stuff, using graphic illustrations and examples to open their eyes to the underlying issues.  We intrigue and motivate them, pointing out the dangers in situations that they would otherwise fail to recognize as such, removing their blinkers. We enhance their peripheral vision, and appeal to their emotions as well as their logical brains.  We make the shiny stuff glint, and make things feel uncomfortable when something isn't quite right. We like creepy. We heat topical infosec issues to make them hot, and chill good stuff to make it cool.

Consider the WannaCry incident: we couldn't predict precisely how, where or when the attack would come, but effective security awareness programs made people sufficiently alert to spot and react to the warning signs in a non-specific way. We're establishing a generalized capability, more than simply knowing about the particular nasty that happens to be ransomware ... or malware or phishing or social engineering or scams or ... whatever. 

The subconscious element is vital. If those hairs stand up when people receive dubious emails, phone calls, requests and other information, we are really getting somewhere. They still need to react appropriately, of course, which is generally a conscious activity such as don't click the link, and do call the help desk.

Gary (Gary@isect.com)

PS  I'm reminded of a standout line in the Faithless song, Reverence: "You don't need eyes to see, you need vision".