Welcome to NBlog, the NoticeBored blog

Like the finer things in life, quality trumps quantity.

Dec 2, 2016

Reflected anger



Friends,

Given my profession, I am of course utterly opposed to spam and dedicated to fighting the scourge, which makes it especially annoying when some noxious spammer uses one of my email addresses as the From: address for their nasty spam.

I usually discover this when assorted email servers send me error messages along the lines of "Sorry we could not deliver your spam".  Those reflected messages are just the tip of the iceberg, though, since I presume many other poor sods received the spam with my email address at the top.  Some of them probably cursed me.

Just in case any of them are reading this, I'd like to confirm that I am most certainly not a spammer.  I share your annoyance but it wasn't my fault!
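For readers wondering how a spammer can put somebody else's address in the From: field at all: the header is simply unauthenticated text that the sender fills in, as this minimal Python sketch illustrates (the addresses are made up). Countermeasures such as SPF, DKIM and DMARC exist precisely because the core email format performs no such check.

```python
from email.message import EmailMessage

# Nothing in the message format itself authenticates the sender:
# the From: header is just a text field the sender chooses.
msg = EmailMessage()
msg["From"] = "victim@example.com"      # forged - any address will do
msg["To"] = "target@example.org"
msg["Subject"] = "Totally legitimate offer"
msg.set_content("Spam body goes here.")

print(msg["From"])  # the forged address, accepted without question
```

When a receiving server later bounces the message, the error goes back to the forged address, which is exactly the reflected 'backscatter' described above.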

Regards,

Dec 1, 2016

NoticeBored lifts the cover on privacy


Privacy, the NoticeBored security awareness topic for December, is a nebulous concept, more complex and involved than you might think, particularly once you accept that it includes concerns such as:

Compliance, obviously enough. Compliance with privacy or data protection laws and regulations was once described by Gartner as 'exceedingly complex', making it a significant challenge, especially for multinational organizations plus web-based companies and others with customers, suppliers and business contacts around the world. Workers' noncompliance with corporate privacy policies and procedures is another potential nightmare for management (with an obvious need for awareness - at least it is glaringly obvious to us!), while contractual clauses concerning privacy and/or information security are hopefully not just put there to keep the lawyers occupied. Privacy is a substantial concern with professional services (such as outsourced HR or payroll) and cloud-computing services, particularly where personal data may be stored and processed in arbitrary global data center locations at the whim of the cloud infrastructure and load management systems. As if that's not enough already, laws, regulations, attitudes and practices in this area are constantly in flux. The EU General Data Protection Regulation (GDPR) and US-EU Privacy Shield are blazing hot topics right now, while we may be just moments away from breaking news on yet another massive privacy breach.

Human rights such as Article 8 of the EU Charter of Fundamental Rights:

Article 8
Protection of personal data
  1. Everyone has the right to the protection of personal data concerning him or her.
  2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.
  3. Compliance with these rules shall be subject to control by an independent authority.


Personal space and safety with a biological, evolutionary basis in territoriality (e.g. wild animals actively defend their home range to secure a food supply and maintain a safe distance from threats, including others of their own species).

Personal choice includes maintaining control over information about us, especially things that we consider intensely personal and – yes – private. We all want to be able to determine how much or how little we reveal about ourselves or keep secret, plus how we reveal things (which brings up the context) and to whom. The excitement of playing ‘truth or dare’ stems from choosing to disclose private matters among friends, whereas the prospect of being forcibly injected with a ‘truth serum’ is scary.

Trust and ethics: when we disclose our personal information to another person or an organization, we implicitly expect and perhaps explicitly require them to take due care of it and protect our interests. We have little option but to trust them to do so, which raises issues of trustworthiness, assurance and ethics. There are things we’d reveal to our doctor or partner that we’d be extremely reluctant to disclose to others.

Cultural norms such as differing attitudes towards public nudity, shows of affection and sexuality, both between and within various nations, societies and groups.

Last but not least, information risk and security. For example, there is a marked distinction between us willingly offering our personal information to someone, and their stealing or illicitly capturing it, perhaps without our knowledge or consent.

Taking such a broad perspective on the topic lets us focus on the aspects of interest, concern and relevance for each of the main awareness audiences:
  • For the general employee/staff audience, the materials emphasize protecting personal information they may be handling at work. Persuading workers to treat personal data on customers, fellow employees etc. as if it were their own is a simple way to press home the need to take privacy obligations seriously. What to do if a worker spots or is informed about a privacy breach or other incident is another issue;
  • For management, compliance, governance, strategies and policies are clearly relevant - for example the organization's preparedness for GDPR and Privacy Shield is a strategic matter for the business, particularly if the decision is made to seize the opportunity to align privacy with information risk management, compliance, business continuity etc. using an ISO27k Information Security Management System;
  • For professionals and specialists, there are technology and compliance factors in relation to privacy, including practical challenges such as changing IT systems, websites, forms and business processes to bring them into compliance, encrypting portable devices and more.
Regards,

Nov 14, 2016

Infosec awareness lessons from NZ quakes


A big earthquake at midnight last night at the northern end of New Zealand's South Island was a major incident with various implications for incident/disaster management. I'd like to pick up on a few security awareness aspects while the incident is fresh in my mind and still playing out on the NZ media as I write this.
  1. There is a lot of effort put into preparedness for such events, across the whole country. For instance, the central safety message "Drop, cover, hold" is simple, widely repeated and used consistently in a variety of media and situations. Even the iconic images and colours (lots of black-and-yellow, warning colours with a strong biological basis) are consistent. Schools run classroom teaching on it. Websites and public safety demonstrations repeat it, frequently. There are flyers and leaflets, plus local, regional and national exercises to practice the actions, with extensive media coverage. "Get ready, get thru" is a strong theme. Full marks!  [I have a slight concern about tourists and visitors new to NZ: how are they informed? I appreciate the mixed messages in "Welcome to NZ. Learn how to survive your trip ..." but public safety must be a high or top priority, surely?].
  2. The preparedness includes an extensive monitoring infrastructure to identify and analyze quakes in near-real-time. The location and severity of the quakes was known in short order, triggering various warnings and analyses that have continued throughout the day. However, there was no pre-warning: notice the flat line on the seismometer image above, prior to the main event. Also, the geology is complex, so early news was uncertain and confusing. [I'm not sure it helped, in fact, other than to know that the scientists are busy examining the evidence. Some filtering and coordination of the messages would be good.]
  3. The preparedness also includes a variety of disaster communications arrangements, using multiple media and mechanisms both for broadcasting and for person-to-person comms between the authorities, emergency services, geophysical experts, MPs etc. The awareness message "Text don't call" is widely repeated (albeit without really explaining why). The information flowing today through the news media has been impressive in terms of both the volume and clarity. As reported by RNZ and Checkpoint, 'Christchurch Mayor Lianne Dalziel tells John Campbell people most affected by the earthquake want information. “It’s an absolute necessity to be completely open with people,” she says.' [Trustworthy official information about an incident just passed or still in progress, confidently expressed by the People In Charge, helps put minds at rest. Simply knowing that the authorities, agencies, utilities and emergency services are fully engaged in dealing with the situation is very calming, compared to either not being told, or worse still strongly suspecting that the response is inadequate. It's an assurance issue.]
  4. Communications in the immediate area of the quake were not so robust. Failure of landlines and cellphones, coupled with road and rail blockages, made it difficult to establish the situation and coordinate the response. While the telcos are fixing things, portable emergency radios were flown into the area by the military. Meanwhile, some people were unreachable (causing obvious concern for their families and friends) and it was difficult for the emergency services to assess and respond to the situation. [Lessons here concerning the need for emergency shortwave and satellite radios, I think, plus more generator backups for cell sites, and perhaps a tech facility to pass priority messages preferentially (if it isn't already in place). Also, on a personal note, we need to nominate a few contacts that we can inform following an incident so friends and family can confirm we're OK without going frantic.]
  5. The civil defence and emergency services are well planned, coordinated and practiced e.g. tsunami experts have been meeting every 30 minutes from an hour after the midnight quake, providing a remarkably consistent if cautious series of tsunami warnings. [Excessive caution is a concern: beyond some point, people who come to see the warnings as crying wolf tend to ignore them, perhaps placing themselves in danger. The frequency and nature of warnings is a delicate balancing act. Some adjustment is called-for, I think, although I appreciate that an onshore quake gives little to no time to issue tsunami warnings.]
  6. The preparedness extends to a nation-wide resilience, a cultural aspect. People are genuinely concerned for each other and willing - in fact keen - to help out. The news reporting naturally and genuinely emphasizes the human angles as well as factually describing the situation. Today we've heard from farmers worried about damage to their stock water supplies and milking sheds, civil defence and insurance people talking about what to do now, and MPs talking about their families - a broad spectrum. We are still getting occasional stories about people patiently waiting for their quake-damaged Christchurch properties and services to be repaired, and there is genuine concern about the traumatic effects of the latest quake and aftershocks on survivors of the Christchurch quake in 2011.
  7. The period shortly after the incident, while everybody is still thinking and talking about it, is an opportune time for further awareness messages, intermingling warnings and preparedness messages (such as "A good time to check emergency kits this evening as aftershocks continue to roll on.") with news of the event. [Personally, I think more could be done on this. If your organization suffered a major privacy breach, ransomware attack, hack or whatever, would you be in a position to blend-in related awareness messages with your planned incident/disaster comms, or would resources be stretched to breaking point already? If so, could you draft in additional helpers?]
  8. This was not a single point event: aftershocks are continuing (roughly every 3 minutes for the first few hours) and may continue for months or years yet. A small tidal wave of water on a river near Kaikoura this afternoon (released when a blockage cleared) was hot news a few minutes ago. There's also bad weather on the way, placing even more urgency on the emergency responses in the epicenter region since choppers may soon be grounded. [Infosec incidents also drag on and on, especially the big ones that hit the public news media. Managing the incident and the associated comms is therefore an issue well beyond the immediate period and aftermath.]
Regards,
Gary (Gary@isect.com)

PS  Even Google is playing its part.  I've just noticed the red message at the top of a query I did to find links for this very blog piece.  Good work Google!




Nov 7, 2016

Exploiting the privacy-infosec overlaps

We're working hard on the next NoticeBored awareness module concerning privacy; in particular, we're exploring the changes coming with GDPR (the EU General Data Protection Regulation).

Two concepts from Article 25 of the GDPR caught my beady eye this afternoon:

  • Privacy by design is the idea that privacy should be an integral or inherent part of the design of a new system, service or process, from the outset (as opposed to being tacked-on later, with all the compromises and drawbacks that normally entails); and
  • Privacy by default - where there are options or alternative paths, the ones offering the greatest privacy should be selected automatically unless the user or data subject explicitly chooses otherwise.  
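To make 'privacy by default' concrete, here is a minimal, hypothetical sketch (the settings class and field names are made up for illustration): every optional disclosure starts in its most private state, and only an explicit, recorded user action relaxes it.

```python
from dataclasses import dataclass

# Hypothetical account settings illustrating 'privacy by default':
# each optional disclosure defaults to its most private state.
@dataclass
class AccountSettings:
    profile_public: bool = False        # default: profile hidden
    share_with_partners: bool = False   # default: no onward sharing
    analytics_tracking: bool = False    # default: no tracking

settings = AccountSettings()            # a brand-new account
print(settings.profile_public)          # False until the user opts in

settings.share_with_partners = True     # an explicit user choice
```

The point is architectural rather than syntactic: whichever language or configuration system is used, the privacy-preserving option should be the one you get when nobody chooses anything.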

It occurs to me that conceptually those are not a million miles from 'secure by design' and 'secure by default', two strategic approaches with substantial benefits for information security as a whole, including most of privacy ... which hints at the intriguing possibility of using the forthcoming GDPR implementation to drive improvements to both privacy and information security.

Several other obligations in GDPR are also directly relevant to information security, such as the ability for organizations to demonstrate or prove their compliance (implying an assurance element) and to ensure that all employees are aware of the privacy obligations.  In my opinion, privacy, information risk and security, and compliance substantially overlap as illustrated by the scope blobs above: the overlaps are not complete but the parts of privacy that do not involve information risk and security (e.g. 'personal space' and a person's right to determine how their personal information is used and disclosed), while important, are relatively minor.

Regards,

Oct 28, 2016

Which comes first, the awareness budget ... or awareness?


If your annual cycle matches the calendar year, you’re probably working hard on a 2017 budget proposal to secure the funding for all you need to do on information security, cybersecurity, information risk management, compliance, business continuity and so on - doing it yourself or maybe helping the boss.

Is security awareness and training part of the plan, hopefully not just a single line item but an integral part of virtually everything you are proposing to do?  If not, don't be surprised if, once again, you struggle to make much headway in information security in 2017. Security awareness is not just an optional extra but an essential prerequisite for success ... and the magic starts with senior management having sufficient knowledge, understanding and appreciation of what information security is all about to approve an adequate budget.

With that in mind, do you see the conundrum? Awareness is needed to gain funding for ... awareness and the rest of information security. How is that possible?

Here are five possible routes out of the paradox:
  1. Do nothing - the straw man option. Hope that your budget proposal is so self evident, so stunningly elegant and convincing that management is right behind you all the way. Good luck with that.
  2. Rely on management's general awareness and appreciation of information risk, security and related matters, since they are business people and these are business issues, right? Errr, maybe, but if you're actually talking about IT security or cybersecurity exclusively, you should not be surprised to find that management believes this to be an IT issue, making it part of the IT budget, meaning that you are hostage to whatever is planned for IT spend. Good luck again squeezing some cash out of the CIO and the IT organization that has its own investment objectives and plans that may or may not truly encompass yours. Worse still, do you see the gaping hole that has opened up? What about all the rest of information risk and security that does NOT fall within IT or cybersecurity - including, yes, you guessed it, full-scope security awareness and training?
  3. Hope that previous awareness activities have achieved their aim, and that management is fully clued-up on this stuff. Perhaps you honestly believe it but, being a cynic, excuse me if I'm more than a little dubious. What makes you so certain that management gets it? Have you already been running an effective awareness program addressing the senior managers making those big budget decisions in strategic business terms that make sense (and if so, how was it funded)? Or will you concede that this is just another cop-out?
  4. Just do it, in other words run the corporate security awareness and training activities on a shoestring, eking out whatever funding you can beg, borrow or steal from other areas. Squeeze something out of IT, a bit more out of HR or training budgets. Code it as "risk management" or "compliance". Do the whole thing on the cheap, and yet overspend (then seek forgiveness). This is a surprisingly common approach in practice but it doesn't take a rocket scientist to spot the flaws and the missed opportunities. 'Just do it' implies a piecemeal, tactical approach with little forward planning or consistency throughout the year. You're unlikely to be able to employ an awareness specialist, but maybe you'll cross-charge an IT project for some awareness activities, perhaps stump up for the odd few hours of someone's time to prepare some materials, prioritizing that over funding someone's attendance at a professional security training course, conference or whatever - or not, as the case may be. 'Just do it' programs give security awareness, and information security, a bad name. We can do better than that, much better.
  5. Quickly plan and deliver security awareness activities specifically targeting senior management - in person - right now. The budget is a classic situation that benefits enormously from leg-work: some quality time spent one-on-one with senior managers, explaining and discussing your proposal over coffee, through email, on the phone or even snatched moments sharing the elevator to the exec suite, patiently listening to their queries and suggestions, and addressing their concerns, will pay off handsomely in due course when the budget proposals are duly considered. Start by thinking and talking seriously about how information security supports the achievement of business objectives. Look carefully at the corporate strategies and policies in this area for the security hooks. Go beyond the obvious compliance imperatives to find business opportunities that revolve around both protecting and exploiting information - BYOD, cloud and IoT security for three very topical if IT-centric examples. Find someone in Finance to explain the budgeting and forecasting process and help you craft an even better budget proposal, with clear objectives and measurable targets (yes, security metrics). Get the CIO, CLO, CRO and other key players on-board, for sure, and preferably others too. Identify the blockers and dig your secret tunnels under them. Build alliances and collaborate to accumulate. Sell sell sell.

By the way, option 5 is what your more politically-savvy 'competitors' in the budget race will be doing too. It's all part of the game, whether you call it awareness or schmoozing or persuading, even social engineering. For bonus points, find out what works for them and emulate the most successful ones. Why is it that Ed from Engineering always gets his way at budget time? What makes Engineering so special? Is it, perhaps, the way Ed puts it across as much as the literal words in the proposal ...?

Oh and keep notes for next year's budget rounds in the hope of making an earlier start and a better impression!

Regards,

PS  In the unlikely event that you find yourself long on funds and short of time, we can help you spend whatever's left in your 2016 information security/awareness and training budgets to avoid the dreadful shame of handing it back ... with the risk of a corresponding budget cut next year. Seriously, let's talk.

Oct 22, 2016

A little something for the weekend, sir?


The following bullet-points were inspired by another stimulating thread on the ISO27k Forum, this one stemming from a discussion about whether or not people qualify as "information assets", hence ought to be included in the information asset inventory and information risk management activities of an ISO27k ISMS. It's a crude list of people-related information risks:

  • Phishing, spear-phishing and whaling, and other social engineering attacks targeting trusted and privileged insiders;
  • ‘Insider threats’ of all sorts – bad apples on the payroll or at least on the premises, people who exploit information gained at work, and other opportunities, for personal or other reasons to the detriment of the organization;
  • ‘Victims’ – workers who are weak, withdrawn and easily (mis)led or coerced and exploited by other workers or outsiders;
  • Reliance on and loss of key people (especially “knowledge workers”, creatives and lynch-pins such as founders and execs) through various causes (resignation/retirement, accidents, sickness and disease, poaching by competitors, demotivation, redundancy, the sack, whatever);
  • Fraud, misappropriation etc., including malicious collaboration between groups of people (breaking divisions of responsibility);
  • Insufficient creativity, motivation, dynamism and buzz relative to competitors including start-ups (important for online businesses);
  • Excessive stress, fragility and lack of resilience, with people, teams, business units and organizations operating “on a knife edge”, suboptimally and at times irrationally;
  • Misinformation, propaganda etc. used to mislead and manipulate workers into behaving inappropriately, making bad decisions etc.;
  • Conservatism and (unreasonable) resistance to change, including stubbornness, political interference, lack of vision/foresight, unwillingness to learn and improve, and excessive/inappropriate risk-aversion;
  • Conversely, gung-ho attitudes, lack of stability, inability to focus and complete important things, lack of strategic thinking and planning, short-term-ism and excessive risk-taking;
  • Bad/unethical/oppressive/coercive/aggressive/dysfunctional corporate cultures, usually where the tone from the top is off-key;
  • Political players, Machiavellian types with secret agendas who scheme and manipulate systems and people to their personal advantage and engage in turf wars, regardless of the organization as a whole or other people;
  • Incompetence, ignorance, laziness, misguidedness and the like – people not earning their keep, including those who assume false identities, fabricate qualifications and conceal criminality etc., and incompetent managers making bad decisions;
  • Moles, sleepers, plants, industrial spies – people deliberately placed within the organization by an adversary for various nefarious purposes, or insiders ‘turned’ through bribery, coercion, radical idealism or whatever;
  • People whose personal objectives and values do not align with corporate objectives and values, especially if they are diametrically opposed;
  • Workers with “personal problems” including addictions, debts, mental illness, relationship issues and other interests or pressures besides work;
  • Other ‘outsider threats’ including, these days, the offensive exploitation of social media and social networks to malign, manipulate or blackmail an organization.

It's just a brain-dump, a creative outpouring with minimal structure. Some of the risks overlap and could probably be combined (e.g. there are several risks associated with the corporate culture) and the wording is a bit cryptic or ambiguous in places. I'm quite sure I've missed some. Maybe one day I will return to update and sort it out. Meanwhile, I'm publishing it here in its rough and ready form to inspire you, dear blog reader, to contemplate your organization's people-related information risks this weekend, and maybe post a comment below with your thoughts.

For the record, I believe it is worthwhile counting workers as information assets and explicitly addressing the associated information risks such as those listed above. You may or may not agree - your choice - but if you don't, that's maybe another people-related risk to add to my list: "Naivete, unawareness, potentially unrealistic or dismissive attitudes and unfounded confidence in the organization's capability to address information risks relating to people"!

Have a good weekend,

Oct 13, 2016

There must be 50 ways ...

Over on the ISO27k Forum today, a discussion on terminology such as 'virus', 'malware', 'antivirus', 'advanced threat prevention' and 'cyber' took an unexpected turn into the realm of security control failures.

Inspired by a tangential comment from Anton Aylward, I've been thinking about the variety of ways that controls can fail:
  1. To detect, prevent, respond to and/or mitigate incidents, attacks or indeed failures elsewhere (a very broad overarching category!);
  2. To address the identified risks at all, or adequately (antimalware is generally failing us);
  3. To be considered, or at least taken seriously (a very common failing I'm sure - e.g. physical and procedural control options are often neglected, disregarded or denigrated by the IT 'cybersecurity' techno crowd);
  4. To do their thing cost-effectively, without unduly affecting achievement of the organization's many other objectives ("Please change your password again, only this time choose a unique, memorable, 32 character non-word with an upside-down purple pictogram in position 22 and something fishy towards the end, while placing your right thumb on the blood sampling pricker for your DNA fingerprint to be revalidated");
  5. To comply with formally stated requirements and obligations, and/or with implied or assumed requirements and expectations (e.g. 'privacy' is more than the seven principles);
  6. Prior to service (flawed in design or development), in service (while being used, maintained, updated, managed and changed, even while being tested) or when retired from service (e.g. if they are so poorly designed, so tricky to use/manage or inadequately documented that they are deprecated, even though a little extra attention and investment might have made all the difference, and especially if not being replaced by something better);
  7. As a result of direct, malicious action against the controls themselves (e.g. DDoS attacks intended to overwhelm network defenses and distract the analysts, enabling other attacks to slip past, and many kinds of fraud);
  8. When deliberately or accidentally taken out of service for some more or less legitimate reason;
  9. When forgotten, when inconvenient, or when nobody's watching (!);
  10. As an accidental, unintentional and often unrecognized side-effect of other things (e.g. changes elsewhere that negate something vital or bypass/undermine the controls);
  11. Due to natural causes (bad weather, bad air, bad hair - many of us have bad hair days!);
  12. At the worst possible moment, or not;
  13. Due to accidents (inherently weak or fragile controls are more likely to break/fall apart or be broken);
  14. To respond adequately to material changes in the nature of the threats, vulnerabilities and/or business impacts that have occurred since the risk identification/analysis and their design (e.g. new modes or tools for attack, different, more determined and competent attackers, previously unrecognized bugs and flaws, better control options ...);
  15. Due to human errors, mistakes, carelessness, ignorance, misguided action, efforts or guidance/advice etc. (another broad category);
  16. Gradually (obsolescence, 'wearing out', performance/capacity degradation);
  17. Individually or as a set or sequence (like dominoes);
  18. Due to being neglected, ignored and generally unloved (they wither away like aging popstars);
  19. Suddenly and/or unexpectedly (independent characteristics!);
  20. By design or intent (e.g. fundamentally flawed crypto 'approved' by government agencies for non-government and foreign use);
  21. Hard or soft, open or closed, secure or insecure, private or public;
  22. Partially or completely;
  23. Temporarily or permanently (just the once, sporadically, randomly, intermittently, occasionally, repeatedly, frequently, 'all the time' or forever);
  24. Obviously, sometimes catastrophically or spectacularly so when major incidents occur ... but sometimes ...
  25. Silently without the failure even being noticed, at least not immediately.

That's clearly quite a diverse list and, despite its length, I'm afraid it's not complete! 

The last bullet - silent or unrecognized control failures - I find particularly fascinating. It seems to me critical information risks are usually mitigated with critical information security controls, hence any failures of those controls (any from that   l o n g   list above) are also critical.  Generally speaking, we put extra effort into understanding such risks, designing/selecting what we believe to be strong controls, implementing and testing them carefully, thoroughly etc., but strangely we often appear to lose interest at the point of implementation when something else shiny catches our beady eye. The operational monitoring of critical controls is quite often weak to nonexistent (perhaps the occasional control test). 
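The kind of operational monitoring this paragraph finds lacking can be surprisingly simple. Here is a hedged, hypothetical sketch (the control names and threshold are made up): each critical control reports a periodic heartbeat after its self-check, and any control that falls silent is flagged rather than quietly assumed to be working.

```python
# Hypothetical heartbeat register for critical controls: a control
# that stops reporting is flagged instead of silently trusted.
MAX_SILENCE_SECONDS = 300  # alert if no heartbeat for 5 minutes

last_seen = {}  # control name -> timestamp of last successful check-in

def heartbeat(control: str, now: float) -> None:
    """Record that a control just completed a successful self-check."""
    last_seen[control] = now

def silent_controls(now: float) -> list[str]:
    """Return controls that have not reported within the threshold."""
    return [c for c, t in last_seen.items()
            if now - t > MAX_SILENCE_SECONDS]

heartbeat("backup-verification", now=1000.0)
heartbeat("antimalware-updates", now=1200.0)

# At t=1400 the backup check has been silent for 400 seconds - flag it.
print(silent_controls(now=1400.0))  # ['backup-verification']
```

The design point is to turn a silent failure (bullet 25) into an obvious one (bullet 24): absence of good news becomes bad news.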

I would argue, for instance, that some security metrics qualify as critical controls, controls that can fail just like any other. How often do we bother to evaluate and rank our metrics according to criticality, let alone explicitly design them for resilience to reduce the failure risks?

I appreciate I'm generalizing here: some critical controls and metrics are intensely monitored. It's the more prevalent fire-and-forget kind that worries me, especially if nobody had the foresight to design-in failure checks, warning signs and the corresponding response procedures, whether as part of critical controls or more generally as a security management and contingency approach.

Good luck finding any of this in ISO27k, by the way, or indeed in other information security standards and advisories. There are a few vague hints here and there, a few hooks that could perhaps be interpreted along these lines if the reader was so inclined, but hardly anything concrete or explicit ... which itself qualifies as a control failure, I reckon!  It's a blind-spot.

Regards,

PS  There's a germ of an idea here for a journal article, perhaps even a suggestion to SC 27 for inclusion in the ISO27k standards, although the structure clearly needs attention. More thought required. Comments very welcome. Can we maybe think up another 25 bullets in homage to Paul Simon's "Fifty ways to leave your lover"?

Oct 8, 2016

Marketing or social engineering?

Electronics supplier RS Online sent me an unsolicited promotional mailing in the post this week, consisting of a slimline USB stick mounted in a professionally printed cut-out card:




Well, it looks like something from RS' marketing machine.  It has their branding, images of the kinds of tools they sell and a printed URL to the RS website.  But the envelope has been modified ...


The printed sticker stamp top right has been crudely redacted with a black marker pen plus two further sticky labels, and 'postage paid' has been printed lower left, allegedly by the Hong Kong post office.  [I put the blue rectangle over my address.]

A week ago, we released a security awareness module on human factors in information security, including social engineering. Among other things, we discussed the risk of malware distributed on infectious USB sticks, and modified USB hardware that zaps the computer's USB port. The notes to a slide in the awareness seminar for management said this:
What would YOU do if you found a USB stick in your mailbox (at home or at work), or in the street, in the parking lot, in a corridor or sitting on your desk? 
In tests, roughly 50% of people plug found USB sticks into their computers.  A few of them may not care about the security risks (such as virus infections or physical damage that can be caused by rogue USB sticks), but most probably don’t even think about it – security doesn’t even occur to them. Maybe they simply don’t know that USB sticks can be dangerous.
Providing information about the dangers is straightforward: we can (and do!) tell people about this stuff through the awareness program.  But convincing them to take the risks seriously and behave more responsibly and securely is a different matter.  The awareness program needs to motivate as well as inform.  
The accompanying management briefing paper said:


It is possible that the USB stick carries malware, whether it truly originates from RS Online's marketing department in Hong Kong, or was intercepted and infected en route to me, or is a total fabrication, a fake made to look like a fancy piece of marketing collateral. I didn't request it from RS; in fact I've done no business with them for ages. The risk of loading the USB stick may be small ... but the benefit of being marketed-at is even smaller, negligible or even negative, so on balance it will be put through the office shredder. It's a risk I'm happy to avoid.

Regards,
Gary (Gary@isect.com)

PS  The title of this piece is ironic.  Marketing IS social engineering.

Oct 2, 2016

People protecting people ... against people

We've just delivered the next block of security awareness materials to NoticeBored subscribers, some 210 MB of freshly-minted MS Office content on the human side of information security.

The module covers the dual aspects of people-as-threats and people-as-controls. It's all about people.

The threats in this domain include social engineers, phishers, scammers and fraudsters, while controls include security awareness and training, effective incident response procedures and various other manual and administrative activities, supplemented with a few specific cybersecurity or technical and physical controls.

Whereas the awareness program has covered phishing and spear-phishing several times before, our research led us to emphasize "whaling" this time around. Whalers use social engineering techniques to dupe financial controllers and procurement professionals into making massive multi-million-dollar payments from fat corporate bank accounts into the criminals' money laundering machinery, where it promptly disappears as if by magic. Not the entertaining kind of stage-show magic, where the lady we've just seen being sawn in half emerges totally unscathed from the box; more the distinctly sinister kind of black magic involving chicken body parts and copious blood.

In comparison to ordinary phishing, whaling attacks capture fewer but much bigger phish for a comparable amount of investment, effort and risk by the fraudsters. We are convinced it is a growing trend. Luckily, there are practical things that security-conscious organizations can do to reduce the risk, with strong security awareness being top of the list. As with all forms of information security, we accept that widespread security awareness (a 'security culture') is an imperfect control but it sure beats the alternative. What's more, awareness is much more cost-effective than most technological controls, especially in respect of social engineering and fraud. Artificial intelligence systems capable of spotting and responding to incidents in progress are under development or in the fairly early stages of adoption by those few organizations which can afford the technology, the support and the risks that inevitably accompany such complex, cutting-edge systems. In time, the technology will advance, and so will the threat. Security awareness will remain an essential complement, whatever happens. 

If building your security culture is something you'd love to do, if only you had the time and skills to do it, get in touch. Our people are keen to help your people.

Regards,

Sep 28, 2016

ISO27k Conference, San Francisco

I'm at the 27k Summit for the Americas ISO27k conference at the South San Francisco Conference Center near San Francisco airport this week, hoping to meet you!

The conference has several parallel themes and streams, including:
  • Getting started with ISO27k - for people who want to get into this stuff
  • Metrics - for people who need to measure and improve this stuff
  • Cloud security and IoT - hot topics
  • Compliance - a meta-theme since laws, regs and standards compliance is a strong driver for all the above
If I have time I'll update this post with info as the conference proceeds ....
  • Jim Reavis from the Cloud Security Alliance gave a keynote about the proliferating cloud and IoT systems, globally expanding. CSA's CCM compliance/controls mapping is well worth looking at, while the CSA STAR program is a popular certification scheme for cloud providers.
  • Dan Timko from Cirrity explained the ISO27k ISMS implementation and certification process, including the pre-certification, followed a few months later by the stage 1 audit and, just 5 weeks after that, the 'real' stage 2 certification audit. Most of the implementation effort went into documentation - documenting their policies and existing processes. For example, informal meetings 'didn't happen' if there was no record to prove it to the auditors, so meeting minutes etc. are much more common now.
  • Richard Wilshire from Zygma gave a brief introduction to the forthcoming thoroughly revised version of ISO/IEC 27004 on metrics (called 'measures' or 'measurements' in ISO-speak: 'metrics' is a forbidden word!) supporting the ISMS specified in ISO/IEC 27001. He covered the basic questions about metrics e.g. why measure (for accountability and to drive ISMS performance in the right direction, and for compliance with 27001 clause 9.1 of course), what to measure (mostly the status of systems, controls and processes), when to measure (periodic data generation, analysis and reporting, plus ad hoc or event-driven metrics with analysis and reporting triggered by events or threshold values), who measures (several part-time roles suggested in the standard). The new version of 27004 should hopefully fall off the ISO glacier some time next year.
  • Walt Williams from Lattice explained about developing metrics for business needs, not just for ISO27k compliance reasons. Setting goals helps e.g. a commonplace goal such as having zero privacy incidents directly suggests a simple metric. Reviewing goals and metrics drives improvement in your metrics.
  • Gary Hinson from IsecT (me!) spoke about using GQM and PRAGMATIC to select/design, improve and get value from security metrics, in the ISO27k context, meaning information security for business sake. It seems to me that 'security metrics' are too often based around the availability of data generated automatically by technical security controls such as antivirus systems and firewalls, with little obvious relevance to the business. Tech-driven security metrics are not valued by general managers, whereas business-driven security metrics are right on-topic.
  • Michael Fuller from Coalfire talked about ISO/IEC 27018, a standard about adapting/using the controls from ISO/IEC 27002 to ensure privacy for public cloud services. CSA STAR got another mention as a structured way of not just putting appropriate controls in place, but in an assured/certifiable form (with 3 levels, the lowest of which I believe is 'just' an assertion of compliance).
  • Jorge Lozano from PwC addressed the design of metrics concerning performance of an ISO27k ISMS, based on the measurement requirement specified in 27001 and the metrics advice in 27004. He outlined a few example metrics similar to those appended to 27004, in a tabular format describing, on one screen per metric, its purpose, the way it is measured, and defined objectives or goals (target values and timescales). He then showed how the example metrics might be reported. Jorge recommended using risk-driven metrics because management understands risk. [I would argue that metrics should be business driven for the same reason, but in practice these are similar and complementary approaches.]
  • Sumit Kalra from bpmcpa spoke about using ISO27k for compliance with multiple obligations, from the perspective of a compliance auditor. Sumit argued that all today's [information security related] compliance requirements are fundamentally the same, with relatively minor differences in the details but 'a structured approach' in common, hence it doesn't particularly matter which way you approach the process.
  • Amit Sharma from Amazon Web Services briefly introduced AWS but mostly spoke about AWS security. Issues include: visibility (clouds are, well, cloudy!); security controls (e.g. customers should use data encryption, AWS or customers can manage private keys); auditability and monitoring (of manual and automated activities behind the scenes, and security status); tech complexity and 'polymorphism' (ongoing infrastructural changes are challenging for customers, especially for agile e.g. DevOps companies making frequent releases); compliance and regulatory interest (e.g. ISO/IEC 27001, PCI, HIPAA & other certifications); planning and coordinating stuff involves collaboration between multiple teams and takes time and management. Customers who don't use all the automated tools for reprovisioning etc. but do stuff manually can cause problems for AWS [they lose some control - the struggle between AWS and customers to control the IT environment resembles that between traditional IT departments and 'the business']. Standardization helps (e.g. sensible defaults, templates) plus automation.
  • David Cannon from CertTest spoke about a cookie-cutter approach to quickly rolling out secure platforms and apps, which he called "an ISMS" with a very narrow scope (the narrower the better, it seems ... if your goal is to hoodwink management or business buyers, that is).
  • Alan Calder from IT Governance spoke on using ISO27k for GDPR and NIS compliance i.e. privacy/data protection (for the EU, including service providers serving EU clients) coming into effect in May 2018. Alan gave a good background briefing about how the EU as a whole governs privacy for EU citizens, and on the forthcoming regs ... with citizen rights, compensation and fines up to 20 million Euros or 4% of global turnover (!), and fundamental privacy principles (as opposed to mandating explicit controls and tick-box compliance). The principles include informed consent, data protection, the right to be forgotten, data breach reporting within 72 hours etc. Alan mentioned the lapsed Safe Harbor and forthcoming Privacy Shield agreements between the EU and USA.
  • Rob Giffin from Avalution Consulting presented on business continuity, using ISO 22301 (and other standards in the series) along with other management systems including an ISO27k ISMS (the synergies make collaboration mutually beneficial). The implementation activities are similar e.g. clarify the goals of BCM (in business activity terms), the scope, the resources, the business contacts, the plans, the support tools ... 
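A footnote to my own talk above: the PRAGMATIC approach scores each candidate metric against nine criteria and averages the ratings. Here's a minimal sketch of that scoring step, with illustrative ratings of my own invention (a real assessment would of course involve debate over each rating, not just arithmetic):

```python
# Hypothetical sketch: scoring a candidate security metric with PRAGMATIC.
# Each metric is rated 0-100 on nine criteria; the overall score is the
# simple mean of the nine ratings, letting candidate metrics be ranked.
CRITERIA = ["Predictiveness", "Relevance", "Actionability", "Genuineness",
            "Meaningfulness", "Accuracy", "Timeliness", "Independence", "Cost"]

def pragmatic_score(ratings: dict) -> float:
    """Mean of the nine criterion ratings (each 0-100)."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: a tech-driven metric tends to rate well on Accuracy and Cost
# but poorly on Relevance and Meaningfulness to general management.
firewall_rule_count = dict(zip(CRITERIA, [20, 30, 40, 70, 25, 80, 60, 50, 90]))
print(round(pragmatic_score(firewall_rule_count), 1))  # mean of the ratings
```

Comparing scores across a shortlist of candidate metrics is where the method earns its keep: business-driven metrics generally outscore data-driven ones on the criteria that matter to management.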
Overall, the conference was a melting pot for ISO27k-related topics and professionals in the field, both greybeards and newbies. It was good to see so much interest in the standards, and so much free exchange of information. As with other conferences, the presentations were valuable and so were the off-line discussions and contacts with peers and friends, old and new.

Regards,