Welcome to NBlog, the NoticeBored blog

I may meander but I'm 'exploring', not lost

Dec 31, 2015

Information risk and security tools

We've just completed and delivered a brand new NoticeBored awareness module for January 2016 concerning the tools supporting information risk and security:


Scope of the awareness module



There are literally thousands of tools in the information risk and security space. One of the more technical awareness papers in the module discusses some 68 types of tool - that's not merely 68 individual products but 68 categories, each containing numerous tools. We could have kept going but 12 pages was more than enough for a 'briefing'!
In scoping, researching and preparing to write the module, we faced up to the possibility that the awareness materials might inadvertently spark an interest in the dark side among our customers' workforces. Many of the sexiest tools in the toolbox could be classed as dual-use weapons technology, valuable for good and evil purposes. In fact, many of them owe their very existence to the crucible of creativity and passion that is hacking. Our response was to be open about the concern, and suggest a means of keeping the lid on it through a policy to control security tools - a governance tool that is.
'Tools' is the 58th topic in our bulging awareness portfolio. It is gradually becoming harder year-by-year to find new angles on information risk and security but we're certainly not done yet! We routinely scan the crystal-ball-gazers' pontifications at this time of year, looking for hints on what might be galloping over the horizon towards us. Looking back at the year just gone, we picked up on information risk, the Internet of Things and cybersecurity for the first time, as well as updating the content on another nine awareness modules. Keeping up with constant developments in the field is what keeps me going, stops me getting bored stiff.
What about you? What excites or indeed scares you about working in this field? What do you see in the way of emerging threats, new challenges and novel approaches as we nudge over into 2016? What's keeping you awake at nights?

Happy new year!
Gary (Gary@isect.com)

Dec 24, 2015

Air Canada phone scam takes off

If someone from Air Canada calls you about a flight booking, there's a good chance it's a social engineer trying to steal your credit card number and/or other valuable info.

I guess the scammers in this case might be calling people totally at random on the off-chance that some of them have recently booked flights on Air Canada, but given the specificity of the scam, it's more likely they are working their way through a list of Canadians who routinely travel by air, or at the very least people with Canadian phone numbers. Possibly they have discovered a way to identify specifically those people who have booked with Air Canada. Maybe the info is deliberately published on a public website or service for some reason (e.g. for passenger safety or visa checking?). Maybe Air Canada's booking systems have been compromised/hacked, or those of an intermediary such as a travel agent, booking agency, flight scheduling company, airport, loyalty card scheme, or ISP or .... well that's the point really: there are lots of people, organizations, systems, networks and services involved in the process, all of which need to be well secured. All it takes is one teeny leak to bring the entire dam crashing down.

By the way, the same concern applies to other airlines besides Air Canada, and to many other kinds of booking systems/processes (hotel bookings, car hire etc.). In fact the fundamental security issue is much broader: virtually any situation in which someone hands over or submits online their credit card number or other info could be used by social engineers as a pretext to call or email or text them "to check a few things" or "audit the records" or "correct an error" or "re-run a failed transaction" or "run a quality check" or "do a quick customer survey" or "offer a free entry in our prize draw" or whatever. The door is wide open for creative social engineers, and don't they know it.

What makes this worse is that many organizations routinely contact their customers for legitimate reasons in ways that are practically identical to competent social engineering attacks. The savvy ones are concerned to identify the customer on the other end, typically asking personal questions ... which is of course an excellent pretext used by social engineers. Few organizations, even the good ones, consider the customer's security/privacy perspective. 

If someone claiming to represent, say, my bank or insurance company calls or emails me about something, how am I meant to determine that they are genuine? 

If I have done something recently through the bank, and if they refer to that specifically up front in the call or emails, I'm more likely to assume it is a genuine contact ... but as the Air Canada scam demonstrates, that's a rotten control. The same issue applies to phishing emails which just happen to come from a company that I've been dealing with around the same time. By sheer coincidence, there's a higher than normal probability of me swallowing the bait.

Some organizations have thought this through and have the capability for mutual authentication. A pretty good technique is to offer a 'secure messaging' facility through their websites, so on receiving an ordinary phone call or email from them, customers can authenticate the website (e.g. by checking its URL and SSL certificate), login (i.e. identify and authenticate themselves), then access the secure messaging function to interact and deal with issues online. But social engineers can exploit that rigmarole (e.g. classic phishing emails with URLs to fake websites that capture the credentials from people who don't check the true destination), and it delays and complicates the process.

Another technique is for the organization to hold and prove ownership of a unique password for each customer, in much the same way that customers present their unique passwords at login ... but this is also vulnerable to social engineers who first make one or more calls to the organization to capture that password, then call the customer and 'authenticate' with the captured password (an example of a TOCTOU attack that exploits the time delay between Time Of Check and Time Of Use). Mutual authentication needs to be performed simultaneously in both directions, or at least within the course of a single interaction.
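One way to get that single-interaction mutual authentication is a challenge-response exchange over a shared secret, so neither party ever discloses the secret itself and a captured response is useless later. Here's a minimal, purely illustrative Python sketch (the party names and secrets are invented, and a real protocol would add session binding and rate limiting):

```python
import hashlib
import hmac
import secrets

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the shared secret without revealing it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def mutual_auth(secret_bank: bytes, secret_customer: bytes) -> bool:
    """Both sides issue a fresh nonce and verify the other's HMAC response
    within the same interaction, so a replayed or captured password is useless."""
    bank_nonce = secrets.token_bytes(16)   # bank challenges customer
    cust_nonce = secrets.token_bytes(16)   # customer challenges bank

    # Each side answers using its own copy of the secret
    cust_response = respond(secret_customer, bank_nonce)
    bank_response = respond(secret_bank, cust_nonce)

    # Each side verifies the other, using a constant-time comparison
    bank_ok = hmac.compare_digest(cust_response, respond(secret_bank, bank_nonce))
    cust_ok = hmac.compare_digest(bank_response, respond(secret_customer, cust_nonce))
    return bank_ok and cust_ok

secret = b"per-customer shared secret"
print(mutual_auth(secret, secret))                  # both hold the real secret -> True
print(mutual_auth(secret, b"guessed by scammer"))   # impostor fails -> False
```

Because both challenges are fresh and verified in the same exchange, there is no check-then-use gap for a social engineer to slip through.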

What worries me more is that a substantial proportion of people have absolutely no understanding of, or interest in, this issue. Many of us these days are broadly aware of identity theft in general terms, having experienced it first- or second-hand but I seriously doubt that many appreciate just how creative, cunning and ruthless the social engineers have become, nor how easy it is to create and execute novel scams such as the Air Canada thing. The black hats have the upper hand, leaving us on the back foot. There's only so much we can do in the way of security awareness, even if we utilize social engineering techniques ourselves.

Regards,

Dec 16, 2015

The Realistic CISO


In information security, pessimism goes with the job. It's one of the hazards of our profession. It's pretty much expected of us, in fact. As a general rule, we infosec types obsess about downsides - things going wrong; attacks, accidents and other incidents occurring; noncompliance; 'bad luck'. We are openly cynical or dismissive about claims or implications of perfection in our security tools. We sincerely doubt all bases are ever covered. We see little gaps and worry about dark, gaping holes in our defenses. We generally anticipate bad news, honestly believing that our adversaries carry most of the cards (including all the aces!). We long for better security metrics, while delivering a mish-mash of half-baked, partially irrelevant and largely distracting information to management in a failed attempt to compensate for our pessimistic outlook: we feel the need to be able to say "See, I told you so" when bad stuff [inevitably] happens.

The realistic CISO is, first of all, sufficiently self-aware to appreciate his/her inherent pessimism, hopefully well enough to accept that it might be a barrier to success in business and in the profession. We occasionally see little glimpses of light, for instance when we acknowledge that the flip side of risk is opportunity, and that there may be legitimate reasons for management accepting information risks that we personally find uncomfortable ... but then we drop the blinds by insisting that risk owners formally accept the risks, absolving us of all blame if bad stuff eventuates (and, by the way, forgoing a large part of the credit if things turn out OK after all).

Second, the realistic CISO anticipates that although a gazillion things could go wrong, things generally do work out OK, on the whole. The realistic CISO knows that good enough security is not only usually good enough, but way cheaper than striving for perfection (which, of course, is unattainable anyway). It's a pragmatic approach with a valuable bonus: good enough security is generally quicker and easier to implement than perfection, so while it may not achieve the maximum possible level of loss reduction, the benefits start to accumulate earlier and over a longer period while the implementation costs may be substantially lower. Good enough security may in fact be the optimal solution. Gosh, imagine that! Despite the oft-repeated mantra that the black hats only need to find and exploit the gaps with the implication that white hats need to close every gap, the realistic CISO focuses on closing the gaps that really matter, using multiple layers of control to deter, restrict and frustrate attackers and contain the damage within acceptable bounds, rather than forlornly trying to prevent all incidents.

Third, the realistic CISO is sensible enough to juggle competing priorities - not just preventive controls but early incident detection and sound incident management, a strong capability for business continuity (resilience and recovery and true contingency planning), systematic learning and continuous improvement, plus most importantly of all strategic alignment with business priorities. The realistic CISO appreciates that the infosec profession has high ideals with expectations that don't always match the organization's. The realistic CISO knows that the business has numerous objectives, goals and anti-goals, has disparate stakeholders with some conflicting expectations and requirements, and exists in a dynamic context. The realistic CISO is not merely plugged-in to senior management's social network but an integral part of it, helping to formulate strategy and drive the business forward as much as being driven by it. That takes personal integrity, persuasive skills and aptitudes way beyond the sphere of cybersecurity.

Fourth, the realistic CISO isn't aghast to discover that colleagues may be willing to push things to or beyond the limit, perhaps exceeding the boundaries of ethical and legal behavior in the interests of taking advantage and exploiting opportunities.  

In summary, the realistic CISO is a mature, upbeat, self-aware pragmatist with a strong urge to look into and beyond the looming storm clouds to spot not just bolts of lightning but silver linings. I'm hinting at expunging the final vestiges of The No Department - you know, the security function whose immediate, default reaction to virtually every request or enquiry is a resounding "No!". 

"Instead of saying no to new technologies, ideas and capabilities in the name of security, try to find a way to say yes. Individuals within the organization often assume that the position of the risk and security professional or program is to restrict the use of new technologies, ideas and capabilities. A more effective approach is to embrace technological changes while at the same time educating the individuals who want to use new technologies about the appropriate information risk and security considerations, concerns and requirements that need to be accommodated as part of their use. This will empower individuals to able to make informed decisions about the use of these resources and, at the same time, ensure they are aware of their risk and security obligations."
John P. Pironti 

Whereas getting to "Yes!" as a stock response may be a step too far, the CISO who tends towards "Yes but ..." or "Yes provided ..." may turn out to be a boon to the organization rather than a barrier, which in turn will unlock some of those relationship benefits I've just mentioned, earning the respect and trust of senior management colleagues.

Regards,

Nov 30, 2015

Information risk awareness

In line with common practice, we've covered "information security risk" previously in the NoticeBored security awareness materials. Virtually all the awareness modules cover information security, so this time around we've refocused the module on information risk, information risk management (IRM) especially.

The diagram below sums up the guts of the classic IRM process: identify then assess information risks, choose how to treat them, implement the treatments, then loop back to pick up and respond to changes.


There's more to it than that, for instance information must flow to and from management (e.g. information risk levels, business priorities and risk appetite) while suitable metrics are necessary to manage and improve the process systematically.
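As a purely illustrative sketch of that cycle (with invented scores, risks and an arbitrary risk appetite), the identify-assess-treat loop might look something like this in code:

```python
# Hypothetical IRM sketch: risk level = likelihood x impact, each rated 1-5
RISK_APPETITE = 6   # invented management threshold: treat anything above this

def assess(likelihood: int, impact: int) -> int:
    """Classic two-factor risk assessment."""
    return likelihood * impact

def choose_treatment(level: int) -> str:
    """Pick a treatment relative to the stated risk appetite."""
    if level <= RISK_APPETITE:
        return "accept"      # within appetite: formally accepted by the risk owner
    elif level >= 20:
        return "avoid"       # intolerable: stop or redesign the risky activity
    else:
        return "mitigate"    # apply controls, or share/transfer (e.g. insure)

# Identified information risks (made-up likelihood, impact pairs)
risks = {
    "laptop theft":        (4, 2),
    "ransomware outbreak": (3, 5),
    "datacenter flood":    (1, 4),
}

for name, (likelihood, impact) in risks.items():
    level = assess(likelihood, impact)
    print(f"{name}: level {level} -> {choose_treatment(level)}")
# ... then implement the treatments, and loop back as the risks change.
```

The loop-back step is the part most organizations neglect: the ratings, the appetite and even the list of risks all drift, so the cycle never really finishes.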

Talking of which, I'm currently reading a fascinating account of how High-Reliability Organizations (HROs) use Highly Reliable Security Programs (HRSPs) to drive improvements in their information risk and security management activities. The book's author (Lance Hayden) lays out a strong case for milking every last drop of value from incidents and near-misses (or near-hits, as he calls them!) rather than - as most organizations do - paying lip-service to incident investigation, hoping that everything is quickly forgotten ... and consequently suffering the same or similar incidents repeatedly. As a fan of security metrics, security culture, systematic learning and improvement, the core idea resonates with me. I'll be reviewing "People-centric Security: Transforming your Enterprise Security Culture" here as soon as I've finished the final chapters and mulled it over. It's certainly food for thought, so on that basis alone the book is well worth a look.

Regards,

Nov 27, 2015

Oz terrorism alerting scheme

A new public alerting scheme for terrorism was introduced in Australia this week, with the 5 color-coded levels shown here. The previous scheme, introduced in 2003, had 4 levels (low, medium, high and extreme), primarily reflecting the scale or severity of the threat.  The new scheme's levels primarily reflect the probability of an attack.

I'm puzzled because, as generally understood, risk reflects both aspects - the likelihood, probability or chance of an incident coupled with its scale, severity, consequences or impact.
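To illustrate the point with made-up numbers: two threats with identical probabilities collapse onto the same probability-only alert level, even though factoring in impact gives very different risks. (The banding below is invented for illustration, not the actual Australian scheme.)

```python
def alert_level(probability: float) -> str:
    """Probability-only alert level - a simplified, invented banding."""
    if probability >= 0.9:
        return "certain"
    if probability >= 0.7:
        return "expected"
    if probability >= 0.4:
        return "probable"
    if probability >= 0.2:
        return "possible"
    return "not expected"

# Two invented threats: same probability, very different impact (0-10 scale)
threats = {
    "lone gunman, one location":      {"probability": 0.95, "impact": 2},
    "coordinated multi-site attacks": {"probability": 0.95, "impact": 9},
}

for name, t in threats.items():
    risk = t["probability"] * t["impact"]   # risk = probability x impact
    print(f"{name}: alert={alert_level(t['probability'])}, risk={risk:.1f}")
# Both come out "certain" on the probability-only scale,
# yet the computed risks differ by roughly 4.5x.
```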

With the new system, even if a threat is deemed "certain" and coded red, the alert level gives us no idea of the likely scale of the incident/s. Are we talking about a lone gunman on the rampage in one location, a coordinated series of attacks across a number of locations, or what?

Perhaps I should suggest the Analog Risk Assessment method to the Australian government.

Regards,

Nov 26, 2015

Self-phishing own goal

With hindsight, perhaps it wasn't such a bright idea for an information security company to send out an email promoting phishing awareness, encouraging its readers to click an embedded blog link ... pointing to a different domain than the address of the sender of said email:
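For what it's worth, that basic sanity check is easy to automate. Here's a hedged Python sketch (the addresses are invented, and the 'registered domain' logic is deliberately naive - real code should consult a public-suffix list to handle domains like .co.uk) that flags links whose domain doesn't match the sender's:

```python
from email.utils import parseaddr
from urllib.parse import urlparse

def base_domain(host: str) -> str:
    """Naive 'registered domain' = last two labels. Real code should use
    a public-suffix list (e.g. to handle .co.uk correctly)."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def link_matches_sender(sender: str, link: str) -> bool:
    """Does the embedded link point at the same domain the email came from?"""
    sender_domain = parseaddr(sender)[1].split("@")[-1]
    link_host = urlparse(link).hostname or ""
    return base_domain(sender_domain) == base_domain(link_host)

# Hypothetical example resembling the self-phishing email described above
print(link_matches_sender("news@securityco.com", "https://securityco.com/blog"))    # True
print(link_matches_sender("news@securityco.com", "http://blog.trackingsvc.net/x"))  # False
```

Of course a mismatch isn't proof of phishing (marketing mailers route through tracking domains all the time), which is rather the point: legitimate practice has trained readers to ignore exactly this warning sign.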




Regards,

Nov 24, 2015

CISO/ISM ethics

If you had the requisite access, skills and opportunity to defraud or otherwise exploit your employer (which, I suspect, many of us in this profession do), would you be tempted to take advantage? Not even a tiny bit? What if the ‘social contract’ with your employer was seriously strained for some reason - something had soured the relationship, putting your nose out of joint, once too often? 

If you were so inclined, how much effort would you be willing to expend to 'get your own back'? Would you feel justified in causing material harm? Would you be willing to break the law? Would it matter if you worked for a bank, the government, a charity or family business?

And how cautious/subtle/sneaky would you be about it? What if the potential prize on offer was, say, more than $10 million: how tempting would that be? How much caution and risk mitigation would $10m buy you?

Stories like that make me wonder idly about my personal integrity and ethics. If everyone has their price, what’s mine? Despite my high ideals and glinting halo, I suspect, regretfully, that it is very, very large ... but probably not infinite. I'm human, after all.

Even raising and contemplating the remote possibility makes me feel very uncomfortable in my own skin, but that's not a good reason to ignore the risk. I'm not talking about straightforward greed, malice and criminality here. We obviously need to deal with those but bad apples in our midst pose different challenges. 

How on Earth can our organizations reasonably expect to counter such unlikely but severe insider threats? The guy in the news story made fundamental information security errors, clearly, but how many others slip through the net or, worse still, sneak under the radar without their indiscretions ever being discovered? 

Thinking back over my career in information security and IT audit, I've worked with some absolutely superb, consummate professionals to whom I'd happily trust my life (literally), plus a few distinctly dubious characters who couldn't sell me a used car ... and, to be frank, I don't know which category worries me the most. My character assessment abilities have proven pretty good on the whole but definitely not perfect. I've been taken in by fakes and fraudsters from time to time: social engineering is a powerful weapon. I wonder what I've missed? I wonder who I might have slighted by doubting their word (just doing my job, you understand, but I do have a conscience).

Regards,
Gary (Gary@isect.com)

Nov 20, 2015

Decision-led metrics

Metrics in general are valuable because, in various ways, they support decisions. If they don't, they are at best just nice to know - 'coffee table metrics' I call them. If coffee table metrics didn't exist, we probably wouldn't miss them, and we'd have saved the cost of producing them.

So, what decisions are being, or should be, or will need to be made, concerning information risk and security? If we figure that out, we'll have a pretty good clue about which metrics we do or don't want.

Here are a few ways to categorize decisions:
  • Decisions concerning strategic, tactical and operational matters, with the corresponding long, medium and short-term focus and relatively broad, middling or narrow scope;
  • Decisions about risk, governance, security, compliance ...;
  • Decisions about what to do, how to do it, who does it, when it is done ...;
  • Business decisions, technology decisions, people decisions, financial decisions ...;
  • Decisions about departments, functions, teams, systems, projects, organizations; 
  • Decisions regarding strategies/approaches, policies, procedures, plans, frameworks, standards ...;
  • Decisions relating to threats, vulnerabilities and impacts - evaluating and responding to them;
  • Decisions made by senior, middle or junior managers, by staff, and perhaps by or relating to business partners, contractors and consultants, advisors, stakeholders, regulators, authorities, owners and other third parties;
  • Decisions about effectiveness, efficiency, suitability, maturity and, yes, decisions about metrics (!);
  • ... [feel free to bring up others in the comments].

Notice that the bullets are non-exclusive: a single metric might support strategic decisions around information risks in technology involving a commercial cloud service, for instance, putting it in several of those categories. 

If we systematically map out our current portfolio of security metrics (assuming we can actually identify them: do we even have an inventory or catalog of security metrics?) across all those categories, we'll probably notice two things. 

First, for all sorts of reasons, we will probably find an apparent excess or surplus of metrics in some areas and a dearth or shortage elsewhere. That hints at perhaps identifying and developing additional metrics in some areas, and cutting down on duplicates or failing/coffee-table metrics where there seem to be too many - which is itself a judgement call, a decision about metrics, and not as obvious as it may appear. Simplistically aiming for a "balance" of metrics across the categories is a naive approach.

Second, some metrics will pop up in multiple categories ... which is wonderful. We've just identified key metrics. They are more important than most since they evidently support multiple decisions. We clearly need to be extra careful with these metrics since data, analysis or reporting issues (such as errors and omissions, unavailability, or deliberate manipulation) are likely to affect multiple decisions.
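The mapping exercise is easy to mock up. In this illustrative sketch (the catalog and category tags are entirely invented), counting metrics per category exposes the gaps and surpluses, while metrics spanning several categories surface as the key ones:

```python
from collections import Counter

# Hypothetical metrics catalog: each metric tagged with the
# decision categories it supports
catalog = {
    "patch latency":          {"operational", "technology", "vulnerabilities"},
    "awareness survey score": {"strategic", "people"},
    "incident cost trend":    {"strategic", "financial", "impacts", "risk"},
    "policy compliance rate": {"tactical", "compliance"},
}

# Coverage per category: spot the apparent surpluses and the dearths
coverage = Counter(cat for cats in catalog.values() for cat in cats)

# Key metrics: those supporting decisions in four or more categories
key_metrics = [m for m, cats in catalog.items() if len(cats) >= 4]

print(coverage.most_common())
print(key_metrics)   # e.g. the broad 'incident cost trend' metric
```

A real catalog would have far more rows and tags, but even a toy version like this makes the two findings above fall straight out of the data.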

Overall, letting decisions and the associated demand for information determine the organization's choice of metrics makes a lot more sense than the opposite "measure everything in sight" data-supply-driven approach. What's the point in measuring stuff that nobody cares about? 


Security awareness without resources - five Hinson tips

While listening to a couple of ISSA webinars on security awareness and idly scribbling notes to myself, I've been mulling over the common refrain that 'We just don't have the resources for security awareness'. 
One of the speakers said something along the lines of "I've never had the luxury of anyone on the payroll to do security awareness, except me and I'm always busy. I don't think we'll ever have anyone to do it full time, maybe a quarter FTE next year if we're lucky". This is for a healthcare organization with over 20,000 employees. 
That struck me as a depressing, almost defeatist attitude. I honestly struggle to believe that their management doesn't support security awareness, given how absolutely crucial it undoubtedly is to meet their security and privacy obligations and business challenges. How can they possibly afford NOT to do security awareness? I suspect the real problem lies not so much with management's resistance to the idea but with the lack of push. Too much else on the go maybe. Nobody with the oomph to build a convincing business case perhaps.
I thought I'd share with you some more optimistic tips on how to make a success of security awareness even if you are resource-constrained:
  1. Figure out what resources you actually have. Hint: it's not zero. For starters, you are thinking about it right now. Your head-space and interest in the topic constitutes a resource. So are your learned colleagues in Information Security plus related departments and teams such as Risk, Compliance, Site Security, HR, Health and Safety, Operations, IT, Training, Employee and Corporate Comms. Friends/supporters of security throughout the organization are resources (security is everybody's business, remember). This blog and a gazillion others are resources. There are websites, professional associations, Google, social media, magazines, newspapers, the TV news and documentaries. There are textbooks and articles, vendor white papers (and courses and collateral and freebies), marketing materials, course books ... Even if you think you have nothing, in reality you have enough to make a start. Actually, you have more than enough: the hard part is sifting through for valuable nuggets, and making good use of the available resources. So please don't pretend you are a pauper. You are information rich, time poor maybe but hardly destitute.
  2. Beg, steal or borrow even more resources. Hustle. Horse-trade. Collaborate with your colleagues - the pro colleagues noted above plus 'management' and last but not least 'staff' (you are proactively investing in your personal social network throughout and beyond the organization, right? ....) If you can, call upon, dip into or exploit other departments' resources e.g. the training budget; the new employee orientation budget; the corporate comms budget; the intranet/web development budget; business and IT project budgets ... Use interns and temps. Call in favors and offer your skills and expertise to those who need it (every such interaction is an awareness opportunity). Use internal surveys, competitions and challenges to both engage your workforce and develop additional content (anecdotes and case study materials, for instance) and metrics. Find people who are good at what you need. [Blatant plug: Farm out the hard graft of researching and preparing creative content to security awareness professionals who relish the opportunity. It's cheaper, easier and more effective than doing it all yourself!]
  3. Milk every last drop of value out of the resources you do have. Work your resources harder. Get creative. Challenge and encourage colleagues to come up with good ideas. Prioritize. Consider the value of the activities you are currently doing and planning/thinking about. Invest in things that will deliver value over the long term rather than just spending on short-term fixes for immediate needs. Scrimp and save, manage your resources. Squeeze the slack out of other activities and divert/redeploy the funds and other resources towards more cost-effective stuff. Play the games that people play. If you must, overspend on things that management can't reasonably deny are important. Watch the pennies. Track the value.
  4. Measure and improve systematically. Use maturity measures, surveys and other metrics to get on the front foot. Instead of lamely alluding to progress, success and value, dig out and exploit hard evidence demonstrating that security awareness activities are actually delivering beneficial cultural changes in the organization and ultimately, of course, saving money by reducing the number and severity of incidents. Demonstrate that your awareness program is adding real value, and that the organization would be much the poorer without it (the straw man approach). Be explicit and specific about the resourcing constraints on what you do and can achieve in order to justify and persuade management to make additional targeted investments, or at least to reprioritize things ...
  5. Aim higher. Justify and push for (further/sufficient) investment in security awareness. Learn from other departments, projects and initiatives about getting support for ... initiatives, projects and departments. Develop a coherent, sensible strategy that pushes security awareness and training as a way to support and enable the business (please, not security for security's sake; by all means make awareness an integral part of your overall information or cyber security program but make darned sure that is business-driven) and sell it. Garner management's support through a well-constructed business case and plenty of one-on-one time informing, refining, persuading and motivating management. Most of all make it work. Meeting/exceeding management's expectations is important to your credibility, if you expect future budget requests and proposals to be well received.
Good luck.  We're right behind you!
Gary (Gary@isect.com)


PS Here's a free bonus tip. I mentioned 'scribbling notes to myself'. My main notebook is a virtual scratchpad, nothing fancy, just a plain text file linked from my desktop. I quickly jot down bright ideas that I come across or come up with so that I can contemplate, develop, combine and use them later on when I have more time. I also use this blog as a place to document, develop and share my thoughts. Priceless!

Nov 12, 2015

Metrics database

I wonder if any far-sighted organizations are using a database/systems approach to their metrics? Seems to me a logical approach given that there are lots of measurement data swilling around the average corporation (including but not only those relating to information risk, security, control, governance, compliance and privacy). Why not systematically import the data into a metrics database system for automated analysis and presentation purposes? Capture the data once, manage it responsibly, use it repeatedly, and milk the maximum value from it, right?
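For instance, a bare-bones 'capture once, use repeatedly' store might look like this - a purely hypothetical sketch using an in-memory SQLite database, with invented metric names and figures:

```python
import sqlite3

# Capture once: a single store for measurement data from many sources
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE metrics (
    name   TEXT,
    period TEXT,
    value  REAL,
    source TEXT   -- e.g. SIEM, helpdesk, HR, audit
)""")
rows = [
    ("phishing reports",   "2015-10", 41, "helpdesk"),
    ("phishing reports",   "2015-11", 57, "helpdesk"),
    ("patch latency days", "2015-11", 12, "IT ops"),
]
db.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", rows)

# Use repeatedly: automated analysis and presentation come down to queries
for name, avg in db.execute(
        "SELECT name, AVG(value) FROM metrics GROUP BY name ORDER BY name"):
    print(f"{name}: average {avg:g}")
```

The same rows feed trend charts, management dashboards and ad hoc questions alike, which is precisely the 'capture the data once, milk the maximum value from it' idea.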

If you think that's a naive, impracticable or otherwise krazy approach, please put me straight. What am I missing? Why is it that I never seem to hear about metrics databases, other than generic metrics catalogs (which are of limited value IMNSHO) and Management Information Systems (which were all the rage in the 80s but strangely disappeared from sight in the 90s)?

Conversely, if your organization has a metrics database system, how is it working out in practice?  What can you share with us about the pros and cons?

Domain status update spear-phish

Look what just fell into my inbox.  Legit, crude spear-phish or just plain nuts?



I already own ISO27001security.com which is presumably why they think I might be interested in iso27k.com (I'm not!), but this is such an obvious con, I'd have to be a complete mindless idiot to fall for it.

[I've crudely redacted their URL: please don't try to reconstruct and visit it unless you actually want your system to be compromised - and don't blame me!]

Regards,

Oct 31, 2015

Social insecurity - security awareness gets personal

The NoticeBored awareness topic for November is ‘social insecurity’, meaning information security and privacy risks, controls and incidents involving and affecting people:

  • Social engineering scams and frauds, especially phishing and spear-phishing by email and phone;
  • Harvesting of information and exploitation of people via social media, social networks, social apps and social proofing e.g. fraudulent manipulation of brands and reputations through fake customer feedback, blog comments etc.;
  • The use of pretexts, spoofs, masquerading and coercion - social engineering tradecraft;
  • Serious corporate risks involving blended/multimode attacks and insider threats e.g. the exploitation of colleagues through social engineering attacks by power-hungry assertive workers with personal agendas (aka “company politics”).

While technical measures (such as anti-spam utilities and email software that disables links and attachments in suspicious messages) help to some extent, security awareness and training are, of course, the primary means of control in practice, especially when it comes to more advanced attacks representing the greatest risks.  Nothing beats having an alert, well-motivated workforce with the wherewithal to notice and react appropriately to suspicious goings-on.

Motivation is the key to making awareness programs effective.  Going beyond merely making people aware of things, our aim is to make them think and most of all behave more securely, for instance spotting the warning signs of possible phishing attacks, and reacting appropriately instead of blithely clicking and jabbering away.

Rather than trotting out the same old same old, NoticeBored delivers fresh perspectives every month, helping employees stay ahead of today's security challenges. Having covered social engineering, social media and social networks a few times before, the awareness content was thoroughly revised and updated to pick up on current incidents and controls in this area, with an eye towards adverse trends and emerging threats.


Regards,

Oct 12, 2015

Unsafe Harbor


After 15 years of tenuous operation and months of speculation, the EU/US Safe Harbor arrangement is sunk. According to SC Magazine:
"In a decision with widespread implications for the international transfer and processing of data - and the companies that provide these services - the European Court of Justice has ruled the EU-US Safe Harbour pact invalid. Experts are warning of massive disruption to international business."
Safe Harbor was formally implemented by the US Department of Commerce in July 2000:
"Decisions by organizations to qualify for the safe harbor are entirely voluntary, and organizations may qualify for the safe harbor in different ways. Organizations that decide to adhere to the Principles must comply with the Principles in order to obtain and retain the benefits of the safe harbor and publicly declare that they do so. For example, if an organization joins a self-regulatory privacy program that adheres to the Principles, it qualifies for the safe harbor. Organizations may also qualify by developing their own self-regulatory privacy policies provided that they conform with the Principles. Where in complying with the Principles, an organization relies in whole or in part on self-regulation, its failure to comply with such self-regulation must also be actionable under Section 5 of the Federal Trade Commission Act prohibiting unfair and deceptive acts or another law or regulation prohibiting such acts. (See the annex for the list of U.S. statutory bodies recognized by the EU.) In addition, organizations subject to a statutory, regulatory, administrative or other body of law (or of rules) that effectively protects personal privacy may also qualify for safe harbor benefits. In all instances, safe harbor benefits are assured from the date on which each organization wishing to qualify for the safe harbor self-certifies to the Department of Commerce (or its designee) its adherence to the Principles in accordance with the guidance set forth in the Frequently Asked Question on Self-Certification."
Safe Harbor was never ideal from the EU perspective since it relied almost entirely upon trust. US organizations who voluntarily attested that they complied with the additional privacy requirements under EU law (over and above those required under US law) were presumed to have all the relevant privacy and data security controls in place, qualifying them to handle personal data on EU citizens. As far as I know, there were no independent inspections or enforcement actions to speak of. In contrast, EU organizations are legally obliged to have a range of privacy and data security controls based on those originally specified back in 1980 by the OECD.

The end of Safe Harbor is a problem for EU organizations that depended upon it to absolve them of blame if personal data on EU citizens was inadequately secured by various US organizations communicating, storing and processing it on their behalf. Many websites, apps, cloud services and so forth run in US data centers, and a fair proportion of them handle personal data ... so it will be interesting to see what happens next. My guess is that some US data centers or related organizations will seek audits and certifications confirming that they do indeed have EU-style privacy and security controls in place, while others may well lose their EU customers.

Regards,
Gary (Gary@isect.com)

Oct 7, 2015

Security dashboard tips

Tripwire blog's The Top 10 Tips for Building an Effective Security Dashboard is an interesting collection of advice from several people. It's thought provoking, although I don't entirely agree with it.

Tip 2 'Sell success, not fear', mentions:
"For example, in the event that they cannot find personnel who come equipped with the skills needed to improve progress, security personnel can use dashboards to demonstrate the impact that well trained individuals could have on finding and resolving issues and threats, as well as to subsequently leverage that insight for training and cultivating available skills."
Although somewhat manipulative, metrics can indeed provide data supporting or justifying proposed security improvements, assuming that, somehow, someone has already decided what needs to be done ... and suitable metrics can be useful for that purpose too.

The thrust of tip 4 'Use compelling visualizations' is that the dashboard needs to be glossy: I agree dashboards should be professionally crafted and reasonably well presented but I feel their true value and utility has far more to do with the information content than the look.

Tip 9 'Thoroughly vet the information before it is presented' is an odd one. The advice to be ready to explain outliers and anomalies makes sense, but the implication of someone vetting the data before it goes to the dashboard is that it will be both delayed and sanitized. Hmmm.

Well, take a look for yourself and see what you make of the ten tips.

Sep 30, 2015

"Permissions", another novel security awareness topic

When a customer suggested that NoticeBored ought to cover privileges, we thought "Great idea!" ... but when we got stuck into the research for the new module, we soon realized that we couldn't really discuss privileges without also dipping into access rights ... which takes us into rights ... and compliance ... and a whole stack of other stuff. From being a narrow and specific topic, it mushroomed into an enormous beast, a far more complicated, wide-ranging awareness subject than we originally anticipated, taking in more than thirty aspects: access controls; access rights; accountability; authorization; awareness, education and training (!); compliance; controls; disclaimers; enforcement; entitlement; escalation; ethics; exceptions; exemptions; forensics; governance; granting, denying and revoking permissions; groups and rôles; identification and authentication; incident response and management; obligations and responsibilities; passes and ID cards; penetration and security testing; permits and licenses; policies, procedures and guidelines; privileges; prohibition; reinforcement; rights; risks; and trust.

We settled in the end for the innocuous, all-encompassing title "permissions". It would have been counterproductive to attempt to cover all those thirty-plus facets in great detail in one module so instead we picked out the few most relevant to each of the three awareness audience groups (staff, managers and professionals) and skimmed the rest ... for now, but then we've covered most if not all of them before and will do so again at some future point, thanks to picking a different infosec topic every month.
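To make just one of those facets concrete: at its core, granting and checking permissions boils down to a deny-by-default lookup. A simplified role-based sketch (the roles and actions here are invented for illustration; real systems add authentication, auditing, revocation workflows and much more):

```python
# Minimal role-based permissions model - illustrative only
ROLE_PERMISSIONS = {
    "staff":   {"read"},
    "manager": {"read", "approve"},
    "admin":   {"read", "approve", "grant"},
}

def is_permitted(roles: set[str], action: str) -> bool:
    """Deny by default: an action is allowed only if some held role grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_permitted({"staff"}, "approve"))             # False
print(is_permitted({"staff", "manager"}, "approve"))  # True
```

Even that trivial model touches authorization, groups and rôles, escalation and trust, hinting at why the topic mushroomed the way it did.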


"Permissions" is the 57th topic in our bulging security awareness portfolio, and we're not finished yet! As far as we know*, no other commercial offering in this space is anything like as broad, nor indeed as deep. Concentrating on one topic at a time gives us the opportunity to explore things in some depth, gradually month-by-month completing the bigger picture. The monthly cycle also lets us reflect current issues and thinking, perhaps even advancing the field in our own little way. This month, for instance, we wrote a generic job description for a Permissions Manager, someone to take the lead on permissions, rights and privileges, coordinating and aligning the management of permissions throughout the corporation. On reflection, how do large organizations get by without someone performing such an important role? Is this gap partly to blame for the Sony, Target, OPM and other recent headline incidents?  Hmmmm, makes you think, doesn't it?

If "awareness training" to you means an annual lecture to end-users about policies and passwords, you really should take a look at NoticeBored.com, drop me an email, or call the office. We'd love to help you take the next step.

Regards,

* If you know different, do please let me know. I'm always interested in what our competitors are getting up to. We don't have a monopoly on innovation! 

Sep 10, 2015

Metrics case study on Boeing


The Security Executive Council has published an interesting case study concerning the review and selection of metrics relating to physical and information risks at Boeing.  [Access to the article is free but requires us to register our interest.]

The case study mentions using SMART criteria and a few other factors to select metrics but doesn't go into details, unfortunately.  Nevertheless, the analytical approach is worth reading and contemplating.

If we were to conduct such an assignment for a client today, we would utilize a combination of tools and techniques across six distinct phases:

  1. Background information gathering concerning Boeing's business situation, information risks, and existing metrics, using standard analytical or audit methods, clarifying the as-is situation and building a picture of what needs to change, and why. This phase would typically culminate in a report and a presentation/discussion with management.

  2. GQM (Goal-Question-Metric) assessment eloquently described by Lance Hayden in IT Security Metrics. This is a more structured and systematic version of the approach outlined in the case study. A workshop approach would be useful, probably several in fact to delve into various aspects with the relevant business people and experts. The output would be a matrix or tree-root diagram illustrating the goals, questions and metrics.

  3. PRAGMATIC assessment and ranking of the metrics generated in phase 2 using the approach documented in our book. The output would be a management report containing a prioritized list of metrics ranked according to their PRAGMATIC scores, leading to a further presentation/discussion with management and, hopefully, agreement on a shortlist of the most promising metrics, those actually worth pursuing. This and the previous phase would take a creative approach, thinking about what needs to be measured, why, how, when etc., using both GQM and PRAGMATIC to firm-up the metrics that best fit the requirements  and focus groups to finalize the metrics (both existing metrics that are worth retaining possibly with some changes, and novel metrics being introduced).
  4. Planning and preparing for the implementation phase, perhaps including pilot studies.

  5. Implementation: making the changes needed to collect, analyse, report and most of all use the metrics.  This might well involve retiring or recasting some of the client's existing metrics that haven't earned their keep, in a way that teases out the last dregs of value from the data gathered previously.
  6. Ongoing metrics management and maintenance: using information from the GQM and PRAGMATIC steps to monitor and if appropriate refine or replace the metrics, ensuring for instance that they are proving valuable to the business (i.e. they should be cost-effective - one of the PRAGMATIC criteria conspicuously absent from SMART).  
In parallel with that sequence would be conventional project management activities - planning, resourcing & team building, motivation, tracking, reporting and assignment risk management.
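The arithmetic behind the phase 3 ranking is straightforward: each candidate metric is scored against each of the nine PRAGMATIC criteria, and candidates are sorted by their overall score. A minimal sketch (the metrics and scores below are invented purely for illustration):

```python
# Each candidate metric scored 0-100 against the nine PRAGMATIC criteria
# (Predictive, Relevant, Actionable, Genuine, Meaningful, Accurate,
# Timely, Independent, Cost-effective); overall score is the mean.
candidates = {
    "Mean time to patch critical systems": [80, 90, 85, 70, 75, 80, 85, 70, 90],
    "Number of policy documents":          [20, 30, 25, 60, 30, 80, 70, 60, 50],
}

ranked = sorted(
    ((sum(scores) / len(scores), name) for name, scores in candidates.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{score:5.1f}  {name}")
```

The scoring itself is of course the judgment-laden part; the sort merely surfaces the shortlist for management to debate.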

Sep 8, 2015

BYOT - Bring Your Own Things - and BYOS

Employees are increasingly using their personally-owned ICT devices at work, whether for personal or work purposes.  Organizations with BYOD (Bring Your Own Device) schemes and policies typically insist that employees' smartphones, laptops, tablets etc. are secured and managed by IT, requiring the use of MDM (Mobile Device Management) software, AV (antivirus) etc.

So what happens as employees start bringing in their personal IoT toys (BYOT - Bring Your Own Things) in the same way - their fitness trackers, Google Glasses and other wearables, perhaps control pods for their home IoT systems, and so forth?  

Good luck to anyone trying to insist that IT installs MDM, AV and all that jazz on a gazillion things!

One approach to BYOT security I guess is to prohibit all unapproved and unauthorized devices/things from connecting to corporate networks, at the same time preventing corporate devices/things from connecting to non-corporate networks (including ad hoc or mesh networks formed spontaneously between IoT devices, and public networks such as open WiFi, Bluetooth and cellular networks).  Keep them logically separated, with strict enforcement using compliance measures, change and configuration management, network and device/thing security management and monitoring etc. (oh oh, I see dollar signs ticking up at this point).

Another approach is to deperimeterize - stop relying on network perimeter access controls, depending on device/thing security instead.  Treat all networks as untrustworthy if not overtly hostile.  Easy to say, tricky to do properly.

A third way involves the corporation providing open-access/public unsecured networks on its premises and encouraging employees to use those if they want to network their BYOS*.  This has the advantage of logical separation at low cost, while employees (and contractors, consultants, visitors and assorted drifters) can connect up without the cost of 3G or other public networks.  There may be legal wrinkles to this approach, though.
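The first approach amounts to an allow-list decision at the point of network admission. Schematically (device identifiers and network names below are made up; real deployments would use 802.1X, certificates or an MDM attestation rather than a bare MAC list, which is trivially spoofed):

```python
# Schematic network-admission decision for the 'strict separation' approach:
# known, managed corporate devices join the corporate LAN, everything else
# (BYOD/BYOT things) is shunted to the open guest network. Illustrative only.
CORPORATE_DEVICES = {"aa:bb:cc:00:11:22", "aa:bb:cc:00:11:23"}  # invented IDs

def assign_network(device_id: str, managed: bool) -> str:
    if device_id in CORPORATE_DEVICES and managed:
        return "corporate-lan"
    return "guest-open"  # unknown/unmanaged things never touch the corporate LAN

print(assign_network("aa:bb:cc:00:11:22", managed=True))   # corporate-lan
print(assign_network("de:ad:be:ef:00:01", managed=False))  # guest-open
```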

Regards,

* "Bring Your Own Stuff" is the polite version; "Bash Your Old Ship" is slightly closer to the real definition.

Sep 5, 2015

Banks: watch out for fishing (and phishing)


A low-tech kiwi bank robber stole deposits from a bank's safety deposit box using a fishing line.  He even managed to cash a few of the stolen cheques before being lured to the counter and caught in the bank's security net.

Not a malicious URL in sight.

An anonymous source tells me she has found deposit envelopes containing valuable negotiables (the folding kind) in a local bank's deposit drawer, left by a previous customer who neglected to check that the deposit envelope had been swallowed up by the machinery.  The bank teller was aghast ... but evidently creating a physically secure bank deposit chute is beyond the capabilities of NZ banks' engineering wizards.  Surely some number 8 wire and a bent waratah ought to do it?

Anyway, most kiwis are far too honest to exploit vulnerabilities like this.

Regards,

Sep 2, 2015

Drone-zapping


I spotted something interesting, if a little scary, today on the BBC. Boeing has successfully shot down 'a drone' by zapping it with a transportable high-power laser system on a test range.

The article doesn't actually say but I guess this is a straightforward military weapon intended to defend, say, a battlefield camp against the enemy's military drones that approach or overfly it. It would, of course, need to distinguish friendly drones (and aircraft and shells ... and soldiers and land vehicles ...) from foe in order to avoid costly and embarrassing incidents, all in real time as things (perhaps several) fly towards or past the zapper, the more sophisticated ones running radar jammers etc. If you think about the complexities of the situation and the necessary speed of target acquisition, identification, decision making and response, it is an impressive weapon.

I guess in due course, simpler civil versions of the weapon might prove valuable to defend public buildings (such as airports, parliaments, embassies, prisons and homes of the rich-n-famous) against drone 'attacks'.

Perhaps this explains the popularity of the 'laser kiwi' flag option with the people of NZ, if not our highly-paid government-sponsored flag committee?

Regards,

Sep 1, 2015

IoT security awareness



The Internet of Things is a novel and rapidly evolving field, making IoT security highly topical. Yet, as with cybersecurity last month, it was something of a challenge to prepare a coherent, concise and valuable set of security awareness materials.

In researching the topic, we discovered surprisingly few companies marketing various smart and mostly geeky things, a few news articles and lightweight gee-whizz journalistic pieces, and some almost impenetrable academic and technical papers about the technologies. Enterprising hackers are already exploring IoT, discovering and exploiting security vulnerabilities ostensibly for education and demonstration purposes, at least for now. Shiny new things are appearing on the market every week to be snapped up by an eager if naïve public.

IoT presents a heady mix of risks and opportunities, with substantial commercial, safety, privacy, compliance and information security challenges ahead, and sociological implications for good measure. In a few years’ time when both things and IoT incidents have become commonplace (despite our very best efforts!), we may look back in amazement at the things we are doing today … but we are where we are, things are spreading fast and the risks are multiplying like salmonella on a Petri dish.

An IoT security awareness module is timely.

To prepare the materials, we took a back-to-basics approach, identifying and describing a wide range of risks associated with or arising from IoT as a starting point. For the staff stream, we focused on consumer things including smart home and wearables. For management, we discussed the commercial, strategic and policy concerns with IoT and IIoT (Industrial IoT). While it would have been easy just to highlight the security and privacy angle, we also discussed the business opportunities that arise from innovative things. Finding the right balance between risk and opportunity, or security and creativity, is the key to exploiting the amazing possibilities of these exciting new technologies.

September’s NoticeBored module addresses the following generic learning objectives: 
  • Introduce IoT, an emerging and rapidly evolving field, explaining things, ubiquitous computing, mesh networks, IIoT and so forth; 
  • Outline the personal and business benefits driving IoT and IIoT adoption, touching on commercial opportunities, industry pressures and technology constraints plus wider societal issues, privacy concerns and so on; 
  • Explain the information risks arising from or relating to IoT & things, illustrating the threats, vulnerabilities and impacts with news of real-world IoT incidents, attacks and malware; 
  • Emphasize the four possible means of treating the risks (more than just security controls!);
  • Encourage the workforce to consider and ideally address the information risks, security and privacy aspects of IoT and things, going beyond mere ‘awareness’. 
IoT security is the 56th topic in our steadily growing portfolio of information security awareness materials. We're already working on another new topic for next month: 'rights and privileges' are core to IT security, crucial to logical access management, and important concepts in a much broader sense.

Could your security awareness program do with a kick up the wotsits? Wish you had the time and energy to research and write about emerging information security challenges? With 56 information security topics covered already and more on the way, there's sure to be something right up your street. Email me to evaluate and subscribe to the NoticeBored service. How can we help?


Regards, 
Gary (Gary@isect.com)

Aug 21, 2015

Lean security

Lean manufacturing or kaizen is a philosophy or framework comprising a variety of approaches designed to make manufacturing and production systems as efficient and effective as possible, approaches such as:
  • Design-for-life - taking account of the practical realities of production, usage and maintenance when products are designed, rather than locking-in later nightmares through the thoughtless inclusion of elements or features that prove unmanageable;
  • Just-in-time delivery of parts to the production line at the quantity, quality, time and place they are needed (kanban), instead of being stockpiled in a warehouse or parts store, collecting dust, depreciating, adding inertia and costs if product changes are needed;
  • Elimination of waste (muda) - processes are changed to avoid the production of waste, or at the very least waste materials become useful/valuable products, while wasted time and effort is eliminated by making production processes slick with smooth, continuous, even flows at a sensible pace rather than jerky stop-starts;
  • An obsessive, all-encompassing and continuous focus on quality assurance, to the extent that if someone spots an issue anywhere on the production line, the entire line may be stopped in order to fix the root cause rather than simply pressing ahead in the hope that the quality test and repair function (a.k.a. Final Inspection or Quality Control) will bodge things into shape later ... hopefully without the customer noticing latent defects;
  • Most of all, innovation - actively seeking creative ways to bypass/avoid roadblocks, make things better for all concerned, and deliver products that go above and beyond customer expectations, all without blowing the budget.
Service industries and processes/activities more generally can benefit from similar lean approaches ... so how might kaizen be applied to information risk management and security?
  • Design-for-security - products and processes should be designed from the outset to take due account of information security and privacy requirements throughout their life, implying that those requirements need to be elaborated-on, clarified/specified and understood by the designers;
  • Just-in-case - given that preventive security controls cannot be entirely relied-upon, detective and corrective controls are also necessary;
  • Elimination of doubt - identifying, characterizing and understanding the risks to information (even as they evolve and mutate) is key to ensuring that our risk treatments are necessary, appropriate and sufficient, hence high-quality, reliable, up-to-date information about information risk (including, of course, risk and security metrics) is itself an extremely valuable asset, worth investing in;
  • Quality assurance applies directly - information security serves the business needs of the organization, and should be driven by risks of concern to various stakeholders, not just 'because we say so';
  • Innovation also applies directly, as stated above.  It takes creative effort to secure things cost-effectively, without unduly restricting or constraining activities to the extent that value is destroyed rather than secured.

Aug 18, 2015

Persistently painful piss-poor password params & processes

Let me start by acknowledging that passwords are a weak means of authenticating people, for all sorts of reasons. I know passwords suck ... and yet passwords are by far the most common user authentication method in use because of two factors (pun intended):
1) Passwords are conventional, well-understood, commonplace, and the natural default 'no-brain' option. People are used to them and [think they] understand them. Passwords or PIN codes are almost universally built-in to operating systems and many apps, websites etc. 
2) Compared to other methods, passwords are fairly cheap to implement, manage and use. There is no need to invest in biometric sensors, PKI, crypto-tokens or whatever unless you need multifactor authentication ... in which case you probably still need passwords. 
That said, there are many different ways of employing passwords for user authentication, many design parameters, most of which affect the level of security achieved in practice. Designing and implementing relatively strong password authentication mechanisms is not nearly as trivial as it may appear to the untrained eye.

Take for example eBay and PayPal, formerly one company but now split. Given their common origin, one might have thought they would have similar approaches to passwords, and indeed they do. They both suck.

Both sites make it a mission just to find the 'change password' option in the first place. 
On eBay, there is nothing as obvious as a "Change password" menu option or button, oh no, that would be far too easy. After hunting around for a while, I eventually discovered the requisite option tucked away under 'Hi Gary!' --> 'Account settings' --> 'Personal information' --> 'Edit' the password line.
On PayPal, once again there is nothing as obvious as a "Change password" option/button. It is in fact tucked under 'My account' --> 'Profile' --> 'My personal info' --> 'Change' the password line.
It is almost as if the eBay and PayPal IT teams have conspired to make their processes different. Are there good reasons, I wonder, why we have to 'edit' on eBay but 'change' on PayPal, or why it's 'account settings' on one but 'profile' on the other?  ... Or do you think perhaps nobody even bothered to check what the other was using?

The mission continues once we have found the password change function, since the password change mechanisms also differ: 
eBay first of all requires me to login again (since, I guess, the persistent eBay session may have been taken over by someone else), then to enter my old password, then the new password twice.
PayPal first of all requires me to enter my credit card number (in effect, a second password) then gives me the option to change either my password or my 'security questions', then to enter my old password, then the new password twice. 
Furthermore, the two sites define valid passwords differently.
The rules for valid eBay passwords are summarized in a tooltip ... 

... and separately, in more detail, in a pop-up help window:  

... which is fair enough.  There's plenty of advice there and the restrictions are sensible, although it is not clear whether the password is case-sensitive (I guess it is but it doesn't actually say so).
In contrast, valid PayPal passwords appear to be solely defined by a simple tooltip:
If there is any more detailed information on valid PayPal passwords, it is so well hidden that I can't find it, despite searching within help.
I don't know why PayPal restricts passwords to a maximum of 20 characters (quite long for a classic password yet too short for a decent passphrase) but perhaps it is a good thing since, most annoyingly of all, PayPal requires me to enter my new password, twice, manually: I am prevented from pasting in a very complex password generated by my password manager software. Consequently, I have two lame choices:
  1. I can think up a classic memorable password, type it in twice to the website then a third time to my password manager. This restricts the complexity of my password to one I can think up, remember and type easily, negating a large part of the value of using a password manager to generate long, complex passwords;
  2. I can generate a random complex password in the password manager, type it in twice to the website then paste it into my password manager. In practice, I can either mess around with window positions or write down the password on paper since the password generator function popup disappears when I go to enter it into the website - and there's an even greater chance of me mistyping a complex password at least once out of the two times I have to enter it.
So far, I have only commented on the 'change password' function from my perspective as a user of these two related websites, pointing out arbitrary differences in the menu choices, terminology, process and password parameters, and factors that make it quite hard to use long complex passwords. Curiously, despite being a banking/financial services company, PayPal's password rules restrict the maximum length of a password to just 20 characters whereas eBay allows a maximum of 64, hence a lot more entropy [I can't be bothered to figure how much more: I'll leave it as an exercise for the attentive reader]. 
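For the attentive reader, the entropy comparison is a one-liner: the maximum entropy of a random password is its length times log2 of the alphabet size. Assuming, say, a 72-character alphabet of letters, digits and common symbols (the sites' real alphabets may well differ):

```python
import math

def max_entropy_bits(length: int, alphabet_size: int = 72) -> float:
    """Upper bound on entropy of a uniformly random password:
    length * log2(alphabet size). Real human-chosen passwords fall far short."""
    return length * math.log2(alphabet_size)

for n in (20, 64):
    print(f"{n:2d} chars: ~{max_entropy_bits(n):.0f} bits")
# 20 chars: ~123 bits
# 64 chars: ~395 bits
```

So under those assumptions a 64-character maximum permits roughly three times the entropy of PayPal's 20, though in practice the ban on pasting from a password manager probably costs more security than the length cap does.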

The 'forgotten password' processes are also different, and I strongly suspect the ways these two sites hash and store the passwords also differ, behind the scenes. Even the way the sites inform users that their passwords have been changed differs. There are still other password security aspects I haven't checked, for instance how many invalid password attempts are allowed, what happens once the limit is reached, what other information from the user's system/browser is used as part of the authentication, and whether either site blocks simple SQL injection attacks ... because I'm not a hacker and it's not my job. 

Aside from the specifics, the more general point is that despite these two sites coming from common stock, there are substantial but seemingly arbitrary differences in practically identical functions. Now consider all the other gazillion websites and apps Out There, each with their own password parameters, processes and constraints. There are no universal methods for users to manage our passwords, and limited consensus even on minimal password requirements (in my experience, few sites today accept passwords shorter than six characters, and most demand both letters and digits ... but some do).

Given how commonplace they are, isn't it odd that there are no generally-accepted global standards regarding passwords? Perhaps I should suggest just such a standard to ISO/IEC JTC 1/SC 27 for inclusion in the ISO27k suite - what do you think? It's not hard to envisage a standard giving advice on aspects such as password parameters, password change functions, password storage etc., along with the risk- and business-driven design, testing and implementation of password authentication and related processes. It might even be possible to come up with a limited suite of cases demonstrating the main functions in conformance with the standard, with a consistency so obviously lacking in practice today albeit perhaps with high/medium/low security variants for the corresponding risk levels. More than enough guts there for an ISO27k standard, I'd say, with further standards covering multifactor authentication, biometrics, PKI-based digital certificate approaches etc.

Meanwhile, think about your own organization. Do you currently have policies, procedures, standards and guidelines laying out consistent methods of user authentication, password management etc. for your systems and apps? Do your systems re-use properly defined, designed, developed and proven parameterized password functions, or indeed security functions as a whole? Do you even consider these issues when selecting commercial apps? Or are you happy to continue compromising your security and make your users' lives a misery (not to mention the long-suffering Helpdesk)?
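A reusable, parameterized password-rule function of the kind those questions imply might look something like this (the policy parameters and names are invented for illustration, not drawn from any particular standard):

```python
import string
from dataclasses import dataclass

@dataclass
class PasswordPolicy:
    """One policy object per risk level, reused consistently across systems."""
    min_length: int = 8
    max_length: int = 64
    require_digit: bool = True
    require_letter: bool = True

    def validate(self, password: str) -> list[str]:
        """Return a list of rule violations; an empty list means valid."""
        problems = []
        if not (self.min_length <= len(password) <= self.max_length):
            problems.append(f"length must be {self.min_length}-{self.max_length}")
        if self.require_digit and not any(c in string.digits for c in password):
            problems.append("needs at least one digit")
        if self.require_letter and not any(c in string.ascii_letters for c in password):
            problems.append("needs at least one letter")
        return problems

high = PasswordPolicy(min_length=12)       # e.g. a high-risk profile
print(high.validate("tr0ub4dor&3"))        # length violation (11 chars)
print(high.validate("correct horse battery 1"))  # []
```

The design point is reuse: one proven, parameterized function with high/medium/low profiles, rather than every system and app reinventing (and subtly mangling) its own rules.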

Regards,