Welcome to the SecAware blog

I spy with my beady eye ...

29 May 2022

Algo-rhythmic infosec

An article from the 50-year-old Department of Computer Science at the University of York outlines algorithmic approaches used in Artificial Intelligence. Here are the highlights:

  • Linear sequence: progresses directly through a series of tasks/statements, one after the other.
  • Conditional: decides between courses of action according to the conditions set (e.g. if X is 10 then do Y, otherwise do Z).
  • Loop: a sequence of statements is repeated, typically until some condition is met.
  • Brute force: tries approaches systematically, blocking off dead ends to leave only viable routes to get closer to a solution.
  • Recursive: apply the learning from a series of small episodes to larger problems of the same type.
  • Backtracking: incrementally builds a data set of all possible solutions, retracing or undoing/reversing its last step if unsuccessful in order to pursue other pathways until a satisfactory result is reached.
  • Greedy: quickly goes to the most obvious solution (low-hanging fruit) and stops.
  • Dynamic programming: outcomes of prior runs (solved sub-problems) inform new approaches.
  • Divide and conquer: divides the problem into smaller parts, then consolidates the solutions into an overall result.
  • Supervised learning: programmers train the system using structured data, indicating the correct answers. The system learns to recognise patterns and hence deduce the correct results itself when fed new data.
  • Unsupervised learning: the system is fed unlabeled (‘raw’) input data that it autonomously mines for rules, detecting patterns, summarising and grouping data points to describe the data set and offer meaningful insights to users, even if the humans don’t know what they’re looking for.
  • Reinforcement learning: the system learns from its interactions with the environment, utilising these observations to take actions that either maximise the reward or minimise the risk.
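
Two of those strategies are easy to contrast in code. The sketch below is my own illustration, not from the York article: the classic coin-change problem, where a greedy algorithm grabs the largest coin that fits and can miss the optimum, while dynamic programming solves each sub-problem once and reuses the answers.

```python
def greedy_change(amount, coins):
    """Greedy: repeatedly take the largest coin that fits, then stop."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else None  # quick, but not always optimal

def dp_change(amount, coins):
    """Dynamic programming: solved sub-problems inform the larger ones."""
    best = [0] + [None] * amount          # best[n] = fewest coins making n
    for sub in range(1, amount + 1):
        options = [best[sub - c] for c in coins
                   if c <= sub and best[sub - c] is not None]
        best[sub] = min(options) + 1 if options else None
    return best[amount]

# With coins {1, 3, 4} and a total of 6: greedy takes 4+1+1 (three coins),
# while dynamic programming finds 3+3 (two coins).
```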

Aside from computerised AI, we humans use similar approaches naturally, for instance when developing and implementing information security policies:

  • Linear sequence: start with a list of desirable policies, sorted in some manner, and work down from top to bottom.
  • Conditional: after a policy is completed, decide which one to draft next according to the organisation's priorities or hot topics at that point.
  • Loop: standardise (and perhaps document) the process for developing policies, using it repeatedly and systematically for each new one.
  • Brute force: discover by trial-and-error which policy development approaches work best, avoiding the least effective ones.
  • Recursive: start by preparing relatively simple, straightforward policies, stabilising and refining the process, gradually building up to more complex, difficult policies.
  • Backtracking: proactively review the policy development process after each policy or batch is completed, identifying and applying any learning points to the next policy or batch, if necessary starting over.
  • Greedy: just get on with it!  Generate, plagiarise or plain steal some rough and ready basic policies and move on, as soon as possible.
  • Dynamic programming: review the current suite of policies to distinguish the good, the bad and the ugly, refining the plans and approaches for developing further policies accordingly.
  • Divide and conquer: carve up the policy landscape among multiple people or functions, tasking them to prepare their parts of the whole.
  • Supervised learning: analyse published policies for useful clues about how to develop good policies.
  • Unsupervised learning: simply start developing policies and let the process evolve and mature naturally over time.
  • Reinforcement learning: proactively measure or solicit feedback from various stakeholders about the quality and effectiveness of the policy suite in order to improve the approach, perhaps with periodic reviews/audits to capture learning points and identify improvement opportunities.

There are yet other possible approaches, hinting perhaps at further AI algorithms:

  • Reconsider the fundamental issues: despite being commonplace, consider whether 'policies' are, in fact, the best way of mandating information security and related rules within the organisation. Explore and reconsider the underlying objective/s, perhaps searching for alternative or complementary approaches such as procedures, guidelines, training, oversight, supervision, guidance, mentoring, technical standards and controls, implicit/explicit trust etc.
  • Bend the rules: allow individual business units, departments or teams to go their own way, modifying corporate policies to some extent, watching closely to keep them broadly in line and ideally spot approaches worth adopting more widely.
  • Break the rules: deliberately adopt radically different approaches to policy development, novel styles or formats for the policies etc., perhaps in a safe situation such as a policy wiki pilot study in a single business unit.
  • Leave it to the experts: don't even bother trying to learn, either commission a policy expert to develop, or simply purchase, information security policies from someone who has already figured it out.
At some point in the not-too-distant future, the AI robots will be more than capable of developing policies for us lowly humans. Meanwhile we have the opportunity to figure out better - more effective, more efficient - ways of doing this stuff for ourselves with a bit of lateral thinking about the process, as opposed to mindlessly doing whatever we normally do, as we've learnt, or as suggested by some random bloke in a blog piece. 
I hope I have inspired you to think about how you go about routine activities, using information security policy development simply as an illustrative example. As you drop back into the humdrum rhythm of your routine daily tasks, allow yourself the odd moment's quiet reflection to consider alternative approaches and look for learning points. Just as the AI robots are learning from us, we can learn from them.

26 May 2022

Iterative scientific infosec

Here's a simple, generic way to manage virtually anything, particularly complex and dynamic things:
  1. Think of something to do
  2. Try it
  3. Watch what happens
  4. Discover and learn
  5. Identify potential improvements
  6. GOTO 1
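
That GOTO-style loop can be sketched as runnable code. This is a minimal illustration of the idea, with all the function and parameter names being my own invention:

```python
import random

def improve_continuously(state, propose, apply, measure, iterations=10):
    """Naive plan-do-check-act: propose a change, try it, keep it if it helps."""
    score = measure(state)
    for _ in range(iterations):
        idea = propose(state)            # 1. Think of something to do
        candidate = apply(state, idea)   # 2. Try it
        new_score = measure(candidate)   # 3. Watch what happens
        if new_score > score:            # 4./5. Discover, learn, keep improvements
            state, score = candidate, new_score
    return state, score                  # 6. GOTO 1 (until we stop iterating)

# Toy usage: 'improve' a number towards 100 by keeping beneficial random nudges.
final, quality = improve_continuously(
    state=0,
    propose=lambda s: random.randint(-5, 10),
    apply=lambda s, delta: s + delta,
    measure=lambda s: -abs(100 - s),
)
```

Deming's cycle is, of course, rather richer than this skeleton, but the shape is the same: measure, change, re-measure, keep what works.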

It's a naive programmer's version of Deming's plan-do-check-act cycle - an iterative approach to continuous improvement that has proven very successful in various fields over several decades. Notice that it is rational, systematic and repeatable.

Here's a similar grossly-simplified outline of the classical experimental method that has proven equally successful over several centuries of scientific endeavour:

  1. Consider available information
  2. Propose a testable hypothesis
  3. Test it (design and run experiments)
  4. Watch what happens
  5. Discover and learn
  6. GOTO 1

Either way, I'm a committed fan. The iterative approach, with incremental improvements, works well. I approve.

Along the way, aside from pushing back the frontiers of science and technology and achieving remarkable advances for human society, we've also learned about the drawbacks and flaws in the processes, and we've developed assorted mechanisms to reduce the risks and increase our chances of success e.g.:

  1. Key to 'improving' or 'advancing' is being able to recognise and ideally measure the improvement or advance - in most cases anyway. Improvements or advances that happen purely by chance ('discoveries') are welcome but rare treats. A big issue in quality assurance is the recognition that there are usually several competing and sometimes contradictory requirements/expectations, not least the definition of 'quality'. To certain customers, a rusty old barn-find of a car is just as much a 'quality vehicle' as a Rolls Royce is to its buyers. Likewise, security improvements depend on one's perspective. For hackers, exposing exploitable vulnerabilities improves their chances of breaking in: 'improvement', for them, means weaker security!

  2. Various forms of 'control' are important to stabilise situations and gain assurance that whatever actually happens is the anticipated result of whatever changes we have made, rather than some factor that we probably hadn't even appreciated - which itself can be valuable knowledge. In a sense, there's no such thing as a failed experiment or test provided we still learn something useful from it. A lot of innovation involves figuring out what doesn't work (such as enforced periodic password changes: we tried it, it didn't help, move on).

  3. 'Consider', 'discover' and 'learn' are all about being open to new knowledge, climbing on the shoulders of the giants that came before us and hopefully reaching ever higher. Again, assurance is part of that: to what extent can we trust the information at hand? How reliable is any new knowledge we gain? What can/should we do to be more certain that things are going the right way? Knowledge sharing is another factor. The community as a whole benefits by sharing and collaborating, even though individuals might benefit more by selfishly withholding information. There is a strong argument to facilitate much more sharing of information about information risk and security, incidents, controls etc. - perhaps something similar to the airline industry where open disclosure of issues is encouraged and facilitated in order to protect lives and increase trustworthiness. It's another angle on responsible disclosure.

  4. Small changes are generally far less risky than large ones, although sometimes major advances require risky step-changes.

  5. Given that we cannot be absolutely certain of making improvements and advances, 'planning to fail' is an integral part of the process ... and failure is yet another valuable opportunity to learn and improve (provided we survive!).

So, this morning I've been thinking about the applications of those principles and mechanisms to information risk management, putting infosec under the microscope.

  1. 'Improving' or 'advancing' infosec is more involved than it seems. It is typically described in terms of reducing the probability and/or impacts of adverse incidents, but digging deeper, those terms are unclear. The probabilities of incidents occurring in future can generally only be estimated within a finite timescale, and the impacts are equally hard to predict and measure. Security metrics is, at best, an immature field. It is not even straightforward to define 'adverse incidents': adverse to whom, in what sense? And what are 'incidents', in fact?
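
To illustrate how deceptively precise risk quantification can look, here is the textbook annualised loss expectancy calculation (ALE = single loss expectancy x annualised rate of occurrence) with entirely made-up numbers - the arithmetic is trivial, but both inputs are estimates:

```python
def annualised_loss_expectancy(single_loss, annual_rate):
    """ALE = SLE x ARO: the expected loss per year from one incident type.
    The formula looks rigorous; the inputs are usually rough guesses."""
    return single_loss * annual_rate

# Hypothetical figures: a phishing incident costing ~$50k, ~0.4 occurrences/year.
ale = annualised_loss_expectancy(50_000, 0.4)    # 20000.0

# The uncertainty in the inputs dominates the answer: merely halving or
# doubling the estimated rate swings the 'expected loss' by a factor of four.
low = annualised_loss_expectancy(50_000, 0.2)    # 10000.0
high = annualised_loss_expectancy(50_000, 0.8)   # 40000.0
```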

  2. The controls I am talking about in point 2 are process controls rather than typical information security controls. So, for instance, when some bright spark decides to introduce a new corporate infosec policy on, say, responsible disclosure, what can/should be done to measure the improvement achieved? Once more, that's tough to answer. As I've just said, it's not easy to define what 'improve' even means in this context, and yet without that we stand little chance of measuring or driving it. Reasonably clear objectives are the best starting point when designing or selecting metrics.

    [ASIDE: there's a little learning point here. Shouldn't we at least try to clarify what our infosec policies are intended to achieve, preferably checking that out and adjusting things accordingly? Hmmmmmmm, more thinking required. If I make any headway, I'll pick up this loose end in a future blog piece.]

  3. Re 'consider', 'discover' and 'learn', I'll make the general point that infosec management, as with any form of management, is a rational undertaking. It requires thoughtful strategising, intelligent decision-making, appropriate governance. It revolves around and is crucially dependent upon information, a blend of objective and subjective. There are several substantial information risks associated with 'management' ... and yet they are notably absent from any corporate risk registers that I've seen (or managed or contributed to!) to date - potentially a widespread blind-spot and a serious omission. I mentioned the need for 'assurance' in relation to management information, one of several information integrity controls which are, thankfully, more commonly employed than the risks are identified, begging questions about whether and how such controls were ever justified. Post-incident reviews plus ISO/IEC 27001's ISMS management reviews and internal audits are examples of assurance measures relating to infosec, similar to peer reviews of scientific papers. Confidentiality controls are even more common, while certain availability controls tend to be highly valued after management information systems have failed although I'm sure more could/should be done in advance.

    [ASIDE: another learning point. What are the information risks associated with an ISMS? What mitigating controls are appropriate to protect and allow legitimate exploitation of management information, aside from those required/suggested by '27001?]

  4. Most security improvements are minor and incremental in nature - little tweaks or adjustments to an existing system or suite of controls, with little associated risk (as far as we know: they are seldom even considered). More significant improvements (such as the adoption of new security systems, not least an ISMS) may be more risky, hence those risks really ought to be identified, evaluated and treated in the conventional manner. While 'project risks' associated with system implementation/change projects or initiatives are commonly managed (well, OK, I should say they are selectively tracked, maybe reported, and perhaps mitigated with project management process controls), broader information risks arising from the new/changed systems may not be. For example, regular updates to antivirus systems plus security patching in general typically involve new software being provided to the organisation by the vendors. Some mature, security-conscious organisations run regression and security tests of some sort before deploying such updates but I'm sure most don't because they lack the time, resources, ability and will to do so. Who among us takes care of the associated information risks? Did you identify, evaluate and start treating the risks when the corresponding systems were originally implemented? What about current and future implementations - including all those cloud systems being changed for business reasons other than information security? What about process changes, new suppliers, new employees (especially managers and others with significant responsibilities and powers), new whatevers? How do you ensure that information risks are duly identified, evaluated and treated appropriately across the board, for all substantial changes - not just the IT stuff?

  5. I've already mentioned assurance controls such as post-incident reviews, ISMS management reviews and ISMS internal audits. There are also myriad control-failure-controls in the form of 'security-in-depth', such as multiple overlaid layers of access controls protecting valuable information assets. There are even resilience, recovery and contingency controls: a lot of business continuity falls into this area. However, a substantial problem remains due to the paucity of detective controls. We often don't know about infosec incidents until impacts have grown noticeable, by which time the damage is ongoing. So, knowing that, we should of course redouble our efforts to improve security monitoring and incident detection, while also acknowledging that we are likely to continue discovering incidents-in-progress, hence we cannot afford to give up on our reactive incident responses. 
Quite a lot to process there. Think on.

21 May 2022

Responsible disclosure - another new policy

We have just completed and released another topic-specific information security policy template, covering responsible disclosure (of vulnerabilities, mostly).

The policy encourages people to report any vulnerabilities or other information security issues they discover with the organisation's IT systems, networks, processes and people. Management undertakes to investigate and address reports using a risk-based approach, reducing the time and effort required for spurious or trivial issues, while ensuring that more significant matters are prioritised.

The policy distinguishes authorised from unauthorised security testing, and touches on ethical aspects such as hacking and premature disclosure.

It allows for reports to be made or escalated to Internal Audit, acting as a trustworthy, independent function, competent to undertake investigations dispassionately. This is a relief-valve for potentially sensitive or troublesome reports where the reporter is dubious of receiving fair, prompt treatment through the normal reporting mechanism - for instance, reporting on peers or managers.

It is primarily intended as an internal/corporate security policy applicable to workers ... but can be used as the basis for something to be published on your website, aimed at 'security researchers' and ethical hackers out there. There are notes about this at the end of the template. To be honest, there are plenty of free examples on the web but few if any are policies covering vulnerability disclosure by workers.

All that in just 3 pages, available as an MS Word document for $20 from SecAware.com.

I am working on another 2 new topic-specific policies as and when I get the time. Paradoxically, it takes me longer to prepare succinct policy templates than, say, guidelines or awareness briefings. I have to condense the topic down to its essentials without neglecting anything important. After a fair bit of research and thinking about what those essentials are, the actual drafting is fairly quick, despite the formalities. Preparing new product pages and uploading the templates plus product images then takes a while, especially for policies that relate to several others in the suite - which most do these days as the SecAware policy suite has expanded and matured. As far as I know, SecAware has the broadest coverage of any info/cybersec policy suite on the market.

... Talking of which, I plan to package all the topic-specific policies together as a bulk deal before long. Having written them all, I know the suite is internally consistent in terms of the writing style, formatting, approach, coverage and level. It's also externally consistent in the sense of incorporating good security practices from the ISO27k and other standards.

18 May 2022

Hacking the Microsoft Sculpt keyboard

In its infinite wisdom, Microsoft designed data encryption into the Sculpt wireless keyboard set to protect against wireless eavesdropping and other attacks. The keyboard allegedly* uses AES for symmetric encryption with a secret key burnt into the chips in the keyboard's very low power radio transmitter and the matching USB dongle receiver during manufacture: they are permanently paired together. The matching Sculpt mouse and Sculpt numeric keypad use the same dongle and both are presumably keyed and paired in the same way as the keyboard.

This design is more secure but less convenient than, say, Bluetooth pairing. The risk of hackers intercepting and successfully decoding my keypresses wirelessly is effectively zero. Nice! Unfortunately, the keyboard, keypad and mouse are all utterly dependent on the corresponding USB dongle, creating an availability issue. Being RF-based, the set is also exposed to jamming - another availability threat. Furthermore, I'm still vulnerable to upstream and downstream hacking - upstream meaning someone coercing or fooling me into particular activities such as typing-in specific character sequences (perhaps cribs for cryptanalysis), and downstream including phishers, keyloggers and other malware with access to the decrypted key codes etc.
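
To make the factory-pairing idea concrete, here is a purely conceptual sketch - emphatically not Microsoft's actual scheme, and using a hash-derived keystream instead of AES simply to keep the example self-contained. The point is that only a dongle holding the same burnt-in key can recover the keypresses:

```python
import hashlib

FACTORY_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # burnt in at manufacture

def keystream(key, counter, length):
    """Derive a fresh keystream per message from the shared secret and a counter.
    (Illustrative only - the real keyboard reportedly uses AES.)"""
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()[:length]

def encrypt_keypress(scan_code, counter, key=FACTORY_KEY):
    """Keyboard side: XOR the scan code with the keystream before transmitting."""
    return bytes(b ^ k for b, k in zip(scan_code, keystream(key, counter, len(scan_code))))

def decrypt_keypress(ciphertext, counter, key=FACTORY_KEY):
    """Dongle side: the same key and counter regenerate the same keystream,
    so XOR recovers the plaintext. A dongle without the matching burnt-in
    key recovers only gibberish - hence the permanent pairing."""
    return encrypt_keypress(ciphertext, counter, key)
```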

So yesterday, after many, many happy hours of use, my Sculpt's unreliable Ctrl key and worn-out wrist rest finally got to me. I found another good-as-new Sculpt keyboard in the junkpile, but it was missing its critical USB dongle. The solution was to open up both keyboards and swap the coded transmitter from the old to the new keyboard - a simple 20 minute hardware hack.

In case I ever need to do it again, or for anyone else in the same situation, here are the detailed instructions:

  1. Assemble the tools required: a small cross-head screwdriver; a stainless steel dental pick or small flat-head screwdriver; a plastic spudger or larger flat-head screwdriver (optional); a strong magnet (optional).
  2. Start with the old keyboard. Peel off the 5 rubber feet under the keyboard, revealing 5 small screws. Set the feet aside to reapply later.
  3. Remove all 5 screws. Note: the 3 screws under the wrist rest are slightly longer than the others, so keep them separate.
  4. Carefully ease the wrist rest away from the base. It is a 'snap-fit' piece. I found I could lever it off using my thumbs at the left or right sides, then gradually work around the edge releasing it. You may prefer to use the spudger. It will flex a fair bit but it is surprisingly strong.
  5. Under the wrist rest are another 16 little screws. Remove them all, including the two recessed screws near the hump/gap in the middle of the keyboard. Use the magnet to lift out the screws if that helps.
  6. Separate the base of the keyboard from the key unit by working right around the edge with the spudger, gently levering it apart. Like the wrist rest, it is a snap-fit and stronger than it looks. 
  7. As the two parts separate, gently pull the battery connector cable from the circuit board inside: it has a small white push-fit connector.
  8. Remove the two screws from the circuit board.
  9. Using the dental pick, ease the black plastic strip aside from the long white connector to release the ribbon cable pinched underneath.

  10. Remove the circuit board.
  11. Dismantle the newer keyboard in the same way.
  12. Replace the circuit board from the new keyboard with the circuit board from the old one.
  13. Replace the ribbon cable into the connector, then ease the black plastic strip back into place to hold it firm.
  14. Replace the two screws holding the circuit board.
  15. Put the two parts of the keyboard together, connecting the battery cable to the circuit board as you do. The white power plug is keyed and should only go in one way around as shown here, with the black wire closest to the black IC:

  16. Before proceeding, feel free to check that the new keyboard works with the original USB dongle.
  17. Complete the reassembly by snapping the two parts of the keyboard back together all the way around the edge. 
  18. Reinstall the 16 screws from under the wrist rest.
  19. Snap the wrist rest back into place, checking that it is fully home all the way around.
  20. Replace the 5 screws under the feet: remember those 3 longer ones under the wrist rest.
  21. Replace the feet.  If the glue isn't very sticky, apply fresh glue e.g. UHU clear adhesive, to avoid the keyboard becoming lopsided.
  22. Optionally, recover and save the screws, keycaps, plastic spring units, wrist rest and rubber feet from the old keyboard to repair/replace them on the new keyboard as they wear out (see below). Oh and those silver discs embedded in the black plastic base are strong magnets to hold the keyboard ramp in place: if you choose to recover them for other projects, you will need tools to break apart the dark grey ABS 'engineering plastic', knowing that it can fracture into sharp shards. Take care!
Being some of the most common letters in English, the AERT keys always seem to wear out fastest for me and the space key is noticeably shiny, along with the backspace for some reason. After >4 decades' practice, I can almost touch-type so wearing away the key legends should not be a problem ... except when I'm tired and emotional anyway. More annoying are those few intermittent keys, caused by dirt getting under the keycaps and into the switches beneath.
Also, the extra-wide keys on the Sculpt sometimes go wonky, staying down on one side or the other. Removing any of the keycaps is easy enough: lever up a corner using the dental pick, then lift the cap off using your fingernail. It is a snap-fit. Underneath, you'll find a distinctly unhygienic accumulation of dust, hair and al-desko lunch crumbs: brush them gently away, trying to avoid breathing in any more pathogens.
Here's the disgusting view under one of the well-used Ctrl keys:

A: One of two stainless steel support rods is held in this pair of metal loops, and is clipped to the keycap, keeping it level.
B: A smaller stainless steel rod fits to these loops, and is also clipped to the keycap. In this pic, I have put the dental pick tip through a loop from the opposite side.
C: These are plastic scissor-action 'springs' that also clip to the keycap (see below). They are small and fragile.
D: The key's microswitch is under this central silicone rubber dust cover. Check that the dustcover over the microswitch and any surrounding black rubber pad are intact and not torn. If they are torn, the keyboard is probably stuffed: dust will undoubtedly work its way in to interfere with the switch action, if it hasn't already.

If the 'springs' are in separate pieces or obviously broken, replace them with good ones of the same size from your stash of bits (step 22).
The space bar's keycap, being in two halves and even bigger than the Ctrl key, has longer support rods attached either side:
Check that the plastic spring units (and support bars if applicable) are intact and in place. If these are broken or bent, replace them from your stash from the previous Sculpt keyboard (step 22), replace the metal bars into their hoops, then pop the keycaps into place and hope they work better now. Most of all, hope they work at all! If not, too bad. It is probably time to replace your worn-out keyboard after all.
* I say 'allegedly' because there is no easy way for me to check the claim. Doubtless with a little effort, I could monitor the RF transmissions and perhaps capture and decode the digital bit-stream, but then proving that the system is or is not using AES would be harder, practically impossible for me given my rudimentary knowledge of cryptanalysis.  I suppose I could check the randomness of the encrypted data statistically, looking for patterns that correlate with the letter frequencies. Message headers and structures might be clues. I could try brute force attacks ... or not bother.

15 May 2022

What actually drives information security?

The 'obvious' driver for information security is information risk: valuable yet vulnerable information must be secured/protected against anything that might compromise its confidentiality, integrity or availability, right? Given an infinite array of possible risks and finite resources to address them, information risk analysis and management techniques help us scan the risk landscape for things that stand out - the peaks - and so we play whack-a-mole, attempting to level the field through mitigating controls, remaining constantly on the lookout for erupting peaks and for those hidden behind the ones we can see, or that were otherwise invisible to us.

That's 'obvious' from my perspective as an experienced information risk and security professional, anyway. Your perspective probably differs. You may look at things from a slightly or dramatically different angle - and that's fine. I see these as interesting and stimulating complementary approaches, not alternatives.

Compliance with laws and regulations, for instance, is a strong driver in some cultures and organisations. Quality, efficiency and effectiveness drive others. Some seek to apply good practices, joining the pack. Customer-centric businesses naturally focus on customer satisfaction, brand values, loyalty etc. Startups are concerned to grow rapidly, hence anything that is or might become a barrier is a target. Government organisations, charities, professional services organisations, utilities, schools, assorted industries etc. all have their own focal points and concerns. Profits are clearly important for commercial organisations, but there are other financial measures too - and indeed many other things to measure. Information risk and security is incidental or supportive for most of them, enabling for some and essential for a select few whose business is information security, or the enlightened (as I like to call them).

So, in your own situation, consider the business perspective. What does management want or expect out of information security? What do they want to avoid? These are worthwhile aspects to explore.

For answers, study your organisation's mission statement, values, strategic objectives, its marketing and promotional activities etc. for clues about aspects that involve, build upon or demand information security. 

Then think carefully about all the humdrum routine operational things that also depend on information security. Controls are implicit for financial accounting and IT, for instance. Engineering utterly depends on information integrity, and trust is a major part of almost everything. For a long time, information security has been an endemic part of business, and indeed life (check out the amazing range of sounds, smells and gestures in nature, the mimics and warning colours, the chemical messaging that pre-dates IT and the Internet by, oooh, two or three billion years).

Look towards the top of your corporate risk register for massive hints about what most worries the execs ... and if there isn't a risk register, or if it is inaccurate, incomplete, out-of-date, biased or generally shoddy, that also tells you something about attitudes and expertise in this area - as well as being a clear opportunity for improvement.

Compare departmental budgets and project funding. What is management actively supporting at the moment and lately? How have business priorities changed over the past year or more? Which areas of the business are under the most pressure, and why?

If you can, study the exec management team and board of directors' meeting agendas for the past few months. Better still, ask those who were there about what's hot.

Step way back to contemplate what kinds of corporate information and processes are of most value (what's critical) and what is most vulnerable or under threat. For a healthcare company, guess what: it's probably health-related data. For a media company, maybe topical news stories and data used for historical research. Intellectual property is likely to be high on a list of information assets for creative, innovative organisations. Critical national infrastructure organisations focus on doing whatever it takes to 'keep the lights on'. Professional services companies value their knowledge, expertise, capabilities and client relations ... and so forth.

Most of all, discuss this with your colleagues and managers to validate your thinking, pick up additional pointers and garner their involvement, understanding and support. This is not a solo exercise! Risk, legal/compliance, IT, HR, health and safety and audit functions all have people who care and know about this stuff, so pick their brains to paint a better, more realistic landscape. Managers at different levels have differing outlooks and horizons ... and if they struggle to even understand the questions, let alone offering coherent answers, then you have another improvement opportunity in raising risk and security awareness.

If you're with me so far, here's a free bonus: once you figure out what really drives the organisation (or rather, its management) in the information security realm, you also have the basis to develop an awesomely powerful suite of information risk and security metrics. Measuring 'the stuff that really matters' trumps all other approaches, in my book. If a given metric supports, enables or is required to achieve business objectives, disregarding it could prove career-limiting. At the same time, key metrics showing adverse trends are a clear call-to-action that cannot be ignored. No time to waste!

14 May 2022

Managing professional services engagements

In relation to professional services, management responsibilities are shared between client and provider, except where their interests and concerns diverge. Identifying and exploiting common interests goes beyond the commercial/financial arrangements, involving different levels and types of management:

  • Strategic management: whereas some professional services may be seen as short-term point solutions to specific issues ("temping"), many have longer-term implications such as the prospect of repeat/future business if things work out so well that the engagement is clearly productive and beneficial to both parties. Establishing semi-permanent insourcing and outsourcing arrangements can involve substantial investments and risks with strategic implications, hence senior management should be involved in considering and deciding between various options, designing and instituting the appropriate governance and management arrangements, clarifying responsibilities and accountabilities etc. Organisations usually have several professional services suppliers and/or clients. Aside from managing individual relationships, the portfolio as a whole can be managed, perhaps exploiting synergistic business opportunities (e.g. existing suppliers offering additional professional services, or serving other parts of the client organisation or its business partners).
  • Tactical and operational management: planning, conducting, monitoring and overseeing assignments within a professional services engagement obviously involves collaboration between client and provider, but may also affect and be affected by the remainder of their business activities. A simple example is the provision and direction of the people assigned to assignments, perhaps determining their priorities relative to other work obligations. If either party's management or workforce becomes overloaded or is distracted by other business, the other may need to help out and perhaps take the lead in order to meet agreed objectives - classic teamwork.
  • Commercial management: negotiating and entering into binding contracts or agreements can be a risky process. Getting the best value out of the arrangements includes not just the mechanics of invoicing and settling the bills accurately and on time, but getting the most out of all the associated resources, including the information content. 
  • Relationship management: anyone over the age of ten will surely appreciate that relationships are tough! There are just so many dimensions to this, so much complexity and dynamics. In respect of professional services, there are both organisational and personal relationships to manage, while 'manage' is more about guiding, monitoring and reacting than directing and controlling. Despite the formalities of laws, contracts and policies, relationships seemingly play by their own rules. Part of the challenge in professional services is that clients and providers must collaborate to make the relationship work, blending approaches to reach workable solutions to avoid problems and deal with any issues that crop up in practice. At the same time, clients and providers have other interests, constraints, objectives etc.
  • Risk and opportunity management: whether it's avoiding bad stuff or chasing good outcomes, uncertainty is the crux of this, either way. There's only so much that can be determined and controlled or constrained, inevitably leaving some aspects to chance. A given professional services engagement may turn out to be a roaring success or an abject failure, and there's only so much the parties can do to swing the balance towards the former. Within compliance-driven cultures, the emphasis is typically on enforcement through sanctions such as financial penalties ... whereas reinforcement through bonuses and profits may be at least as effective, and in my experience can be even more valuable and motivational in practice. Finding arrangements that benefit both parties and minimise issues or incidents can take professional services engagements up a gear.
  • Information risk, information security, IT/cybersecurity and privacy management: information can be the most valuable yet vulnerable asset in a typical professional services engagement - not just the information directly involved in providing the services themselves but also other/peripheral information that is shared or disclosed incidentally between the parties. For example, a cloud service provider may learn commercially-sensitive details about a client's strategic business interests (or vice versa) during discussions about current and future services. The same friendly, close, trusting relations that typically develop between fellow employees within a business department can develop between workers from separate organisations, especially if the professional service necessarily entails a high degree of trust (e.g. legal services) ... with the potential for individual/personal interests to supplant business/organisational interests. Identifying, evaluating and addressing the risks is the nub of the professional services security guideline.
  • People management: motivating, monitoring and mentoring the people involved in a professional services engagement is similar but a little different to regular management, for two key reasons. First, of course, the people are employed by distinct organisations (or departments, business units etc.), with differing business objectives, policies, concerns etc. If, say, a client has a problem with a particular consultant's competence or suitability for an assignment, the consultant's employer should be informed about it and should probably be actively involved in resolving it. Secondly, professionals like me are by nature strong-willed, self-assured, egocentric individuals who can be tricky to 'manage' in the traditional sense. The confidence arising from our specialist knowledge and expertise can lead to, or come across as, arrogance and stubbornness. Self-awareness and social skills can be challenging for those of us who focus too heavily on driving towards objectives.
  • Performance, quality, competence and capability management: professional services clearly depend on the providers’ competence, capabilities and suitability to provide high quality services. More subtly, clients also play a key part in professional services, for example correctly interpreting advice received and acting accordingly. Simply specifying the services required can be difficult for clients that lack the expertise and knowledge, which is often why they need those very services!
  • Change management: whereas changes are inevitable, coping with them is not. Some professional services are only effective if they achieve worthwhile changes in the client organisation, or at least prevent unwanted changes. If engagements are not effective, that changes the client-provider relationship. Conversely, effective engagements may lead to unanticipated changes, perhaps opening up further opportunities, again changing the relationship. Changes of provider and client personnel can be problematic due to the individual knowledge, competencies, motivations etc., but may also be beneficial if things weren't going so well or could be better. This is yet another area where management may be reactive, neutral or proactive, ideally adjusting to circumstances. Identifying, evaluating and responding to changes, or the potential for change, is conceptually similar to - and indeed part of - information risk management.
  • Incident management: various incidents may be caused or not prevented by a professional services provider or client, or may arise from third parties or natural events, or may involve a combination of factors. Once again, identifying, evaluating and responding to incidents is an integral part of information risk management, and 'management' is tricky. Regardless of the blame and impact costs, such incidents can harm the relationships, particularly if mis-managed. Partners in healthy, productive relationships are more likely to work things out than if there are pre-existing relationship issues, hence there is a resilience aspect to this, and more to address than the mere mechanics of incident notification and resolution.
  • Compliance and non-compliance management: two distinct approaches, with two distinct sets of compliance imperatives. Professional services providers and clients must both comply with applicable laws and regulations, plus the obligations they have accepted in the contract or agreement. There are also implicit drivers, such as being trustworthy, ethical, competent and professional. Achieving and maintaining compliance involves informing and motivating the people involved, a proactive and positive style of management. Managing non-compliance, in contrast, involves putting in place the mechanisms to detect and deal with non-compliance - a negative, reactive approach. Legal action following non-compliance is generally considered a costly last resort, implying additional emphasis on proactive compliance management. Compliance management is a preventive control, worth bearing in mind for relationship management meetings, reporting etc.
  • Ethics and ethical management: behaving ethically, and being seen to do so, supports trustworthiness and engenders trust. Managers have a leadership role to play here, particularly by demonstrating ethical actions and decisions. It's all very well having corporate policies and values on ethics: actions speak louder than words. Examples: communicating early, openly and honestly; admitting fault if appropriate, and proactively 'putting things right'; under-promising and over-delivering; forgoing personal gain to maximise business value for the engagement as a whole; expecting/demanding high ethical standards of others, and perhaps rewarding them accordingly; going 'above and beyond' expectations to protect and enhance the value generation.
  • Value management: value is important for both professional services providers and their clients, obviously, but there's more to it. The perceived current and prospective value of a professional services engagement affects the organisations' and the individuals' willingness to invest in it e.g. by engaging and actively participating in the co-creation aspects, over the long term. There is a positive feedback loop here: the more valuable an engagement is or appears to be, the more value can potentially be generated through it - and vice versa in dysfunctional relationships.
"Value is created through co-creation. Both the service provider and the customer must benefit from the service if there is to be a sustainable relationship. Because of these different valuation perspectives, value is by definition multi-dimensional. Another characteristic is that value arises in the interaction and that value can only be determined afterwards. Value can manifest itself in different ways, from the use value for the customer (value in use), to social values (eg image of customer and/or provider), environmental values (eg sustainability, the ecological footprint) and relationship values (the meaning of customer and provider for each other). Many values translate into financial benefits, both for the customer and for the provider." [USM]
In respect of information risks and security, value management is important on both the upside and the downside. On the upside, a highly valuable engagement enables those involved to invest in risk management activities, such as effective controls. On the downside, the potential for loss of value arising from incidents also encourages the same investment - a rare no-lose situation! In short, provider and client are both seeking to generate real value from professional services, making it a common goal, a unifying factor, a rallying cry.

The above activities are layered on top of the formal management of professional services engagements and assignments, such as entering into contractual commitments, plus invoicing and settlement. They require soft skills and collaborative approaches, making professional services engagements more human-focused than, say, the sale and purchase of goods. There are cultural aspects too - but that's enough for now.

13 May 2022

Professional services infosec policy template


We have just completed and released a brand new information security policy template on professional services.

The policy is generic, pragmatic and yet succinct at just over 2 pages.

Professional services engagements, and hence the associated information risks, are so diverse that it made no sense to specify particular infosec controls, except a few examples. Instead, the policy requires management to nominate Information Owners for each professional services engagement, and they, in turn, are required to identify, evaluate and treat the information risks.

This is another shining example of the value of the 'information ownership' concept. Although they are encouraged to delegate responsibilities to, or at least take advice from, relevant, competent experts (e.g. in Information Risk and Security, Legal/Compliance, HR, IT, Procurement), Information Owners are held personally accountable for the protection and legitimate exploitation of 'their' information.

If Information Owners neglect to ensure that the information risks are properly treated, leading to unacceptable incidents, they may be held to account and sanctioned in some way - a personal impact of an information risk. Hopefully Information Owners will bear this in mind when seeking the advice of those relevant, competent experts about professional services engagements, when deciding how to treat the risks, and when allocating resources to the risk management and control activities, technologies, procedures etc. At least, they should do so if the policy is properly implemented with appropriate governance, management oversight, compliance monitoring and assurance ... and that once again emphasises that corporate policies form a mesh. In almost any situation, several policies may be relevant, which is fine so long as they are consistent, well-written, understood and enforced. I will pick up on that point shortly as we are about to release a couple of 'toolkits' - suites of policy templates and other materials. Watch this space!

The policy template is available here, along with the professional services security guideline and checklists.

11 May 2022

AA privacy breach --> policy update?

According to a Radio New Zealand news report today:

"Hackers have taken names, addresses, contact details and expired credit card numbers from the AA Traveller website used between 2003 and 2018. AA travel and tourism general manager Greg Leighton said the data was taken in August last year and AA Traveller found out in March. He said a lot of the data was not needed anymore, so it should have been deleted, and the breach "could have been prevented"."

The disclosure prompted the acting NZ Privacy Commissioner to opine that companies 'need a review policy':

"Acting Privacy Commisioner Liz Macpherson told Midday Report that if data was not needed it should be deleted ... Companies needed a review policy in place to determine if the data stored was neccessary, or could be deleted, Macpherson said."

So I've looked through our SecAware information security policies to see whether we have it covered already, and sure enough we do - well, sort-of. Our privacy compliance policy template says, in part:

"IT systems, cloud services and business processes must comply fully with applicable privacy laws throughout the entire development lifecycle from initial specification though testing, release, operation, management and change, to final retirement.  For example, genuine (as opposed to synthetic) personal information used during the development process (e.g. for testing) must be secured just as strongly as in production, and securely erased when no longer required."

The final clause in that paragraph refers to 'secure erasure' without specifying what that really means, and 'when no longer required' is just as vague as determining whether the data remains 'necessary'. That said, the remainder of the paragraph, and in fact the rest of the policy template, covers other relevant and equally important issues - including compliance with applicable privacy laws and regulations - such as GDPR.

Digging deeper, article 28 of GDPR requires that (in part):

"[the data processor] at the choice of the controller, deletes or returns all the personal data to the controller after the end of the provision of services relating to processing, and deletes existing copies unless Union or Member State law requires storage of the personal data".

Article 28 doesn't appear to say what the controller must do with any personal data returned by the processor [although I am NOT a lawyer!]. GDPR recital 39, however, says (in part):

"The personal data should be adequate, relevant and limited to what is necessary for the purposes for which they are processed. This requires, in particular, ensuring that the period for which the personal data are stored is limited to a strict minimum."

So, if GDPR applies, there appears to be a legal obligation to restrict the storage period of personal data to a 'strict minimum' ... and compliance with GDPR is covered by our privacy compliance policy template.

That said, I'm wondering now whether to update the SecAware policy statement above, expanding on the bold final phrase to give more explicit direction.

One approach might be to associate expiry dates with all personal data records, using periodic automated system functions or manual procedures to erase expired personal data. The expiry date might be pre-loaded when the data are originally loaded and updated as appropriate (e.g. if the service is extended or the principal re-confirms their permission to continue storing and using their personal data), and further controls might be helpful (e.g. validation checks for personal data records without valid expiry dates within a defined, reasonable period; additional pre-deletion checks that personal data that appear to have expired are truly redundant; plus various controls associated with 'secure deletion').
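As a purely hypothetical sketch of that expiry-date approach (the record structure, field names and grace period are my illustrative assumptions, not part of any SecAware template or GDPR requirement), a periodic purge routine might look something like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class PersonalDataRecord:
    subject_id: str
    data: dict
    expiry: Optional[date]  # None flags a record lacking a valid expiry date

def purge_expired(records, today=None, grace=timedelta(days=30)):
    """Split records into (retained, flagged_for_review, purged).

    Records past expiry plus a grace period become candidates for secure
    deletion; records with no valid expiry date are flagged for manual
    review rather than silently retained - the validation check suggested
    above.
    """
    today = today or date.today()
    retained, flagged, purged = [], [], []
    for rec in records:
        if rec.expiry is None:
            flagged.append(rec)   # no valid expiry: review, don't guess
        elif today > rec.expiry + grace:
            purged.append(rec)    # candidate for secure deletion
        else:
            retained.append(rec)
    return retained, flagged, purged
```

The pre-deletion and 'secure deletion' controls mentioned above would sit downstream of the `purged` list; the point of the sketch is simply that expiry is checked systematically rather than ad hoc.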

There are doubtless other approaches, too.

I'm not convinced, however, that it is worth elaborating on the policy in such detail, particularly as (a) the controls would be quite costly, and (b) the practical implementation details are context-dependent whereas all our policy templates are deliberately generic. I think we'll leave this to the discretion of our valued customers and their legal experts!

How many metrics?

While perusing yet another promotional, commercially-sponsored survey today, something caught my beady eye. According to the report, "On average, organizations track four to five metrics".  

Four to five [cybersecurity] metrics?!!  Really?  

Oh boy.

Given the importance, complexities and breadth of cybersecurity, how on Earth can anyone sensibly manage it with just four to five metrics? It beggars belief, particularly as the report indicates that three quarters of the 1,200 surveyed companies had at least $1 billion in revenue, and more than half of them had at least 10,000 employees. With a total cybersecurity expenditure of $125 billion (around 80% of the total global estimate), these were large corporations, not tiddlers.

The report indicates the corresponding survey question was "Q30. Which of the following cybersecurity metrics does your organization track, and which metrics are the most important?". Well OK, that's two questions in one, and 'the following cybersecurity metrics' are not stated.

Having been quietly contemplating that one remarkable, counter-intuitive finding for about an hour, I've thought up a bunch of potential explanations so far:

  1. The four to five cybersecurity metrics are just those considered 'key' by the CISOs and other senior people surveyed.
  2. The four to five are just the respondents' choices from the 16 metrics presumably offered in the question (we aren't told what metrics were offered in the question, but there are 16 listed in the report).
  3. Cybersecurity is not being managed sensibly.
  4. Cybersecurity is not being managed.
  5. Cybersecurity is not what I think it is - a neologism for IT security or more specifically Internet security protecting against deliberate, malicious attacks by third parties.
  6. CISOs and the like haven't got a clue what they are doing.
  7. Most CISOs and the like chose not to answer the question (of the 1,200 companies surveyed, we aren't told how many respondents answered this or indeed any other question: perhaps they were getting bored by question 30 of an unknown total).
  8. CISOs and the like simply lied, for some reason, or their responses were inaccurately/ineptly recorded.
  9. The word 'track' in the question strongly implies that the four to five metrics are measured and reported regularly, showing trends over time. Other metrics that are not 'tracked' in this way were not noted.
  10. The survey was ineptly designed, conducted, analysed and/or reported.
  11. The survey was non-scientific, biased towards the interests of the commercial sponsors (who, presumably, offer 'solutions' measured by the chosen metrics ...).
  12. The survey company is blatantly circulating misinformation, designed to mislead.
  13. I am misinterpreting the phrase. Perhaps 'On average' or 'metrics' mean something other than what I understand. 
  14. Perhaps 'four to five' is a transcription error: maybe the count was forty-five.
  15. I'm totally mistaken: it is possible to manage cybersecurity by tracking just four to five metrics. The finding is valid. I need to readjust my head.
  16. I'm seriously over-thinking this, putting far too much emphasis on those eight words taken out of context.
Of that list, while I'm happy to discount the patently ridiculous possibilities, I find it hard to choose between the remainder. I'm drawn inexorably back to something I have complained about previously here on the blog: I suspect that the report is merely another marketing exercise, not a properly designed and conducted scientific study. I find it lacks credibility and integrity, is untrustworthy, and hence is not worth any more of my time, or indeed yours - so I refuse to provide a link to the source.
404  Move along, nothing to see here.

Data masking and redaction policy


Last evening I completed and published another SecAware infosec policy template addressing ISO/IEC 27002:2022 clause 8.11 "Data masking":

"Data masking should be used in accordance with the organization’s topic-specific policy on access control and other related topic-specific, and business requirements, taking applicable legislation into consideration."

The techniques for masking or redacting highly sensitive information from electronic and physical documents may appear quite straightforward. However, experience tells us the controls are error-prone and fragile: they generally fail-insecure, meaning that sensitive information is liable to be disclosed inappropriately. That, in turn, often leads to embarrassing and costly incidents with the possibility of prosecution and penalties for the organisation at fault, along with reputational damage and brand devaluation.

The policy therefore takes a risk-based approach, outlining a range of masking and redaction controls but recommending advice from competent specialists, particularly if the risks are significant.
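To illustrate the fail-insecure point above with a hypothetical sketch (the patterns and function names are my own assumptions, not drawn from the policy template): deny-list redaction discloses anything its patterns fail to match, whereas allow-list masking only releases fields explicitly approved, so it fails secure:

```python
import re

# Deny-list redaction: pattern-matches known sensitive formats.
# Anything the patterns miss is disclosed - it fails insecure.
SENSITIVE = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-number-like
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),              # email-address-like
]

def redact_denylist(text: str) -> str:
    for pattern in SENSITIVE:
        text = pattern.sub("[REDACTED]", text)
    return text

# Allow-list masking: only fields explicitly approved for release
# survive; everything else is masked - it fails secure.
def mask_allowlist(record: dict, allowed: set) -> dict:
    return {k: (v if k in allowed else "****") for k, v in record.items()}
```

Even the allow-list approach is no panacea (approved fields may still leak sensitive content in free text), which is one reason the policy recommends specialist advice where the risks are significant.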

The $20 policy template is available here.

Being a brand new policy, it hasn't yet had the benefit of the regular reviews and updates that our more mature policies enjoy ... so, if you spot issues or improvement opportunities, please get in touch.

As usual, I have masked/redacted the remainder of the policy for this blog and on SecAware.com by making an image of just the first half page or so, about one eighth of the document by size but closer to one quarter of the policy's information value. So I'm giving you about $5's worth of information, maybe $4 since the extract is just an image rather than an editable document. On that basis, similar partial images of the 80-odd security policy templates offered through SecAware.com are worth around $320 in total. It's an investment, though, a way to demonstrate the breadth, quality, style and utility of our products and so convince potential buyers like you to invest in them.

10 May 2022

Threat intelligence policy


I finally found the time today to complete and publish an information security policy template on threat intelligence. 

The policy supports the new control in ISO/IEC 27002:2022 clause 5.7: 

"Information relating to information security threats should be collected and analysed to produce threat intelligence."

The SecAware policy template goes a little further: rather than merely collecting and analysing threat intelligence, the organisation should ideally respond to threats - for example, avoiding or mitigating them. That, in turn, emphasises the value of 'actionable intelligence', in the same way that 'actionable security metrics' are worth more than 'coffee table'/'nice to know' metrics that are of no practical use. The point is that information quality is more important than its volume. This is an information integrity issue, as much as information availability.

The policy also mentions 'current and emerging threats'. This is a very tricky area because novel threats are generally obscure and often deliberately concealed in order to catch out the unwary. Maintaining vigilance for the early signs of new threat actors and attack methods is something that distinguishes competent, switched-on security analysts from, say, journalists.

The policy template costs just $20 from www.SecAware.com. I'll be slaving away on other new policies this week, plugging a few remaining gaps in our policy suite - and I'll probably blog about that in due course.