Welcome to the SecAware blog

I spy with my beady eye ...

26 Jul 2022

Half-a-dozen learning points from a '27001 certification announcement

This morning I bumped into a marketing/promotional piece announcing PageProof’s certified "compliance" (conformity!) with "ISO 27001" (ISO/IEC 27001!). Naturally, they take the opportunity to mention that information security is an integral part of their products. The promo contrasts SOC2 against '27001 certification, explaining why they chose ‘27001 to gain some specific advantages such as GDPR compliance - and fair enough. In the US, compliance is A Big Thing. I get that.

It occurs to me, though, that there are other, broader advantages to ‘27001 which the promo could also have mentioned, further valuable benefits of their newly-certified ISMS.

I spot at least six general learning points here for organisations currently implementing ISO/IEC 27001:

  1. Elaborating on the broad business benefits of ‘27001 can be a creative and valuable activity in its own right. A well-designed and effective ISMS can achieve way more than protecting the confidentiality, integrity and availability of data, or satisfying GDPR and other compliance obligations. Although PageProof hints at some of these benefits, it’s unclear whether they truly appreciate the ISMS’s full potential or simply chose not to mention it in this promo.

  2. The eventual marketing/promotional value of ‘27001 certification is worth thinking-through. From the audience's perspective i.e. the organisation’s third party stakeholders (particularly customers and prospects, plus partners, owners, regulators and other authorities), what worthwhile differences can they expect as a result of the certification? What are the main points that will truly resonate? How will successful certification be promoted, and how will it change the organisation’s ongoing marketing, promotional and advertising activities - plus its operations (in order to satisfy if not exceed the market's expectations)? Rhetorical questions such as these may be raised and discussed at any point, ideally starting early-on in the ISMS design and implementation project, and gradually refined in the run-up to certification.

  3. Likewise, what about the internal corporate stakeholders - the managers, staff, contractors, consultants, interns etc.: how will the ISMS implementation project affect the workforce? What changes can they expect? What practical differences will the ISMS make? How can they get involved and help the process along (or at least avoid inadvertently causing problems)? What are the key messages to be put across through internal communications at all stages of the project?

  4. Combining points 1-3 can help clarify the objectives of the ISMS - not just the detailed information risk and security objectives but more generally the business objectives, the rationale for doing all this stuff. What are the anticipated payoffs? Which of those benefits would be in, say, the top five?

  5. Those clear objectives, in turn, suggest some obvious metrics to drive their achievement. For example, if 'reducing compliance costs' is one of five key objectives of '27001 certification, there are various ways to measure and control those costs: what would be the ultimate compliance-cost-related metric to track, report and optimise? Congratulations: you've just identified an important security metric - a Key Performance Indicator if you prefer!

  6. Certification marks the end of ISMS implementation and the start of routine operations. Once operational, certified and announced to the world, the ISMS should continue adding value, the very reason for its existence. Will any of the business objectives, and hence the metrics and KPIs, change markedly at that point, or will they evolve gradually through business-as-usual? Are there any implications worth taking into account in the ISMS design - for instance, ensuring that the 'security dashboard' can be updated to show new metrics or present KPIs differently? What about 'instrumenting' various security processes and systems now, generating the raw data for historical analysis even though that may not be needed until later on?
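The KPI idea in point 5 can be sketched in a few lines of code. To be clear, the quarterly figures, the target and the 'compliance cost per quarter' framing below are all invented purely for illustration - real metrics would come out of the objectives exercise described above:

```python
# Hypothetical sketch of a compliance-cost KPI (point 5).
# All figures and the target are invented examples.

def compliance_cost_kpi(costs_by_quarter, target):
    """Summarise the latest quarterly compliance cost against a target."""
    latest = costs_by_quarter[-1]
    trend = latest - costs_by_quarter[0]      # negative = costs falling
    return {"latest": latest, "trend": trend, "on_target": latest <= target}

kpi = compliance_cost_kpi([120_000, 110_000, 95_000, 90_000], target=100_000)
print(kpi)  # latest 90000, trend -30000, on_target True
```

Even a toy like this forces the question that matters: which number would management actually want on the dashboard?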

All in all, six stimulating points drawn from a quick read of a promo. Thanks for the inspiration, PageProof, and congrats on your certification.

Oh and there's a free bonus. Point 7: we can all learn stuff from those who go before us. Find out and think about other organisations' approach to information risk, security, privacy, compliance, governance, metrics, incidents, marketing, whatever. Would their strategies be applicable to our organisations? What would we do differently? Could we do even better?


POSTSCRIPT: if your organisation is so spooky or shy that it wouldn't even consider a press release or displaying its shiny new certificates on a website, you might instead think forward to how certification would be announced internally, confirmed to management at least. The things that are important enough to state then are [some of] the key objectives for the ISMS, worth bearing in mind now.

If it hadn't even occurred to you to promote your eventual certification, I'd have to wonder about your management's understanding and commitment to this initiative, and question your motivation for getting into it. Why bother? What will it achieve, for the business - seriously, what? Isn't that worth celebrating, once achieved?

25 Jul 2022

Resilience is ...


... depending on others and being there for them when they need us most

... the rod bending alarmingly ... while landing a whopper

... an oak tree growing roots against the prevailing wind

... taking the punches, reeling but not out for the count

... demonstrating, time after time, personal integrity

... willingness to seize opportunities, taking chances

... coping with social distancing, masks and all that

... accumulating reserves for the bad times ahead

... the bloody-minded determination to press on

... disregarding trivia, focusing on what matters

... a society for whom this piece resonates

... deep resolve founded on inner strength

... knowing it'll work out alright in the end

... a word, a rich concept, a way of life

... knowing when and how to concede

... more than 'putting on a brave face'

... a prerequisite for ultimate success

... facing up to adversity: bring it on

... self-belief and trust in the team

... taking the knocks and learning

... communities pulling together

... being prepared for the worst

... standing out from the crowd

... being fit enough to survive

... pressing ahead, regardless

... standing up to be counted

... disproving the naysayers

... finding creative solutions

... having fallback options

... keeping on keeping on

... wiping away the tears

... always bouncing back

... built layer-upon-layer

... thriving on adversity

... having what it takes

... steadfast insistence

... picking your fights

... sheer doggedness

... an admirable trait

... justified optimism

... retaining options

... quiet confidence

... daring to differ

... plugging away

... core strength

... beyond hope

... getting even

... rerouting ...

... suppleness

... true grit

... faith

... us

...


I'm blogging about other infosec terms weekly.

24 Jul 2022

Risk management trumps checklist security

While arguably better than nothing at all, an unstructured approach to the management of information security results in organisations adopting a jumble, a mixed bag of controls with no clear focus or priorities and – often – glaring holes in the arrangements. The lack of structure indicates the absence of genuine management understanding, commitment and support that is necessary to give information risk and security due attention - and sufficient resourcing - throughout the business. 
 
It's hard to imagine anyone considering such a crude, messy approach adequate, even those who coyly admit to using it!  I'm not even sure it qualifies as 'an approach'.
 
Anyway, the next rung up the ladder sees the adoption of a checklist approach: essentially, someone says 'Just adopt these N controls and you'll be secure'! It may be true that some information security controls are more-or-less universal, so any organisation that does not have them all might be missing out. Maybe it is a step up from the previous approach, and yet there are significant issues with checklists, which tend to be:
  • Basic, severely over-simplifying a complex and dynamic problem, ignoring numerous aspects while focusing attention on the N (meaning a handful);
  • Generic but not necessarily as universal as implied, given the wide diversity of organisations out there in terms of size, maturity, industry, culture, history, business objectives, resources and so on;
  • The 'lowest common denominator', setting a (very) low bar;
  • Sequenced linearly in a way that implies priorities for implementation and generally disregards dependencies and linkages between items on the list, yet another over-simplification; 
  • Just someone's arbitrary selection, generally without any sound basis for selecting the listed controls and not others, other than the originator's alleged expertise;
  • Tricky to interpret and apply in a given situation, given the immaturity of the organisations attracted to checklist approaches;
  • Not sufficient in most cases, and often biased towards particular types of control e.g. 'cyber' or 'compliance';
  • Unrealistic in the presumption that simply because someone recommends the N controls, managers will therefore naively accept that they are both required and valuable;
  • Belittling, clearly implying that they are deliberately dumbed-down because the intended audience is, well, dumb.
If N controls are inadequate or even barely sufficient, it is tempting to expand the list. N-plus control checklists suffer similar problems, plus:
  • The more controls that are added to the list, the less likely they are to be truly universal;
  • The controls tend to be grouped and structured in some fashion ... which is another ill-defined process involving arbitrary criteria and of dubious value;
  • The longer the list, the less attractive it becomes to those seeking easy solutions. Simply reading longer lists takes time and becomes tedious, while implementing all the controls appears increasingly arduous - especially if the recommended controls are not explained and justified properly (which would make the list even longer anyway!);
  • Ultimately, a long list is no better than a bit of Googling, consideration and discussion;
  • There is a risk that the very readers who would benefit to some extent from the approach are overwhelmed by it all and put off entirely.
At face value, ISO/IEC 27001 is an N-plus checklist: after all, Annex A is an arbitrarily structured list of about 100 information security controls recommended by a committee of experts. There's more to it than that, however.

For one thing, Annex A is merely a succinct summary of the information security controls in ISO/IEC 27002. It's a simple list, yes, but one backed by roughly a page of detailed explanation behind each control, particularly in the latest, fully updated edition published a few months back.

Furthermore, the main body of ISO/IEC 27001 avoids the checklist mentality by defining the governance arrangements for managing an organisation’s information risks. It is a systematic, iterative, risk-driven approach, simple and easy to grasp. In a nutshell, there are just four phases:


An ISO27k ISMS enables and supports the analysis, prioritisation and changes required for any organisation to design and implement a sound, systematic approach to information risk and security management that is tailored and appropriate to its specific situation - a substantial enhancement over any crude checklist. The selection and implementation of information security controls is context-dependent in ISO27k, particularly as the standard allows organisations to choose controls from any source - including those crude checklists, and Google, and ... whatever. Any list of controls is less important than the process for identifying, evaluating and treating risks.
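That identify-evaluate-treat loop can be caricatured in code. The scoring scale, the risk register entries and the 'appetite' threshold below are invented examples for illustration, not anything mandated by the standard:

```python
# Illustrative sketch of the risk-driven loop at the heart of an ISO27k ISMS:
# identify risks, evaluate them, and treat any that exceed the risk appetite.
# Scales, scores and the appetite threshold are invented examples.

def evaluate(risks):
    """Score each risk (simple qualitative likelihood x impact) and rank them."""
    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]
    return sorted(risks, key=lambda r: r["score"], reverse=True)

def treat(risks, appetite):
    # Risks scoring above appetite need treatment (mitigate, transfer, avoid);
    # the remainder are explicitly accepted - a management decision either way.
    return [r for r in risks if r["score"] > appetite]

register = [
    {"name": "laptop theft", "likelihood": 3, "impact": 4},
    {"name": "ransomware",   "likelihood": 2, "impact": 5},
    {"name": "tailgating",   "likelihood": 2, "impact": 2},
]
needs_treatment = treat(evaluate(register), appetite=6)
print([r["name"] for r in needs_treatment])  # ['laptop theft', 'ransomware']
```

The point of the sketch is the process, not the numbers: whatever controls emerge, they were selected because of identified, evaluated risks rather than someone's list.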
 
The risk management process at the heart of ISO/IEC 27001 is universally applicable. It's not even limited to information risks, information security, privacy and so forth.
 
Organisations may wish to go beyond the fairly basic risk and security management arrangements mandated in the main body of ISO/IEC 27001 … but that is not necessarily a good idea, especially at the start of their journey to maturity. ‘Keep it simple’ makes more sense as a strategy, especially in a start-up situation i.e. set out to implement just the basics, get them working properly and plan to build gradually from there, over time, making incremental improvements where justified or necessary. The ISMS itself embodies the mechanisms to capture requirements and push through improvements, systematically (see the K in the RISK mnemonic).

A reasonably mature organisation is likely to have a suite of reasonably mature management and governance arrangements already operating. A new ISO27k ISMS will be slotting in to and making use of them, requiring changes in various aspects to accommodate the structured, systematic, risk-driven ISO27k approach. The ISMS itself is likely to involve consolidating existing practices around information and cyber security management, albeit often mostly within IT but hopefully with strong lateral links to related functions such as Risk, Compliance, Procurement, HR, Operations and Facilities, perhaps even upwards to senior/Executive Management and the Board. Even here, the keep-it-simple ISMS strategy has value in terms of focusing on the essential/core processes and activities around information risk management. Anything above and beyond the core should probably be a lesser priority during the initial ISMS implementation phase, unless there are good business reasons to press ahead more urgently in other areas (e.g. compliance obligations) – in which case those pressures can help drive through the ISMS implementation.

22 Jul 2022

Security in software development


Prompted by some valuable customer feedback earlier this week, I've been thinking about how best to update the SecAware policy template on software/systems development. The customer is apparently seeking guidance on integrating infosec into the development process, which begs the question "Which development process?". These days, we're spoilt for choice with quite a variety of methods and approaches. 

Reducing the problem to its fundamentals, there is a desire to end up with software/systems that are 'adequately secure', meaning no unacceptable information risks remain. That implies having systematically identified and evaluated the information risks at some earlier point, and treated them appropriately - but how?

The traditional waterfall development method works sequentially from business analysis and requirements definition, through design and development, to testing and release - often many months later. Systems security ought to be an integral part of the requirements up-front, and I appreciate from experience just how hard it is to retro-fit security into a waterfall project that has been running for more than a few days or weeks without security involvement.

A significant issue with waterfall is that things can change substantially in the course of development: the organisation hopefully ends up with the system it originally planned, but that may no longer be the system it needs. If the planned security controls turn out to be inadequate in practice, too bad: the next release or version may be months or years away, if ever (assuming the same waterfall approach is used for maintenance, which is not necessarily so*). The quality of the security specification (which drives the security design, development and testing) depends on the identification and evaluation of information risks in advance, predicting threats, vulnerabilities and impacts likely to be of concern at the point of delivery some time hence.

In contrast, lean, agile or rapid application development methods cycle through smaller iterations more quickly, presenting more opportunities to update security ... but also more chances to break security due to the hectic pace of change. A key problem is to keep everyone focused on security throughout the process, ensuring that whatever else is going on, sufficient attention is paid to the security aspects. Rapid decision-making is part of the challenge here. It's not just the method that needs to be agile!

DevOps and scrum approaches use feedback from users on each mini-release to inform the ongoing development. Hopefully security is part of that feedback loop so that it improves incrementally at the same time, but 'hopefully' is a massive clue: if users and managers are not sufficiently security-aware to push for improvements or resist degradation, and if the development team is busy on other aspects, security can just as readily degrade incrementally as other changes take priority. 

Another issue is that security testing has to suit short process cycles, with a tendency towards quick/superficial tests and less opportunity for the thorough, in-depth testing needed to dig out troublesome little security issues lurking deep within. Personally, I would be very uncomfortable developing a cryptographic application too quickly, or for that matter anything business- or safety-critical.

So, there are some common factors there, regardless of the method:

  • The chosen development methods have risk and security implications;
  • Various dynamics are challenging, on top of the usual security concerns over complexity, and changes present both risks and opportunities;
  • Security is just one of several competing priorities, hence there is a need for sufficient, suitable resources to keep it moving along at the right pace;
  • Progress is critically reliant on the security awareness and capabilities of those involved i.e. the users, designers, developers, testers, project/team leaders and managers.
* Just one of those dynamics is that the processes may change in the course of development: a system initially developed and released through a classical waterfall project may be maintained by something resembling the rapid, iterative approaches. The cycle speed for iterations is likely to slow down as the system matures or resources are tight, or conversely speed up to react to an increased need for change from the business or technology. 
 
So, overall, it makes sense for a software/system development security policy to cover:
  • An engineering mindset, prioritising the work according to the organisation's information risks ('risk-first development'?), with a willingness to settle for 'adequate' (meaning fit-for-purpose) security rather than striving in vain for perfection;
  • Flexibility of approach - supporting/enabling whatever processes are in use at the time, integrating security with other aspects and collaborating with colleagues where possible;
  • Sufficient resourcing for the information risk and security tasks, justified according to their anticipated value (with implications for metrics, monitoring and reporting);
  • Monitoring and dynamically responding to changes, being driven by or driving priorities according to circumstances, seizing opportunities to improve security and resisting retrograde moves in order to ratchet-up security towards adequacy. 
The policy could get into general areas such as accountability (e.g. various process checkpoints with management authorisation/approval), and delve deeper into security architecture (to reduce design flaws), secure coding (to reduce bugs) and security testing (to find the remaining flaws and bugs), plus security functions (such as backups and user admin) ... but rather than bloat the SecAware policy template, we choose to leave the details to other policies and procedures. Customers are welcome to modify/supplement the template as they wish. 
 
Whether that suits the market remains to be seen. What do you think? Do your security policies cover software/system development? If so, do they at least address the issues I've noted? If not, $20 is a wise investment ...

21 Jul 2022

ISO management systems assurance

In the context of the ISO management systems standards, the internal audit process and the accredited certification system as a whole are assurance controls primarily intended to confirm that organisations' management systems conform to the explicit requirements formally expressed in the respective ISO standards.

A conformant management system, in turn, is expected to manage (design, direct, control, monitor, maintain …) something: for ISO/IEC 27001, that 'something-being-managed' is the suite of information security controls and other means of addressing the organisation’s information risks (called 'information security risks' or 'cybersecurity risks' in the standards). For ISO 9001, it is the quality assurance activities designed to ensure that the organisation's products (goods and services) are fit for purpose. For ISO 14001, it is the controls and activities necessary to minimise environmental damage.

My point is that the somethings-being-managed are conceptually distinct from the 'management systems' through which managers exert their direction and control. This is a fundamental part of the ISO management systems approach, allowing ISO to specify systems required to manage a wide variety of somethings in a similar way - a governance approach in fact.

Management system certification auditors, whose sole purpose is to audit clients' management systems' conformity with the requirements expressed in the standards, have only a passing interest in those somethings-being-managed. Essentially, they check that the somethings are indeed being actively managed through the management system, thereby proving that the management system is operational and not just a nice neat set of policies and procedures on paper.

Management system internal auditors, in contrast, may be given a wider brief by management which may include probing further into the somethings being managed ... but that’s down to management’s decision about the scope and purpose of the internal audits, not a formal requirement of the standards. Management may just as easily decide to have the internal auditors stick to the management system standard conformity aspects, just the same as the certification auditors.

Likewise with management reviews of the management systems: the ISO standards stop well short of specifying all the things management might conceivably want to be reviewed. Reviewing conformity with the respective ISO management systems standards is just one of several possible review objectives, alongside all the things hopefully being measured through the management system metrics.

18 Jul 2022

Skyscraper of cards


Having put it off for far too long, I'm belatedly trying to catch up with some standards work in the area of Root of Trust, which for me meant starting with the basics, studying simple introductory articles about RoT.

As far as I can tell so far, RoT is a concept - the logical basis, the foundation on which secure IT systems are built.

'Secure IT systems' covers a huge range. At the high end are those used for national security and defence purposes, plus safety- and business-critical systems facing enormous risks (substantial threats and impacts). At the low end are systems where the threats are mostly accidental and the impacts negligible - perhaps mildly annoying. Not being able to tell precisely how many steps you've taken today, or being unable to read this blog, is hardly going to stop the Earth spinning on its axis. In fact, 'mildly' may be overstating it.

'Systems' may be servers, desktops, portables and wearables, plus IoT things and all manner of embedded devices - such as the computers in any modern car or plane controlling the engine, fuel, comms, passenger entertainment, navigation and more, or the smart controller for a pacemaker.

Trust me, you don't want your emotionally disturbed ex-partner gaining anonymous remote control of your brakes, altimeter or pacemaker.

In terms of the layers, we the people using IT are tottering precariously on the top of a house of cards. We interact with application software, interacting with the operating system and, via drivers and microcode, the underlying hardware. A 'secure system' is a load of software running on a bunch of hardware, where the software has been designed to distrust the users and administrators, other software and the hardware, all the way down to, typically, a Hardware Security Module, Trusted Platform Module or similar dedicated security device, subsystem or chip. Ironically in relation to RoT, distrust is the default, particularly for the lower layers unless/until they have been authenticated - but there's the rub: towards the bottom of the stack, how can low-level software be sure it is interacting with and authenticating the anticipated security hardware if all it can do is send and receive signals or messages? Likewise, how can the module be sure it is interacting with the appropriate low-level software? What prevents a naughty bit of software acting as a middleman between the two, faking the expected commands and manipulating the responses in order to subvert the authentication controls? What prevents a nerdy hacker connecting logic and scope probes to the module's ports in order to monitor and maybe inject signals - or just noise to see how well the system copes? How about a well-appointed team of crooks faking a bank ATM's crypto-module, or a cluster of spooks figuring out the nuclear missile abort codes?
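To make the mutual-authentication question concrete, here is a hedged sketch of a nonce-based challenge-response exchange between low-level software and a security module, assuming a pre-shared key provisioned at manufacture (an assumption, not how any particular TPM works). It defeats naive replay of old responses but, as the questions above suggest, not an attacker who controls the channel and knows or can extract the key:

```python
# Sketch of challenge-response authentication between software and a
# security module, assuming a hypothetical pre-shared key. A fresh random
# nonce per exchange stops old responses being replayed.
import hmac
import hashlib
import os

SHARED_KEY = b"provisioned-at-manufacture"   # hypothetical pre-shared secret

def module_respond(challenge: bytes) -> bytes:
    # The module proves knowledge of the key without ever revealing it.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def software_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

nonce = os.urandom(16)                        # fresh challenge each time
assert software_verify(nonce, module_respond(nonce))          # genuine module
assert not software_verify(os.urandom(16), module_respond(nonce))  # replay fails
```

Note what the sketch does not solve: if the middleman can relay traffic live, or the key leaks, the handshake proves nothing - which is exactly why the physical and side-channel controls below matter.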

Physically securing the hardware is a start, such that if someone tries to - say - open ('decapsulate') the TPM chip to analyse the silicon wafer under an electron microscope in the hope of finding some secret key coded within, the chip somehow destroys itself in the process - perhaps also the warhead for good measure. 

Other hardware/electronic controls can make it virtually impossible for hardware hackers to mount side-channel attacks, painstakingly monitoring and manipulating the module's power supply and ambient temperature in an attempt to reveal its inner secrets.

Cryptography is the primary control, coupled with appropriate use of authentication and encryption processes in both hardware and software (e.g. 'microcode' physically built-in to the TPM chip's crypto-processor), plus other inscrutable controls (e.g. rate-limiting brute force attacks and, ultimately again, sacrificing itself, taking its secrets with it).
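The 'rate-limiting brute force attacks' idea can be illustrated with a toy lockout mechanism. The PIN and thresholds below are invented; real TPMs implement vendor-specific anti-hammering policies rather than anything this simple:

```python
# Toy anti-hammering sketch: a module that counts failed authorisation
# attempts and locks itself once a threshold is reached. Thresholds and
# the PIN are invented examples only.

class LockedOut(Exception):
    pass

class SecurityModule:
    def __init__(self, pin, max_failures=3):
        self._pin = pin
        self._failures = 0
        self._max = max_failures

    def authorise(self, attempt):
        if self._failures >= self._max:
            raise LockedOut("module locked; secrets unavailable")
        if attempt == self._pin:
            self._failures = 0        # success resets the counter
            return True
        self._failures += 1           # each failure ratchets towards lockout
        return False

m = SecurityModule(pin="4921")
m.authorise("0000"); m.authorise("1111"); m.authorise("2222")  # three failures
# A fourth attempt - even with the correct PIN - now raises LockedOut.
```

A real device would typically decay the counter over time or demand an owner-authorised reset; the sacrificial endpoint is the same idea taken to its extreme.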

Developing, producing and testing secure systems is tough, even with access to low-level debugging mechanisms such as JTAG ports and insider-knowledge about the design. There must be a temptation to install hard-coded backdoors (cheat codes), despite the possibility of 'some idiot' further down the line failing to disable them before products start shipping. There is surely a fascination with attempting to locate and open the backdoors without tripping the tripwires that spring open the trapdoors to oblivion.

OK, so now imagine all of that in relation to cloud computing, where 'the system' is not just a physical computer but a fairly loose and dynamic assembly of virtual systems running on servers who-knows-where under the control of who-knows-who sharing the global Internet who-knows-how.

Having added several extra floors to our house of cards, what could possibly go wrong? 

That's what ISO/IEC 27070:2021 addresses. 

At least, I think so. My head hurts. I may be coming down with vertigo.

10 Jul 2022

Complexity, simplified

Following its exit from the EU, the UK is having to pick up on various important matters that were previously covered by EU laws and regulations. One such issue is to be addressed through a new law on online safety.

"Online safety: what's that?" I hear you ask.  "Thank you for asking, lady in the blue top! I shall elaborate ... errrr ..."

'Online safety' sounds vaguely on-topic for us and our clients, so having tripped over a mention of this, I went Googling for more information. 

First stop: the latest amended version of the Online Safety Bill. It is written in extreme legalese, peppered with strange terms defined in excruciating detail, and littered with internal and external cross-references, hardly any of which are hyperlinked.

Having somewhat more attractive things to do on a Sunday than study the bill, a quick skim was barely enough to pick up the general thrust. It appears to relate to social media and search engines serving up distasteful, antisocial, harmful and plain dangerous content, including ("but not limited to") porn, racist, sexist and terrorist materials. Explaining that previous sentence in the formal language more becoming of law evidently takes 230 pages, of the order of 100,000 words.

Luckily for us ordinary mortals, there are also explanatory notes - a brief, high-level summary of the bill, explaining what it is all about, succinctly and yet eloquently expressed in plain English with pictures (not). The explanatory notes are a mere 126 pages long, half the length of the original with another 40-odd thousand words. 

Simply explaining the explanatory notes takes half a page for starters:


So, the third bullet suggests that we read the 126 pages of notes PLUS the 230 page bill. My Sunday is definitely under threat. At this point, I'm glad I'm not an MP, nor a lawyer or judge, nor a manager of any of the organisations this bill seems likely to impact once enacted. I'm not even clear which organisations that might be. Defining the applicability of the law - including explicit exclusions to cater for legitimate journalism and free-speech - takes a fair proportion of those 346 pages.

Despite not clearly expressing the risk, the bill specifies mitigating controls - well, sort of. In part it specifies that OFCOM is responsible for drawing up relevant guidance that will, in turn, specify control requirements on applicable organisations (to be listed and categorised on an official register, naturally), with the backing of the law including penalties. Since drafting, promoting and enforcing the guidance is likely to be costly, the bill even allows for OFCOM to pass (some of) its costs on to the regulated organisations, who will, in turn, pass them on to users. A veritable cost-cascade.

As to the actual controls, well the bill takes a classical risk-management approach involving impact assessments and responses such as taking down unsafe content and banning users who published it. There are arrangements for users to report unsafe content to service providers, plus automated content-scanning technologies, setting the incident management process in motion.

The overall governance structure looks roughly like this:

No wonder it takes >100,000 words to specify that little lot in law ... but, hey, maybe my diagram will save a thousand, a few dozen anyway.

You're welcome.

The reason I'm blabbering on about this here is that I'm still quietly mulling-over a client's casual but insightful comment on Thursday. 

"I was wondering whether [the information security policies we have been customising for them] might be a little too in depth for our little start-up."

Fair comment! Infosec is quite involved and - as you'll surely appreciate from this very blog - I tend to focus and elaborate on the complexities, writing profusely on topics that I enjoy. I find it quite hard to explain stuff simply and clearly without first delving deep, particularly if the end product doesn't suit my own reading preferences.

Looking at the policies already prepared, I had cut down our policy templates from about 3 or 4 pages each to about 2, adjusting the wording to reflect the client's business, technology and people, and removing bits that were irrelevant or unhelpful in the context of a small tech business. But, yes, I could see how they might be considered in-depth, especially since, even after combining a few, there were 19 policies in the suite covering all the topics necessary.

So, I responded to the client's point by preparing a custom set of Acceptable Use Policies to supplement the more traditional topic-based policies already prepared. I set out with our AUP templates - single-sided A4 leaflets in (for me!) a succinct style - laying out the organisation's rules for acceptable and unacceptable behaviours in topic areas such as malware, cloud and IoT. The writing style is direct and action-oriented, straight down-to-business.

Modifying the AUP templates for the client involved trivial changes such as incorporating their company name in place of 'the organisation', and swapping-out the SecAware logo for theirs. A little trimming and adaptation of the bullet points to fit into half a side per topic took a bit more time but, overall, starting with our templates was much quicker and easier than designing and preparing the AUPs from scratch.

I took the opportunity to incorporate some eye-catching yet relevant images to break up the text and lead the reader from topic-to-topic in a natural flow.

I merged the AUP templates into one consolidated document for ease of use, and prepared additional AUPs on areas that weren't originally covered (security of email/electronic messaging and social media), ending up with a neat product that sums things up nicely in 11 topic areas. It can be colour printed double-sided on just 3 sheets of glossy A4 paper to circulate to everyone (including joiners), or published on the corporate network for use on regular desktop PCs, laptops or tablets.

So far, so good ... but then it occurred to me yesterday that if the AUPs are to be readily available and accessible by all, the client could do with a 'mobile' version for workers' smartphones. Figuring out the page size, margins and formatting for mobiles, and further simplifying/trimming the content to suit small, narrow smartphone screens with very limited navigation took me another hour or two, ending up with a handy little document that looks professional, is engaging and reads well, makes sense and provides useful guidance on important information security matters. Reeeeesult!


In recognition of the client's valuable suggestion that sparked this, we won't be charging them for the AUP work - it's a bonus. The client gets a nice set of policies well suited to their business and people, while we have new products gracing the virtual shelves of our online store, a win-win. Happy days.

A bargain at just $20!

Now, about that Online Safety Bill: would anyone like to commission a glossy leaflet version in plain English, complete with pretty pictures?