Welcome to the SecAware blog

I spy with my beady eye ...

29 May 2022

Algo-rhythmic infosec

An article by the 50-year-old University of York Department of Computer Science outlines algorithmic approaches in Artificial Intelligence. Here are the highlights:

  • Linear sequence: progresses directly through a series of tasks/statements, one after the other.
  • Conditional: decides between courses of action according to the conditions set (e.g. if X is 10 then do Y, otherwise do Z).
  • Loop: sequential statements are repeated. Sequential statements are repeated.
  • Brute force: tries approaches systematically, blocking off dead ends to leave only viable routes to get closer to a solution.
  • Recursive: repeatedly applies the same procedure to smaller instances of a problem, using those results to solve larger problems of the same type.
  • Backtracking: incrementally builds a data set of all possible solutions, retracing or undoing/reversing its last step if unsuccessful in order to pursue other pathways until a satisfactory result is reached.
  • Greedy: quickly goes to the most obvious solution (low-hanging fruit) and stops.
  • Dynamic programming: outcomes of prior runs (solved sub-problems) inform new approaches.
  • Divide and conquer: divides the problem into smaller parts, then consolidates the solutions into an overall result.
  • Supervised learning: programmers train the system using structured data, indicating the correct answers. The system learns to recognise patterns and hence deduce the correct results itself when fed new data.
  • Unsupervised learning: the system is fed unlabeled (‘raw’) input data that it autonomously mines for rules, detecting patterns, summarising and grouping data points to describe the data set and offer meaningful insights to users, even if the humans don’t know what they’re looking for.
  • Reinforcement learning: the system learns from its interactions with the environment, utilising these observations to take actions that either maximise the reward or minimise the risk.

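The contrast between the greedy and dynamic-programming styles above can be sketched in a few lines of Python. This is a hypothetical coin-change example of my own, not taken from the York article: with an awkward coin set, the greedy approach grabs the largest coin and stops looking, while dynamic programming reuses solved sub-problems to find a better answer.

```python
from functools import lru_cache

def greedy_change(amount, coins=(4, 3, 1)):
    """Greedy: always take the largest coin that fits, never reconsider."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

@lru_cache(maxsize=None)
def dp_change(amount, coins=(4, 3, 1)):
    """Dynamic programming: memoised recursion over solved sub-problems.
    Assumes the coin set includes 1, so every amount is reachable."""
    if amount == 0:
        return 0
    best = min(dp_change(amount - c, coins) for c in coins if c <= amount)
    return best + 1

# For amount 6 with coins {4, 3, 1}: greedy takes 4 + 1 + 1 (three coins),
# while dynamic programming finds 3 + 3 (two coins).
print(greedy_change(6), dp_change(6))
```

The point is not the coins but the trade-off: greedy is fast and often good enough, while dynamic programming pays an up-front cost to remember and reuse partial solutions.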
Aside from computerised AI, we humans use similar approaches naturally, for instance when developing and implementing information security policies:

  • Linear sequence: start with some sort of list of desirable policies, sorted in some manner, and work through it from top to bottom.
  • Conditional: after a policy is completed, decide which one to draft next according to the organisation's priorities or hot topics at that point.
  • Loop: standardise (and perhaps document) the process for developing policies, using it repeatedly and systematically for each new one.
  • Brute force: discover by trial-and-error which policy development approaches work best, avoiding the least effective ones.
  • Recursive: start by preparing relatively simple, straightforward policies, stabilising and refining the process, then gradually building up to more complex, difficult policies.
  • Backtracking: proactively review the policy development process after each policy or batch is completed, identifying and applying any learning points to the next policy or batch, if necessary starting over.
  • Greedy: just get on with it! Generate, plagiarise or plain steal some rough-and-ready basic policies and move on, as soon as possible.
  • Dynamic programming: review the current suite of policies to distinguish the good, the bad and the ugly, refining the plans and approaches for developing further policies accordingly.
  • Divide and conquer: carve up the policy landscape among multiple people or functions, tasking them to prepare their parts of the whole.
  • Supervised learning: analyse published policies for useful clues about how to develop good policies.
  • Unsupervised learning: simply start developing policies and let the process evolve and mature naturally over time.
  • Reinforcement learning: proactively measure or solicit feedback from various stakeholders about the quality and effectiveness of the policy suite in order to improve the approach, perhaps with periodic reviews/audits to capture learning points and identify improvement opportunities.

There are yet other possible approaches, hinting perhaps at further AI algorithms:

  • Reconsider the fundamental issues: despite being commonplace, consider whether 'policies' are, in fact, the best way of mandating information security and related rules within the organisation. Explore and reconsider the underlying objective/s, perhaps searching for alternative or complementary approaches such as procedures, guidelines, training, oversight, supervision, guidance, mentoring, technical standards and controls, implicit/explicit trust etc.
  • Bend the rules: allow individual business units, departments or teams to go their own way, modifying corporate policies to some extent, watching closely to keep them broadly in line and ideally spot approaches worth adopting more widely.
  • Break the rules: deliberately adopt radically different approaches to policy development, novel styles or formats for the policies etc., perhaps in a safe situation such as a policy wiki pilot study in a single business unit.
  • Leave it to the experts: don't even bother trying to learn; either commission a policy expert to develop information security policies, or simply purchase them from someone who has already figured it out.

At some point in the not-too-distant future, the AI robots will be more than capable of developing policies for us lowly humans. Meanwhile we have the opportunity to figure out better - more effective, more efficient - ways of doing this stuff for ourselves with a bit of lateral thinking about the process, as opposed to mindlessly doing whatever we normally do, as we've learnt, or as suggested by some random bloke in a blog piece.

I hope I have inspired you to think about how you go about routine activities, using information security policy development simply as an illustrative example. As you drop back into the humdrum rhythm of your routine daily tasks, allow yourself the odd moment's quiet reflection to consider alternative approaches and look for learning points. Just as the AI robots are learning from us, we can learn from them.
