Welcome to NBlog, the NoticeBored blog

I may meander but I'm exploring, not lost

Apr 25, 2008

USB security risk self-assessment

City of London Police officers thinking of transferring information on USB memory sticks can self-assess the risks using a questionnaire. It's a simple idea really: an officer's answers to a few questions determine a 'risk score'. That score leads either to approval (or rather a requirement to seek approval from the relevant level of management authority, and/or to use USB sticks with additional security controls) or to disapproval of using a USB stick in the intended situation.

Being a self-assessment, the system depends on users answering honestly and accurately, so it is open both to deliberate abuse and to inadvertent errors. However, this risk is offset to some extent by compliance procedures and structures in the police. Furthermore, it's better than nothing: without the system, police officers presumably make such decisions on a more arbitrary basis, assuming they even consider the security risks. The tool at least raises security awareness (provided it is suitably promoted, for instance by being embedded in standard operating procedures).

Automating USB risk assessment is interesting at another level too. The decision tree in this instance is relatively simple, much simpler than for many other information security risks, yet still complex enough to benefit from being presented as a structured questionnaire. The assessment output is based simply on the net total of the scores from each question, and there are only a few possible recommendations. Someone, somehow, has had to write the questions and determine the score ranges and recommendations. The assessment could give inappropriate advice in certain circumstances, since it cannot take account of every possible situation (e.g. whether the police officer has previously lost numerous USB sticks).
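To make the mechanism concrete, here is a minimal sketch of the kind of scoring questionnaire described above. The questions, weights and score thresholds are entirely invented for illustration; the content of the actual police tool is not public here.

```python
# Hypothetical weighted questionnaire: each "yes" answer adds (or subtracts)
# its weight, and the net total is mapped to a recommendation band.
QUESTIONS = [
    ("Does the data include personal information?", 3),
    ("Will the stick leave police premises?", 2),
    ("Is the data unencrypted?", 3),
    ("Is the transfer a routine, approved process?", -2),  # lowers the risk score
]

def assess(answers):
    """Sum the weights of the 'yes' answers and return the net score
    plus a recommendation based on which band the score falls into."""
    score = sum(weight for (_, weight), yes in zip(QUESTIONS, answers) if yes)
    if score <= 2:
        return score, "Approved: proceed with a standard-issue stick"
    elif score <= 5:
        return score, "Seek management approval and use an encrypted stick"
    else:
        return score, "Disapproved: do not transfer via USB stick"

# Example: personal data, taken off premises, unencrypted, not a routine process
print(assess([True, True, True, False]))  # → (8, 'Disapproved: do not transfer via USB stick')
```

The point the sketch makes is the one in the paragraph above: the logic itself is trivial; the hard, judgement-laden work is choosing the questions, weights and thresholds in the first place.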

Contrast this with, for example, the assessment of security risks relating to a software application. There are so many elements to the risk, and so many potential outputs, that it seems infeasible to automate the assessment - or is it? Some sort of artificial intelligence or knowledge-based system is possible, and could arguably give better answers than either of the two usual alternatives: asking an information security person to assess the risks, or skipping the assessment altogether.

Now that would make an interesting research project for someone.