It is hard to authenticate someone's claimed identity:
- Consistently and reliably to the same criteria at all times;
- Strongly, or rather to a required level of confidence;
- Cheaply, considering the entire lifecycle of the controls including their development, use and management;
- Practically, pragmatically, feasibly, in reality;
- On all appropriate platforms/systems/devices (current, legacy and future) and networks with differing levels of trustworthiness and processing capabilities;
- Under all circumstances, including crises or emergencies;
- For all relevant people (insiders, outsiders and inbetweenies), regardless of their mental and physical abilities/capacities, other priorities, concerns, state of health etc., while also failing to authenticate former employees, twins (evil or benign), fraudsters, haXXors, kids, competitors, crims, spooks, spies, pentesters and auditors on assignment;
- Using currently viable technologies, methods, approaches and processes; and
- Without relying on unproven, unverifiable or otherwise dubious technologies.
In short, authenticating people is tough, one of those situations where we're squeezing a half-inflated balloon, hoping it won't bulge alarmingly or just pop.
In practice, when designing and configuring authentication subsystems or functions, the key questions are what to compromise on, how much slack can realistically and safely be cut (i.e. reducing various information risks to an acceptable level), and just how far things need to be pushed (an assurance issue).
In the ongoing hunt for solutions, quite a variety of authentication methods, tools and techniques has been invented and deployed so far:
- Vouching ("Jim's OK, I trust Jim and you trust me, right?");
- Credentials such as business cards, driving licenses, passports, photo IDs, badges, uniforms, sign-marked vehicles, logos ...;
- Secret passwords;
- Complex passwords, enforcing rules such as mixed case, punctuation etc.;
- System-generated passwords;
- System-generated passwords in forms or styles that are intended to be more mem-or-able;
- Multiple passwords;
- Multi-part passwords, the parts held by different people;
- Long passwords or pass phrases;
- Passwords that expire and need to be replaced periodically;
- One-time passwords generated by cryptographic things, expiring within a minute or so;
- Pictorial passwords - picking out specific images from several presented;
- Digital certificates with PKI on crypto things (digital keys, smart cards, desktops, laptops, smartphones ...);
- Biometrics based on:
- Fingerprint, palmprint;
- Visage/facial recognition;
- Iris or retinal pattern;
- Voice recognition;
- Typing characteristics;
- Distinctive chemicals (smell) and other bodily or behavioural characteristics such as colour, mannerisms, gait (very widely used by animals other than humans);
- DNA (quite reliable but hardly instantaneous!);
- User and/or device location;
- Network address, hardware address;
- Mode/means/route/mechanism of access;
- Time of access;
- Multifactor authentication using more than one 'factor';
- Probably other stuff I've forgotten about;
- Some combination of the above.
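The multi-factor idea in particular lends itself to a simple sketch. Purely for illustration, suppose each factor yields an independent confidence score between 0 and 1 that the right person is present; the factor names, scores and combining rule below are all invented assumptions, not any real product's API:

```python
# Hypothetical sketch: combining independent authentication factors.
# The 0.0-1.0 confidence scale and example scores are illustrative.

def combined_confidence(factors: dict[str, float]) -> float:
    """Treat each factor as an independent check and combine the
    residual chances that an impostor slipped past every one."""
    p_impostor = 1.0
    for confidence in factors.values():
        p_impostor *= (1.0 - confidence)
    return 1.0 - p_impostor

score = combined_confidence({
    "password": 0.80,      # something you know
    "totp_token": 0.90,    # something you have
    "fingerprint": 0.95,   # something you are
})
```

Under these (simplistic) independence assumptions, three so-so factors stack up to far more confidence than any one alone - which is the intuition behind multifactor authentication.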
Depending on how you count them, there are easily more than 20 authentication methods in use today, and yet it is generally agreed that they barely suffice.
Rather than inventing yet another method, I wonder whether we need a different paradigm: a better, smarter approach to authentication. Specifically, I'm thinking about the possibility of continuous, ongoing or dynamic authentication rather than episodic authentication.
Instead of forcing us to "log in" at the start of a session, how about simply letting us start doing stuff, rating us as we go and deciding what stuff to let us do according to how authentic we appear to be, and what it is that we want to do? So, returning to my earlier point about having to make compromises, the assurance needed before allowing someone to browse the Web is rather different to that needed to let them bank online - and within online banking, viewing account balances is not equivalent to making a funds transfer between accounts, or a payment to another account, in Switzerland, of the entire balance and credit/overdraft value, at 3:30am, from a smartphone somewhere in Lagos ...
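That sliding scale of assurance could be expressed as a simple lookup. The action names, thresholds and context penalties here are invented for illustration; a real system would derive them from a risk assessment:

```python
# Sketch of risk-tiered authorisation. All names and numbers are
# illustrative assumptions, not a real banking API.

REQUIRED_ASSURANCE = {
    "browse_web": 0.10,
    "view_balance": 0.60,
    "transfer_own_accounts": 0.80,
    "international_transfer": 0.95,
}

def allowed(action: str, assurance: float,
            odd_hour: bool = False, unusual_location: bool = False) -> bool:
    """Permit an action only if the session's current assurance meets
    the bar - raised further for suspicious context (3:30am, Lagos...)."""
    bar = REQUIRED_ASSURANCE[action]
    if odd_hour:
        bar = min(1.0, bar + 0.03)
    if unusual_location:
        bar = min(1.0, bar + 0.03)
    return assurance >= bar
```

The point is that "authenticated: yes/no" becomes "authenticated enough for this, right now": the same session might be good enough to view a balance but not to empty the account.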
Biometric authentication methods have to allow for natural variation between measurements, because living organisms vary and measurement methods are to some extent imprecise. Taking additional measurements is an obvious way to improve accuracy and precision ... so instead of taking a single fingerprint reading, why not keep on re-reading and checking until there is sufficient data and sufficient statistical confidence?
Instead of forcing me to use a password of N characters, why not check how I type the first few characters to see if the little timing and pressure differences indicate it is probably me, perhaps coupling that with facial recognition and additional checks depending on what I'm doing during the session? If I'm doing something out of character, especially something risky, prevent me or slow me down. Instead of timing out and locking me out of the system if I wander away to make a cup of tea, reduce my trustworthiness rating and hence the things I can do when I return.
Let me boost my trustworthiness if I really need additional rights 'right now' by inviting me to use some of those slower and more costly authentication mechanisms, or by correlating authentication/trustworthiness indicators and scores from several systems (e.g. make it harder for me to access the file server if I have not clocked in to the building with my staff pass card, bought a coffee without sugar from the vending machine, and polled the local cell tower from my cellphone).
Maybe even turn the problem on its head: rather than making me prove my claimed identity, try to disprove it by checking what I'm doing for anomalies and concerns. I'm sure there's huge potential in behavioural analysis - not just the basic biometrics such as typing speed, but the specific activities I perform, their sequence, the context and so on - building up a more holistic picture of the person in the chair.
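As a toy illustration of the anomaly-spotting idea, here is a crude check on inter-keystroke timing. Real behavioural biometrics model far richer features than a single mean, and the z-score threshold is an arbitrary assumption:

```python
import statistics

def is_anomalous(baseline_intervals: list[float],
                 session_intervals: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a session whose mean inter-keystroke interval (ms) sits
    far outside the user's historical baseline. Deliberately crude:
    one feature, one summary statistic, one arbitrary threshold."""
    mu = statistics.mean(baseline_intervals)
    sigma = statistics.stdev(baseline_intervals)
    if sigma == 0:
        return False  # no variation on record; cannot judge
    z = abs(statistics.mean(session_intervals) - mu) / sigma
    return z > z_threshold
```

A hit would not prove an impostor, of course; it would just nudge the trustworthiness rating down and perhaps trigger one of those slower, stronger checks.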
Oh, and if the systems are not entirely sure it is me in the chair, why not let me think I am doing stuff while in reality caching my inputs and faking what I see, waiting for me to build up sufficient additional assurance ... or quietly summoning Security?