Post-September 11, security professionals have been under pressure to tighten protection of both their company's premises and their data networks. Many have been considering the use of biometric technologies, but any talk of measuring biometrics is not about the precision of zero or one; it is about statistical sensitivities. Steve Mathews reviews the problems facing end users looking to implement such systems.
Biometrics is a security approach that offers great promise, but also presents users and implementers with a number of practical problems. While some of these are technical, and possess technical solutions (however difficult those solutions may be to implement), others are social and cultural.

In truth, social and cultural barriers are much more complicated to resolve, and need far more thought by would-be implementers – as well as the system manufacturers and suppliers – before they'll be overcome. Culturally, one size doesn't fit all, and that may well increase the cost and complexity of biometric solutions for the end user.

For some considerable time now, the 'personal identification' segment of the IT security industry has been trying to improve on the use of the identifier and password as the means of authenticating the user of an IT service. The problems of managing password-based systems, their weaknesses and the (now) classical ways of attacking or subverting such systems are well documented. Many consider that such simple authentication measures need to be reinforced, and refer to multi-factor authentication based upon something that you know (a password), something that you have (a token) or something that you are (a biometric).

In the IT world, probably the most commonly implemented method of token authentication is RSA's SecurID token (smart cards for mass transit rail systems and telephone cards are more numerous, although they don't really authenticate the user: possession of the token simply authorises the holder to use the service).

The introduction of advanced security technologies such as public key cryptography (commonly deployed as a PKI, or Public Key Infrastructure) has increased the need to store secret information (a private key) securely, because a user could never remember a randomly constructed password that long (RSA 2048 would require you to remember a mere 256 bytes' worth of information, and to input it reliably!).
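To put those figures in perspective, here is a minimal sketch of the arithmetic. The key size follows directly from the RSA modulus length; the password comparison assumes a random 8-character password drawn from the 94 printable ASCII characters, which is an illustrative choice rather than any particular policy.

```python
import math

rsa_key_bits = 2048                # a common RSA key size
key_bytes = rsa_key_bits // 8      # 256 bytes of secret material to 'remember'

# For comparison: the entropy of a random 8-character password drawn
# from the 94 printable ASCII characters.
password_bits = 8 * math.log2(94)  # roughly 52 bits

print(key_bytes)                   # 256
print(round(password_bits, 1))     # 52.4
```

The gap between roughly 52 bits and 2048 bits is why a private key has to be stored on a token or protected by some other secret, rather than memorised.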

Rapid increases in fraud – and in particular credit card fraud – are creating demands for greater security than magnetic stripe cards and handwritten signatures can offer. This has seen many card producers issue chip or smart cards which require a password (commonly a four-digit PIN) before they can be used. However, these are by no means generally implemented. A spot check on the cards in my pocket showed that only 50% of the various bank/credit cards actually have chips.

Why move to biometrics?
The principal pressure to move to biometrics comes from two sources: the biometric industry (not surprisingly) and the finance sector. The finance industries are continuing to search for a cost-effective means of reducing fraud. If that means can also be used to prove who authenticated the financial transaction, or would ensure that only the authorised individual could make it, then all the better.

The biometric industry clearly wishes to see its commercial potential fulfilled. Since biometrics form the so-called 'third pillar' of the security authentication process, there's a logical requirement for their services if you need to improve the quality of the security functionality of a system. Exactly how that improvement might be quantified is less clear, although the UK security agency CESG has done work to consider how it might be represented.

Overall, however, it's obvious enough that deploying more than one mechanism to authenticate a user is going to make the system stronger – provided that the mechanism is effective and not related to any other mechanism being used.
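The independence proviso can be illustrated with a back-of-the-envelope calculation. The probabilities below are invented for illustration; the point is only that, when two mechanisms fail independently, the chance of defeating both is the product of the individual chances.

```python
# Hypothetical per-mechanism bypass probabilities -- illustrative only.
p_guess_password = 0.01    # chance an attacker obtains the password
p_fool_biometric = 0.001   # chance an attacker spoofs the biometric

# If (and only if) the two mechanisms fail independently, the chance
# of defeating both is the product of the individual probabilities.
p_defeat_both = p_guess_password * p_fool_biometric
print(p_defeat_both)
```

If the mechanisms are related (say, the biometric template is unlocked by the same password), the probabilities no longer multiply and the combined system may be little stronger than its weakest part.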

Biometrics are all about measuring the specific characteristics of a person, including their voice, handwriting, fingerprint(s), face and both the retina and iris of the eye. In an ideal world you want to choose a characteristic that has helpful measurement properties, such as 'unlikely to change', 'likely to prove unique', 'not invasive' and 'difficult to copy, or steal and reproduce'.

The desired result for the in-house professional specifying the use of biometric technologies is to have a situation where the characteristics never change, are unique, can be checked without the user feeling that they're exposing themselves to any special procedure and are impossible for attackers to copy.

Measurement without precision
Unfortunately, when we talk of measuring biometrics we're not talking about the precision of zero or one, but about statistical measures. Samples are taken of the biometric that's being measured, sample points analysed and then compared with information captured at a previous time. This is not, then, the absolute precision that we associate with digital computing, but a matter of matching samples of information to a level that makes us confident they come from the same person.
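The matching process can be sketched as follows, assuming the biometric has already been reduced to a fixed-length binary feature vector (as iris codes are, for example). The function names, the toy vectors and the 0.25 threshold are all illustrative assumptions, not drawn from any real product.

```python
def hamming_distance_fraction(template: list, sample: list) -> float:
    """Fraction of positions where the stored template and the freshly
    captured sample disagree (0.0 means they are identical)."""
    disagreements = sum(t != s for t, s in zip(template, sample))
    return disagreements / len(template)

def matches(template, sample, threshold=0.25):
    # Accept when the samples agree closely enough -- a statistical
    # decision, not the exact equality test of a password check.
    return hamming_distance_fraction(template, sample) <= threshold

stored  = [1, 0, 1, 1, 0, 0, 1, 0]   # captured at registration
capture = [1, 0, 1, 0, 0, 0, 1, 0]   # one bit differs: distance 0.125
print(matches(stored, capture))       # True
```

Note that the two vectors are not identical, yet the system accepts: confidence within a tolerance replaces exact equality.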

The accuracy we can achieve is related to the degree of invasiveness of the measuring method, both when the initial user assessment is made and when each sample is taken. The more precise the measurements, the more likely they are to yield the right result.

One of the hazards of biometrics is that measurements may often have to be made in less-than-ideal conditions. Voice is measured against the ambient background (ie a supermarket, a street or a sports arena), signatures may be checked where someone is standing up (sitting down, leaning, with a poor pen or with wet hands to boot), fingerprints taken when the finger is flat (misaligned, wet or dirty) and facial characteristics checked with glasses (sunglasses, no glasses, the colour of the ambient light, etc).

Measuring systems must allow for any or all of these hazards to be present, and yet still operate acceptably.

Sources of potential error give rise to two error rates that biometric systems build into their calculations: the false acceptance rate (the wrong person is accepted) and the false rejection rate (the right person is refused). As these figures imply, the measurement system is set up to allow for errors. Therefore, you have to understand that the operation of the system can be tuned to be more or less strict.
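The trade-off between the two error rates can be shown with a small sketch. The match scores below are invented for illustration (higher score means a closer match); the point is that moving the decision threshold pushes one error rate down and the other up.

```python
# Invented match scores, for illustration only.
genuine_scores  = [0.91, 0.88, 0.75, 0.95, 0.60]  # the right person presenting
impostor_scores = [0.20, 0.35, 0.65, 0.10, 0.30]  # the wrong person presenting

def far(scores, threshold):
    """False acceptance rate: impostors scoring at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def frr(scores, threshold):
    """False rejection rate: genuine users scoring below the threshold."""
    return sum(s < threshold for s in scores) / len(scores)

for threshold in (0.5, 0.7):
    print(threshold, far(impostor_scores, threshold), frr(genuine_scores, threshold))
```

With these toy numbers, a threshold of 0.5 accepts one impostor in five while rejecting no genuine users; raising it to 0.7 shuts out all impostors but now rejects one genuine user in five. Tuning is a business decision about which error hurts more.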

This is not the same thing as either knowing a secret or not, and not the same as whether you actually have a card in your possession or not. When the end user implements a biometric system, they must think long and hard about just how accurate it is going to be in operation.


Does method of operation matter?
The method of operation has two distinct components which must be considered: what the person being authenticated must do in order to be able to use the service, and what the system operator has to do should failure occur.

The person being authenticated must have registered their bio-identity before it can be authenticated. Registration processes can be extremely complicated, not to say very inconvenient for the end user. This is particularly true if the user being registered is not familiar with what's happening, why it must be done and what safeguards they have over the use to which their bio-identity might subsequently be put.

Registration must capture the biometric as accurately as possible (with respect to the measuring technique being used) or subsequent comparisons will be poor and may create administrative problems.

Once a person has been registered, you then have to think about how their bio-identity is checked and in what context. It may be socially acceptable to look into a special device for retina scanning in order to gain access to a highly secure military establishment when it's part of your function. The same may not be true when standing in line at a supermarket checkout. Also, you may not be able to wear certain types of contact lens.

Similarly, it may be acceptable for the police to check your fingerprint(s) when that's required by law, but less acceptable to have that demanded to verify a credit card transaction. Voice recognition may be fine if there's a private booth, or if the verification can be carried out as part of 'normal' conversation, but less so if special number or word-based sequences have to be called out loudly in a public arena.

These are social and cultural factors. In some countries or regions they may be acceptable, in others not. Collecting fingerprints may be unlawful in some countries unless you're an authorised Government agency. The fact that it may be acceptable in one location doesn't mean it will work anywhere else, because the users themselves may refuse to behave in a manner that allows the system to work.

Until now, we've been assuming that our bio-identification system is working perfectly, but unfortunately that's not always the case. As mentioned earlier, the information captured during registration may not have been perfect, while that which is captured at the point of verification may not be perfect – or may have changed in some way from how it was presented earlier. Have you looked at your own passport photograph in recent times?

The presence of false acceptance and false rejection means that, some of the time (however small), the right person will be rejected and the wrong person could be accepted by the biometrics in place. The problem for the typical system operator, however, is that the right person will be rejected occasionally by what might be presented as a 'foolproof' system.

Dealing with biometric 'lock-outs'
What procedures might the in-house professional have to put in place in order to deal with a situation where a perfectly valid user has been refused entry? Do you go for a 'Best of Three', and then lock them out after that? Do you have some other test that you can apply and, if so, what is it? What is the impact on the user?
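A 'Best of Three' policy might be sketched like this. The retry limit, the verification stub and the fallback wording are all assumptions made for illustration; the real fallback (a supervisor override, a secondary credential, a manual identity check) is exactly the design decision the questions above are asking you to make.

```python
MAX_ATTEMPTS = 3  # assumed 'Best of Three' policy -- illustrative only

def authenticate(capture_attempts, verify):
    """verify() stands in for whatever biometric check is in place;
    it returns True for an accepted sample, False otherwise."""
    for sample in capture_attempts[:MAX_ATTEMPTS]:
        if verify(sample):
            return "accepted"
    # All attempts exhausted: lock out and route to a manual procedure.
    return "locked-out: refer to fallback procedure"

# A valid user whose first capture fails (wet finger, poor light...)
result = authenticate(["bad", "good"], verify=lambda s: s == "good")
print(result)  # accepted
```

Note that the lock-out branch is where the administrative cost lives: every genuine user who reaches it becomes a manual exception for someone to handle.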

Are they a customer that could refuse to use the service again, rather than an employee who may not enjoy such luxury of choice?

In any event, what's the overall impact on your internal administration, particularly if there's an equipment malfunction that's difficult to detect?

These are not problems for the company supplying the basic biometrics product. They are problems that implementers – ie you, the end user – will have to sort out for themselves. The answers are going to vary significantly according to the business purpose being served by the system, so there's no simple solution here until some sound experience has been gained in major pilot exercises.

Biometrics undoubtedly offer a valuable way of extending current security technologies, making fraud far harder by preventing ready impersonation of the authorised user. However, in order to make use of biometrics we need to register users – a procedure that may well be both costly and onerous – and we have to have a socially and/or culturally acceptable means of checking the biometric at the point of authentication.

In truth, these problems may also give rise to the need for safeguards over the very use of the biometric.

In specifying biometric technology, security professionals must be aware that the measurements are not perfect – and that many operational factors may cause them to fail. Administrative procedures to resolve such operational failures may need to be put in place to prevent adverse customer reaction, bad publicity and a loss of public acceptance.