
The West Bengal

National University of Juridical Sciences

STUDY MODULE
For

B.A. LL.B Xth Semester

Paper Name: Biometrics for Personal Identification


Paper Code: 126010

By

Dr. Ankit Srivastava/Dr. Pranabesh Sarkar


School of Forensic Sciences
WBNUJS, Kolkata
Table of Contents

UNIT 1: Fundamentals of Personal Identification & Biometrics
    1.1 Introduction of personal identification
    1.2 Biometric and Forensic Science
    1.3 Historical Background
    1.4 Biometric Identification process and Types of Biometrics

UNIT 2: Fingerprint Biometrics
    2.1 Historical Background of fingerprint identification
    2.2 Types of fingerprint pattern
    2.3 Classification of fingerprint pattern
    2.4 Fingerprint recognition process

UNIT 3: Ear Biometrics
    3.1 Historical Background of Ear Biometrics
    3.2 Physiology of ear
    3.3 Classification of ear pattern
    3.4 Ear-print recognition process

UNIT 4: DNA identification
    4.1 Historical Background of DNA for personal identification
    4.2 Biology of DNA
    4.3 Techniques of DNA
    4.4 Case Law based on DNA evidence

UNIT 5: New and developing forms of Biometric identification
    5.1 Role of Iris Biometric in personal identification
    5.2 Role of Veins Biometric in personal identification
    5.3 Role of Palm Biometric in personal identification
    5.4 Role of Facial Biometric in personal identification
    5.5 Role of Voice Biometrics in personal identification

UNIT 6: Biometrics in criminal trials and appeals
    6.1 Future of Biometrics
    6.2 Biometrics in criminal trials
    6.3 Criminal appeals and biometrics
    6.4 Significant cases
UNIT 1

BIOMETRICS IN PERSONAL IDENTIFICATION

Introduction of Personal Identification


Personal identification is defined as the determination of the individuality of a person. It also
refers to the process of linking an unknown personal object or material (which may be a whole body, a skeleton, a fingerprint, a biological fluid, etc.) back to an individual of known identity.
It is a key issue in forensic anthropology and consists of the correct assignment of an identity
to an unknown corpse. Although it may seem an easy and automatic procedure, the mechanism
of identification encompasses several difficulties concerning the methods and, above all, the significance of the identification process in court. The identification process leads towards one of two conclusions: exclusion or positive identification.

Complete and Partial Identification


Personal Identification may be classified as complete (absolute) and incomplete (partial):

Complete identification is the absolute fixation or exact specification of the individuality of a


person along with the determination of exact place in the community occupied by him. Partial
identification refers to the ascertainment of only some facts about the identity while others
remain still unknown.
Partial identification may assist in the complete or total identification of a person: e.g., a person identified as a female, Christian, about 40 years of age and 6 feet in height, while details regarding her family, occupation, etc. remain unknown.

Need of Personal Identification


In medico legal cases identification is very crucial in living as well as in dead.
1. In Living:
A. Civil Cases
(i) In impersonation or false personification cases in relation to:
Inheritance of property
Pension
Life insurance
Voting rights
Passport
(ii) Disputed identity in cases of divorce or nullity of marriage
(iii) Disputed Sex
(iv) Missing persons
(v) Lost memory patients

B. Criminal Cases
(i) Identification of accused in criminal cases of:
Assault
Murder
Dacoity
Sexual offenses
Absconding soldiers
(ii) Interchange of new born babies in hospital
(iii) Criminal abortion
(iv) To fix up age of criminal responsibility and majority
(v) Impersonation in criminal cases

2. In Dead:
The need to identify the dead is obvious for social and medico-legal purposes. It is required in
cases of natural mass disasters like earthquakes, tsunamis, landslides, floods, etc., and in man-
made disasters like bomb explosions, fire, air crash, building collapse, railway accidents or
bodies recovered from sea, rivers, canals, wells and in cases when the body is highly
decomposed or dismembered to deliberately conceal the identity of the individual.

Medico legal Aspects of Identity


Identification of living is usually carried out by the police. However, where medical knowledge
is required for elucidation of disputed facts, a medical examiner may be consulted. A medical
person is mainly concerned with the identification of dead bodies. Accurate identification is
mandatory for the establishment of the corpus delicti after homicide, since unclaimed bodies, portions of a dead body or bones are sometimes brought to the doctor to support a false charge.

The term ‘Corpus Delicti’ means the body of offence or the body of crime. In a charge of
homicide, it includes:

(a) Positive identification of the dead body (victim) and


(b) Proof of its death by criminal act of the accused.
Experts who are involved in personal identification may include: pathologists, physicians,
dentists, anatomists, physical anthropologists, and experts in evaluation of various traces.

The interest of the community in the scene of death, after the discovery of remains or after a
mass disaster, is often overwhelming. The disturbance of scene by curiosity seekers or by ill-
trained police personnel may preclude not only accurate identification of bodies but also
complete collection of physical evidence. This invites the 'Law of Multiplicity of Evidences' to play its role wherever called for. The Supreme Court has laid down that, in law, a conviction for an offence does not necessarily depend upon the 'Corpus Delicti' being proved. Cases are conceivable where the discovery of the dead body, from the very nature of the case, is impossible. Therefore, it may be said that the existence of the dead body of the victim is no doubt proof positive of death, but its absence is not fatal to the trial of the accused for the homicide. Indeed, any other view would place in the hands of the accused an incentive to
destroy the body after committing murder and thus secure immunity for his crime.

The examination of a person for the purpose of identification should not be undertaken without
obtaining his free consent, and at the same time it should be explained to him that the facts
noted might go in evidence against him. It should be remembered that consent given before the
police is of no account, and that the law does not oblige anyone to submit to examination
against his will and thus furnish evidence against himself.

Historical Record of Identification


In ancient times criminals were punished by mutilation and branding. This may be looked upon
as the first attempt toward subsequent identification. Branding disappeared more than a century ago; in Russia, for example, not until about 1860. In France, branding was abolished at the end of the revolution, but was later reintroduced and finally abolished in 1832. It had already disappeared in Germany, but Holland continued to employ it until 1854 and China until 1905.
Descriptions of wanted criminals were used as far back as the Egypt of the Ptolemies and in the days of the Roman Empire, and the system used then has a surprising similarity to the 'portrait parle' of today. Those descriptions were, however, planless and unmethodical, and gave rise to serious mistakes.
About 1840, the Belgian statistician Quetelet stated that there are no two human beings in the
world of exactly the same size. This theory is said to have been used for the first time for
criminological purposes by Stevens, the warden of the prison in Louvain, who in 1860
proceeded to measure heads, ears, feet, breasts, and lengths of bodies of criminals.
The first accurate system for the description of prisoners, also called portrait parle (a French term meaning 'spoken picture'), was devised by Bertillon. In its original form the portrait parle was divided into four categories: (a) determination of colour (left eye, hair, beard and skin); (b) morphological determinations (shape, direction and size of every part of the head); (c) general determinations (grade of stoutness, carriage, voice and language, dress, social standing, etc.); and (d) description of indelible marks (scars, tattooing, etc.).

Biometric and Forensic Science


Biometric recognition, or simply biometrics, refers to the automated recognition of individuals
based on their biological and behavioural characteristics. Examples of biometric traits that have
been successfully used in practical applications include face, fingerprint, palm print, iris,
palm/finger vasculature and voice. There is a strong link between a person and their biometric
traits because biometric traits are inherent to an individual. A typical biometric system can be
viewed as a ‘real-time’ automatic pattern matching system that acquires biological data from
an individual using a sensor, extracts a set of discriminatory features from this data and
compares the extracted feature set with those in a database in order to recognize the individual.
It is assumed that each feature set in the database (referred to as a template) is linked to a
distinct individual via an identifier, such as a name or an ID number. Comparison of the
extracted feature set and the template results in a score indicating the similarity between the
two feature sets. Assessment of the similarity of the feature sets may then be used to recognize
the individual.
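To make the acquire, extract, compare and score pipeline concrete, here is a minimal sketch in Python. It is purely illustrative: the cosine-similarity measure, the stored template values and the 0.95 threshold are assumptions chosen for this example, not the method of any particular biometric system.

```python
# Minimal sketch of the compare-and-score step of a biometric system.
# The feature values, the similarity measure and the threshold are
# illustrative assumptions, not any particular vendor's algorithm.
import math

def similarity(features_a, features_b):
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    return dot / (norm_a * norm_b)

# Template stored at enrolment, linked to an identifier such as a name or ID number.
database = {"ID-042": [0.91, 0.33, 0.57, 0.12]}

# Feature set extracted from a newly acquired sample.
query = [0.88, 0.35, 0.55, 0.15]

score = similarity(database["ID-042"], query)
print(f"similarity score = {score:.3f}")
# A decision rule then compares the score with an operator-chosen threshold.
print("recognized" if score > 0.95 else "not recognized")
```

In a real system the feature extractor and the matcher are modality-specific (minutiae for fingerprints, iris codes for the iris, and so on), but the overall acquire, extract, compare and decide structure is the same.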
In modern society, the ability to reliably identify individuals in real time is a fundamental
requirement in many applications including international border crossing, transactions in
automated teller machines, e-commerce and computer login. As people become increasingly
mobile in a highly networked world, the process of accurately identifying individuals becomes
even more critical as well as challenging. Failure to identify individuals correctly can have
grave repercussions in society ranging from terrorist attacks to identity fraud where a citizen
loses access to his own bank accounts and other personal information.

Using biometric data for classification and/or identification in forensic science dates back to
the turn of the 20th century. Biometrics as we know it today can be viewed as an extension of Bertillon's anthropometric approach, benefiting from automation and the use of additional features. This unit presents a historical and technical overview of the development and the
evolution of forensic biometric systems, used initially manually and then in a semi-automatic
way. Before focusing on specific forensic fields, we will define the area, its terminology and
draw distinctions between forensic science and biometrics.
Forensic science refers to the applications of scientific principles and technical methods to an
investigation in relation to criminal activities, in order to establish the existence of a crime, to
determine the identity of its perpetrator(s) and their modus operandi. It is thus logical that this
area was a fertile ground for the use of physiological or behavioural data to sort and potentially
individualize protagonists involved in offences. Although manual classification of physical
measures (anthropometry), and of physical traces left and recovered from crime scenes
(fingermarks, earmarks) was largely successful, an automatic approach was needed to facilitate and speed up the retrieval of promising candidates in large databases. Even if the term biometrics usually refers 'to identifying an individual based on his or her distinguishing characteristics', biometric systems in forensic science today aim at filtering potential
candidates and putting forward candidates for further one to one verification by a forensic
specialist trained in that discipline, in the following traditional typical cases (here exemplified
using fingerprints):
Case 1: A biometric set of features in question coming from an unknown individual (living or
dead), is searched against a reference set of known (or declared as such) individuals. In the
fingerprint domain, we can think of a ten-print to ten-print search based on features obtained
from a ten-print card (holding potentially both rolled and flat inked impressions from fingers and palms), compared to a database of ten-print cards.

Case 2: An unknown biometric set of features left in circumstances of interest to an


investigation, is searched against a reference set of known (or declared as such) individuals
based on the features available. We can think of a fingermark recovered from a crime scene
that will be searched against a database of ten-print cards. The converse is also possible,
meaning the search of the features from a new ten-print card against the database of
(unresolved) fingermarks.

Case 3: An unknown to unknown comparison resulting in the possible detection of series of


relevant incidents. For fingerprints, it would mean comparing latent prints to latent prints.

Both case 2 and case 3 involve biometric features (in physical or other forms) that can be left
on scenes relevant to an investigation. In forensic investigation, one of the main objectives is
to find marks associating an offender to an event under investigation. These marks can be either
left by the perpetrator during the event or found on the perpetrator after it. This mechanism of
“exchange” of marks is known under the misnomer of “Locard's exchange principle” in
reference to the French criminalist Edmond Locard. Forensic information can be found either
as physical marks, or as digital traces. Physical marks are made for example by the apposition
of fingers, ears or feet on any kind of surfaces, while digital traces are analog or digital
recordings typically from phone-tapping and security cameras. Face and speech biometrics,
and to some extent modalities captured at distance such as ear, iris and gait can be used as
digital traces in forensic science.
As a first distinction between biometrics and forensic science, it is important to stress that
forensic biometric systems are used in practice as sorting devices without any embedded
decision mechanism on the truthfulness of the identification (although we do see some
developments in that direction). Indeed, the search algorithms are deployed as sorting devices.
These ranking tools allow (at an average known rate of efficiency) presenting the user with a short list (generally 15 to 20 candidates) potentially containing the right candidate for a defined query. Here the
term ‘candidate’ refers to the result of a search against biometric features originating from
either individuals or marks (known or unknown). It is then the duty of the forensic specialist to
examine each candidate from the list as if that candidate was submitted through the regular
channels of a police inquiry. This first contrast shows that forensic biometric systems are
considered by forensic scientists as external to the inferential process that will follow.

The second distinction lies in the terminology, performance measures and reported
conclusions used in the processes. Although forensic biometric systems can be used in both
verification (one to one) or identification modes (one to many), depending on the circumstances
of the case, the identification mode can be seen as a series of verification tasks. The reported
conclusion by the forensic specialist when comparing an unknown to a known entry can take
different forms depending on the area considered.

In the fingerprint field, conclusions can take three states: individualization, exclusion or
inconclusive. The first two are categorical conclusions accounting for all possible entities on
the Earth. In other words, an individualization of a finger mark is a statement that associates
that mark to its designated source to the exclusion of all other fingers or more generally all
other friction ridge skin formations. Individualization is often presented as the distinguishing
factor between forensic science and other scientific classification and identification tasks.

In the fields of face or ear recognition carried out manually by skilled examiners, speaker
verification based on phonetic/linguistic analysis, dental analysis or handwriting examination,
the three conclusions described above will remain under the same definition, but probabilistic
conclusions will also be allowed on a grading scale, both in favour of and against identity of sources,
with qualifiers such as: possible, probable or very likely. For a discussion of the adequacy of
the scale in forensic decision making refer to.

The principles and protocols regarding how these conclusions (outside the DNA area) can be
reached by a trained and competent examiner is outside our scope. However, the general
principles of the inference of identity of sources are treated in detail by Kwan or by Champod
et al. (for fingerprints). In all these areas, based on different features, the expert subjectively
weighs the similarities and dissimilarities to reach his/her conclusion. Nowadays the reliability of these so-called 'subjective disciplines' is being increasingly challenged, especially because of
(i) the development of evidence based on DNA profiles governed by hard data and
(ii) the evolving requirements for the admissibility of evidence following the Daubert
decision by the Supreme Court of the USA1.
The absence of underpinning statistical data in the classic identification fields is viewed as a
main pitfall that requires a paradigm shift.
In the field of DNA, the strength of evidence is indeed generally expressed statistically using
case specific calculations linked to a likelihood ratio (defined later). In essence the process is
probabilistic although we do see some tendencies to remove uncertainty from the debate. It is
our opinion that inferences of sources across all forensic identification fields, when put forward
to a factfinder in court for example, must be approached within a probabilistic framework even
in areas that had been traditionally presented through categorical opinions such as fingerprints.
An approach based on the concept of likelihood ratio should be promoted.
Indeed, a likelihood ratio (LR) is a statistical measure that offers a balanced presentation of
the strength of the evidence. It is especially suitable for assessing the contribution of forensic
findings in a fair and balanced way. Note that we restrict our analysis to an evaluative context,
meaning that the forensic findings may be used as evidence against a defendant in court. There
is a wide scope of application of biometric systems in investigative mode (e.g., surveillance)
that we will not cover. Formally, the LR can be defined as follows:

LR = p(E | S, I) / p(E | S̄, I)

Where:
E: Result of the comparison (set of concordances and discordances or a similarity measure
such as a score) between the biometric data from the unknown source and the biometric data
from the putative source.
S: The putative source is truly the source of the unknown biometric features observed (also
known as the prosecution proposition).
S̄: Someone else, from a relevant population of potential donors, is truly the source of the
unknown biometric features observed (also known as the defence proposition).
I: Relevant background information about the case such as information about the selection of
the putative source and the nature of the relevant population of potential donors.
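As a purely illustrative numerical sketch of this formula (the two probability values below are invented for the example and do not come from any real casework):

```python
# Illustrative computation of a likelihood ratio (LR) from a comparison result E.
# Both probability values are invented for this sketch; in practice they are
# obtained by modelling the score (or feature) distributions under each proposition.

p_E_given_S = 0.30      # p(E | S, I): probability of the findings if the putative
                        # source is truly the source (prosecution proposition)
p_E_given_Sbar = 0.003  # p(E | S-bar, I): probability of the findings if someone else
                        # from the relevant population is the source (defence proposition)

LR = p_E_given_S / p_E_given_Sbar
print(f"LR = {LR:.0f}")  # LR = 100: the findings are 100 times more probable
                         # under S than under S-bar
```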

This LR measure forces the scientist to focus on the relevant question (the forensic findings)
and to consider them in the light of a set of competing propositions. The weight of forensic
findings is essentially a relative and conditional measure that helps to progress a case in one
direction or the other depending on the magnitude of the likelihood ratio. When the numerator is close to 1, the LR is simply the reciprocal of the random match probability (RMP) in a specified population. In these cases, reporting the evidence through the RMP is adequate. However, most biometric features suffer from within-individual variability, requiring an assessment of the numerator on a case-by-case basis.
The performance measures for forensic science are obtained from the analysis of the
distributions of the LRs in simulated cases with given S and S̄. These distributions are studied using a specific plot (called a Tippett plot) that shows one minus the cumulative distribution of, respectively, the LRs computed under S and the LRs computed under S̄. These plots also allow study and comparison of the proportions of misleading evidence: the percentage of LR < 1 when the prosecution proposition S is true and the percentage of LR > 1 when the defence proposition S̄ is true. These two rates of misleading results are defined as follows:

RMED: Rate of misleading evidence in favour of the defence: among all LRs computed under
the prosecution proposition S, proportion of LR below 1.
RMEP: Rate of misleading evidence in favour of the prosecution: among all LRs computed
under the defence proposition S̄, proportion of LR above 1.
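A toy sketch of how these two rates would be computed from lists of simulated LR values (the LR values themselves are invented placeholders standing in for the output of simulated casework):

```python
# Toy computation of RMED and RMEP from simulated likelihood ratios.
# The two lists are invented placeholders for LRs obtained in simulated cases.

lrs_under_S = [250.0, 40.0, 8.0, 0.6, 1200.0]     # LRs computed when S (same source) is true
lrs_under_Sbar = [0.01, 0.2, 0.05, 3.0, 0.002]    # LRs computed when S-bar (different source) is true

# RMED: proportion of LRs below 1 among comparisons where the prosecution proposition S is true
rmed = sum(1 for lr in lrs_under_S if lr < 1) / len(lrs_under_S)
# RMEP: proportion of LRs above 1 among comparisons where the defence proposition S-bar is true
rmep = sum(1 for lr in lrs_under_Sbar if lr > 1) / len(lrs_under_Sbar)

print(f"RMED = {rmed:.0%}, RMEP = {rmep:.0%}")  # 20% and 20% with this toy data
# A Tippett plot would show, for each proposition, one minus the cumulative
# distribution of these LR values as a function of the LR.
```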
Whereas a LR is a case-specific measure of the contribution of the forensic findings to the
identity of sources, the Tippett plot and the associated rates (RMED, RMEP) provide global
measures of the efficiency of a forensic biometric system. LR based measures are now regularly
used in the forensic areas of speaker recognition, fingerprints, and DNA. That constitutes a
major difference compared to standard global measures of biometric performances based on
type I and type II error rates (e.g., Receiver Operating Characteristic (ROC) or Detection Error
Trade off (DET) curves). For a discussion on the limitations associated with these traditional
measures when used in legal proceedings, see.
The concept of identity of sources is essential and needs to be distinguished from the
determination of civil identity (e.g., assigning the name of a donor to a recovered mark) and from guidance as to the activities of the individual or their unlawful nature. Forensic
comparisons aim initially at providing scientific evidence to help address issues of identity of
sources of two sets of biometric data; whether these data are coupled with personal information
(such as name, date of birth or social security number) is irrelevant for the comparison process.
From the result of this comparison and depending on the availability and quality of personal
information, then inference as to the civil identity can be made if needed. Likewise, there is a
progression of inferences between the issue of identity of sources towards their alleged
activities and offences. It is a hierarchical system of issues as described by Cook et al.

The forensic biometric comparison process aims at handling source level issues as its primary
task: the whole process is not about names or identity, but in relation to source attribution
between two submitted sets of features (respectively from a source 1 and a source 2).
A third distinction lies in the wide range of selectivity of the biometric data that can be
submitted due to varying quality of the material. Selectivity here can be seen as the
discrimination power of the features, meaning the ability to allow a differentiation when they
are coming from distinct sources. Some of the main modalities will be reviewed in the next
sections but there is an all-encompassing phenomenon that goes across modalities in varying
degrees. In the commission of a crime, contrary to usual biometric systems (e.g., for access control), it may not be possible to obtain high quality input biometric features - either for the
template or transaction data. These biometric data are limited by numerous factors such as: the
availability of the person and his/her level of cooperation, the invasiveness of the acquisition,
the various objects and positions one can take or touch while a crime is being committed. The
subjects make no effort to present their biometric data to the system in an ideal and controlled
way. Hence, whether the biometric data is acquired directly from individuals (living or dead),
from images (of individuals, part thereof or X-rays) or marks left by them following criminal
activities, the quality of the material available for the biometric comparison process, and thus
its selectivity, may vary drastically from case to case and so will the within-person variability.
The overall performance of the system is largely influenced by the quality of the input data
conditioned by the acquisition and environmental conditions as summarized in Table 1.1.
These factors are common in all biometric deployments, but forensic scenarios tend to
maximize their variability.

The last distinction we would like to stress upon is the range of comparisons that can be
undertaken in the forensic environment depending on the circumstances of the cases. The three
cases outlined initially all deal with comparisons of biometric information (with one side or
the other being known) but at differing levels of selectivity. The driving force here is more the
selectivity level associated with each compared biometric data sets, which can be similar (case
1 and case 3) or largely different (case 2). The availability of known information, such as the
name, the date of birth, the social security number (i.e. the personal data associated with each
compared biometric data set), associated with the biometric features is not directly relevant to
the comparison process. This information, although decisive for progressing in the hierarchy,
has no impact on the decision of the identity of sources, which is driven by the selectivity of
the compared biometric data. The distinction between mark and reference material in a forensic
case is that in general, marks are of lower quality than reference material (although the reverse
could also be true). This concept of selectivity that is driving the move from case 1 to case 3
is a continuum on both sides (source 1 and source 2). Essentially, we can expect performances
to degrade as we move down in selectivity levels.
Table: Factors affecting the selectivity of biometric information and thus the performance of biometric systems deployed in forensic applications

Acquisition conditions: Quality of the acquisition device (e.g., resolution). Amount of input information (e.g., a rolled inked fingerprint on a card versus a limited, poorly developed finger-mark on a curved surface). The availability of multiple templates. The types of acquisition of both templates and transaction data. Acquisition at a distance, the target size, the object movement, and the horizontal or vertical misalignments between the device and the subject. Presence of corrupting elements (e.g., glasses, beard, hair, clothes, or health condition - living or dead - of the subject). The time interval between the acquisitions of both sets of biometric material to be compared.

Environmental conditions: Background noise and uncontrolled conditions (e.g., illumination, noisy environment).

Data processing: The choice of the feature extraction algorithms and their level of automation (e.g., poor quality fingermarks may need to be manually processed by a skilled operator in order to guide the system as to the relevant features). Efficiency of the detection and tracking algorithms (e.g., face detection and tracking). The matching algorithms in place and their hierarchy.

Operator: The operator's interaction with the system at all stages (from acquisition to verification of candidates' lists).
Fig. General scheme of a forensic biometric system.

Historical Background
The use of Biometrics has been traced back as far as the Egyptians, who measured people to
identify them. Alphonse Bertillon, chief of the criminal identification division of the police department in Paris, developed and practiced the idea of using a number of body measurements to identify criminals in the mid-19th century. These measurements were written on cards that could be sorted by height, length of arm or any other parameter. After this, many law enforcement agencies began to use fingerprints, ears, faces, etc. to determine the identity of criminals. Such technology now provides for the capture and processing of biometric information. The following table gives a year-wise description of developments in biometric technology.
YEAR DESCRIPTION

1858 First systematic capture of hand images for identification purposes was
recorded.
1875 Schwable was the first to invent a method to measure the external ear for
personal identification.
1879 Bertillon developed anthropometrics to identify individuals.
1892 Galton developed a classification system for fingerprints.
1896 Henry developed a fingerprint classification system.
1903 Bertillon system collapsed.
1936 Concept of using the iris pattern for identification was proposed.
1960 Face recognition became semi-automated.
1960 First model of acoustic speech production was created.
1965 Automated signature recognition research began.
1969 FBI pushed to make fingerprint recognition an automated process.
1970 Face recognition took another step towards automation.
1970 Behavioural components of speech were first modelled.
1974 First commercial hand geometric system became available.
1975 FBI funded development of sensors and minutiae extracting technology.
1976 First prototype system for speaker recognition was developed.
1977 Patent was awarded for acquisition of dynamic signature information.
1985 Concept that no two irises are alike was proposed.
1985 Patent for hand identification was awarded.
1987 Patent stating that the iris can be used for identification was awarded.
1988 First semi-automated facial recognition system was developed.
1988 Eigenface technique was developed for face recognition.
1989 Iannarelli designed a useful primary and secondary classification system of
external ear.
1991 Face detection was pioneered, making real time face recognition possible.
1992 Biometrics consortium was established within U.S Government.
1993 Development of an iris prototype unit began.
1993 Face recognition technology (FERET) program was initiated.
1994 First iris recognition algorithm was patented.
1994 Integrated automated fingerprint identification system (IAFIS) competition was
held.
1994 Palm system was benchmarked.
1995 Iris prototype became available as a commercial product.
1996 Hand geometry was implemented at the Olympic games.
1996 NIST began hosting annual speaker recognition evaluations.
1998 FBI launched CODIS (DNA forensic database).
1999 Study on the compatibility of biometrics and machine readable travel documents was launched.
1999 FBI's IAFIS major components became operational.
2000 First face recognition vendor test (FRVT 2000) was held.
2002 ISO/IEC standards subcommittee on Biometrics was established.
2002 M1 technical committee on Biometrics was formed.
2003 ICAO adopted blue print to integrate Biometrics into machine readable travel
documents.
2003 European Biometrics forum was established.
2004 DOD implemented ABIS.
2004 First state-wide automated palm print database was deployed in the US.
2004 Face recognition grand challenge began.
2005 US patent on iris recognition concept expired.
2006 Phalguni adopted a simple geometric approach for ear recognition.
2006 Jeges automated model-based human ear identification.
2007 Yuan proposed ear detection based on skin-colour and contour information.
2007 Xiaoxun proposed symmetrical null space LDA for face and ear recognition.
2008 Xie introduced ear recognition using LLE and IDLLE Algorithm.
2009 Islam proposed Score Level Fusion of Ear and Face Local 3D features for fast
and Expression-Invariant Human Recognition.
2009 Ear Localization using Hierarchical Clustering was developed in IIT Kanpur.
2010 A Survey on Ear Biometrics was conducted in the West Virginia University.
2010 FBI funded a twins' iris study in the UND.
2010 Shaped Wavelets for Curvilinear Structures for Ear Biometrics in the UK.
2011 Joshi studied on Edge Detection and Template Matching Approaches for
Human Ear Detection in India.
2011 Automated human identification using ear imaging was developed in Hong
Kong.
2012 An Efficient Ear Localization Technique was developed in IIT Kanpur.
2012 Region-Based features extraction in ear biometrics was developed in Malaysia.

Biometric Identification process and Types of Biometrics


"Biometrics" means "life measurement" but the term is usually associated with the use of
unique physiological characteristics to identify an individual. The application which most
people associate with biometrics is security.
However, biometric identification will eventually have a much broader relevance as computer interfaces become more natural. Knowing the person with whom you are conversing is an
important part of human interaction and one expects computers of the future to have the same
capabilities.

Biometric System
A biometric system is essentially a pattern recognition system that recognizes a person based on some specific physiological or behavioural characteristic unique to that person. Biometric authentication is a major part of the information technology field and refers to an automated process.

CHARACTERISTICS OF A BIOMETRICS SYSTEM


Any human physiological or behavioural characteristic could be a biometric, provided it has the following desirable properties:

1. Universality: Every person should have the characteristic.
2. Uniqueness: No two persons should be the same in terms of the characteristic.
3. Permanence: The characteristic should be invariant with time.
4. Collectability: The characteristic can be measured quantitatively.
In practice, there are some other important requirements:
1. Performance: which refers to the achievable identification accuracy, the resource
requirements to achieve acceptable identification accuracy, and the working or environmental
factors that affect the identification accuracy.
2. Acceptability: which indicates to what extent people are willing to accept the biometrics
system.
3. Circumvention: which refers to how easy it is to fool the system by fraudulent techniques.

Biometrics Identification Systems


Associating an identity with an individual is called personal identification. The problem of resolving the identity of a person can be categorized into two fundamentally distinct types of problems with different inherent complexities: verification mode and identification mode (together more popularly known as recognition).

Enrolment is done before biometrics can be used for recognition: a trusted sample of the biometric trait is captured using a biometric sensor and pre-processed so that the approach used for recognition can be applied to the sample.

Verification Mode refers to the problem of confirming or denying a person's claimed identity ('Am I who I claim I am?'). The system validates a person's identity by comparing the captured biometric data with his/her own biometric template(s) stored in the system database. In such a system, a basic identity claim, usually a PIN, a user name, a smart card, etc., is accepted, and a biometric sample taken from the subject is matched against the corresponding template using a 1:1 matching algorithm to confirm the person's identity.

Identification Mode answers the question 'Who am I?': the captured biometric data is searched against all enrolled templates in the database (1:N matching) to establish the person's identity without a prior identity claim. 'Recognition' is a generic term and does not necessarily imply either verification or identification; all biometric systems perform recognition to 'again know' a person who has been previously enrolled, whereas verification is the task where the biometric system attempts to confirm an individual's claimed identity by comparing a submitted sample to one or more previously enrolled templates.
Fig. Enrolment and recognition (verification and identification)
stages of a biometric system.
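The contrast between the two modes can be sketched as follows (illustrative Python only; the toy similarity function, the enrolled templates and the threshold are assumptions for this example, not any deployed system):

```python
# Sketch contrasting verification (1:1) and identification (1:N) modes.
# The similarity function, templates and threshold are illustrative assumptions.

def similarity(template, sample):
    """Toy similarity: 1 / (1 + summed absolute difference of the features)."""
    return 1.0 / (1.0 + sum(abs(t - s) for t, s in zip(template, sample)))

database = {                 # templates captured at enrolment
    "alice": [0.2, 0.8, 0.5],
    "bob":   [0.9, 0.1, 0.4],
    "carol": [0.3, 0.7, 0.9],
}
THRESHOLD = 0.7

def verify(claimed_id, sample):
    """1:1 matching: compare the sample only with the claimed identity's template."""
    return similarity(database[claimed_id], sample) >= THRESHOLD

def identify(sample, shortlist_size=2):
    """1:N matching: rank all enrolled identities and return a short candidate list
    (in forensic use, a human examiner reviews such a list; the system only sorts)."""
    ranked = sorted(database, key=lambda person: similarity(database[person], sample),
                    reverse=True)
    return ranked[:shortlist_size]

probe = [0.25, 0.75, 0.55]
print(verify("alice", probe))   # True: the claimed identity is accepted
print(identify(probe))          # ['alice', 'carol'] with this toy data
```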

Modules of Biometrics System


A simple biometric system consists of the following six basic modules:
1. Portal is meant to protect some assets. An example of a portal is the gate at an entrance of
a building. If the user has been successfully authenticated and is authorized to access an object
then access is granted.
2. Central Controlling Unit receives the authentication request, controls the biometrics
authentication process and returns the result of user authentication.
3. Input Device is used for biometrics data acquisition. During the acquisition process user’s
liveness and quality of the sample may be verified.
4. Feature Extraction module processes the biometrics data. The output of the module is a set
of extracted features suitable for the matching algorithm. During the feature extraction process
the module may also evaluate quality of the input biometrics data.
5. Storage of Biometrics Templates is a kind of database. Biometrics templates can also be
stored on a user-held medium (e.g., smartcard). In that case a link between the user and his
biometrics template must exist (e.g., in the form of an attribute certificate).
6. Matching algorithm compares the current biometric features with the stored template. The desired security threshold level may be a parameter of the matching process. The result is a yes/no answer.

Fig. Modules of Biometrics System
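As an illustration of how these six modules fit together, the skeleton below maps each module onto a function. Every function body is an invented stand-in for real hardware or algorithms, and the template values are placeholders.

```python
# Skeleton mapping the six modules listed above onto functions.
# Every function body is an invented stand-in for real hardware or algorithms.

def input_device():
    """Input Device: acquire raw biometric data (here a fake sample)."""
    return [0.2, 0.8, 0.5]

def extract_features(raw_sample):
    """Feature Extraction: turn the raw data into features for the matcher."""
    return [round(value, 1) for value in raw_sample]

def load_template(user_id):
    """Storage of Biometric Templates: look up the enrolled template."""
    templates = {"alice": [0.2, 0.8, 0.5]}
    return templates[user_id]

def match(features, template, tolerance=0.1):
    """Matching algorithm: compare features with the template; yes/no answer."""
    distance = sum(abs(f - t) for f, t in zip(features, template))
    return distance <= tolerance

def central_controlling_unit(user_id):
    """Central Controlling Unit: drive the authentication process end to end."""
    features = extract_features(input_device())
    return match(features, load_template(user_id))

def portal(user_id):
    """Portal: grant or deny access to the protected asset."""
    print("access granted" if central_controlling_unit(user_id) else "access denied")

portal("alice")   # prints "access granted" with this toy data
```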

Biometric Types
Biometrics measures biological characteristics for identification or verification purposes of an
individual. Since IDs and passports can be forged, more sophisticated methods needed to be
put into place to help protect companies and individuals. There are two types of biometric methods. One is called physiological biometrics, used for identification or verification purposes. Identification refers to determining who a person is; this method is commonly used in criminal investigations. Behavioural biometrics is the other type; it is typically used for verification purposes. Verification is determining whether a person is who they say they are. This method looks at patterns of how certain activities are performed by an individual.

Fig. Different Types of Biometrics Features

3.1 Physiological Type of Biometric


The physical characteristics of a person, like fingerprints, hand geometry, iris, face and DNA, are known as physiological biometrics. Each biometric trait has its strengths and weaknesses.

3.1.1 Fingerprints
A fingerprint is a pattern of ridges and furrows located on the tip of each finger. Fingerprints have been used for personal identification for many centuries and their matching accuracy is very high. Traditionally, patterns were extracted by creating an inked impression of the fingertip on paper.
Today, compact sensors provide digital images of these patterns. Fingerprint recognition for
identification acquires the initial image through live scan of the finger by direct contact with a
reader device that can also check for validating attributes such as temperature and pulse. Since
the finger actually touches the scanning device, the surface can become oily and cloudy after
repeated use, reducing the sensitivity and reliability of optical scanners. This traditional method gives good accuracy with currently available fingerprint recognition systems used for authentication.

Fig. Fingerprint Recognition

3.1.2 Hand Geometry


Hand geometry systems produce estimates of certain measurements of the hand such as the
length and the width of fingers. Various methods are used to measure the hand. These methods
are most commonly based on either mechanical or optical principles. The latter are much more common today. Hand geometry is used for the identification and recognition of a person.

Fig. Hand Geometry

Vascular Pattern Recognition


Vascular Pattern Recognition, also commonly referred to as Vein Pattern Authentication, is a
fairly new biometrics in terms of installed systems. Using near-infrared light, reflected or
transmitted images of blood vessels of a hand or finger are derived and used for personal
recognition. Different vendors use different parts of the hand, palms, or fingers, but rely on a
similar methodology. The vascular pattern of the human body is unique to a specific individual
and does not change as people age. Claims for the technology include that it

Is difficult to forge: Vascular patterns are difficult to recreate because they are inside the hand
and, for some approaches, blood needs to flow to register an image.
Is contact-less: Users do not touch the sensing surface, which addresses hygiene concerns and
improves user acceptance.

Fig. Vascular Pattern Recognition

Ear Biometrics
Using the ear for personal identification has been of interest for at least 100 years. The ear does not change considerably during human life; the face, on the other hand, changes more significantly with age than any other part of the human body. Ear features are relatively fixed and unchangeable. It has been suggested that the shape of the ear and the structure of the cartilaginous tissue of the pinna are distinctive. Matching the distances of salient points on the pinna from a landmark location on the ear is the suggested method of recognition.
Fig. Ear Recognition

3.1.3 Iris
The iris begins to form in the third month of gestation and the structures creating its pattern are
largely complete by the eighth month. Its complex pattern can contain many distinctive features such as arching ligaments, furrows, ridges, crypts, rings, corona, freckles and a zigzag collarette. Iris scanning is less intrusive than retinal scanning because the iris is easily visible from several meters
away. Responses of the iris to changes in light can provide an important secondary verification
that the iris presented belongs to a live subject. Irises of identical twins are different, which is
another advantage.

Fig. Iris Pattern Recognition


Retinal Recognition
Retina biometrics is based on the analysis of the pattern of blood vessels at the back of the eye, which is again unique to the individual. To take a retina scan, a low intensity light is used to capture the unique pattern of the retina.

Fig. Retina Recognition

3.1.4 Face
Facial recognition is the most natural means of biometric identification. The approaches to face
recognition are based on the shape of facial attributes, such as the eyes, eyebrows, nose, lips and chin, and the relationships between these attributes. Because this technique involves many facial elements, these systems can have difficulty in matching face images.

Fig. Face Recognition


3.1.5 DNA
DNA (Deoxyribonucleic Acid) sampling is rather intrusive at present and requires a form of
tissue, blood or another bodily sample. This method of capture still has to be refined. So far, DNA analysis has not been sufficiently automated to rank as a biometric technology. The analysis of human DNA is now possible within 10 minutes. As soon as the
technology advances so that DNA can be matched automatically in real time, it may become
more significant. At present DNA is very entrenched in crime detection and so will remain in
the law enforcement area for the time being.

Fig. DNA Makeup

3.2 Behavioral type of Biometric


Behavioural methods of identification pay attention to the actions of a person, giving the user an opportunity to control his actions. Biometrics based on these methods must take into consideration a high level of internal variation (mood, health condition, etc.), which is why such methods are useful only with constant use. This category includes keystroke dynamics, signature and voice.

3.2.1 Keystroke
The keyboard is the device that helps us communicate with the computer. People use the keyboard in different ways: some people type fast, some slowly. The speed of typing also depends on the mood of the person and the time of day. Biometric keystroke recognition is a technology for recognizing people from the way they type. It is important to understand that this technology does not deal with 'what' is written but 'how' it is written.
Fig. Keystroke Recognition

3.2.2 Signature
The way a person signs his or her name is known to be characteristic of that individual.
Signature is a simple, concrete expression of the unique variations in human hand geometry.
Collecting samples for this biometric requires subject cooperation and a writing instrument. Signatures are a behavioural biometric that changes over a period of time and is
influenced by physical and emotional conditions of a subject. In addition to the general shape
of the signed name, a signature recognition system can also measure pressure and velocity of
the point of the stylus across the sensor pad.

Fig. Signature Recognition

3.2.3 Voice
The features of an individual's voice are based on physical characteristics such as the vocal tract, mouth, nasal cavities and lips that are used in creating a sound. These characteristics of human
speech are invariant for an individual, but the behavioural part changes over time due to age,
medical conditions and emotional state. Voice recognition techniques are generally categorized
according to two approaches: 1) Automatic Speaker Verification (ASV) and 2) Automatic
Speaker Identification (ASI). Speaker verification uses voice as the authenticating attribute in
a two-factor scenario.

Fig. Voice Recognition

Gait Pattern Recognition


This is one of the newer technologies and is yet to be researched in more detail. Basically, gait is the peculiar way one walks, and it is a complex spatio-temporal biometric. It is not supposed to be very distinctive but can be used in some low-security applications. Gait is a behavioural biometric and may not remain the same over a long period of time, due to changes in body weight or serious brain damage. Acquisition of gait is similar to acquiring a facial picture and may therefore be an acceptable biometric. Since a video sequence is used to measure several different movements, this method is computationally expensive.

Fig. Gait Pattern Recognition


APPLICATIONS OF BIOMETRICS
Biometrics is being used in a number of applications in our vastly interconnected society. Questions like “Is she really
who she claims to be?", “Is this person authorized to use this facility?" or “Is he in the watchlist
posted by the government?" are routinely being posed in a variety of scenarios ranging from
issuing a driver's licence to gaining entry into a country. The need for reliable user
authentication techniques has increased in the wake of heightened concerns about security, and
rapid advancements in networking, communication and mobility. Thus, biometrics is being
increasingly incorporated in several different applications. These applications can be
categorized into three main groups:

Table: Different applications of biometric characteristics

Forensics: corpse identification; criminal investigation; parenthood determination; missing children.
Government: national ID card; driver's license; voter registration; welfare disbursement; border crossing.
Commercial: ATM; access control; computer login; mobile phone; e-commerce; Internet banking; smart card.
UNIT 2

Fingerprint Biometrics

A fingerprint is the representation of the epidermis of a finger. It consists of a pattern of


interleaved ridges and valleys. Fingertip ridges evolved over the years to allow humans to grasp
and grip objects. Like everything in the human body, fingerprint ridges form through a
combination of genetic and environmental factors. In fact, fingerprint formation is similar to
the growth of capillaries and blood vessels in angiogenesis. The genetic code in DNA gives
general instructions on the way skin should form in a developing fetus, but the specific way it
forms is a result of random events (the exact position of the fetus in the womb at a particular
moment, and the exact composition and density of surrounding amniotic fluid). This is the
reason why even the fingerprints of identical twins are different. Fingerprints are fully formed
(i.e. become stable) at about seven months of fetal development, and finger ridge configurations do not change throughout the life of an individual, except in cases of accidents such as cuts on
the fingertips. This property makes fingerprints a very attractive biometric identifier.

2.1 Historical Background of fingerprint identification.


The impressions from the last finger joints are known as fingerprints. Using fingerprints to
identify individuals has become commonplace, and that identification role is an invaluable tool
worldwide. What some people do not know is that the use of friction ridge skin impressions
as a means of identification has been around for thousands of years and has been used in several
cultures. Friction ridge skin impressions were used as proof of a person’s identity in China
perhaps as early as 300 B.C., in Japan as early as A.D. 702, and in the United States since 1902.

 Ancient History
Earthenware estimated to be 6000 years old was discovered at an archaeological site in
northwest China and found to bear clearly discernible friction ridge impressions. These prints
are considered the oldest friction ridge skin impressions found to date; however, it is unknown
whether they were deposited by accident or with specific intent, such as to create decorative
patterns or symbols. In this same Neolithic period, friction ridges were being left in other
ancient materials by builders. Just as someone today might leave impressions in cement, early
builders left impressions in the clay used to make bricks.
 221 B.C. to A.D. 1637
The Chinese were the first culture known to have used friction ridge impressions as a means
of identification. The earliest example comes from a Chinese document entitled “The Volume
of Crime Scene Investigation—Burglary”, from the Qin Dynasty (221 to 206 B.C.). The
document contains a description of how handprints were used as a type of evidence.

During the Qin through Eastern Han dynasties (221 B.C. to 220 A.D.), the most prevalent
example of individualization using friction ridges was the clay seal. Documents consisting of
bamboo slips or pages were rolled with string bindings, and the strings were sealed with clay.
On one side of the seal would be impressed the name of the author, usually in the form of a
stamp, and on the other side would be impressed the fingerprint of the author. The seal was
used to show authorship and to prevent tampering prior to the document reaching the intended
reader. It is generally recognized that it was both the fingerprint and the name that gave the
document authenticity.

The use of friction ridge skin impressions in China continued into the Tang Dynasty (A.D.
617–907), as seen on land contracts, wills, and army rosters. It can be postulated that with the
Chinese using friction ridge skin for individualization and trading with other nations in Asia,
these other nations might have adopted the practice. For example, in Japan, a “Domestic Law”
enacted in A.D. 702 required the following: “In case a husband cannot write, let him hire another man to write the document and after the husband’s name, sign with his own index finger”.
This shows at least the possibility that the Japanese had some understanding of the value of
friction ridge skin for individualization.

Additionally, in India, there are references to the nobility using friction ridge skin as signature:

In A.D. 1637, the joint forces of Shah Jahan and Adil Khan, under the command of Khan
Zaman Bahadur, invaded the camp of Shahuji Bhosle, the ruler of Pona (in the present-day
Maharashtra). The joint army defeated Shahuji, who was compelled to accept the terms of
peace:

Since the garrison (of Shahuji) was now reduced to great extremities, Shahuji wrote frequently
to Khan Bahadur in the humblest strain, promising to pay allegiance to the crown. He at the
same time solicited a written treaty ... stamped with the impression of his hand.

The above text is an example of the nobility’s use of palmprints in India to demonstrate
authenticity of authorship when writing an important document. It is believed that the use of
prints on important documents was adopted from the Chinese, where it was used generally, but
in India it was mainly reserved for royalty. The use of friction ridge skin as a signature in China,
Japan, India, and possibly other nations prior to European discovery is thus well documented.

 17th and 18th Centuries


In the late 17th century, European scientists began publishing their observations of human skin.
Friction ridge skin was first described in detail by Dr. Nehemiah Grew. In 1687, the Italian
physiologist Marcello Malpighi published “Concerning the External Tactile Organs”, in which
the function, form, and structure of friction ridge skin was discussed. Malpighi is credited with
being the first to use the newly invented microscope for medical studies. In his treatise,
Malpighi noted that ridged skin increases friction between an object and the skin’s surface;
friction ridge skin thus enhances traction for walking and grasping. In recognition of
Malpighi’s work, a layer of skin (stratum Malpighi) was named after him.
Although friction ridge skin had been studied for a number of years, it would be 1788 before
the uniqueness of this skin was recognized in Europe. J. C. A. Mayer, a German doctor and
anatomist, wrote a book entitled Anatomical Copper-plates with Appropriate Explanations,
which contained detailed drawings of friction ridge skin patterns. Mayer wrote, “Although the
arrangement of skin ridges is never duplicated in two persons, nevertheless the similarities are
closer among some individuals. In others the differences are marked, yet in spite of their
peculiarities of arrangement all have a certain likeness”. Mayer was the first to write that
friction ridge skin is unique.

 19th Century
In his 1823 thesis titled “Commentary on the Physiological Examination of the Organs of
Vision and the Cutaneous System”, Dr. Johannes E. Purkinje (1787–1869), professor at the
University of Breslau in Germany, classified fingerprint patterns into nine categories and gave
each a name. Although Dr. Purkinje went no further than naming the patterns, his contribution
is significant because his nine pattern types were the precursor to the Henry classification
system.

German anthropologist Hermann Welcker (1822–1898) of the University of Halle led the way
in the study of friction ridge skin permanence. Welcker began by printing his own right hand
in 1856 and then again in 1897, thus gaining credit as the first person to start a permanence
study.
Generally, the credit for being the first person to study the persistence of friction ridge skin
goes to Sir William James Herschel. In 1858, he experimented with the idea of using a
handprint as a signature by having a man named Rajyadhar Konai put a stamp of his right hand
on the back of a contract for road binding materials. The contract was received and accepted
as valid. This spontaneous printing of Konai’s hand thus led to the first official use of friction
ridge skin by a European.
Upon his appointment as Magistrate and Collector at Hooghly, near Calcutta, in 1877, Herschel
was able to institute the recording of friction ridge skin as a method of individualization on a
widespread basis. Herschel was in charge of the criminal courts, the prisons, the registration of
deeds, and the payment of government pensions, all of which he controlled with fingerprint
identification. On August 15, 1877, Herschel wrote what is referred to as the “Hooghly Letter”
to Bengal’s Inspector of Jails and the Registrar General, describing his ideas and suggesting
that the fingerprint system be expanded to other geographical areas. While proposing even
further uses of this means of individualization, the Hooghly Letter also explained both the
permanence and uniqueness of friction ridge skin. Henry Faulds was the first person to publish
in a journal the value of friction ridge skin for individualization, especially its use as evidence.
The scientific study of friction ridge skin was also taken up by a prominent scientist of the time,
Sir Francis Galton. Galton continued to take anthropometric measurements, and he added the
printing of the thumbs and then the printing of all 10 fingers. As the author of the first book on
fingerprints, Galton established that friction ridge skin was unique and persistent. Because
Galton was the first to define and name specific print minutiae, the minutiae became known as
Galton details.
In 1894, Sir Edward Richard Henry, Inspector General of Police for the Lower Provinces,
Bengal, collaborated with Galton on a method of classification for fingerprints. With the help of Indian police officers Khan Bahadur Azizul Haque and Rai Bahadur Hem Chandra Bose,
the Henry classification system was developed. Once the classification system was developed
and proved to be effective, Henry wrote to the government of India asking for a comparative
review of anthropometry and fingerprints.
The first ever Finger Print Bureau in the world was established at the Writers' Building in Calcutta (now Kolkata) in the year 1897. Once convinced of the infallibility and reliability of finger impressions as a means of identification, the authorities made this evidence admissible in courts of law under Section 45 of the Indian Evidence Act (1872), in which it is specifically mentioned.
A criminal case in Bengal in 1898 is considered to be the first case in which fingerprint
evidence was used to secure a conviction.
 20th Century
The first trial in England that relied on fingerprint evidence involved Inspector Charles
Stockley Collins of Scotland Yard. Collins testified to an individualization made in a burglary
case. That 1902 trial and subsequent conviction marked the beginning of fingerprint evidence
in the courts of England. In October 1902, Alphonse Bertillon made an individualization in
Paris, France, with fingerprints. As a result of the case, Bertillon is given credit for solving the
first murder in Europe with the use of only fingerprint evidence.
In 1903, after several months of fingerprinting criminals upon their release, Captain James H.
Parke of New York state developed the American Classification System. The use of the
American Classification System and subsequent fingerprinting of all criminals in the state of
New York was the first systematic use of fingerprinting for criminal record purposes in the
United States.
In 1914, Dr. Edmond Locard published “The Legal Evidence by the Fingerprints”.
Harry Jackson, convicted of burglary in England in 1902, is generally regarded as the first
criminal caught and convicted on the basis of fingerprint evidence alone.
In the 1980s, the Japanese National Police Agency came up with its first automated electronic
matching system called "Automated Fingerprint Identification Systems (AFIS)". AFIS collects
fingerprints through sensors, and then the computer identifies the ridge patterns and minutia
points (using Henry's system) from digital fingerprints before finding match results. It allowed
law enforcement agencies around the world to remotely and instantaneously validate millions
of fingerprints. To further strengthen the cooperation between law enforcement agencies, the
FBI launched the Integrated AFIS (IAFIS) in 1999. IAFIS processed about 14.5 million fingerprint
documents during the year after its inception. IAFIS also registers and preserves civilian
fingerprints, which can be used to track information about a person's licences, employment, or
social care schemes. One out of every six individuals, on average, has their data saved in the
FBI's database.
The world's most prominent civil applications, such as India's "Aadhaar" program, the US
biometric program and the UK border monitoring initiative, utilize millions of rolled or slap
fingerprints. New fingerprint recognition methods are under development to boost the efficiency
of such big-data applications.
2.2 Types of fingerprint pattern
Fingerprint identification is one of the most important criminal investigation tools due to two
features of friction ridge skin: its persistence and its uniqueness. A person's fingerprints do not
change over time. The friction ridges which create fingerprints are formed in the womb and grow
as the baby grows. The only way a fingerprint can change is through permanent scarring, which
does not happen very often. Fingerprints are also unique to an individual: even identical twins
have different fingerprints. Your fingerprints are yours and yours alone, and they will remain
that way for the rest of your life.
As friction ridges spread out across the surface of the developing fingers, they form one of
three patterns: an arch, a loop, or a whorl. Each pattern type can be broken down into several
sub-patterns, which will be discussed in this chapter. The pattern formed is dependent on the
dimensions of the volar pad, its size, shape, and position on the finger. Pattern type is a function
of the volar pad’s 3D regression combined with the proliferation of friction ridges. As early as
1924, it was hypothesized that volar pad height and symmetry influence pattern formation.
“High,” symmetrical volar pads form whorls. Asymmetrical volar pads form loops. And “low”
volar pads form arches. While the minute details, or minutiae, within your fingerprint are
unique to you, there is evidence to suggest your fingerprint pattern is inherited. As with eye
color or hair color, your fingerprint patterns may appear similar to those of your mother or
father. Besides genetic factors, environmental factors also play a role in pattern formation. It is more
likely that you inherited your parents' volar pad formations and rate of friction ridge development
than the actual patterns themselves. The patterns we see on our fingerprints display what we
will call ridge flow, which is an illustrative method of describing how the friction ridges form
patterns.
Most fingerprint pattern types have one or more of the following features formed as a result of
ridge flow: the core and delta. The core of a fingerprint, like the core of an apple, is the center
of the pattern. It is the focal point around which the ridges flow. The second feature of most
fingerprints is the delta. A delta is an area of friction ridge skin where ridge paths flowing in
three different directions create a triangular pattern. These patterns appear similar to lake or
river deltas: areas where the flow diverges.

Fig: The three basic fingerprint pattern types: (a) arches, (b) loops, and (c) whorls.

About 65 percent of the total population has loops, 30 percent have whorls, and 5 percent have
arches. Arches have ridges that enter from one side of the fingerprint and leave from the other
side with a rise in the center. Whorls look like a bull’s-eye, with two deltas (triangles). Loops
enter from either the right or the left and exit from the same side they enter.
The core is the center of a loop or whorl. A triangular region located near a loop is called a
delta. Some of the ridge patterns near the delta will rise above and some will fall below this
triangular region. Sometimes the center of the delta may appear as a small island. A ridge count
is another characteristic used to distinguish one fingerprint from another. To take a ridge count,
an imaginary line is drawn from the center of the core to the edge of the delta, and the number
of ridges that cross or touch this line is counted.

Fig: The core of a loop pattern. Fig: The delta of a loop pattern.
ARCHES:
Arches are the least common fingerprint pattern. They are found in approximately 5% of
fingerprints in the general population. A fingerprint arch is similar to an architectural arch, or
a wave. The friction ridges enter one side of the fingerprint, make a rise in the center, and exit
out the other side of the print. There are no deltas in an arch pattern. The core is indistinct in
most arches.

The arches are of two subtypes-


1. Plain Arch
2. Tented Arch

Plain Arch
The Plain Arch is the simplest of all fingerprint patterns and is formed by ridges entering from
one side of the print and exiting on the opposite side. These ridges tend to rise in the center of
the pattern, forming a wave-like pattern.

Fig: Plain Arch (A)

Tented Arch
The Tented Arch is similar to the Plain Arch except that instead of rising smoothly at the center,
there is a sharp upthrust or spike, or the ridges meet at an angle of less than 90 degrees.
Fig: Tented Arch (T)

LOOPS:
The loop pattern is the most common fingerprint pattern found in the population.
Approximately 60%–70% of fingerprints in the population are loops. Loops are patterns in
which the ridges enter on one side of the finger, make a U-turn around a core, and exit out the
same side of the finger. The figures below illustrate fingerprints with loop patterns. If you think of
the loop as a physical structure, you can imagine that water poured into the core will flow out
only at one side of the print. A loop must also have at least one intervening, looping ridge
between the delta and the core. This looping ridge is known as a recurve, which is another word
to describe a ridge that makes a U-turn.

The loops are of two subtypes-


1. Radial Loop
2. Ulnar Loop

Radial Loop
Radial loops are loops that are slanted toward the radius, the inner bone of the forearm. Ridges
flow in the direction of the thumb.
Ulnar Loop
Ulnar loops are loops that are slanted toward the ulna, the outer bone of the forearm. The ulna
is the bone associated with the elbow. These fingerprints flow toward the little finger of the
hand.

Fig: Two types of loops

The three requirements for a loop pattern are:


1. A sufficient recurve
2. One delta
3. A ridge count across a looping ridge

Fig: Basic characteristics of a loop


Whorls
The second most common fingerprint pattern type is the whorl, which is found in
approximately 30%–35% of fingerprints in the population. A whorl is a circular pattern. Most
friction ridges in these patterns make complete circuits around a central core. A Plain Whorl
pattern must have type lines, at least two deltas, and a sufficient recurve in
front of each delta. A Plain Whorl has at least one ridge that makes a complete circuit. This
ridge may be in the form of a spiral, oval, circle or variant of a circle. Plain whorls are the most
common type of whorl. Some plain whorls resemble targets.
The core is well defined at the “bull’s-eye” of the target. Some resemble elongated, concentric
ellipses. A requirement for a plain whorl is that an imaginary line drawn from delta to delta
must cut through at least one recurving ridge.

Fig: A plain whorl. A line from delta-to-delta cuts through several recurving ridges

Composite
Composite Pattern are subdivided into 4 distinct groups.
1. Central Pocket loop
2. Lateral Pocket Loop
3. Twinned Loop
4. Accidental.

1. Central Pocket Loop


A Central Pocket Loop pattern must have type lines, a minimum of
two deltas, and at least one ridge that makes, or tends to make, a
complete circuit. An imaginary line drawn between the two deltas
does not cross or touch any recurving ridge within the pattern area.
One delta appears to be substantially closer to the center of the
pattern than the other delta.

Fig: Central Pocket Loop


2. Lateral Pocket Loop
One loop serves as a side pocket to the other loop. This
pocket is formed by the downward bending, on one side, of
the ridges of the other loop before they recurve. The ridges
about the centre (the ones containing the points of core of
the loops) have their exits on the same side of the delta.

Fig: Lateral Pocket Loop

3. Twinned Loop
There are two distinct loops, one resting upon or encircling
the other and the ridges, containing the point of core have
their exit towards different deltas.

Fig: Twinned Loop

4. Accidental
Accidentals are very rare and occur with a frequency of only about one to three percent. A
fingerprint is classified as an Accidental when it does not conform to any other definition,
pattern or category type.

There are three types of prints found by investigators at a crime scene.


Patent fingerprints (visible) are left on a smooth surface when blood, ink, or some other
liquid comes in contact with the hands and is then transferred to that surface.
Latent fingerprints (invisible), or hidden prints, are caused by the transfer of oils and other
body secretions onto a surface. This type of fingerprint is invisible to the naked eye and requires
additional processing in order to be seen. Latent prints can be made visible by dusting with
powders or by using a chemical reaction. By contrast, the known fingerprints of suspects are
taken by rolling each of the 10 fingers in ink and then rolling them onto a tenprint card that
presents the 10 fingerprints in a standard format.
Plastic fingerprints are actual indentations left in some soft material. Plastic fingerprints are
three-dimensional impressions and can be made by pressing your fingers in fresh paint, wax,
soap, or tar. Just like patent fingerprints, plastic fingerprints are easily seen by the human eye
and do not require additional processing for visibility purposes.

2.3 Classification Systems of fingerprint pattern


Classification is the process of coding and organizing large amounts of information or items
into manageable, related subcategories or classes. A library classifies books both by organizing
them by genre and author and by assigning alphanumerical designations according to the
Dewey Decimal System. The number and letter combination on the spine of the book allows
us to quickly and easily find one book in a library of thousands.

Fingerprint classification is the process of organizing large volumes of fingerprint cards into
smaller groups based on fingerprint patterns, ridge counts, and whorl tracings. When an
individual is arrested, it is important to search the files for a duplicate of that fingerprint record
to verify a recidivist’s (repeat offender’s) identity. In the mid-1800s, prior to the advent of
fingerprint records, individuals were photographed for rogues’ galleries, which were
collections of mug shots. Bertillonage was another classification system based on
anthropometric measurements, but it fell out of favour by the turn of the twentieth century.
Many individuals give false names when arrested, or change their appearance, so the fingerprint
record becomes the only reliable verification of their identity.

Fingerprints were historically stored in filing cabinets according to their alphanumeric
classification designations. When an individual was arrested and fingerprinted, the fingerprint
card was classified and the filing cabinets searched according to that classification label. In
some agencies, they are still entered and searched by hand. The advent of the Automated
Fingerprint Identification System (AFIS), commonly known as the fingerprint computer, has
mostly negated the need for manual classification and filing of hard copies of fingerprint
records. Fingerprints are now recorded on a scanner (a “livescan” device) attached to a
computer. They are stored digitally, just like digital photographs. However, similar to learning
about the history of fingerprints, knowledge of historical classification systems gives us a better
understanding of pattern types and the analysis of friction ridge impressions. Many employers,
as well as fingerprint certification tests, require a working knowledge of the basic classification
schemes addressed in the following.

Henry Classification
Sir Edward Henry, Azizul Haque, and Chandra Bose developed the Henry Classification
System in 1897. The Henry system became the most widely used classification system in
English-speaking countries. Juan Vucetich also developed a classification system used in
Spanish-speaking countries. Prior to the advent of both the Vucetich and Henry systems,
Bertillon, Purkinje, Galton, and Faulds also worked on fingerprint classifications systems.
Classification systems have been modified and applied in countries and jurisdictions such as Hungary, Portugal,
Prague, Germany, Japan, Spain, Holland, Italy, Russia, Mexico, Egypt, Norway, Cuba, Chile,
and France. Most of these systems involve analyzing the pattern types of the fingers and
assigning alphanumeric designations to each finger. In both the Henry and Vucetich systems,
the resulting classification resembles a fraction, with a numerator above a classification line
and a denominator below the classification line. There may be several sets of letters (both upper
and lower case) and numbers both above and below the classification line. There are six
components, or parts, to the Henry Classification System: the primary, secondary, sub-secondary,
major, final, and key. This text will focus on examples of primary classification.
Primary classification assigns numerical value to only the whorl patterns present in the
fingerprint record. It is written as a fraction, but unlike a fraction, it is never reduced. One
number will appear in the numerator, and one number will appear in the denominator. The
fraction line is known as the classification line. Each finger is numbered from 1 to 10, starting
with the right thumb as finger number one, proceeding through the right index, right middle,
right ring, and right little fingers. The left thumb is finger number six, followed by the left
index, left middle, left ring, and left little fingers. The fingerprint card, also known as a tenprint
card, is numbered 1–10. The fingers are each assigned a point value if a whorl is found on that
finger. The point values halve with each successive pair of fingers, from 16 for fingers 1 and 2 down to 1 for fingers 9 and 10.
For example, if there is a whorl located on the number one finger (the right thumb), it is
assigned a value of 16. If there is a whorl located on the number eight finger (the left middle
finger), it is assigned a value of two. The numerator is the sum of the point values for the even
numbered fingers plus one. The denominator is the sum of the point values for the odd
numbered fingers plus one.
Table: Finger Numbers and Point Values of Whorls in the Henry Classification System

Finger          Finger Number    Point Value of a Whorl
Right Thumb     1                16
Right Index     2                16
Right Middle    3                8
Right Ring      4                8
Right Little    5                4
Left Thumb      6                4
Left Index      7                2
Left Middle     8                2
Left Ring       9                1
Left Little     10               1

Fig: The point values assigned to each finger using the Henry Classification System

The number one is added to both the top and bottom values in order to avoid a fraction that
reads 0/0. Therefore, if there are no whorls present in any of the 10 fingers, the primary
classification is 1/1 rather than 0/0. If every finger on the tenprint card is a whorl pattern, the
primary classification is 32/32. There are, in fact, 1024 possible variants of the primary
classification component of the formula.

NCIC Classification
The National Crime Information Center (NCIC), a division of the FBI’s Criminal Justice
Information Services (CJIS), is a national repository of computerized criminal justice
information. It was created in 1965 during the J. Edgar Hoover era and has since been expanded
and upgraded to its current incarnation, NCIC 2000, which launched in 1999. NCIC is
accessible by criminal justice agencies in all 50 states, Washington, DC, Puerto Rico, Guam,
the US Virgin Islands, and Canada. It includes 21 databases that include such information as
criminal records, missing persons, the sex offender registry, stolen property records, fugitive
records, orders of protection, and suspected terrorist activity. As of 2011, the FBI reported 11.7
million active records. The goal of NCIC is to provide an investigative tool not only to identify
property but also to protect law enforcement personnel and the public from individuals who
may be dangerous. NCIC also includes fingerprint classification information. It is important to
understand NCIC classification in order to decipher these codes if you work in any capacity
within the criminal justice system. The NCIC fingerprint classification system applies a 2-letter
code to each pattern type. The 2-letter codes for each of the 10 fingers are combined to form a
20-character classification. Each fingerprint’s code is listed in sequence, from the number 1
finger (right thumb) to the number 10 finger (left little finger). Ridge counts of loops and whorl
tracings are also included in the coding system to further classify pattern types. If an individual
has plain whorls with a meet tracing on fingers 2–10 and a double loop whorl with an outer
tracing on finger number one, the NCIC classification would read as follows:
dOPMPMPMPM
PMPMPMPMPM

Table: NCIC Classification Codes

Pattern Type                                 NCIC Code
Ulnar loop                                   ridge count (01–49)
Radial loop                                  ridge count + 50 (51–99)
Plain arch                                   AA
Tented arch                                  TT
Plain whorl, inner tracing                   PI
Plain whorl, outer tracing                   PO
Plain whorl, meet tracing                    PM
Double-loop whorl, inner tracing             dI
Double-loop whorl, outer tracing             dO
Double-loop whorl, meet tracing              dM
Central pocket loop whorl, inner tracing     CI
Central pocket loop whorl, outer tracing     CO
Central pocket loop whorl, meet tracing      CM
Accidental, inner tracing                    XI
Accidental, outer tracing                    XO
Accidental, meet tracing                     XM
Mutilated or extensively scarred             SR
Amputation                                   XX

The NCIC classification of an individual with plain whorls (outer tracing) on his or her thumbs
and plain arches on his or her remaining fingers would be
POAAAAAAAA
POAAAAAAAA

NCIC is unique in that it does not include the actual fingerprint images. It only includes the
classification described previously. A working knowledge of NCIC fingerprint classification
codes can give you a wealth of information regarding the fingerprint patterns of the individual
queried in an NCIC search, including a tentative identification of a person of interest, such as
a suspect or a missing person.
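As an illustration of how the 20-character string is assembled, the following Python sketch maps per-finger pattern descriptions to the two-character codes in the table above. The helper names and the pattern labels passed to them are our own conventions chosen for this example, not part of any official NCIC software.

# Illustrative sketch of building the 20-character NCIC fingerprint
# classification from per-finger pattern descriptions. Loop codes are
# two-digit ridge counts (radial loops add 50), per the table above.

def ncic_finger_code(pattern, ridge_count=None, tracing=None):
    """Return the 2-character NCIC code for a single finger."""
    if pattern == "ulnar_loop":
        return f"{ridge_count:02d}"            # 01-49
    if pattern == "radial_loop":
        return f"{ridge_count + 50:02d}"       # 51-99
    simple = {"plain_arch": "AA", "tented_arch": "TT",
              "scarred": "SR", "amputation": "XX"}
    if pattern in simple:
        return simple[pattern]
    whorl_letter = {"plain_whorl": "P", "double_loop_whorl": "d",
                    "central_pocket_whorl": "C", "accidental": "X"}[pattern]
    return whorl_letter + {"inner": "I", "outer": "O", "meet": "M"}[tracing]

def ncic_classification(fingers):
    """fingers: list of 10 (pattern, ridge_count, tracing) tuples, finger 1 to 10."""
    return "".join(ncic_finger_code(*f) for f in fingers)

# Example from the text: plain whorls (outer tracing) on both thumbs,
# plain arches on all other fingers.
fingers = [("plain_whorl", None, "outer")] + [("plain_arch", None, None)] * 4
fingers += [("plain_whorl", None, "outer")] + [("plain_arch", None, None)] * 4
code = ncic_classification(fingers)
print(code[:10])   # POAAAAAAAA  (fingers 1-5)
print(code[10:])   # POAAAAAAAA  (fingers 6-10)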

2.4 Fingerprint recognition process


The technique of fingerprint identification, in both analogue and digital forms, is based on
differences within the standard patterns of the ridges. These can be classified into a series of
arches, loops and whorls. The centre of a pattern is referred to as the core, and a point of
deviation, where ridge paths flowing in three directions meet, is referred to as a delta. The
points of discontinuity in a fingerprint, where a ridge
branches or ends, are known as minutiae. Approximately 30 minutiae are used in the
fingerprinting technique. Fingerprinting has advanced significantly with digitalisation in the
twenty-first century. Optical scanners and algorithms are now used to record, digitally retrieve
and match fingerprint data; in contrast with the initial manual, card-based system. Automated
fingerprint databases of hundreds of millions of people have now been established. These are
fully automated, or only require human input at the final stage to distinguish between highly
similar fingerprints as part of a list of close matches to an unknown suspect in a law
enforcement investigation.
Since the mid-2000s, fingerprint identification has been widely used outside law enforcement,
with the first major development being the integration of biometric fingerprint identification
(along with facial recognition) into passports and border control systems. This was made a
requirement for foreign nationals and visa applicants in many countries, including the United
States in 2004, Japan and the United Kingdom in 2008, the European Union in 2011, and
Canada in 2013. It is also widely used across Africa, the Middle East and Asia. Non-
government organisations, such as the Office of the United Nations High
Commissioner for Refugees (UNHCR), also use fingerprint identification to identify refugees
in aid programs, using portable, battery powered devices in remote settings. Perhaps the largest
fingerprint identification database is the government administered Aadhaar database in India,
which includes more than 1.2 billion people for public administration purposes.
Over the past decade, fingerprint identification has been widely used outside law enforcement
and government. This includes for employee attendance and building access control; and in
personal devices such as smartphones and laptops. The introduction of fingerprint scanning
capabilities into smartphones has provided an opportunity to apply fingerprint identification
into a broader range of commercial applications – it is now common for personal banking to
be undertaken online with biometric fingerprint identification. Other developing applications
of fingerprint identification include within the handpiece of a firearm to ensure that it can only
be used by the registered owner.

2.5 Automated Fingerprint Identification Systems (AFIS)


Biometric fingerprint databases, known as Automated Fingerprint Identification Systems
(AFIS), were widely established by the late 1990s, and these continue to be a primary method of
establishing identity in law enforcement and border protection contexts. A range of biometric
fingerprint databases have been established around the world. The United States introduced the
Integrated Automated Fingerprint Identification System (IAFIS) in 1999, transitioning to the
multimodal Next Generation Identification (NGI) system in 2011, which also includes
photographs, facial templates and criminal history and intelligence data. The NGI is operated
by the Federal Bureau of Investigation (FBI) and provides services to federal, state and local
law enforcement and national security agencies throughout the United States. The national
fingerprint database in the United Kingdom is known as IDENT1. A key difference in this
jurisdiction is that the database was developed as a joint venture between the Home Office and
the defence technology company Northrop Grumman in 2004. It provides a link between law
enforcement agencies across England, Wales and Scotland, as well as records in the Police
National Computer. In Australia, the national biometric fingerprint
database has operated since 2001. The National Automated Fingerprint Identification System
(NAFIS) provides Australian law enforcement, security and border agencies, with a centralised
national database for finger and palm print images. Data sharing arrangements have been
established between these countries, as well as Canada and New Zealand.
The comparison of fingerprints involves the identification of numerous minutiae within the
print. The more points that are compared, and the greater the degree of similarity, the more
persuasive the inference that can be drawn regarding identity.
2.6 Fingerprint Sensing
Historically, in law enforcement applications, the acquisition of fingerprint images was
performed by using the so-called “ink-technique”: the subject's finger was spread with black
ink and pressed against a paper card; the card was then scanned by using a common paper-
scanner, producing the final digital image. This kind of process is referred to as off-line
fingerprint acquisition or off-line sensing (see the figure below). A particular case of off-line sensing
is the acquisition of a latent fingerprint from a crime scene.

Fig: Fingerprint images acquired off-line with the ink technique


Nowadays, most civil and criminal AFISs accept live-scan digital images acquired by directly
sensing the finger surface with an electronic fingerprint scanner. No ink is required in this
method, and all that a subject has to do is to press his/her finger against the flat surface of a
live-scan scanner.

Fig: The three fingerprint scanners used in FVC2006 and an image collected through
each of them

The most important part of a fingerprint scanner is the sensor (or sensing
element), which is the component where the fingerprint image is formed. Almost all the
existing sensors belong to one of the three families: optical, solid-state, and ultrasound.

Optical sensors: Frustrated Total Internal Reflection (FTIR) is the oldest and most used live
scan acquisition technique. The finger touches the top side of a glass prism, but while the ridges
enter in contact with the prism surface, the valleys remain at a certain distance; the left side of
the prism is illuminated through a diffused light. The light entering the prism is reflected at the
valleys, and absorbed at the ridges. The lack of reflection allows the ridges to be discriminated
from the valleys. The light rays exit from the right side of the prism and are focused through a
lens onto a CCD or CMOS image sensor.
Solid-state sensors: Solid-state sensors (also known as silicon sensors) became commercially
available in the middle 1990s. All silicon-based sensors consist of an array of pixels, each pixel
being a tiny sensor itself. The user directly touches the surface of the silicon: neither optical
components nor external CCD/CMOS image sensors are needed. Four main effects have been
proposed to convert the physical information into electrical signals: capacitive, thermal, electric
field, and piezoelectric.

Ultrasound sensors: Ultrasound sensing may be viewed as a kind of echography. A


characteristic of sound waves is the ability to penetrate materials, giving a partial echo at each
impedance change. This technology is not yet mature enough for large-scale production.

New sensing techniques such as multispectral imaging and 3D touch-less acquisition are being
developed to overcome some of the drawbacks of the current fingerprint scanners including: i)
the difficulty in working with wet or dry fingers, ii) the skin distortion caused by the pressure
of the finger against the scanner surface, and iii) the inability to detect fake fingers.
The quality of a fingerprint scanner, the size of its sensing area and the resolution can heavily
influence the performance of a fingerprint recognition algorithm. To maximize compatibility
between digital fingerprint images and ensure good quality of the acquired fingerprint
impressions, the FBI's Criminal Justice Information Services (CJIS) division released a set of specifications that
regulate the quality and format of both fingerprint images and FBI compliant off-line/live-scan
scanners. Unfortunately, the above specifications are targeted to the forensic applications
(AFIS sector) and as of today no definitive specifications exist for the evaluation/certification
of commercial fingerprint scanners.

2.7 Feature extraction


In a fingerprint image, ridges (also called ridge lines) are dark whereas valleys are bright.
Ridges and valleys often run in parallel; sometimes they bifurcate and sometimes they
terminate. When analyzed at the global level, the fingerprint pattern exhibits one or more
regions where the ridge lines assume distinctive shapes. These regions (called singularities or
singular regions) may be classified into three typologies: loop, delta, and whorl. Singular
regions belonging to loop, delta, and whorl types are typically characterized by ∩, Δ, and O
shapes, respectively. The core point (used by some algorithms to pre-align fingerprints)
corresponds to the center of the north most (uppermost) loop type singularity.
Fig: a) Ridges and valleys in a fingerprint image; b) singular regions (white
boxes) and core points (circles) in fingerprint images.

At the local level, other important features, called minutiae can be found in the fingerprint
patterns. Minutia refers to the various ways in which the ridges can be discontinuous. For
example, a ridge can abruptly come to an end (termination), or can divide into two ridges
(bifurcation). Although several types of minutiae can be considered, usually only a coarse
classification (into these two types) is adopted to deal with the practical difficulty in
automatically discerning the different types with high accuracy.

Fig: Termination (white) and bifurcation (gray) minutiae in a sample fingerprint


Fig: Graphical representation of fingerprint feature extraction steps and their
interrelations

2.7.1 Local ridge orientation and frequency


The local ridge orientation at point (x, y) is the angle θxy that the fingerprint ridges, crossing
through an arbitrarily small neighbourhood centered at (x, y), form with the horizontal axis.
Robust computation methods, based on local averaging of gradient estimates, have been
proposed by Donahue and Rokhlin, Ratha, Chen and Jain, and Bazen and Gerez. The local
ridge frequency (or density) fxy at point (x, y) is the number of ridges per unit length along
a hypothetical segment centered at (x, y) and orthogonal to the local ridge orientation θxy.
Hong, Wan, and Jain estimate local ridge frequency by counting the average number of pixels
between two consecutive peaks of Gray-levels along the direction normal to the local ridge
orientation. In the method proposed by Maio and Maltoni, the ridge pattern is locally modelled
as a sinusoidal-shaped surface, and the variation theorem is exploited to estimate the unknown
frequency.
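A minimal sketch of the gradient-averaging idea is given below, assuming the fingerprint is available as a grayscale NumPy array. It averages doubled gradient angles over each block, which is the core of the methods cited above; production systems add smoothing, reliability weighting and frequency estimation, and the exact angle convention differs between papers.

# A minimal sketch (assuming a grayscale image as a 2-D NumPy array) of the
# gradient-averaging idea behind local ridge orientation estimation.
import numpy as np

def local_orientation(img, block=16):
    """Estimate one ridge orientation (radians, 0..pi) per block x block region."""
    gy, gx = np.gradient(img.astype(float))        # pixel-wise gradients
    h, w = img.shape
    rows, cols = h // block, w // block
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            bx, by = gx[sl], gy[sl]
            # Average the doubled gradient angle so that opposite gradient
            # directions (both perpendicular to the same ridge) reinforce.
            vx = np.sum(2.0 * bx * by)
            vy = np.sum(bx ** 2 - by ** 2)
            # The gradient direction is perpendicular to the ridge: rotate by 90 degrees.
            theta[i, j] = (0.5 * np.arctan2(vx, vy) + np.pi / 2) % np.pi
    return theta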

2.7.2 Segmentation
The segmentation task consists in separating the fingerprint area from the background. Because
fingerprint images are striated patterns, using a global or local thresholding technique does not
allow the fingerprint area to be effectively isolated. Robust segmentation techniques are
discussed in the literature.

2.7.3 Singularity detection


Most of the approaches proposed in the literature for singularity detection operate on the
fingerprint orientation image. The best-known method is based on the Poincaré index. A number
of alternative approaches have been proposed for singularity detection; they can be coarsely
classified in: (1) methods based on local characteristics of the orientation image, (2)
partitioning-based methods, (3) core detection and fingerprint registration approaches.
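The Poincaré index can be sketched directly on a block orientation image such as the one produced by the previous example. Summing the wrapped orientation differences around a small closed path gives approximately +1/2 at a loop, -1/2 at a delta and +1 at a whorl core; the 8-neighbour path and the tolerance value below are illustrative choices, not a standard implementation.

# Illustrative sketch of the Poincare index computed on a block orientation image.
import numpy as np

def poincare_index(theta, i, j):
    """Poincare index (in turns) around the 8 neighbours of cell (i, j)."""
    ring = [theta[i - 1, j - 1], theta[i - 1, j], theta[i - 1, j + 1],
            theta[i, j + 1], theta[i + 1, j + 1], theta[i + 1, j],
            theta[i + 1, j - 1], theta[i, j - 1]]
    total = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        d = b - a
        # Orientations are only defined modulo pi; wrap differences to (-pi/2, pi/2].
        while d > np.pi / 2:
            d -= np.pi
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)

def find_singularities(theta, tol=0.1):
    """Return lists of (row, col) cells classified as loops, deltas and whorls."""
    loops, deltas, whorls = [], [], []
    for i in range(1, theta.shape[0] - 1):
        for j in range(1, theta.shape[1] - 1):
            p = poincare_index(theta, i, j)
            if abs(p - 0.5) < tol:
                loops.append((i, j))
            elif abs(p + 0.5) < tol:
                deltas.append((i, j))
            elif abs(p - 1.0) < tol:
                whorls.append((i, j))
    return loops, deltas, whorls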

2.7.4 Enhancement and binarization


The performance of minutiae extraction algorithms and fingerprint recognition techniques
relies heavily on the quality of the input fingerprint images. In practice, due to skin conditions
(e.g., wet or dry, cuts, and bruises), sensor noise, incorrect finger pressure, and inherently low-
quality fingers (e.g., elderly people, manual workers), a significant percentage of fingerprint
images (approximately 10%) is of poor quality. The goal of a fingerprint enhancement
algorithm is to improve the clarity of the ridge structures in the recoverable regions and mark
the unrecoverable regions as too noisy for further processing. The most widely used technique
for fingerprint image enhancement is based on contextual filters. In contextual filtering, the
filter characteristics change according to the local context that is defined by the local ridge
orientation and local ridge frequency. An appropriate filter that is tuned to the local ridge
frequency and orientation can efficiently remove the undesired noise and preserve the true ridge
and valley structure.
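The following sketch illustrates contextual filtering with an even-symmetric Gabor kernel tuned to a block's estimated ridge orientation and frequency. The parameter values (kernel size, sigma) and the orientation convention are assumptions chosen for illustration; a complete enhancement algorithm, such as that of Hong, Wan, and Jain, also normalizes the image and masks unrecoverable regions.

# A minimal sketch of contextual (Gabor) filtering: each block is convolved
# with an even-symmetric Gabor kernel tuned to that block's ridge orientation
# and frequency, as estimated in the previous sections.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ridge_theta, freq, sigma=4.0, size=17):
    """Even-symmetric Gabor kernel tuned to a ridge orientation and frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # The cosine wave must oscillate across the ridges, i.e. along the
    # direction perpendicular to the local ridge orientation.
    wave_dir = ridge_theta + np.pi / 2
    u = x * np.cos(wave_dir) + y * np.sin(wave_dir)   # across the ridges
    v = -x * np.sin(wave_dir) + y * np.cos(wave_dir)  # along the ridges
    envelope = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * u)

def enhance_block(block, ridge_theta, freq):
    """Filter one image block with the kernel tuned to its local context."""
    return convolve2d(block, gabor_kernel(ridge_theta, freq),
                      mode="same", boundary="symm")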
2.7.5 Minutiae extraction
Most of the proposed methods require the fingerprint gray-scale image to be converted into a
binary image. The binary images obtained by the binarization process are submitted to a
thinning stage which allows for the ridge line thickness to be reduced to one pixel. Finally, a
simple image scan allows the detection of pixels that correspond to minutiae through the pixel-
wise computation of the crossing number.

Fig: a) A fingerprint Gray-scale image; b) the image obtained after enhancement and
binarization; c) the image obtained after thinning; d) termination and bifurcation
minutiae detected through the pixel-wise computation of the crossing number
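The crossing number rule described above can be expressed in a few lines, assuming a thinned binary image in which ridge pixels are 1: a crossing number of 1 marks a ridge ending and 3 marks a bifurcation. The sketch below omits the minutiae filtering stage discussed next and is illustrative only.

# A small sketch of minutiae detection via the pixel-wise crossing number,
# assuming a thinned binary image where ridge pixels are 1.
import numpy as np

def crossing_number(skel, r, c):
    """Crossing number of ridge pixel (r, c) in a thinned binary image."""
    # 8 neighbours visited in a closed clockwise order, first one repeated.
    n = [skel[r - 1, c], skel[r - 1, c + 1], skel[r, c + 1], skel[r + 1, c + 1],
         skel[r + 1, c], skel[r + 1, c - 1], skel[r, c - 1], skel[r - 1, c - 1]]
    n.append(n[0])
    return sum(abs(int(a) - int(b)) for a, b in zip(n, n[1:])) // 2

def extract_minutiae(skel):
    """Return (terminations, bifurcations) as lists of (row, col) coordinates."""
    terminations, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c]:                    # only ridge pixels can be minutiae
                cn = crossing_number(skel, r, c)
                if cn == 1:
                    terminations.append((r, c))
                elif cn == 3:
                    bifurcations.append((r, c))
    return terminations, bifurcations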

Some authors have proposed minutiae extraction approaches that work directly on the
grayscale images without binarization and thinning. This choice is motivated by the following
considerations: i) a significant amount of information may be lost during the binarization
process; ii) thinning may introduce a large number of spurious minutiae; iii) most of the
binarization techniques do not provide satisfactory results when applied to low-quality images.
Maio and Maltoni proposed a direct Gray-scale minutiae extraction technique, whose basic
idea is to track the ridge lines in the Gray-scale image, by "sailing" according to the local
orientation of the ridge pattern. A post-processing stage (called minutiae filtering) is often
useful in removing the spurious minutiae detected in highly corrupted regions or introduced by
previous processing steps (e.g., thinning) [16].
2.8 Matching
Matching high quality fingerprints with small intra-subject variations is not difficult and every
reasonable algorithm can do it with high accuracy. The real challenge is matching samples of
poor quality affected by: i) large displacement and/or rotation; ii) non-linear distortion; iii)
different pressure and skin condition; iv) feature extraction errors. The two pairs of images in
the figure below illustrate this. It is also evident that fingerprint images from different fingers
may sometimes appear quite similar (small inter-subject variations).

Fig: a) Each row shows a pair of impressions of the same finger, taken from the
FVC2002 DB1, which were falsely non-matched by most of the algorithms submitted to
FVC2002 [32]; b) each row shows a pair of impressions of different fingers, taken from
the FVC2002 databases which were falsely matched by some of the algorithms
submitted to FVC2002

The large number of existing approaches to fingerprint matching can be coarsely classified into
three families: i) correlation-based matching, ii) minutiae-based matching, and iii) ridge
feature-based matching. In the rest of this section, the representation of the fingerprint acquired
during enrolment is denoted as the template (T) and the representation of the fingerprint to be
matched is denoted as the input (I). In case no feature extraction is performed, the fingerprint
representation coincides with the grayscale fingerprint image itself.
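To give a feel for minutiae-based matching, the deliberately simplified sketch below pairs template and input minutiae that agree within a distance and angle tolerance, assuming the two sets are already aligned. Real matchers must also recover the alignment and cope with distortion and spurious or missing minutiae; the tolerance values and function names here are illustrative assumptions.

# An intentionally simplified minutiae-based matching sketch: it counts
# template/query minutiae pairs that agree within a distance and angle
# tolerance and returns a normalised similarity score.
import math

def match_score(template, query, dist_tol=12.0, angle_tol=math.radians(20)):
    """template, query: lists of (x, y, direction) minutiae; returns a score in [0, 1]."""
    if not template or not query:
        return 0.0
    used = set()
    matched = 0
    for (xt, yt, at) in template:
        for idx, (xq, yq, aq) in enumerate(query):
            if idx in used:
                continue
            d = math.hypot(xt - xq, yt - yq)
            da = abs(at - aq) % (2 * math.pi)
            da = min(da, 2 * math.pi - da)        # smallest angular difference
            if d <= dist_tol and da <= angle_tol:
                used.add(idx)
                matched += 1
                break
    return matched / max(len(template), len(query))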

2.9 Livescan
In order for the AFIS to perform its duties, the fingerprints must first be captured and converted
to a language the computer can read. If an inked tenprint card is available, that card can be
scanned into the AFIS computer on a flatbed scanner. A livescan device can also be used to
capture the necessary fingerprint images digitally. The livescan is similar to a document
scanner in that it creates a digitized replica of the fingerprint. The fingerprint will appear on
the screen in black and white and will appear similar to a tenprint card, with black ridges on a
white background. The fingerprints are taken using the same process for inked tenprint cards.
Each finger is rolled, starting with the number one finger, from nail to nail. Then the flats and
thumbs are recorded.

Fig: A fingerprint scanned at a livescan terminal appears on the screen of the computer
monitor

Many law enforcement agencies have livescan terminals. These may simply consist of a
scanner for recording fingerprint images, a computer monitor to view the images, and a
keyboard for recording demographic and criminal information. There are many vendors who
build and maintain different types of terminals. While they look different, most of these
terminals function similarly.

2.10 History of AFIS


The current, efficient use of fingerprints as a biometric is only possible due to the advent of
the silicon microchip, and consequently the computer, in the early 1970s.
device that can be programmed to carry out complex arithmetic equations or logical sequences
much faster than a human can and with greater accuracy. It can also perform multiple functions
at the same time. The computer therefore showed great promise for solving the problem of the
overwhelming volume of tenprint cards that required humans to classify, file, and search by
hand. Law enforcement agencies’ identification bureaus employed fingerprint examiners who
were responsible for classifying, filing, and searching for tenprint cards by hand. The FBI’s
identification bureau, known as the Identification Division, was established in 1924 “to provide
a central repository of criminal identification data for law enforcement agencies.
The original collection of fingerprint records contained 810,188 records.” The technicians
employed in the Identification Division were responsible for classifying, filing, and searching the
fingerprint cards that arrived from around the country every day. The manpower necessary to
sustain this ever-expanding repository would
soon reach critical mass. The AFIS computer was developed to address this problem. A
computer can store an abundance of information in a small space. It can perform simultaneous
functions, unlike a human who has to perform one task at a time. It can work quickly and
efficiently 24 h per day, 7 days per week, 365 days per year. As of 2012, the FBI’s criminal
fingerprint database known as Next Generation Identification (NGI), formerly known as
Integrated Automated Fingerprint Identification System (IAFIS), contained more than 70
million criminal files: over 700 million individual fingerprints. And it gets bigger every day.
Identification bureaus would have had to employ tens of thousands of individuals to keep up
with the volume of work. The United States, France, Canada, the United Kingdom, and Japan
all developed computer systems to address the needs of overwhelmed and unsustainable
identification bureaus. The Royal Canadian Mounted Police put a system into place in 1977.
The city of San Francisco was the first jurisdiction to use the AFIS on a routine basis in the
United States.

When the citizens of San Francisco were asked whether they wanted a fingerprint computer,
they approved the ballot proposition with an 80% majority. An AFIS system was installed in
1983 and a new unit, called Crime Scene Investigations, was formed. Crime scene investigators
were trained in the use of the system and searched their own cases when they recovered
fingerprints from crime scenes. Over the next 2 years, the “San Francisco experiment” resulted
in a dramatic increase in fingerprint identifications. Ten times as many fingerprints found at
crime scenes were identified to suspects over 2 years. The burglary rate decreased by 26% over
the next 4 years. The experiment was a great success that resulted in the proliferation of AFIS
systems in law enforcement agencies nationwide and around the world. In 1992, IAFIS
imported 32 million fingerprint cards into its database. In 1997, the United Kingdom installed
its own national AFIS database. Thirty years after the advent of AFIS, most law enforcement
agencies use the system routinely, and few fingerprint examiners are trained to classify prints
using the Henry system.
2.11 Tenprint Searches
One of the main functions of AFIS is to store, classify, and search tenprint records from arrests.
When an individual is arrested, his fingerprints are taken either in ink on a tenprint card or by
a livescan. The suspect’s palm prints and mug shot may also be taken at this time. Demographic
and other identifying information are recorded. Descriptions of other individualizing marks
such as scars, marks, and tattoos may also be recorded and/or photographed. The fingerprints
will then be classified, or coded, by the computer and searched in the database. If the computer
locates a candidate file with matching fingerprints, a tenprint examiner will confirm the match.
A tenprint examiner is an individual whose job is to confirm tenprint matches by the AFIS
computer and manage the database.
One of the most important aspects of this function is the coding of the fingerprint by the
computer. The fingerprint is coded when the computer picks out the minutiae in the fingerprint.
This is known as feature extraction. The AFIS computer is programmed with an algorithm that
recognizes minutiae. It can also recognize the direction of ridge flow, the distance between
ridges, and how many ridges are between minutiae. The computer searches through the
database and returns candidates that have similar features in similar positions. There can be
100 or more minutiae in a rolled fingerprint. In a latent print, however, there are far fewer
minutiae since it is deposited unintentionally on a variety of substrates. A latent fingerprint
may be a fingertip, delta, palm print fragment, or an entire hand.

2.12 Latent Print Searches


The second major function of AFIS is to enter, code, and search latent fingerprints against the
tenprint database (as well as the unsolved latent [UL] database). A latent print, unlike a rolled
fingerprint, is deposited unintentionally. The fingerprint may be of poor quality, smudged, or
distorted. It may have been left on a curved or rough surface. It may be the side of a finger or
a fingertip. If a latent print is recovered from a crime scene, but there is no suspect to compare
it to, it can be searched in the AFIS database. It is up to the latent print examiner to determine
whether a latent fingerprint will be searched in AFIS. In order to search a latent print against
the AFIS tenprint database, the database of all known fingerprint records in a given jurisdiction,
the latent print must be first entered into the computer. The fingerprint can be either scanned
on a document scanner or photographed.
The latent print will appear on the computer screen. The latent print examiner will enter the
case number, agency, and any other data pertaining to the case at hand. The latent print
examiner can also instruct the computer which finger or hand to search, if known, and can also
tell the computer what fingerprint pattern type to search for. The latent print examiner rotates
the fingerprint so it is in the proper position with the top of the finger at the top of the screen.
The latent print is centered. In some systems, the core and delta(s) are selected. Next, the
computer codes the fingerprint minutiae. The latent print examiner double checks the minutiae
picked out by the computer. The examiner may choose to add minutiae the computer missed,
or to deselect minutiae the computer picked that may be questionable or inaccurate.
When the latent print examiner is satisfied with the latent print, the search is launched.

Fig: This latent lift was scanned into AFIS and identified to a suspect. The identification
is shown with (bottom) and without (top) minutiae selected

The computer will return a list of candidates. A candidate is an individual whose exemplar
fingerprint is recognized by the computer as having a similar arrangement of minutiae as the
latent fingerprint. The latent print examiner then compares the latent print to the known prints
of each candidate. If a match is found, it is recorded, and another latent print examiner verifies
the match. It is important to know that the latent print examiner, not the computer, decides
whether an AFIS record matches an unknown fingerprint from a crime scene.
If the latent fingerprint does not match any records in the AFIS database, the latent print
examiner can then choose to search it against the UL database. This database stores all
unidentified latent fingerprints entered into the database. While this may not result in
identification to a suspect, it may lead to information linking several crime scenes. If the latent
print does not match another unidentified latent print, it can be saved, along with the
accompanying case data, into the UL database for future searches. Every new tenprint card
entered into AFIS is searched against the UL database. If the computer finds a match, the case
will be forwarded to a latent print examiner who will verify the identification. Approximately
10%–15% of all cases and 2%–3% of latent print searches result in a fingerprint match.

2.13 Future of AFIS


As we rely more and more on computing in our daily lives, we must remember that the
technology is only as good as the people who build the systems, write the algorithms, run the
machines, and conduct the analyses of the data. As with all computers, the future of AFIS is
smaller, faster, and more accurate systems. Coding algorithms will improve, resulting in
improved minutiae selection by the computer. Just as the computer became more affordable
with time, allowing individuals to own personal computers, so too do the AFIS computers
become more affordable for agencies that may not have been able to afford previous systems.
It is certain that as the population of the world increases, so will the number of arrests. AFIS
systems will have to keep up with this trend with faster turnaround times for searches and larger
database capacities. One way to increase the effectiveness and accuracy of AFIS is to provide
extensive training and retraining to anyone who uses AFIS: latent print examiners, law
enforcement officers, tenprint examiners, etc. This will ensure that the images captured are of
the highest quality, rolled from nail to nail, and that the input of data is thorough and consistent.
The success of AFIS depends on the people who use it every day. Only a fingerprint analyst
can identify a latent fingerprint. A computer cannot make the final determination about whether
a latent fingerprint and a known fingerprint come from the same source. It is also important to
remember that there are people behind these fingerprints: people with lives and families who
may depend on them. Just because a fingerprint is identified through the AFIS criminal
database does not mean that an individual is guilty of that particular crime. A guiding tenet of
our criminal justice system is that all people are innocent until proven guilty.

Fig: A mobile ID unit


UNIT

3
3.1 Historical Background of Ear Biometrics

The ear is an important component of the face. As early as the 14th century B.C., artists had noted that its morphology and relative position on the head remain more or less constant throughout time. Although various studies concerning the anatomy and growth of the external ear have been carried out by plastic surgeons and forensic scientists in different parts of the world, they do not match the anthropological attention that has been paid to the external ear. A brief review of the work already done by various scientists all over the world on the external ear is presented here. Some of the recent studies on various aspects of sex, bilateral and ethnic determination are discussed as follows:

Schwalbe (1875) described a total of 19 external ear landmarks, including general variations such as ear inclination and protrusion, as well as ear measurements such as height and width of the ear. In studying various populations he recorded these features as well as general information such as sex, age, body length and anthropometric measurements of the head.

Iannarelli (1989) examined over 10,000 ears and no indistinguishable ears were found. He created a 12-measurement "Iannarelli System", which used the right ear of each person in specially aligned and normalized photographs. To normalize the pictures, they were enlarged until they fit a predefined easel. After that, the measurements were taken directly from the photographs. The distance between each of the numbered areas was measured and assigned an integer distance value. The identification consists of the 12 measurements and information about sex and race.

Carreira-Perpinan (1995) proposed outer ear (pinna) images for human recognition, based on their discriminant capacity (only outperformed by fingerprints) and their small area and variability (compared to face images). Compression neural networks were used to reduce the dimensionality of the images and produce a feature vector of manageable size. These networks are 2-layer linear perceptrons, trained in a self-supervised way (he used back propagation and quickprop). A theoretical justification of the training process was given in terms of principal components analysis. The approach was compared with standard numerical techniques for singular value decomposition. A simple rejection rule for recognition, based on the reconstruction error, was proposed. Additional comments about the robustness of the network to image transformations and its relation to auto-associative memories were given.

Burge and Burger (1998) introduced a class of biometrics based upon ear features, which were used in the development of passive identification systems. The viability of the proposed biometric was shown both theoretically, in terms of the uniqueness and measurability over time of the ear, and in practice, through the implementation of a computer vision based system. Each subject's ear was modeled as an adjacency graph built from the Voronoi diagram of its curve segments. They introduced a novel graph matching based algorithm for authentication which takes into account the erroneous curve segments which can occur due to changes (e.g., lighting, shadowing, and occlusion) in the ear image.

Moreno et al. (1999) described three neural net approaches for the recognition of 2D intensity images of the ear. Their testing used a gallery of 28 persons plus another 20 persons not in the gallery. They found a recognition rate of 93% for the best of the three approaches.

Hoogstrate et al. (2000) discussed the ability to identify persons by ear from surveillance videotapes. The identification was through side-by-side comparison of surveillance videotapes. They presented the results of a test constructed to investigate whether the participants could individualize suspects by ear in a closed-set situation. In general, the possibility of identifying a person by smaller body parts, in particular his/her ear, from surveillance videotape might become a useful tool, as the availability of images from surveillance cameras was rapidly increasing. It was shown that the quality of the video images determines to a large extent the ability to identify a person in this test.


Jain et al. (2000) stated that an ideal biometric should be universal, unique, permanent and collectable. However, in practice, a characteristic that satisfies all these requirements may not be suitable for a biometric system. In biometric systems there are further requirements, e.g., performance, acceptability and circumvention. Performance means the system's accuracy and speed: if the system is too slow or makes too many mistakes, it will not be used. Acceptability is important: if people do not accept the system as a part of their daily routines, it will not be used. Circumvention refers to how easy it is to fool the system; that rate should be very low, otherwise the advantage of the system is low.

Brucker et al. (2003) explored anatomic and aesthetic differences in the ear between men and women, as well as changes in ear morphology with age. A total of 123 volunteers were randomly selected for this study. The cohort consisted of 89 women ages 19 to 65 years (median age, 42 years) and 34 men ages 18 to 61 years (median age, 35 years). The average total ear height across the entire cohort for both left and right ears was 6.30 cm, average lobular height was 1.88 cm, and average lobular width was 1.96 cm. Based on head size, significant sex-related differences were noted in the distance from the lateral palpebral commissure to both the helical root and insertion of the lobule. Measured distances in both vectors were approximately 4.6 percent longer in men than in women. Similarly, the height of the pinna was significantly larger in men than in women, by approximately 6.5 percent. The average height and width of the lobule, however, were nearly identical in men and women. Analysis of age-related data showed a significant difference in total ear height between the subpopulations; however, this difference was not significant after the lobular height was subtracted from total ear height, suggesting that the lobule was the only ear structure that changed significantly with age. In addition, lobular width decreased significantly with age. This study established normative data for ear morphology and clearly demonstrated the changes in earlobe morphology that occur with advancing age.


Chang et al. (2003) suggested that the ear may have advantages over the face for biometric recognition. Their previous experiments with ear and face recognition using the standard Principal Component Analysis approach had shown lower recognition performance using ear images, but they later reported results of similar experiments on larger data sets that were more rigorously controlled for the relative quality of face and ear images. They found that recognition performance was not significantly different between the face and the ear; for example, 69.3% versus 72.7%, respectively, in one experiment. They also found that multi-modal recognition using both the ear and face resulted in statistically significant improvement over either individual biometric; for example, 90.9% in the analogous experiment.

Kearney (2003) examined the external ear for identification. Both left and right ears of 153 subjects (55 male, 98 female) were photographed and ear variations were documented, using descriptions of the different parts of the ear, known as ear landmarks, to categorize ear types. Comparisons were made between the left and right ears, and between males and females, according to age. It was found that sexual dimorphism exists mostly in ear size rather than form, and that the ear changes in size with age. The categories of ear types used were not precise enough to successfully distinguish all 301 ears included in this study from one another; however, no two ears were found to be exactly alike.

Chen and Bhanu (2004) explained that ear detection is an important part of an ear recognition system. They introduced a simple and effective method to detect ears, which had two stages: off-line model template building and on-line detection. The model template was represented by an averaged histogram of shape index. The on-line detection was a four-step process: step edge detection and thresholding, image dilation, connected component labeling and template matching. Experimental results with real ear images were presented to demonstrate the effectiveness of their approach.

Hurley et al. (2004) defined a feature space to reduce the dimensionality of the original pattern space while maintaining discriminatory power for classification. To meet this objective in the context of ear biometrics, a new force field transformation was introduced that treats the image as an array of mutually attracting particles acting as the source of a Gaussian force field. Underlying the force field is a scalar potential energy field which, in the case of an ear, takes the form of a smooth surface resembling a small mountain with a number of peaks joined by ridges. The peaks correspond to potential energy wells and, extending the analogy, the ridges correspond to potential energy channels. Since the transform is invertible, and since the surface is otherwise smooth, information theory suggests that much of the information is transferred to these features, confirming their efficacy. The authors had previously described how field line features are extracted using an algorithm similar to gradient descent, which exploits the directional properties of the force field to automatically locate these channels and wells; these then form the basis of characteristic ear features. They showed how an analysis of the mechanism of this algorithmic approach leads to a closed analytical description based on the divergence of force direction, revealing that channels and wells are really manifestations of the same phenomenon. This new operator, with its own distinct advantages, has a striking similarity to the Marr-Hildreth operator, but with the important difference that it is non-linear. As well as addressing faster implementation and brightness sensitivity, the technique was validated by performing recognition on a database of ears selected from the XM2VTS face database and by comparing the results with the more established technique of Principal Components Analysis. This confirmed not only that ears do indeed appear to have potential as a biometric, but also that the new approach is well suited to their description, being especially robust in the presence of noise and having the advantage that the ear does not need to be explicitly extracted from the background.
To recognize a subject's ear, Alvarez et al. (2005) aimed to extract a characteristic vector from a human ear image that may subsequently be used to identify or confirm the identity of its owner. Towards that end, a new technique combining geodesic active contours and a new ovoid model was developed, which can be used to compare ears independently of ear location and size.

Chen et al. (2005) explained that ear recognition approaches do not give theoretical or experimental performance prediction; therefore, the discriminating power of the ear biometric for human identification cannot be evaluated. They addressed two interrelated problems: (a) they proposed an integrated local descriptor for representing and recognizing human ears in 3D, in which local surface descriptors are compared between a test and a model image, an initial correspondence of local surface patches is established and then filtered using simple geometric constraints, with the performance of the proposed ear recognition system evaluated on a real range image database of 52 subjects; and (b) they presented a binomial model to predict ear recognition performance.
Choras (2005) explained that biometric identification methods have proved to be very efficient and more natural and easy for users than traditional methods of human identification. In fact, only biometric methods truly identify humans, not the keys and cards they possess or the passwords they should remember. The future of biometrics will surely lead to systems based on image analysis, as the data acquisition is very simple and requires only cameras, scanners or sensors. More importantly, such methods can be passive, which means that the user does not have to take an active part in the whole process or, in fact, would not even know that the process of identification is taking place. There are many possible data sources for human identification systems, but physiological biometrics seem to have many advantages over methods based on human behaviour. The most interesting human anatomical parts for such passive, physiological biometric systems based on images acquired from cameras are the face and the ear. Both contain a large volume of unique features that allow many users to be distinctively identified, and both will surely be implemented in efficient biometric systems for many applications. Choras introduced ear biometrics and presented its advantages over face biometrics in passive human identification systems, and then presented a geometrical method of feature extraction from human ear images for the purpose of human identification.

Dewi and Yahagi (2006) proposed ear photo recognition using scale-invariant key points. The key points were extracted by performing the Scale Invariant Feature Transform (SIFT). In their experiments, SIFT generated approximately 16 key points for each ear image. After extracting the key points, they classified the owner of an ear by calculating the number of key-point matches and the average of the closest squared distances. They compared their results with ear photo recognition using PCA and ear photo recognition using force field feature extraction. Their experimental results showed that ear recognition using SIFT gave the best recognition result.
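
A minimal sketch of this kind of SIFT-based matching, assuming OpenCV is available, is given below; the function name, the 0.75 ratio-test threshold and the decision rule are illustrative choices, not details taken from Dewi and Yahagi's paper.

```python
import cv2

def sift_ear_match_score(probe_path, gallery_path, ratio=0.75):
    """Count SIFT keypoint matches between two ear photographs (Lowe's ratio test)."""
    probe = cv2.imread(probe_path, cv2.IMREAD_GRAYSCALE)
    gallery = cv2.imread(gallery_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(probe, None)
    _, des2 = sift.detectAndCompute(gallery, None)
    if des1 is None or des2 is None:
        return 0, float("inf")
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good_distances = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:       # keep only unambiguous matches
            good_distances.append(m.distance)
    avg_dist = sum(good_distances) / len(good_distances) if good_distances else float("inf")
    # A gallery could then be ranked by (number of matches, average match distance)
    return len(good_distances), avg_dist
```
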

Jeges and Mate (2006) introduced a model-based scheme for ear feature extraction, the implementation of which proved that the method was strong enough to be applicable in an identity tracking system.

Ali et al. (2007) described the ear as a newcomer among biometric recognition techniques. Various methods have been employed for ear recognition to improve performance and to make the results comparable with other existing methods. In continuation of these efforts, a new ear recognition method was proposed. Ear images were cropped manually from side head images, after which the wavelet transform was used for feature extraction and matching was carried out using the Euclidean distance. Recognition rates of up to 94.3% were achieved with the proposed method.
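
The following sketch shows, under stated assumptions, how wavelet features and Euclidean-distance matching of the kind described above could be combined using PyWavelets and OpenCV; the wavelet family, decomposition level and image size are hypothetical, not the settings used by Ali et al.

```python
import numpy as np
import pywt                      # PyWavelets
import cv2

def wavelet_feature(ear_img_path, wavelet="haar", level=2):
    """Build a feature vector from the low-frequency wavelet sub-band of an ear image."""
    img = cv2.imread(ear_img_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (64, 96)).astype(np.float32)    # hypothetical fixed size
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    approx = coeffs[0]                                    # approximation sub-band
    return approx.flatten()

def match(probe_vec, gallery_vecs):
    """Return the gallery index with the smallest Euclidean distance to the probe."""
    dists = [np.linalg.norm(probe_vec - g) for g in gallery_vecs]
    return int(np.argmin(dists)), min(dists)
```
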


Bhanu (2007) described the ear as a new class of biometrics that has certain advantages over the face and the fingerprint, which are the two most common biometrics in both academic research and industrial applications. An ear can be imaged in 3D, and surface shape information related to its anatomical structure can be obtained. This makes it possible to develop robust 3D ear biometrics. The work explored various aspects of 3D ear recognition: representation, detection, recognition, indexing and performance prediction. Experimental results on various large datasets were presented to demonstrate the effectiveness of the algorithms.

Chen and Bhanu (2007) proposed a complete human recognition system using 3D ear biometrics. The system consists of 3D ear detection, 3D ear identification and 3D ear verification. For ear detection, they proposed a new approach which uses a single reference 3D ear shape model and locates the ear helix and antihelix parts in registered 2D colour and 3D range images. For ear identification and verification using range images, two new representations were proposed: the ear helix/antihelix representation obtained from the detection algorithm, and the local surface patch (LSP) representation computed at feature points. A local surface descriptor was characterized by a centroid, a local surface type and a 2D histogram; the 2D histogram showed the frequency of occurrence of shape index values versus the angles between the normal of the reference feature point and those of its neighbours. Both shape representations were used to estimate the initial rigid transformation between a gallery-probe pair. This transformation was applied to selected locations of ears in the gallery set, and a modified Iterative Closest Point (ICP) algorithm was used to iteratively refine the transformation to bring the gallery ear and the probe ear into the best alignment in the sense of the least root mean square error. Experimental results on the UCR data set of 155 subjects with 902 images under pose variations, and on the University of Notre Dame data set of 302 subjects with time-lapse gallery-probe pairs, were presented to compare and demonstrate the effectiveness of the proposed algorithms and the system.
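
To make the refinement step concrete, the sketch below implements a plain ICP loop with an SVD-based rigid alignment. It is a generic illustration rather than the authors' modified ICP, and it assumes the initial transformation estimated from the helix/antihelix or LSP correspondences has already been applied to the probe points.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp_refine(probe_pts, gallery_pts, iters=30):
    """Iteratively align probe ear points (Nx3) to gallery ear points; return RMS error."""
    tree = cKDTree(gallery_pts)
    current = probe_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(current)                  # closest gallery point per probe point
        R, t = best_rigid_transform(current, gallery_pts[idx])
        current = current @ R.T + t
    _, idx = tree.query(current)
    rms = np.sqrt(np.mean(np.sum((current - gallery_pts[idx]) ** 2, axis=1)))
    return rms      # a smaller RMS indicates a better gallery-probe alignment
```
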

Bustard and Nixon (2008) described a new technique which improves the robustness of ear registration and recognition, addressing issues of pose variation, background clutter and occlusion. By treating the ear as a planar surface and creating a homography transform using Scale Invariant Feature Transform (SIFT) feature matches, ears can be registered accurately. The feature matches reduce the gallery size and enable a precise ranking using a simple 2D distance algorithm. When applied to the XM2VTS database, the method gave results comparable to PCA with manual registration. Further analysis on more challenging datasets demonstrated the technique to be robust to background clutter, to viewing angles of up to ±13 degrees and to over 20% occlusion.

Lankarany and Ahmadyfard (2009) presented a new automatic ear segmentation algorithm based on topographic labels. The proposed algorithm contains four stages. First, topographic labels are extracted from the ear image. Then, using the map of regions for three topographic labels, namely ridge, convex hill and convex saddle hill, a composed set of labels is built. Thresholding this labelled image provides a connected component with the maximum number of pixels, which represents the outer boundary of the ear. As well as addressing faster implementation and brightness insensitivity, the technique was validated by performing completely successful ear segmentation on the USTB database, which contains 308 profile-view images of ears and their surrounding backgrounds.

Abaza and Ross (2010) analyzed the symmetry of human ears in order to understand the possibility of matching the left and right ears of an individual, or of reconstructing portions of the ear that may be occluded in a surveillance video. Ear symmetry was assessed geometrically using symmetry operators and Iannarelli's measurements, where the contribution of individual ear regions to the overall symmetry of the ear was studied. Next, to assess ear symmetry (or asymmetry) from a biometric recognition system perspective, several experiments were conducted on the WVU Ear Database. The experiments suggested the existence of some degree of symmetry in human ears that can perhaps be systematically exploited in the design of commercial ear recognition systems. At the same time, the degree of asymmetry observed may be used in designing effective fusion schemes that combine face information with the two ears.

Prakash and Gupta (2012) proposed an efficient technique for automatic localization of the ear in side face images. The technique is rotation, scale and shape invariant and makes use of the connected components in a graph obtained from the edge map of the side face image. It was evaluated on an IIT Kanpur database consisting of 2672 side faces with variable sizes, rotations and shapes, and on a University of Notre Dame database containing 2244 side faces with variable backgrounds and poor illumination. Experimental results revealed the efficiency and robustness of the technique.


3.2 Physiology of Human Ear

The human ear is the organ of hearing and equilibrium. It detects and analyzes sound by the
mechanism of transduction, which is the process of converting sound waves into
electrochemical impulses. Audition cannot take place adequately if the anatomy is abnormal. This section discusses the mechanisms involved in the conduction of sound waves into the ear and their integration and transmission from the middle ear and inner ear to the brain.
The human ear is a rudimentary shell-like structure that lies on the lateral aspect of the head.
The ear is a cartilaginous structure. For physiological study purposes, it subdivides into three
fundamental substructures: the external ear, the middle ear, and the inner ear.

 The outer ear, also called auricle, is composed of cartilage and it is the part with the
most contact with the external world. It has various anatomical demarcations like the helix,
the antihelix, the tragus, and the antitragus and these demarcations lead to a depression called
acoustic meatus. This meatus has a tube form and extends inward to end in the tympanic
membrane. The outer third of this canal is cartilaginous and the inner two-thirds are bony; the cartilaginous portion is lined with glands that produce cerumen to keep the canal clear of insects and other objects. At the end of the outer ear lies the middle ear, which is limited
externally by the tympanic membrane and internally by the oval window.

 The middle ear is an air-filled space. It divides into an upper and a lower chamber,
the epitympanic chamber (attic) and the tympanic chamber (atrium), respectively. It is like a
room because it has a rectangular-like shape. It has anatomical relations with the jugular vein,
the carotid artery, the inner ear, the eustachian tube, and the mastoid. The content of this room
consists of ossicles; the malleus, the incus and the stapes, namely. These bony structures are
suspended by ligaments which make them suitable for transmission of vibrations into the inner
ear. The vibrations that come into this part of the middle ear then get transmitted, by the action of the stapes, into the inner ear.

 The inner ear is a space composed of the bony labyrinth and the membranous
labyrinth, one inside the other. The bony labyrinth has a cavity filled with semicircular canals
that are in charge of sensing equilibrium; this cavity is called the vestibule and is the place
where the vestibular part of the VIII cranial nerve forms. The cochlea is the organ of hearing.
It takes its name from the Greek word for the shell of a snail and is the part from
where the cochlear part of the VIII cranial nerve forms, thus constituting the vestibulocochlear
nerve.

Sound Wave Transmission and its Physics

Hearing is the process by which sound vibrations from the external environment are transformed into action potentials. Vibrating objects, like the strings of a guitar, produce sounds by sending pressure pulses through the air molecules, better known as sound waves. The ear is equipped to distinguish different characteristics of sound, such as pitch and loudness, which refer to the frequency of sound waves and the perceived intensity of sound, respectively. Frequency is measured in hertz (Hz, cycles per second). The human ear is most sensitive to frequencies from 1000 to 4000 hertz, and a young ear can hear frequencies in the range between 20 and 20,000 hertz. The intensity of sound is measured in decibels (dB); the range
of human hearing on a decibel scale is from 0 to 130 dB (where the sound becomes painful).
All these physical properties have to incur transformations to get into the central nervous
system. The first transformation consists of the conversion of air vibrations into tympanic
membrane vibrations. These vibrations then get transmitted into the middle ear and the
ossicles. Then these vibrations transform into liquid vibrations in the inner ear and the cochlea,
and these stimulate a region called the basilar membrane and the organ of Corti. Finally, these
vibrations get transformed into nerve impulses, which travel to the nervous system.

Note: In the context of individual identification, authentication and forensic applications of ear biometrics, we are concerned only with the structure of the external ear.
3.3 Classification of Ear Pattern

MORPHOLOGY OF EXTERNAL EAR: In order to understand the anthropometry of the external ear, it is necessary to acquaint oneself with its anatomy. The external ear, also known as the Pinna or Auricle, is attached to the outside wall of the skull, with its centre positioned approximately halfway between the top of the head and the chin on the vertical axis, and between the eye and the nose of the face on the horizontal axis. The external ear varies from the smallest minutiae to the greatest degree in size, shape, design and the anatomical distances between its various features, such as the helix rim, antihelix, tragus, antitragus, triangular fossa, crus of helix, concha, Incisure Intertragica, lobule and the overall size (length and width) of the ear. These features are present in all people and provide the variations by which one ear can be set apart from another.

Fig. The Morphology of External Ear
SHAPES OF EXTERNAL EAR: The long history of the use of ear shapes suggests their suitability for automatic human identification. There are four types of shapes of the external ear:
a) Oval

b) Triangular

c) Rectangular

d) Round

Fig. Ear shapes: (a) Oval (b) Triangular, (c) Rectangular (d) Round

ANATOMY OF THE EXTERNAL EAR

The anatomy of the external ear, depicting the individual components, can be seen in the following figure:

Helix Rim (1a-1d): The helix rim is the outer frame of the auricle; it is a rolled-up edge.
Lobule (2): The fleshy lower portion of the ear.
Antihelix (3): The elevated ridge of cartilage between the concha and the scapha is called the antihelix. It is a folded "Y"-shaped part of the ear.
Concha (4): The hollow, bowl-like portion of the outer ear next to the canal. An enlarged concha forces the outer ear away from the scalp.
Tragus (5): The small projection just in front of the ear canal is called the tragus.
Antitragus (6): The lower cartilaginous edge of the conchal bowl just above the fleshy lobule of the ear.
Crus of Helix (7): A landmark of the outer ear.
Triangular Fossa (8): A prominent depression observed in the triangular area formed by the branching of the antihelix into two crura is called the triangular fossa.
Incisure Intertragica (9): The distinctive "U"-shaped notch, known as the Incisure Intertragica, between the ear hole (meatus) and the lobe.

Fig. Anatomy of the external ear: (1) Helix Rim, (2) Lobule, (3) Antihelix, (4) Concha, (5) Tragus, (6) Antitragus, (7) Crus of Helix, (8) Triangular Fossa and (9) Incisure Intertragica
Features of the external ear have long been recognized as an important anthropological variable for studying racial variability and for identifying certain genetic abnormalities at an early stage of life. Prof. G. Schwalbe (1875) was one of the first to devise a method to measure the external ear and was also the first to attract scientific attention to the racial peculiarities in its structure. Since then, scientists have systematically researched the shape and pattern of the auricle and the complexity of its different features in Europe and other parts of the world. Since the last century, the use of ear morphology together with other somatometric parameters as a means of establishing the identity of persons has been explored and used by the eminent French criminologist Alphonse Bertillon and other prominent forensic scientists of that time. Subsequently, Iannarelli carried out a detailed study on more than ten thousand ears and designed useful primary and secondary classification systems for the external ear.
3.4 Methods for Ear Identification/Recognition

At present, three methods are in use to collect ear samples for identification:

1. PHOTO COMPARISON

Taking a photograph of the ear is the most commonly used method in research. The most interesting parts of the ear are the external ear and the ear lobe, but the whole ear structure and shape can be used. The photograph can be taken and then compared with previously taken photographs to identify an individual. Alfred Iannarelli conducted two large-scale ear identification studies, reported in 1989. In the first study, over 10,000 ears drawn from randomly selected samples in California were examined. The second study examined identical and non-identical twins. These studies support the hypothesis of the ear's uniqueness: even identical twins had similar, but not identical, ear physiological features.


Fig. (a) Anatomy of ear, (b) Measurements (a) 1 Helix Rim, 2 Lobule, 3 Antihelix, 4 Concha,
5 Tragus, 6 Antitragus, 7 Crus of Helix, 8 Triangular Fossa, 9 Incisure Intertragica.
(b) The locations of the anthropometric measurements used in the “Iannarelli System”.
2. EARMARKS (EARPRINTS)

Ear identification can be done from photographs or from video. Another possibility is that the ear may be pressed against some material, e.g. glass, and the 'earmark' can then be used for biometric study. Ear prints have been used in forensic investigation since the mid-1960s. Currently, it is estimated that in the Netherlands alone, ear mark evidence could be used in approximately 50,000 cases of burglary per year.
An earprint is a two-dimensional reproduction of parts of the ear. Anatomical features frequently found in a print are the helix, antihelix, tragus and antitragus, and the transfer of unique features onto a surface, e.g. an ear fold, wrinkle, spot or mole, can be used for identification. Ear prints can be lifted from crime scenes in much the same way as fingerprints, and comparative analysis may then be performed using the crime scene mark and a control print (from a suspect). The print can be lifted, photographed, analyzed and compared against the earprints of several suspects. Based on the unique formation of the print and the ear features transferred, a suspect may be positively identified.

Fig. view of earmarks (earprints)


3. THERMOGRAM PICTURES

If the ear is partially occluded by hair, the hair can be masked out of the image by using thermogram pictures. In thermogram pictures, different colors and textures can be used to find the different parts of the ear. The ear is quite easy to detect and localize in thermogram imagery by searching for high-temperature areas.

Fig. Thermogram of an ear.

ADVANTAGES OF EAR BIOMETRICS


Biometric identification methods have proved to be very efficient, and more natural and easy for users than traditional methods of human identification. The future of biometrics will surely lead to systems based on image analysis, as the data acquisition is very simple and requires only cameras, scanners or sensors. More importantly, such methods can be passive, which means that the user does not have to take an active part in the whole process or, in fact, would not even know that the process of identification is taking place.
There are many possible data sources for human identification systems, but the
physiological biometrics seem to have many advantages over methods based on human
behavior. The most interesting human anatomical parts for such passive, physiological
biometrics systems based on images acquired from cameras are face and ear. Ear
biometrics are often compared with face biometrics. Ears have several advantages over complete faces, i.e. reduced spatial resolution, a more uniform distribution of color, and less variability with expression and orientation. In face recognition there can be problems with changing lighting and different head positions of the person. The same kinds of problems arise with the ear, but the image of the ear is smaller than the image of the face, which can be an advantage. The face can also change due to cosmetics, facial hair and hair styling, and it changes with emotions, expressing different states of mind such as sadness, happiness, fear or surprise. In contrast, ear features are relatively fixed and unchangeable. Moreover, the color distribution is more uniform in the ear than in the human face, iris or retina.

SCOPE OF EAR BIOMETRICS

Without a doubt, the use of Ear biometrics in private industry as well as government is real
and is here to stay. The security industry, law enforcement agencies, and other government
agencies where security is vital are constantly developing new ways of using biometrics to
help identify and monitor criminals and terrorists, and to secure access to sensitive data. In
addition to its current uses, it will not be long until biometrics finds its way into many
commercial applications such as e-commerce, automobiles, and cell phones. Many
companies have already started examining how biometrics and their applications will aid
their business and are planning to implement it sometime in the future.
In the immediate future, thumb, hand and ear prints will remain the primary way in which people are identified through biometrics. This is because many government agencies, such as the FBI, DEA and INS, already have large-scale fingerprint databases. Even though it is primarily government agencies that currently use fingerprints to identify people, private industry will be next to use fingerprint technology to identify consumers because of its low maintenance cost and its advanced development.
This type of system would greatly enhance the effectiveness of the nation's police departments by freeing them from having to patrol as often and from having to search for criminals. Such applications of biometrics are still some way off, however, because the technology is still developing and many people still have major privacy concerns, although the cost of implementing this technology is low compared with other techniques.

Identity is important when it is weak. This apparent paradox is at the core of the current study. Traditionally, verification of identity has been based upon authentication of attributed and biographical characteristics. After small-scale societies and large-scale industrial societies, globalization represents the third period of personal identification. The human body lies at the heart of all strategies for identity management; therefore, the application of BIOMETRICS is one of the most promising solutions to this issue. In particular, biometrics requires criteria for identifying individuals in different contexts, under different descriptions and at different times. In the course of modern history, personal identities have been based on highly diverse notions such as religion, rank, class, estate, gender, ethnicity, race, nationality, politics, virtue, honor, erudition, rationality and civility. Many security, investigation and health care organizations are in the process of deploying biometric security architectures. Secure identification is critical in the security and health care systems, both to control logical access to centralized archives of digitized individual data, to limit physical access to buildings and hospital wards, and to authenticate medical and social support personnel. There is also an increasing need to identify criminals with a high degree of certainty. All these issues require careful ethical and political scrutiny. The general problem of identity and the specific problem of personal identity do not admit easy solutions, yet we need some criteria to establish identities, either in the sense of qualitative identities or in the sense of numerical identities. These criteria are chosen categories. Consequently, in the present research work, in line with digital image processing, the researcher aims to apply ear biometrics techniques to differentiate between the Yadav and Brahmin communities of the Bundelkhand region on the basis of sex, bilateral and ethnic differences.
UNIT

4
DNA Forensics

In the past few years, the general public has become more familiar with the power of DNA
typing as the media has covered efforts to identify remains from victims of the World Trade
Center Twin Towers collapse following the terrorist attacks of 11 September 2001, the O.J.
Simpson murder trial in 1994 and 1995, the parentage testing of Anna Nicole Smith’s daughter
in 2007, and the ongoing Innocence Project that has led to the exoneration of over 200
wrongfully convicted individuals. News stories featuring the value of forensic DNA analysis
in solving crime seem commonplace today. In another popular application, DNA testing with
Y chromosome markers is now used routinely to aid genealogical investigations. In addition,
the medical community is poised to benefit from the tremendous amount of genomic DNA
sequence information being generated. DNA testing has an important role in our society that is
likely to grow in significance and scope in the future.
Though high-profile cases have certainly attracted widespread media attention in recent years,
they are only a small fraction of the thousands of forensic DNA and paternity cases that are
conducted each year by public and private laboratories around the world. The technology for
performing these tests has evolved rapidly over the past two decades to the point where it is
now possible to obtain results in a few hours on samples with only the smallest amount of
biological material. DNA typing, since it was introduced in the mid-1980s, has revolutionized
forensic science and the ability of law enforcement to match perpetrators with crime scenes.
Thousands of cases have been closed with guilty suspects punished and innocent ones freed
because of the power of a silent biological witness at the crime scene. Deoxyribonucleic acid
(DNA) is probably the most reliable biometrics. It is in fact a one-dimensional code unique for
each person. Since the mid-1990s, computer databases containing DNA profiles from crime
scene samples, convicted offenders, and in some cases, persons simply arrested for a crime,
have provided law enforcement with the ability to link offenders to their crimes. Application
of this technology has enabled tens of thousands of crimes, particularly horrible serial crimes by repeat offenders, to be solved around the world.

4.1 Historical Background of DNA for personal identification

‘DNA fingerprinting’, or DNA typing (profiling) as it is now known, was first described in
1985 by an English geneticist named Alec Jeffreys. Dr. Jeffreys found that certain regions of
DNA contained DNA sequences that were repeated over and over again next to each other. He
also discovered that the number of repeated sections present in a sample could differ from
individual to individual. By developing a technique to examine the length variation of these
DNA repeat sequences, Dr. Jeffreys created the ability to perform human identity tests. These
DNA repeat regions became known as VNTRs, which stands for variable number of tandem
repeats. The technique used by Dr. Jeffreys to examine the VNTRs was called restriction
fragment length polymorphism (RFLP) because it involved the use of a restriction enzyme to
cut the regions of DNA surrounding the VNTRs. This RFLP method was first used to help in
an English immigration case and shortly thereafter to solve a double homicide case. Since that
time, human identity testing using DNA typing methods has been widespread. The past two-and-a-half decades have seen tremendous growth in the use of DNA evidence in crime scene
investigations as well as paternity and genetic genealogy testing.

Table: Historical development of DNA profiling around the world

Year    Forensic DNA Science & Application
1985    Alec Jeffreys develops multi-locus RFLP probes
1986    DNA testing goes public with Cellmark and Lifecodes in the United States
1988    FBI begins DNA casework with single-locus RFLP probes
1989    TWGDAM established; NY v. Castro case raises issues over quality assurance of laboratories
1990    Population statistics used with RFLP methods are questioned; PCR methods start with DQA1
1991    Fluorescent STR markers first described; Chelex extraction
1992    NRC I report; FBI starts casework with PCR DQA1
1993    First STR kit available; sex typing (amelogenin) developed
1994    Congress authorizes money for upgrading state forensic labs; ‘DNA wars’ declared over; FBI starts casework with PCR-PM
1995    O.J. Simpson saga makes the public more aware of DNA; DNA Advisory Board set up; UK DNA Database established; FBI starts using D1S80/amelogenin
1996    NRC II report; FBI starts mtDNA testing; first multiplex STR kits become available
1997    Thirteen core STR loci defined; Y-chromosome STRs described
1998    FBI launches national Combined DNA Index System; Thomas Jefferson and Bill Clinton implicated with DNA
1999    Multiplex STR kits validated in numerous labs; FBI stops testing DQA1/PM/D1S80
2000    FBI and other labs stop running RFLP cases and convert to multiplex STRs; PowerPlex 16 kit enables first single amplification of CODIS STRs
2001    Identifiler STR kit released with 5-dye chemistry; first Y-STR kit becomes available
2002    FBI mtDNA population database released; Y-STR 20plex published
2003    U.S. DNA database (NDIS) exceeds 1 million convicted offender profiles; the U.K. National DNA Database passes the 2 million sample mark

4.2 Biology of DNA

The basic unit of life is the cell, which is a miniature factory producing the raw materials,
energy, and waste removal capabilities necessary to sustain life. Thousands of different
proteins are required to keep these cellular factories operational. An average human being is
composed of approximately 100 trillion cells, all of which originated from a single cell (the
zygote) formed through the union of a father’s sperm and a mother’s egg. Each cell contains
the same genetic programming. Within the nucleus of our cells is a chemical substance known
as DNA that contains the informational code for replicating the cell and constructing the needed
proteins. Because the DNA resides in the nucleus of the cell, it is often referred to as nuclear
DNA. Some minor extranuclear DNA, known as mitochondrial DNA, exists in human
mitochondria, which are the cellular powerhouses. Deoxyribonucleic acid, or DNA, is
sometimes referred to as our genetic blueprint because it stores the information necessary for
passing down genetic attributes to future generations. Residing in every nucleated cell of our
bodies (note that red blood cells lack nuclei), DNA provides a ‘computer program’ that
determines our physical features and many other attributes. The complete set of instructions
for making an organism, that is, the entire DNA in a cell, is referred to collectively as its
genome.
DNA molecules store information in much the same way that text on a page conveys
information through the order of letters, words, and paragraphs. Information in DNA is stored
based on the order of nucleotides, genes, and chromosomes. DNA has two primary purposes:
(1) to make copies of itself so cells can divide and carry the same information; and (2) to carry
instructions on how to make proteins so cells can build and maintain the machinery of life.
Information encoded within the DNA structure itself is passed on from generation to generation
with one-half of a person’s DNA information coming from his or her mother and one-half
coming from his or her father.

4.2.1. DNA structure

Nucleic acids including DNA are composed of nucleotide units that are made up of three parts:
a nucleobase, a sugar, and a phosphate. The nucleobase or ‘base’ imparts the variation in each
nucleotide unit, while the phosphate and sugar portions form the backbone structure of the
DNA molecule. The DNA alphabet is composed of only four characters representing the four
nucleobases: A (adenine), T (thymine), C (cytosine), and G (guanine). The various
combinations of these four letters, known as nucleotides or bases, yield the diverse biological
differences among human beings and all living creatures. Humans have approximately 3 billion
nucleotide positions in their genomic DNA. Thus, with four possibilities (A, T, C, or G) at each
position, literally zillions of combinations are possible. The informational content of DNA is
encoded in the order (sequence) of the bases just as computers store binary information in a
string of ones and zeros.

Directionality is provided when listing a DNA sequence by designating the ‘five-prime’ (5’)
end and the ‘three-prime’ (3’) end. This numbering scheme comes from the chemical structure
of DNA and refers to the position of carbon atoms in the sugar ring of the DNA backbone
structure. A sequence is normally written (and read) from 5’ to 3’ unless otherwise stated. DNA
polymerases, the enzymes that copy DNA, only ‘write’ DNA sequence information from 5’ to
3’, much like the words and sentences in this book are read from left to right.

4.2.2 Base pairing and hybridization of DNA strands

In its natural state in the cell, DNA is actually composed of two strands that are linked together
through a process known as hybridization. Individual nucleotides pair up with their
‘complementary base’ through hydrogen bonds that form between the bases. The base-pairing
rules are such that adenine can only hybridize to thymine and cytosine can only hybridize to
guanine. There are two hydrogen bonds between the adenine – thymine base pair and three
hydrogen bonds between the guanine – cytosine base pair. Thus, GC base pairs are stuck
together a little stronger than AT base pairs. The two DNA strands form a twisted ladder shape
or double helix due to this ‘base-pairing’ phenomenon.
The two strands of DNA are ‘anti-parallel’; that is, one strand is in the 5’ to 3’ orientation and
the other strand lines up in the 3’ to 5’ direction relative to the first strand. By knowing the
sequence of one DNA strand, its complementary sequence can easily be determined based on
the base-pairing rules of A with T and G with C. These combinations are sometimes referred
to as Watson – Crick base pairs for James Watson and Francis Crick who discovered this
structural relationship in 1953.
Hybridization of the two strands is a fundamental property of DNA. However, the hydrogen
bonds holding the two strands of DNA together through base pairing may be broken by elevated
temperature or by chemical treatment, a process known as denaturation. A common method
for denaturing double stranded DNA is to heat it to near boiling temperatures. The DNA double
helix can also be denatured by placing it in a salt solution of low ionic strength or by exposing
it to chemical denaturants such as urea or formamide, which destabilize DNA by forming
hydrogen bonds with the nucleotides and preventing their association with a complementary
DNA strand. Denaturation is a reversible process. If a double-stranded piece of DNA is heated
up, it will separate into its two single strands. As the DNA sample cools, the single DNA strands
will find their complementary sequence and rehybridize or anneal to each other. The process
of the two complementary DNA strands coming back together is referred to as renaturation or
reannealing.
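
As a small illustration of these base-pairing rules, the snippet below derives the complementary strand (written 5' to 3') of a given sequence; it is a teaching sketch, not forensic software.

```python
# Base-pairing rules: A pairs with T, and G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the complementary strand, written 5' to 3' (hence reversed)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

print(reverse_complement("AATGCC"))   # -> GGCATT
```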

4.2.3 Chromosomes, genes, and DNA markers


Obtaining a complete catalogue of our genes was the focus of the Human Genome Project,
which announced a final reference sequence for the human genome in April 2003. In 2007,
DNA pioneers James Watson and Craig Venter released their fully sequenced genomes to the
public. The information from the Human Genome Project will benefit medical science as well
as forensic human identity testing and help us better understand our genetic makeup.
Within human cells, DNA found in the nucleus of the cell (nuclear DNA) is divided into
chromosomes, which are dense packets of DNA and protective proteins called histones. The
human genome consists of 22 matched pairs of autosomal chromosomes and two sex-
determining chromosomes. Thus, normal human cells contain 46 different chromosomes, or 23 pairs of chromosomes.

Fig: Base pairing of DNA strands to form double-helix structure

Males are designated XY because they contain a single copy of the X
chromosome and a single copy of the Y chromosome, while females contain two copies of the
X chromosome and are designated XX. Most human identity testing is performed using
markers on the autosomal chromosomes, and gender determination is done with markers on
the sex chromosomes. The Y chromosome and mitochondrial DNA, a small, multi-copy
genome located in cell’s mitochondria, can also be used in human identification applications.
Chromosomes in all body (somatic) cells are in a diploid state; they contain two sets of each
chromosome. On the other hand, gametes (sperm or egg) are in a haploid state; they have only
a single set of chromosomes. When an egg cell and a sperm cell combine during conception,
the resulting zygote becomes diploid. Thus, one chromosome in each chromosomal pair is
derived from each parent at the time of conception.
Mitosis is the process of nuclear division in somatic cells that produces daughter cells, which
are genetically identical to each other and to the parent cell. Meiosis is the process of cell
division in sex cells or gametes. In meiosis, two consecutive cell divisions result in four rather
than two daughter cells, each with a haploid set of chromosomes.
The DNA material in chromosomes is composed of ‘coding’ and ‘noncoding’ regions. The
coding regions are known as genes and contain the information necessary for a cell to make
proteins. A gene usually ranges from a few thousand to tens of thousands of base pairs in size.
One of the big surprises to come out of the Human Genome Project is that humans have fewer
than 30,000 protein-coding genes rather than the 50,000 to 100,000 previously thought. Genes
consist of exons (protein-coding portions) and introns (the intervening sequences). Genes only
make up ~ 5% of human genomic DNA. Non-protein- coding regions of DNA make up the rest
of our chromosomal material. Because these regions are not related directly to making proteins,
they have been referred to as ‘junk’ DNA although recent research suggests that they may have
other essential functions. Markers used for human identity testing are found in the noncoding
regions, either between genes or within genes (i.e., introns), and thus do not code for proteins. Polymorphic (variable) markers that differ among individuals can be found
throughout the noncoding regions of the human genome. The chromosomal position or location
of a gene or a DNA marker in a noncoding region is commonly referred to as a locus (plural:
loci). Thousands of loci have been characterized and mapped to particular regions of human
chromosomes through the worldwide efforts of the Human Genome Project. Pairs of
chromosomes are described as homologous because they are the same size and contain the
same genetic structure. A copy of each gene resides at the same position (locus) on each
chromosome of the homologous pair.
One chromosome in each pair is inherited from an individual’s mother and the other from his
or her father. The DNA sequence for each chromosome in the homologous pair may or may
not be identical since mutations may have occurred over time. The alternative possibilities for
a gene or genetic locus are termed alleles. If the two alleles at a genetic locus on homologous
chromosomes are different, they are termed heterozygous; if the alleles are identical at a
particular locus, they are termed homozygous. Detectable differences in alleles at
corresponding loci are essential to human identity testing.
A genotype is a characterization of the alleles present at a genetic locus. If there are two alleles
at a locus, A and a, then there are three possible genotypes: AA, Aa, and aa. The AA and aa
genotypes are homozygous, whereas the Aa genotype is heterozygous. A DNA profile is the
combination of genotypes obtained for multiple loci. DNA typing or DNA profiling is the
process of determining the genotype present at specific locations along the DNA molecule.
Multiple loci are typically examined in human identity testing to reduce the possibility of a
random match between unrelated individuals.
To help understand these concepts better, consider a simple analogy. Suppose you are in a room
with a group of people and are conducting a study of twins. You instruct each member of the
group to line up matched with his or her twin (homologue). You notice that there are 22 sets of
identical twins (autosomes) and one fraternal set consisting of one boy and one girl (sex
chromosomes). You have the twin pairs rearrange their line by average height from tallest pair
of twins to the shortest with the fraternal twins at the end and number the pairs 1 through 23
beginning with the tallest. Now choose a location on one twin, say the right forearm (locus).
Compare that to the right forearm of the other twin. What is different (allele)? Perhaps one has
a mole, freckles, more hair. There are several possibilities that could make them different
(heterozygous) or perhaps they both look exactly the same (homozygous).

4.2.4 Nomenclature for DNA markers


The nomenclature for DNA markers is fairly straightforward. If a marker is part of a gene or
falls within a gene, the gene name is used in the designation. For example, the short tandem
repeat (STR) marker TH01 is from the human tyrosine hydroxylase gene located on
chromosome 11. The ‘01’ portion of TH01 comes from the fact that the repeat region in
question is located within intron 1 of the tyrosine hydroxylase gene. Sometimes the prefix
HUM- is included at the beginning of a locus name to indicate that it is from the human
genome. Thus, the STR locus TH01 would be correctly listed as HUMTH01. DNA markers
that fall outside of gene regions may be designated by their chromosomal position. The STR
loci D5S818 and DYS19 are examples of markers that are not found within gene regions. In
these cases, the ‘D’ stands for DNA. The next character refers to the chromosome number, 5
for chromosome 5 and Y for the Y chromosome. The ‘S’ refers to the fact that the DNA marker
is a single copy sequence. The final number indicates the order in which the marker was
discovered and categorized for a particular chromosome. Sequential numbers are used to give
uniqueness to each identified DNA marker. Thus, for the DNA marker D16S539:
D16S539
D: DNA
16: chromosome 16
S: single copy sequence
539: 539th locus described on chromosome 16
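
A tiny, purely illustrative parser for designations of this D#S# form is shown below; the regular expression and function are ours, not part of any forensic standard or library.

```python
import re

# Matches anonymous-locus names such as D16S539 or DYS19.
LOCUS_PATTERN = re.compile(r"^D(?P<chromosome>\d{1,2}|X|Y)S(?P<order>\d+)$")

def parse_locus(name):
    m = LOCUS_PATTERN.match(name.upper())
    if not m:
        raise ValueError(f"{name} is not a D#S# style locus designation")
    return {"DNA": True,                               # 'D' stands for DNA
            "chromosome": m.group("chromosome"),       # chromosome number (or X/Y)
            "single_copy": True,                       # 'S' = single copy sequence
            "order_described": int(m.group("order"))}  # order of discovery on that chromosome

print(parse_locus("D16S539"))
# {'DNA': True, 'chromosome': '16', 'single_copy': True, 'order_described': 539}
```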

4.2.4.1 Designating physical chromosome locations


The center region of a chromosome, known as the centromere, controls the movement of the
chromosome during cell division. On either side of the centromere are ‘arms’ that terminate
with telomeres. The shorter arm is referred to as ‘p’ (for petite) while the longer arm is
designated ‘q’ (because it comes after p in the alphabet). Human chromosomes are numbered
based on their overall size with chromosome 1 being the largest and chromosome 22 the
smallest. The complete sequence of chromosome 22 was reported in December 1999 to be over
33 million nucleotides in length. Since the Human Genome Project completed its monumental
effort, we now know the sequence and relative length of all 23 pairs of human chromosomes.
During most of a cell’s life cycle, the chromosomes exist in an unravelled linear form. In this
form, they can be transcribed to code for proteins. Regions of chromosomes that are
transcriptionally active are known as euchromatin.

Fig: Basic haploid chromosome structure and nomenclature

The transcriptionally inactive portions of chromosomes, such as centromeres, are


heterochromatin regions and are generally not sequenced due to complex repeat patterns found
therein. Prior to cell division, during the metaphase step of mitosis, the chromosomes condense
into a more compact form that can be observed under a microscope following chromosomal
staining. Chromosomes are visualized under a light microscope as consisting of a continuous
series of light and dark bands when stained with different dyes. The pattern of light and dark
bands results because of different amounts of A and T versus G and C bases in regions of the
chromosomes.

4.2.5 Types of DNA polymorphisms


DNA variation is exhibited in the form of different alleles, or various possibilities at a particular
locus. Two primary forms of variation are possible at the DNA level: sequence polymorphisms
and length polymorphisms. It is worth noting that since the completion of the Human Genome
Project (and now comparison of multiple completed human genomes) it has been discovered
that another layer of genetic variation is possible, known as copy number variants (CNVs),
where large segments of chromosomes can be inverted, duplicated, deleted, or even moved to
different regions of the genome. Thus, we are discovering that our genomes are more complex
than they were originally thought to be. A genotype is an indication of a genetic type or allele
state. A sample containing two alleles, one with 13 and the other with 18 repeat units, would
be said to have a genotype of ‘13,18.’ This shorthand method of designating the alleles present
in a sample makes it easier to compare results from multiple samples. In DNA typing, multiple
markers or loci are examined. The more DNA markers examined and compared, the greater
the chance that two unrelated individuals will have different genotypes. Alternatively, each
piece of matching information adds to the confidence in connecting two matching DNA
profiles from the same individual. If each locus is inherited independent of the other loci, then
a calculation of a DNA profile frequency can be made by multiplying each individual genotype
frequency together. This is known as the product rule.
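
A worked illustration of the product rule follows; the loci are STR loci mentioned in this unit, but the genotype frequencies are hypothetical values chosen only to show the arithmetic.

```python
# Product rule: multiply the individual genotype frequencies across independent loci.
genotype_frequencies = {"TH01": 0.05, "D16S539": 0.08, "FGA": 0.03}   # hypothetical values

profile_frequency = 1.0
for locus, freq in genotype_frequencies.items():
    profile_frequency *= freq

print(profile_frequency)   # 0.05 * 0.08 * 0.03 = 0.00012, i.e. about 1 in 8,300
```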

Fig: Two primary forms of variation exist in DNA: (a) sequence polymorphisms; (b)
length polymorphisms. The short tandem repeat DNA markers discussed in this book are
length polymorphisms

4.2.6 Genetic variability

Large amounts of genetic variability exist in the human population. This is evidenced by the
fact that, with the exception of identical twins, we all appear different from each other. Hair
color, eye color, height, and shape all represent alleles in our genetic makeup. To gain a better
appreciation for how the numbers of alleles present at a particular locus impact the variability,
consider the ABO blood group. Three alleles are possible: A, B, and O. These three alleles can
be combined to form three possible homozygous genotypes (AA, BB, and OO) and three
heterozygous genotypes (AO, BO, and AB). Thus, with three alleles there are six possible
genotypes. However, because AA and AO are phenotypically equal as are BB and BO, there
are only four phenotypically expressed blood types: A, B, AB, and O.
Fig: Schematic representation of two different STR loci on different pairs of
homologous chromosomes

4.2.7 Recombination: shuffling of genetic material


Recombination is the process by which progeny derive a combination of genes different from
that of either parent. During the process of meiosis or gamete cell production, each reproductive
cell receives at random one representative of each pair of chromosomes, or 23 in all. Because there are two chromosomes in each pair, meiosis results in 2^23, or about 8.4 million, different possible combinations of chromosomes in human eggs or sperm cells. The union of egg and sperm cells therefore results in over 70 trillion (2^23 x 2^23) different possible combinations, each one representing half of the genetic material from the father and half from the mother. In this manner, human genetic material is effectively shuffled with each generation, producing the diversity seen in the world today.
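
The arithmetic behind these figures can be checked directly:

```python
# 23 chromosome pairs, two choices per pair.
gametes_per_parent = 2 ** 23              # 8,388,608 combinations per egg or sperm (~8.4 million)
zygote_combinations = gametes_per_parent ** 2
print(gametes_per_parent)                 # 8388608
print(zygote_combinations)                # 70368744177664  (~70 trillion)
```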

4.2.8 Introductory Genetic Principles


Genetics involves the study of patterns of inheritance of specific traits between parents and
offspring. Rather than study inheritance patterns in single families, much of genetics today
involves examining populations. Populations are groups of individuals, and they are often
classified by grouping together those sharing a common ancestry. Population genetics assesses
variation in the specific traits under consideration (e.g., STR alleles) among a group of
individuals residing in a given area at a given time. Thus, population genetics is the study of
inherited variation and its modulation in time and space. It is an attempt to quantify the
variation observed within a population group or among different population groups in terms of
allele and genotype frequencies. Great genetic variation exists within species at the individual
nucleotide level. For example, in humans several million nucleotides can differ between
individuals. In addition, recent comparative genomic studies have revealed that entire sections
of chromosomes can be deleted or duplicated. The genetic difference between individuals
within human population groups is usually much greater than the average difference between
populations.

4.2.8.1 Laws of Mendelian genetics


Gregor Mendel (1822 – 1884) is credited with being the ‘Father of Modern Genetics’ for his
mid-19th-century studies tracking multiple characteristics of pea plants through several
successive generations. Mendel correctly determined that each individual has two forms of
each trait (gene or DNA sequence) — one coming from each parent. The observations of
heredity that Mendel first described are now commonly referred to as Mendel’s laws of
heredity or Mendelian inheritance. These two laws are the law of segregation and the law of
independent assortment.
These basic laws or principles of genetics first described by Mendel form the foundation for
interpretation of DNA evidence. The law of segregation states that the two members of a gene
pair segregate (separate) from each other during sex-cell formation (meiosis), so that one-half
of the sex cells carry one member of the pair and the other one-half of the sex cells carry the
other member of the gene pair. In other words, chromosome pairs separate during meiosis so
that the sex cells (gametes) become haploid and possess only a single copy of a chromosome.
A mother contributes a single member of each of the 22 autosomal chromosomes, an X
chromosome, and her mitochondrial DNA (mtDNA). A father contributes a single member of
each of the 22 autosomal chromosomes and either an X or a Y chromosome (and no mtDNA).
Thus, the sex chromosome from the father’s sperm (X or Y) when it combines with the
mother’s egg (containing an X) determines the sex of the zygote — either X, X for female or
X, Y for male.
The law of independent assortment states that different segregating gene pairs behave
independently due to recombination where genetic material is shuffled between generations.
The law of segregation and the law of independent assortment are the basis for linkage
equilibrium and Hardy – Weinberg equilibrium that are tested for when examining DNA
population databases.
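
As a minimal sketch of what testing for Hardy-Weinberg equilibrium involves, the snippet below computes the expected genotype proportions p^2, 2pq and q^2 for a two-allele locus and scales them to a sample size; the allele frequency of 0.7 and the sample size of 1000 are hypothetical.

```python
# Expected genotype counts under Hardy-Weinberg equilibrium for a two-allele locus.
def hardy_weinberg_expected(p, n_individuals):
    q = 1.0 - p
    return {"AA": p * p * n_individuals,
            "Aa": 2 * p * q * n_individuals,
            "aa": q * q * n_individuals}

print(hardy_weinberg_expected(p=0.7, n_individuals=1000))
# {'AA': 490.0, 'Aa': 420.0, 'aa': 90.0}  -- compared against observed counts in a database
```
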
Fig: Human genome and inheritance. The haploid complement of chromosomes from a
female’s egg combines with the haploid chromosomal complement of a male’s sperm to
create a fully diploid zygote, which eventually develops into a child whose non-gamete
cells each contain the same genome

4.2.8.2 Genetic pedigrees: a method to represent inheritance in families


Inheritance patterns are typically represented with pedigrees that are drawn to reflect family
relationships. The oldest generation, in this case the grandparents, is shown at the top of a
genetic pedigree. Males are represented as squares and females as circles. A horizontal line
connects two biological parents. A vertical line connects offspring to their parents. A diagonal
line through a square or circle indicates that the individual depicted is deceased (e.g., individual
#17). The genotypes from a single genetic marker — in this case, the STR locus FGA — are
shown within the squares and circles representing the family members in this pedigree. People
represented on this pedigree are labelled from #1 through #17 with the small number to the
upper left-hand corner of each square or circle. Individual #2 (the grandmother) has a genotype
of ‘23.2,25.’ The ‘23.2’ is a variant allele that typically occurs in less than 0.3% of the
population. Note that this 23.2 allele is transmitted to her son (individual #3), but not to her
two daughters (individuals #4 and #5). They get her other allele — the ‘25’, which typically
occurs around 7% of the time in Caucasian individuals. Children will get either one or the other
of their parent’s two alleles at every locus. Note also that the grandmother’s ‘23.2’ allele was
passed on to her grandson (individual #10) and her granddaughter (individual #11).
However, her other grandchildren (individuals #12 through #16) did not receive the 23.2
because neither of their parents had that particular allele. In this situation, all three children
(#12, #13, and #14) have different genotype combinations. As can be seen with this example,
if the children’s genotypes are known, then it is possible to determine the parent’s alleles and
genotypes. This is how parentage testing and other kinship analyses are performed.

Fig: (a) A three-generation family pedigree with results from a single genetic locus (STR
marker FGA). Squares represent males and circles females. (b) A Punnett square
showing the possible allele combinations for offspring of individuals #1 and #2 in the
pedigree. Individual #3 is 22,23.2 and inherited the 22 allele from his father and the 23.2
allele from his mother. (c) A Punnett square for one of the families in the second
generation showing possible allele combinations for offspring of individuals #4 and #7
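As a concrete illustration of the Punnett-square logic described above, the short Python sketch below enumerates the genotypes a child could inherit at a single STR locus. The grandmother's 23.2,25 genotype is taken from the text; the grandfather's 22,24 genotype is hypothetical (only his transmitted 22 allele is stated above).

```python
# Illustrative sketch only: enumerating possible child genotypes at one STR locus.
# The grandmother's genotype (23.2,25) comes from the text; the grandfather's
# second allele (24) is a hypothetical value used purely for illustration.
from itertools import product

def punnett(parent1_alleles, parent2_alleles):
    """Return each possible child genotype at a single locus with its probability."""
    outcomes = {}
    for a, b in product(parent1_alleles, parent2_alleles):  # one allele from each parent
        genotype = tuple(sorted((a, b)))
        outcomes[genotype] = outcomes.get(genotype, 0.0) + 0.25
    return outcomes

grandfather = ("22", "24")     # '24' is hypothetical
grandmother = ("23.2", "25")   # from the pedigree description above
for genotype, prob in punnett(grandfather, grandmother).items():
    print(",".join(genotype), prob)
```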
4.3 Techniques of DNA

DNA is quite literally ‘the stuff of life’ as it contains all the information that makes us who we
are. Indeed, some biologists suggest that the sole reason for any organism’s existence is to
ensure that its DNA is replicated and survives into the next generation. It is found in the nucleus
of cells and also the mitochondria. Therefore, with the exception of certain specialized cells
that lack these organelles, such as mature red blood cells, every cell in the body contains DNA.
Furthermore, unless heteroplasmy occurs (see later), this DNA is identical in every cell, does
not change during a person’s lifetime and is unique (identical twins excepted) to that individual.
DNA is composed of four bases (adenine, thymine, cytosine and guanine) together with phosphate
and sugar molecules. Nuclear DNA takes the form of a ladder twisted into the shape of a double
helix in which the rails are composed of alternating sugar and phosphate molecules whilst the
paired bases act as rungs joining the two rails together. Adenine always pairs with thymine and
cytosine always pairs with guanine. Within the nucleus, DNA is found in structures called
chromosomes. Human cells contain 23 pairs of chromosomes and these vary in shape and size.
Twenty - two of these pairs are referred to as the ‘autosomal chromosomes’ and these contain
the information that directs the development of the body (body shape, hair colour etc.). The
remaining pair of chromosomes are the X and Y ‘sex chromosomes’ that control the
development of the internal and external reproductive organs. Each chromosome contains a
strand of tightly coiled DNA. The DNA strand is divided into small units called genes and each
gene occupies a particular site on the strand called its ‘locus’ (plural ‘loci’). The total genetic
information within a cell is referred to as its ‘genome’. There are about 35 000 – 45 000 genes
and, on average, they each comprise about 3000 nucleotides although there is a great deal of
variation. These genes code for proteins that determine our hair and eye colour, the enzymes
that digest our food and every hereditable characteristic. Surprisingly, only a small proportion
of the genome actually codes for anything, and between these coding regions lie long stretches
of repetitive non-coding DNA that exhibit a great deal of variability.
Each gene may exist in alternative forms, called 'alleles', and an individual carries two of them,
one on each chromosome of the pair. If DNA profiling detects only one allele, this is usually interpreted as a
consequence of a person inheriting the same allele from both parents. If three or more alleles
are detected then this is an indication that the sample contains DNA from more than one
individual. Mitochondrial DNA is arranged slightly differently to nuclear DNA and will be
dealt with separately.
4.3.1 DNA sampling
Because virtually all of our cells contain DNA and we lose cells all the time, for example
whenever we blow our nose, brush our teeth or comb our hair, it is possible to isolate DNA
from a wide variety of sources. Indeed, it is so easy to leave a trail of DNA that crime scene
investigators must wear masks, disposable over-suits and over-shoes to avoid
contaminating the location. DNA contamination can also occur via mortuary instruments so it
is preferable that samples are taken from a body before it is moved. Similarly, all DNA samples
need to be kept apart from the moment they are collected to avoid the possibility of cross
contamination. For example, if samples from a victim and a suspect are transported in the same
container (even if they are in separate bags) or processed at the same time, there is a risk of
cross contamination. As the sensitivity of DNA analysis increases, particularly with regard to
techniques such as low copy number STR analysis, the risks of contamination during
collection, storage and processing increase and the results need to be interpreted with care (see
later). For example, DNA can be recovered from bed sheets even if a person slept on them for
only one night. This means that one could prove that a man and a woman shared a bed but not
that they shared it at the same time. The collection, transport and/or storage of liquid and tissue
samples is sometimes problematic but this can be overcome by using commercially available
FTA® cards that are produced by Whatman International Ltd. These cards contain chemicals
that lyse any cells in the sample and immobilize and stabilize the DNA and RNA that is
released. The cards also contain chemicals that preserve the DNA and RNA and thereby allow
the samples to be stored at room temperature for long periods. The cards can be pressed against
liquid samples (e.g., saliva or semen) or the liquid can be dropped onto them. Tissue samples
or blood clots can be squashed onto the cards. When required for analysis the DNA or RNA
can be readily extracted. It should be noted that under the UK Human Tissue Act 2004
it is illegal to take a sample of a person's DNA without their consent except under certain
conditions (e.g., to prevent or detect a crime or to facilitate a medical diagnosis). It is therefore
illegal for a man to surreptitiously test DNA samples from his offspring to determine whether
he really is the father.
Potential sources of human DNA for forensic analysis

Body fluids: blood, semen, saliva, urine, faeces, vomit


Tissues: skin, bone, hair, organs, fingernail scrapings
Fingerprints
Weapons
Bites
Discarded chewing gum
Drug packages spat out after storage in the mouth
Cigarette butts
Handkerchiefs and discarded tissues
Used envelopes and stamps
Cutlery
Used cups, mugs, bottled or canned drinks
Clothing
Hairbrushes
Toothbrushes
Shoes and other footwear
Plasters
Used syringes

4.3.2 A Comparison of DNA Typing Methods


Technologies used for performing forensic DNA analysis differ in their ability to differentiate
two individuals and in the speed with which results can be obtained. The speed of forensic
DNA analysis has dramatically improved over the years. DNA testing that previously took 6
or 8 weeks can now be performed in a few hours.
The human identity testing community has used a variety of techniques including single-locus
probe and multi-locus probe restriction fragment length polymorphism (RFLP) methods and
more recently polymerase chain reaction (PCR)-based assays. Numerous advances have been
made in the last quarter of a century in terms of sample processing speed and sensitivity. Instead
of requiring large bloodstains with well-preserved DNA, tiny amounts of sample — as little as
a few cells in some forensic cases — can yield a useful DNA profile. The various DNA markers
have been divided into four quadrants based on their power of discrimination, that is, their
ability to discern the difference between individuals, and the speed at which they can be
analyzed. New and improved methods have developed over the years such that tests with a
high degree of discrimination can now be performed in a few hours.
An ABO blood group determination, which was the first genetic tool used for distinguishing
between individuals, can be performed in a few minutes but is not very informative. There are
only four possible groups that are typed — A, B, AB, and O — and typically more than 80%
of the population is either type O or type A. Thus, while the ABO blood groups are useful for
excluding an individual from being the source of a crime scene sample, the test is not very
useful when an inclusion has been made, especially if the sample is type O or type A. On the
other extreme, multi-locus RFLP probes are highly variable between individuals but require a
great deal of labour, time, and expertise to produce and interpret a DNA profile. Analysis of
multi-locus probes (MLP) cannot be easily automated, a fact that makes them undesirable as
the demand for processing large numbers of DNA samples has increased. Deciphering sample
mixtures, which are common in forensic cases, is also a challenge with MLP RFLP methods,
which is the primary reason that laboratories turned to single locus RFLP probes used in serial
fashion.

Fig: Comparison of DNA typing technologies. Forensic DNA markers are plotted in
relationship to four quadrants defined by the power of discrimination for the genetic
system used and the speed at which the analysis for that marker may be performed
The best solution in the search for a high power of discrimination and a rapid analysis speed
has been achieved with short tandem repeat (STR) DNA markers. Because STRs by definition
are short, three or more can be analyzed at a time. Multiple STRs can be examined in the same
DNA test, or ‘multiplexed’. Multiplex STRs are valuable because they can produce highly
discriminating results and can successfully measure sample mixtures and biological materials
containing degraded DNA molecules. This method can significantly reduce the amount of
DNA required for analysis, thereby conserving more of the irreplaceable DNA collected from
forensic evidence for use by scientists from opposing counsel or for additional specialized
DNA testing. In addition, the detection of multiplex STRs is automated, which is an important
benefit as demand for DNA testing increases. Mitochondrial DNA (mtDNA), which is shown
in the quadrant with the lowest power of discrimination and longest sample processing time,
can be very helpful in forensic cases involving severely degraded DNA samples or when
associating maternally related individuals. There are instances when nuclear DNA is either so
degraded or present in such low amounts in forensic evidence samples (e.g., hair shafts or
putrefied bones or teeth) that it is either untestable or undetectable and mtDNA is the only
viable alternative forensic DNA technology that can produce interpretable data for forensic
comparisons. In other cases, such as the identification of skeletal remains, mtDNA is often the
preferred method for comparing DNA from the bone evidence to the known reference mtDNA
from potential family members to determine whether the mtDNA matches. In many situations,
multiple technologies may be used to help resolve an important case or identify victims of mass
fatalities, such as those from the World Trade Center collapse. When early methods for DNA
analysis are superseded by new technologies, there is usually some overlap as forensic
laboratories implement the new technology. Validation of the new methods is crucial to
maintaining high-quality results. The purpose of this section is to briefly
review the historical methods mentioned above and to discuss the advantages and limitations
of each technology. By seeing how the field has progressed during the past few decades, we
can perhaps gain a greater understanding of where the field of forensic DNA typing is today
and where it may be headed in the future.

4.3.3 The Pre-DNA Years (1900 – 1985)


As mentioned, the use of DNA for forensic and human identification purposes began with the
work of Alec Jeffreys in the early 1980s. However, prior to DNA being available, forensic
laboratories utilized other genetic markers to try to assess whether or not someone could be
excluded as the contributor to evidence recovered from a crime scene. While these methods
may seem crude by today’s standards, it is important to keep in mind that they were the best
available at the time. Although the early methods had a very low power of discrimination, they
were still quite capable of excluding individuals who did not match when the question (Q) and
known (K) samples were compared.

4.3.3.1 Blood group testing


In 1900, Karl Landsteiner, an Austrian researcher at the University of Vienna, discovered that
blood would sometimes agglutinate (i.e., clump together) when blood from different people
was mixed together. His work, which resulted in the 1930 Nobel Prize in Medicine, eventually
identified four blood types: O, A, B, and AB. Although there is some fluctuation between
different population groups, generally speaking type O is observed about 43% of the time, type
A 42% of the time, type B 12%, and type AB 3%. As anyone who has received a blood
transfusion knows, the donor and recipient must have compatible ABO blood types to avoid
transfusion reactions, such as clumping of incompatible red blood cells, which can be fatal.
Blood groups are related to antigen polymorphisms present on the surface of red blood cells.
These antigens may be protein, carbohydrate, glycoprotein, or glycolipid differences that exist
between people. The antigens are inherited from an individual’s parents and therefore can be
used to track paternity. Antibody-based serological tests can be developed to decipher the
various blood group antigenic alleles. ABO blood types became the first genetic evidence used
in court. Leone Lattes, a professor at the Institute of Forensic Medicine in Turin, Italy,
developed methods for typing dried bloodstains with antibody tests for the ABO blood groups
and first used this genetic data in Italian courts starting around 1915. Over the next few decades,
the use of ABO typing in paternity disputes and forensic cases spread to England, other
countries in Europe, and the United States.
Fig: Blood type table showing phenotypic inheritance patterns of ABO blood group.
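A rough sense of how uninformative ABO typing is on its own can be obtained from the frequencies quoted above. The Python sketch below is an illustrative calculation, not a forensic tool: it computes the probability that two unrelated people share an ABO type and the corresponding power of discrimination.

```python
# Illustrative sketch: discriminating power of ABO typing using the approximate
# type frequencies quoted in the text (O 43%, A 42%, B 12%, AB 3%).
abo_freq = {"O": 0.43, "A": 0.42, "B": 0.12, "AB": 0.03}

# Chance that two unrelated people share a type = sum of squared type frequencies
random_match_probability = sum(f ** 2 for f in abo_freq.values())
power_of_discrimination = 1 - random_match_probability

print(f"P(two unrelated people share an ABO type) is about {random_match_probability:.3f}")
print(f"Power of discrimination is about {power_of_discrimination:.3f}")
```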

4.3.3.2 Forensic protein profiling


Amino acid sequences in some proteins vary among individuals in the human population. The
use of multiple protein polymorphisms can lead to modest powers of discrimination where the
chance of two unrelated people matching is one in several hundred. Prior to DNA testing
becoming available in the mid to late 1980s, protein profiling was performed in forensic
biology laboratories as a crude way to potentially distinguish Q from K samples. The human
red blood cell (erythrocyte) as well as blood serum has a number of isoenzymes. These
isoenzymes are multiple forms of a protein enzyme that can catalyze the same biochemical
reaction in spite of slightly different amino acid sequences. However, there are typically only
two or three forms of each isoenzyme, making them fairly poor at indicating differences
between people. Starch gel, agarose gel, and polyacrylamide gel electrophoresis were used
early on to separate these proteins into distinguishable alleles.

Table: A few of the isoenzymes used during the 1970s and 1980s for forensic protein profiling
By the early 1980s many labs began using isoelectric focusing (IEF) polyacrylamide gel
electrophoresis, which is capable of higher resolving power than protein electrophoresis
because it produces sharper bands. Combined with silver staining (the same detection technique
used a decade later with D1S80 and early STR systems; see below), IEF can detect fairly low
levels of proteins in forensic samples. Nevertheless, proteins are not as variable as DNA nor
are they as stable in forensic evidence.

4.3.4 The First Decade Of DNA Testing (1985 – 1995)


Alec Jeffreys's three Nature publications in 1985, showing that repeated DNA sequences in
humans could be used to track genetic inheritance and differentiate between people, are
considered by most people to mark the beginning of routine restriction fragment length
polymorphism (RFLP) DNA testing. A few years earlier Wyman and White had shown that an
RFLP marker was polymorphic and demonstrated Mendelian inheritance across three
generations, but it was Jeffreys’s work with variable number of tandem repeats (VNTRs) that
enabled RFLP DNA testing to take hold. The results with Jeffreys’s multi-locus probes looked
similar to bar codes. The patterns produced were termed DNA fingerprints because they were
so variable among the individuals tested that the patterns were thought to be unique. Realizing
the value to human identification work, Jeffreys teamed up with Peter Gill and David Werrett
of the British Home Office/Forensic Science Service to publish the first forensic application of
‘DNA fingerprints’ in December 1985. About this same time another technique for analyzing
DNA was being developed at Cetus Corporation near San Francisco, California. Kary Mullis
had invented the polymerase chain reaction (PCR) as a means to make copies of any desired
region of DNA. PCR enables sensitive detection that is needed for forensic applications.
However, in the mid-1980s, there were no polymorphic genetic marker systems characterized
yet for use with the amplification power of PCR. It would take several years before PCR-based
methods came to the forensic laboratory and the courtroom.

4.3.4.1 RFLP-Based DNA Testing


The original RFLP process often took several weeks to complete. First, a sample of blood or
some other biological material was collected. The DNA was then extracted from the cells by
breaking open the cell membranes and removing the protein packaging around the DNA. Next,
a restriction enzyme was added to cut the long, extracted DNA molecules into smaller pieces
much like a pair of scissors that was capable of finding and cutting specific DNA sequences.
The most commonly used restriction enzyme in the United States was Hae III, which
recognizes the sequence GG/CC. Every time ‘GGCC’ appears in the genome of the sample
being processed, two DNA fragments will be produced — one ending in ‘GG’ and the other
beginning in ‘CC’ (the reverse would be true on the other strand). The chopped-up DNA
fragments were then separated by fragment size on an agarose gel. Following the DNA size
separation, a Southern blot was performed. Here, the separated DNA fragments were
transferred from the gel to a nylon membrane by placing the membrane in contact with the gel.
By using a highly alkaline solution, the DNA strands were rendered single stranded. One of
the strands was then fixed to the membrane by crosslinking the DNA onto the membrane with
UV light (or by using a positively charged membrane). A radioactive or chemiluminescent
probe, which contained a VNTR sequence, was then allowed to hybridize to the DNA attached
on the nylon membrane at complementary sequence positions. Stringent binding occurred at
appropriate hybridization temperature and ionic strength — allowing the labelled probe to find
its complementary sequence with complete fidelity. Excess probe was then removed with
several washes of the membrane. Finally, the location of the probe was noted by placing the
membrane in contact with x-ray film. The resulting positions of the bands in the autoradiogram
were then recorded through scanning the ‘autorad’ and comparing band positions to relative
molecular mass markers analyzed in parallel. For each additional VNTR probe being detected,
the membrane would be stripped with alkaline solution washes to remove the previously bound
probe and the process of rehybridizing another probe would be repeated. Since it took several
days to a week to develop the banding patterns for each VNTR probe, the entire RFLP process
could involve a month or longer.
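The fragment-generating step of this process can be illustrated with a short Python sketch. It cuts a made-up sequence at every HaeIII recognition site (GG/CC) and reports the resulting fragment lengths; it is a toy illustration of the principle, not of any laboratory protocol.

```python
# Illustrative sketch only: the fragment-generating step of RFLP analysis.
# HaeIII recognises GGCC and cuts between GG and CC, so each occurrence of
# 'GGCC' splits the strand after the second G. The sequence below is made up.
import re

def haeiii_digest(sequence):
    """Return the fragments produced by cutting 'sequence' at every GG/CC site."""
    fragments = []
    start = 0
    for site in re.finditer("GGCC", sequence):
        cut = site.start() + 2          # cut between GG and CC
        fragments.append(sequence[start:cut])
        start = cut
    fragments.append(sequence[start:])  # final fragment after the last site
    return fragments

seq = "ATTAGGCCTTACGGCCGGATCGGCCAA"
for frag in haeiii_digest(seq):
    print(len(frag), frag)
```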

4.3.4.2 Multi-locus VNTR probes


In the original Jeffreys method, a single probe labelled multiple VNTR loci. These multi-locus
probes bound to various regions in the human genome under low hybridization stringency
conditions and generated a complex pattern resembling the bar codes used in supermarkets. As
noted above, this process was originally referred to as ‘DNA fingerprinting’ due to the ability
to individualize someone with the multi-banded patterns. However, in forensic samples, where
there is a possibility of mixtures from multiple contributors, a multiple-locus system could
generate patterns that would be difficult to fully interpret. Thus, most RFLP typing moved to
single-locus probes, where only one (a homozygote) or two (a heterozygote) alleles were
detected at each VNTR locus.
4.3.5 Early PCR-Based DNA Testing
The polymerase chain reaction provides the capability to copy and label a specific DNA
sequence (or multiple sequences simultaneously) in order to make that sequence detectable.
Over the years, a number of PCR-based tests have been developed to help differentiate
individuals tested.

4.3.5.1 Polymerase Chain Reaction (PCR)


Kary Mullis invented the polymerase chain reaction (PCR) in 1983 and it has since become
one of the most powerful techniques in molecular biology. It is an enzymatic process that
enables a particular sequence of the DNA molecule to be isolated and amplified (copied)
without affecting the surrounding regions. This makes it very useful in forensic casework in
which DNA samples are frequently limited in both quantity and quality. For instance, PCR has
been applied to the identification of DNA from saliva residues on envelopes, stamps, drink
cans, and cigarette butts. It also has the advantages of being sensitive and rapid.
This molecular ‘Xeroxing’ process involves heating and cooling samples in a precise thermal
cycling pattern over ~30 cycles. During each cycle, a copy of the target DNA sequence is
generated for every molecule containing the target sequence. The boundaries of the amplified
product are defined by oligonucleotide primers that are complementary to the 3′ ends of the
sequence of interest. In an ideal situation, after 32 cycles, approximately a billion copies of the
target region on the DNA template have been generated. This PCR product, sometimes referred
to as an 'amplicon', is then in sufficient quantity that it can be easily measured by a variety of
techniques. PCR is commonly performed with a sample volume in the range of 5 to 100 μL.
At such low volumes, evaporation can be a problem and accurate pipetting of the reaction
components can become a challenge. On the other hand, larger solution volumes lead to
thermal equilibrium issues for the reaction mixture because it takes longer for an external
temperature change to be transmitted to the center of a larger solution than a smaller one.
Therefore, longer hold times are needed at each temperature, which leads to longer overall
thermal cycling times. Most molecular biology protocols for PCR are thus in the 20- to 50-μL
range. The sample is pipetted into a variety of reaction tubes designed for use in PCR thermal
cyclers. The most common tube in use with 20- to 50-μL PCR reactions is a thin-walled 0.2-
mL tube. These 0.2-mL tubes can be purchased as individual tubes with and without attached
caps or as ‘strip-tubes’ with 8 or 12 tubes connected together in a row. In higher throughput
labs, 96-well or 384-well plates are routinely used for PCR amplification.
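The doubling arithmetic behind this 'molecular Xeroxing' is easy to check. The Python sketch below assumes perfect doubling by default (2 to the power of the number of cycles) and also shows the effect of a lower, assumed amplification efficiency; note that 30 ideal cycles already give roughly a billion copies, so real reactions reach 'approximately a billion' only after a few additional cycles.

```python
# Illustrative sketch only: the copy-number arithmetic of PCR. With perfect
# doubling, n cycles give 2**n copies per starting template; real reactions are
# less efficient, so an efficiency factor below 1.0 is also shown for comparison.

def pcr_copies(starting_templates, cycles, efficiency=1.0):
    """Approximate amplicon count after a number of cycles at a given efficiency."""
    return starting_templates * (1 + efficiency) ** cycles

print(f"30 ideal cycles: {pcr_copies(1, 30):.2e} copies")            # roughly a billion
print(f"32 ideal cycles: {pcr_copies(1, 32):.2e} copies")            # roughly four billion
print(f"32 cycles at 85% efficiency: {pcr_copies(1, 32, 0.85):.2e} copies")
```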
4.3.5.2 PCR components
A PCR reaction is prepared by mixing several individual components and then adding
deionized water to achieve the desired volume and concentration of each of the components.
Commercial kits with premixed components may also be used for PCR. These kits have greatly
simplified the use of PCR in forensic DNA laboratories. The most important components of a
PCR reaction are the two primers, which are short DNA sequences that precede or ‘flank’ the
region to be copied. A primer acts to identify or ‘target’ the portion of the DNA template to be
copied. It is a chemically synthesized oligonucleotide that is added in a high concentration
relative to the DNA template to drive the PCR reaction. Some knowledge of the DNA sequence
to be copied is required in order to select appropriate primer sequences. The other components
of a PCR reaction consist of template DNA that will be copied, dNTP building blocks that
supply each of the four nucleotides, and a DNA polymerase that adds the building blocks in
the proper order based on the template DNA sequence. Thermal stable polymerases that do not
fall apart during the near-boiling denaturation temperature steps have been important to the
success of PCR. The most commonly used thermal stable polymerase is Taq, which comes
from a bacterium named Thermus aquaticus that inhabits hot springs. When setting up a set of
samples that contain the same primers and reaction components, it is common to prepare a
‘master mix’ that can then be dispensed in equal quantities to each PCR tube. This procedure
helps to ensure relative homogeneity among samples. Also, by setting up a larger number of
reactions at once, small pipetting volumes can be avoided, which improves the accuracy of
adding each component (and thus the reproducibility of one’s method). When performing a
common test on a number of different samples, the goal should be to examine the variation in
the DNA samples, not variability in the reaction components used and the sample preparation
method.

4.3.5.3 Thermal cyclers


The instrument that heats and cools a DNA sample in order to perform the PCR reaction is
known as a thermal cycler. Precise and accurate sample heating and cooling are crucial to PCR
in order to guarantee consistent results. There are a wide variety of thermal cycler options
available from multiple manufacturers. These instruments vary in the number of samples that
can be handled at a time, the size of the sample tube and volume of reagents that can be handled,
and the speed at which the temperature can be changed. Prices for thermal cycling devices
range from a few thousand dollars to more than $10,000. Perhaps the most prevalent thermal
cycler in forensic DNA laboratories is the GeneAmp 9700 from Applied Biosystems (Foster
City, CA). The '9700' can heat and cool 96 samples in an 8 × 12 well microplate format at a
rate of approximately 1°C per second. The 9700 uses 0.2-mL tubes with tube caps. These tubes
may be attached together in strips of 8 or 12, in which case they are referred to as ‘strip-tubes’.
Alternatively, plastic ‘plates’ containing 96 ‘wells’ can be used that are covered or sealed once
the PCR reaction mix has been added. Some work has also been performed with PCR
amplification directly from sample spots on microscope slides. Thermal cyclers capable of amplifying
384 samples or more at one time are now available. The Dual 384-well GeneAmp® PCR
System 9700 can run 768 reactions simultaneously on two 384-well sample blocks. Thermal
cyclers capable of high sample volume processing are valuable in production settings but are
not widely used in forensic DNA laboratories.

Fig: Photograph of a GeneAmp 9700 thermal cycler

4.3.5.4 Quantitative (Real Time) PCR


This technique is based on the PCR process and is designed to both quantify and amplify the
targeted DNA. Two of the commonest means of quantification are the inclusion of a fluorescent
dye (e.g., SYBR® Green) in the PCR reaction that intercalates with the DNA as it is produced,
and the TaqMan® assay. In the intercalation assay, the binding of SYBR® Green to double-stranded
DNA results in an increase in fluorescence; therefore, as more amplicons are produced,
the greater the fluorescence detected. Because the dye binds to any double-stranded DNA
molecule, the intercalation method is non-specific (i.e. it will not distinguish between DNA
molecules). In the TaqMan® assay, oligonucleotide probes (TaqMan® probes) are added that
bind to a specific internal region of DNA between the forward and reverse PCR primers. The
probes have a 'reporter' dye attached to their 5′ end and a 'quencher' dye attached to their 3′
end. When a high-energy dye (the reporter) is in close proximity to a low-energy dye (the
quencher), energy is transferred from the reporter to the quencher; this is what happens in the
intact probe and it results in the fluorescence of the reporter dye being very low or
undetectable. When, during the PCR process, the Taq DNA polymerase replicates a
template to which a TaqMan® probe is bound, the enzyme (which has 5′-nuclease activity)
cleaves the probe, thereby separating the reporter dye and the quencher dye; hence the fluorescence
of the reporter dye increases whilst that of the quencher dye decreases. The
method is extremely sensitive and can detect as little as a twofold increase in the level of a
DNA sequence. Because custom-designed primers are used, this method is more specific than
the intercalation method but, in both cases, with each cycle of the PCR process more DNA is
produced and this is measured as an increase in fluorescence. The DNA product is therefore
'quantified' as it accumulates in 'real time', hence the terms 'real-time' and 'quantitative'
PCR (various abbreviations are used, including qPCR, RT-PCR and QRT-PCR). Once
sufficient DNA is produced it can be sequenced or used for Southern blotting.
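One common way such real-time fluorescence data are converted into a DNA quantity, although not spelled out above, is via the threshold cycle (Ct) and a standard curve. The Python sketch below is a minimal illustration under assumed curve parameters; the slope and intercept values are invented, not taken from any particular assay.

```python
# Illustrative sketch only: threshold-cycle (Ct) quantification against an assumed
# standard curve of the form  Ct = slope * log10(quantity) + intercept.
# A slope of about -3.32 corresponds to roughly 100% amplification efficiency.

def starting_quantity(ct, slope=-3.32, intercept=38.0):
    """Estimate the starting template quantity implied by a Ct value."""
    return 10 ** ((ct - intercept) / slope)

for ct in (20.0, 25.0, 30.0):
    print(f"Ct {ct:.1f} -> roughly {starting_quantity(ct):.2e} template units")
```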

4.3.5.5 Short Tandem Repeat (STR) markers


Short tandem repeats (STRs), also referred to as ‘microsatellites’ or ‘simple sequence repeats
(SSRs)', are brief lengths of the non-coding region of the human genome consisting of less
than 400 base pairs (hence 'short') in which there are 3–15 repeated units, each of 3–7 base
pairs (hence ‘tandem repeats’). These STR sequences, or ‘markers’, can be divided into three
categories: ‘simple’, ‘compound’ and ‘complex’. Simple STRs are those in which repeats are
of identical length and sequence units. Compound STRs consist of two or more adjacent simple
repeats whilst complex STRs have several repeat blocks of different unit length and variable
intervening sequences. STRs occur on all 22 pairs of autosomal chromosomes and the X and
Y sex chromosomes. STRs can vary greatly between individuals, this diversity resulting from
the effects of mutation, independent chromosomal variation and recombination. However,
STRs found on the Y chromosome exhibit less diversity than those on other chromosomes
because they do not undergo recombination. Consequently, STR diversity on Y chromosomes
results solely from mutation. There are over 2000 STR markers suitable for genetic mapping
but only a few of them are used routinely for forensic DNA profiling. In the UK, the Forensic
Science Service (FSS) currently uses the SGMplus™ system (SGM+) that utilizes 10
autosomal STR markers whilst in America the FBI uses 13 markers. The DNA profiles are then
stored on computer databases – in the UK this is the NDNAD whilst in America it is the
Combined DNA Index System (CODIS). The use of a standard set of markers and
computerized systems facilitates comparisons between the DNA profiles of suspects, convicted
offenders, unsolved crimes and missing persons. If all 10 (or 13) STR loci in two DNA samples
are found to have identical lengths then this is compelling evidence that they originated from
the same person. Commercial testing kits are available for these loci and the kits also include
a marker at the amelogenin locus to enable sex determination. Amelogenin is a substance
involved in the organization and biomineralization of enamel in developing teeth. In humans,
the gene is expressed on both sex chromosomes but that on the X chromosome is six base pairs
shorter than that on the Y chromosome. Consequently, following PCR and electrophoresis,
males, being heterozygous (XY), express two peaks (or bands) whilst females, being
homozygous (XX), express a single peak. The test is not foolproof and problems can arise if
there is a deletion of the amelogenin gene on the Y chromosome – an important consideration
in some ethnic groups, such as Malay and Indian populations.
Because STR analysis relies on the identification of sequences that are much shorter than those
required for RFLP analysis, it is less vulnerable to problems associated with DNA degradation.
STR analysis can therefore be effective on older body fluid stains or corpses at a later stage of
decomposition than would be the case with RFLP analysis. However, DNA degradation can
still present difficulties.
For example, peak heights may be reduced thereby making it difficult to distinguish them from
background ‘noise’. Indeed, some peaks may disappear entirely whilst others remain visible
thereby resulting in an inaccurate profile. Evidence of DNA degradation is often exhibited by
a progressive decline in peak height with increasing sequence length. This is because longer
sequences are more vulnerable to the effects of degradation. If the profile contains DNA from
more than one individual, this problem can be exacerbated because the two (or more) DNA
samples may not degrade at the same rate or in an identical manner. If one of a pair of alleles
fails to be recorded, this is referred to as ‘allele dropout’. Consequently, a heterozygous
individual may appear homozygous at one or more gene loci. Similarly, an additional allele
may be observed – this is referred to as 'allele drop-in'. Allele drop-in results from
contamination and becomes obvious when the allele does not appear in repeated independent
PCR reactions.
The shortness of the STR markers means that only small amounts of DNA are required
(although if the amounts are extremely small, problems can arise). The marker sequences can
be easily amplified using PCR and their shortness also reduces the risk of differential
amplification. Because PCR amplification occurs in a non-linear manner, reproducibility is
affected by stray impurities and the shorter the sequence, the less risk there is of this occurring.
Moving the forward and reverse primers in as close as possible to the STR sequence can further
reduce the size of the STR amplicons. This procedure is known as mini-STR analysis. Butler
and his research group have produced a set of mini-STR primers that allows the maximum
reduction in size for all 13 CODIS STR loci and also several of those used in commercial STR
kits. This approach is useful where there are problems with conventional STR analysis through
allele dropout and reduced sensitivity of larger STR alleles.
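The locus-by-locus comparison underlying STR matching, together with a crude flag for the allele dropout discussed above, can be sketched as follows. The locus names and allele values are invented for illustration and the dropout rule is deliberately simplistic.

```python
# Illustrative sketch only: comparing a crime-scene STR profile to a reference
# profile locus by locus, with a naive flag for possible allele dropout.
# Locus names and allele designations are invented.

def compare_profiles(scene, reference):
    """Classify each locus as 'match', 'possible dropout' or 'exclusion'."""
    results = {}
    for locus, ref_alleles in reference.items():
        scene_alleles = scene.get(locus, set())
        if scene_alleles == ref_alleles:
            results[locus] = "match"
        elif scene_alleles and scene_alleles < ref_alleles:
            # only one of the reference alleles seen: could be dropout in a degraded sample
            results[locus] = "possible dropout"
        else:
            results[locus] = "exclusion"
    return results

reference = {"FGA": {"22", "23.2"}, "TH01": {"6", "9.3"}, "VWA": {"16", "17"}}
scene     = {"FGA": {"22", "23.2"}, "TH01": {"6"},        "VWA": {"16", "18"}}
print(compare_profiles(scene, reference))
```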

4.3.5.6 Y-short tandem repeat markers


Over 200 STR markers have been identified on the human Y chromosome. Between nine and
eleven of these markers are used routinely in forensic science and commercial kits are currently
available for at least six of them. They are particularly useful in cases of rape and sexual assault
where there are mixed male and female DNA profiles and therefore separating the two is a
major challenge. Unlike conventional STR analysis, there is typically only one peak or band
for each STR type in Y-STR analysis and these can only originate from DNA from a male. In
the case of multiple sexual assaults more peaks will be found, depending upon the number of
men involved. The simultaneous detection of multiple Y-STR loci produces additional genetic
information without consuming additional DNA. A further advantage of Y-STR analysis is
that it enables DNA profiles to be made in cases of sexual assault in which the man did not
produce sperm owing to a medical condition or being vasectomized. In these circumstances,
the absence of sperm would mean that only a very small amount of male autosomal DNA would
be present and the female’s autosomal DNA would swamp this. By specifically targeting the
Y chromosome STR markers it would be possible to target the minute amount of male DNA
that would be present. Because Y-STR markers exhibit lower variability than autosomal STR
markers, their discriminatory power is much less and, unless a mutation occurs, all male
relatives (sons, fathers, brothers, etc.) will share the same profile. This needs to be taken into
account when assessing the strength of the evidence and could be a major problem when the
suspect comes from an inbred population or a criminal family. However, like other DNA
profiling techniques, their value can be as great in excluding suspects as in identifying a culprit.

4.3.5.7 DNAboost™


DNAboost™ is a technique developed by the Forensic Science Service (FSS) that applies
computer analysis to DNA profiles. Very little published information was available about how
exactly the method works at the time of writing but it is said to facilitate the analysis of mixed
profiles and those in which DNA levels are low and/or degraded.
4.3.5.8 Single Nucleotide Polymorphism markers (SNP)
Single nucleotide polymorphisms (SNPs) arise from differences in a single base unit and are
the commonest form of genetic variation. They are found throughout the genome, including
the X and Y sex chromosomes. Everyone has their own distinctive pattern of SNPs and this,
therefore, provides a means of identification. Using a technique called mini-sequencing, the
base at a given SNP can be determined and once the bases at several sites at different loci are
known one can produce a profile similar to that of an STR profile. Using allele frequencies for
each SNP, the likelihood of two persons sharing the same SNP profile can be estimated.
Because the maximum number of alleles at each site is only four (A, C, G or T), 50–100 SNPs
need to be examined to achieve the same discriminatory power as STR-based profiling.
However, a process called microarray hybridization allows numerous SNP loci to be examined
simultaneously, so it does not take long. The stability of SNPs, compared with STRs, means
that they are less likely to be lost between generations and they are sometimes used in paternity
cases. However, a statistical simulation study by Amorim & Pereira (2005) indicated that
relying exclusively on SNP analysis would result in more inconclusive results than STR
analysis.
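The arithmetic behind the '50–100 SNPs' figure can be sketched as follows. The per-locus random match probabilities used are assumed averages (a polymorphic STR locus near 0.06, a bi-allelic SNP near 0.5), so the output is only indicative.

```python
# Illustrative sketch only: how many SNPs are needed to match the discriminating
# power of a multi-locus STR profile. Per-locus match probabilities are assumptions.
import math

str_locus_rmp = 0.06   # assumed average random match probability per STR locus
snp_locus_rmp = 0.50   # assumed average random match probability per bi-allelic SNP

target = str_locus_rmp ** 13                              # e.g. a 13-locus STR profile
snps_needed = math.ceil(math.log(target) / math.log(snp_locus_rmp))

print(f"13 STR loci give a combined match probability of about {target:.1e}")
print(f"Roughly {snps_needed} SNPs are needed to reach the same figure")
```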
Because SNP analysis requires minute quantities of sample and the segment size can be even
smaller than that needed for STR analysis, the technique can provide information even when
the DNA is severely degraded. However, its effectiveness is compromised in mixed DNA
samples because it could be difficult to distinguish which SNP belonged to which person.
Furthermore, a quantitative test would be required in this context and this is not possible with
some of the current SNP assays. There would be less of a problem if the mixture were
composed of DNA from a single male and a single female because Y-linked SNPs could only
originate from the male. However, if more than one male could have contributed to the DNA
sample or the sexes were the same (e.g., male rape), their separation could be difficult. A
possible solution to this problem would be to identify tri-allelic SNPs, although according to
Brookes (1999) these are 'rare almost to the point of non-existence'.

4.3.6 The Second Decade of DNA Testing (1995 – 2005)

In the mid-1990s, there were a number of different DNA testing methods under evaluation and
in active use by forensic DNA laboratories. In April 1995, the United Kingdom launched their
national DNA database with six STR markers using fluorescence detection — and thus led the
way to where the world is today in terms of standardized sets of STR markers. Within the
United States, RFLP with single-locus probe VNTRs was still performed by the FBI
Laboratory (and would be until 2000) as well as a number of other forensic labs. Many labs
had also adopted and validated the reverse dot blot DQA1/PolyMarker system. Still others were
using D1S80 and triplex STR assays with silver-stain detection methods. After a few years of
using silver-stain detection, most labs around the world converted to STR markers with
fluorescence detection.

4.3.6.1 Silver-staining STR kits


Although not as commonly used today, silver-staining procedures were used for the first
commercially available STR kits from the Promega Corporation. Promega still supports silver-
stain gel users although most of their customer base now uses fluorescent STR systems. Silver-
stain detection methods are still quite effective for laboratories that want to perform DNA
typing for a much smaller start-up cost. No expensive instruments are needed — simply a gel
box for electrophoresis and some silver nitrate and other developing chemicals.
Silver staining is performed by transferring the gel between pans filled with various solutions
that expose the DNA bands to a series of chemicals for staining purposes. First, the gel is
submerged in a pan of 0.2% (mass concentration) silver nitrate solution. The silver binds to the
DNA and is then reduced with formaldehyde to form a deposit of metallic silver on the DNA
molecules in the gel. A photograph is then taken of the gel to capture images of the silver-
stained DNA strands and to maintain a permanent record of the gel. Alternatively, the gels
themselves may be sealed and preserved. Silver staining is less hazardous than radioactive
detection methods used with RFLP analysis although not as convenient as fluorescence
methods. Most reagents for silver staining are relatively harmless and thus require no special
precautions for handling. The primary advantage of silver staining is that the technique is
inexpensive. The developing chemicals are readily available at low cost. The PCR products do
not need any special labels, such as fluorescent dyes. The staining may be completed within
half an hour and with a minimal number of steps. Sensitivity is approximately 100 times higher
than that obtained with ethidium bromide staining. However, a major disadvantage to data
interpretation is that both DNA strands may be detected in a denaturing environment leading
to two bands for each allele. In addition, only one ‘color’ exists, which makes PCR product
size differences the only method for multiplexing STR markers.
4.3.6.2 Fluorescent detection STR kits
The capability of simultaneously detecting STR alleles in the same size range was enabled by
labelling the potentially overlapping PCR products with different colored fluorescent dyes.
Some of the earliest fluorescent STR multiplex systems were developed in the mid-1990s. In 1991, Al
Edwards and Tom Caskey from Baylor College of Medicine published the first article on
fluorescent detection of STR markers. Edwards and Caskey patented their work and later
licensed it to Promega Corporation and Applied Biosystems. Caskey’s early work was followed
up by the Forensic Science Service, the Royal Canadian Mounted Police, and FBI Laboratory
publications involving fluorescent STR analysis. One of the first STR multiplexes to be
developed was a quadruplex created by the Forensic Science Service that comprised the four
loci TH01, FES/FPS, VWA, and F13A1. This so-called 'first-generation multiplex', sometimes
termed the British Home Office Quadruplex, had a matching probability of approximately 1 in
10,000. The FSS followed with a second-generation multiplex (SGM) made up of six
polymorphic STRs and a gender identification marker. The six STRs in SGM were TH01,
VWA, FGA, D8S1179, D18S51, and D21S11. Results from these loci provided a matching
probability of approximately 1 in 50 million. Today multiplex PCR assays capable of 15 or
more STRs are used routinely by forensic laboratories to produce DNA profiles with matching
probabilities of 1 in a trillion or greater.
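The matching probabilities quoted above come from multiplying per-locus genotype frequencies across loci (the product rule), assuming the loci are independent. The Python sketch below uses invented per-locus frequencies chosen only to reproduce the rough orders of magnitude mentioned in the text; they are not real population figures.

```python
# Illustrative sketch only: the product rule for combining per-locus genotype
# frequencies into a whole-profile match probability. Frequencies are invented.
import math

def combined_match_probability(per_locus_frequencies):
    """Multiply per-locus genotype frequencies, assuming independent loci
    (linkage equilibrium), to obtain a whole-profile match probability."""
    return math.prod(per_locus_frequencies)

quadruplex = [0.10, 0.12, 0.08, 0.11]               # four loci
sgm_like   = [0.06, 0.08, 0.05, 0.06, 0.04, 0.03]   # six loci

for name, freqs in (("4-locus multiplex", quadruplex), ("6-locus multiplex", sgm_like)):
    p = combined_match_probability(freqs)
    print(f"{name}: match probability roughly 1 in {1 / p:,.0f}")
```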

4.3.7 Mitochondrial DNA

Mitochondria are intracellular organelles that generate about 90% of the energy that cells need
to survive. The numbers of mitochondria found in a human cell depend upon its energy needs
and vary from zero in the mature red blood cell to over 1000 in a muscle cell. They are thought
to descend from bacteria that evolved a symbiotic relationship with pre - or early eukaryotic
cells many hundreds of millions of years ago. With time, the symbiotic relationship became
permanent but the legacy is reflected by present day mitochondria retaining their own bacterial
type ribosomes and their own DNA (referred to as mtDNA) that is distinct from that found in
the cell nucleus. Each mitochondrion contains between 2 and 10 copies of the mtDNA genome.
The inheritance of mtDNA also differs from nuclear DNA in that it is exclusively generated
from the maternal side. This is because the sperm head is the only part of a spermatozoon that
enters the egg at the time of fertilization. Usually, the spermatozoon's tail and the midpiece
(which is the only part containing mitochondria) shear off as the head enters the egg's
perivitelline space. Occasionally, a few midpiece mitochondria are incorporated at fusion but
these are subsequently destroyed by the egg. Consequently, the only mtDNA present in the
developing embryo is that derived from the egg and therefore as usual (it is alleged) the
workforce is exclusively female.

Human mtDNA is a circular DNA molecule that contains 16 569 base pairs that code for 37
genes that in turn code for the synthesis of two ribosomal RNAs, 22 transfer RNAs and 13
proteins. Unlike nuclear DNA, the mitochondrial genome is extremely compact and about 93%
of the DNA represents coding sequences. The remaining, non-coding region is called the
control region or displacement loop (D-loop). The D-loop region consists of about 1100 base
pairs and it exhibits a higher mutation rate than the coding region and about 5–10 times the
rate of mutation within nuclear DNA. The mutations occur as substitutions in which one
nucleotide is replaced by another one: the length of the loop region is not changed. The
mutations result from mtDNA being exposed to high levels of mutagenic free oxygen radicals
that are generated during the mitochondrion’s energy generating oxidative phosphorylation
process. The substitutions persist because mtDNA lacks the DNA repair mechanisms that are
found in nuclear DNA. These mutations result in sequence differences between even closely
related individuals and make analyses of the D-loop region an effective means of
identification.
Because the mtDNA is inherited only from the mother, it also allows tracing of a direct genetic
line. Furthermore, unlike the inheritance of nuclear DNA, there are no complications owing to
recombination.
The D-loop is divided into two regions, each consisting of about 610 base pairs, known as the
hypervariable region 1 (HV1) and hypervariable region 2 (HV2). It is these two regions that
are normally examined in mtDNA analysis by PCR amplification using specific primers
designed to base pair to their ends. This is then followed by DNA sequence analysis. Because
of the high rate of substitutions, it is possible to analyse just these short regions and still
differentiate between closely related sequences. It has been estimated that mtDNA may vary
about 1 – 2.3% between unrelated individuals and although mtDNA sequencing does not have
the discriminating power of STR DNA profiling, it can prove effective where STR DNA
analysis fails.
The mtDNA sequence of all the mitochondria in any one individual is usually identical – this
condition is referred to as ‘homoplasmy’. However, in some people, differences in base
sequences are found at one or more locations. These differences arise from them containing
two or more genetically distinct types of mitochondria. This condition is known as
‘heteroplasmy’ and it can have a significant impact in forensic investigations. Heteroplasmy
used to be considered relatively rare but it is now believed to occur in 10 – 20% of the
population. To make matters worse, it is now apparent that heteroplasmy is not necessarily
expressed to the same extent in all the tissues of the body. For example, two hairs from a single
person might have different proportions of the base pairs contributing to the heteroplasmy and
this might result in an exclusion rather than a match. This is because heteroplasmy may result
from the high mutation rate and may arise either through inheritance at the germ line level or
during somatic cell mitosis and mtDNA replication. In forensic science, mtDNA analysis is most
frequently used where the samples do not contain much nuclear DNA – for example, a
fingerprint or a hair shaft – or where the DNA has become degraded through the decomposition
process or burning. Because there are numerous mitochondria in a single cell and each
mitochondrion contains multiple copies of the mitochondrial genome, it is possible to extract
far more mtDNA than nuclear DNA. Epithelial cells, which are the commonest cell type used
in forensic casework, contain an average of 5000 molecules of mtDNA. Mitochondrial DNA
analysis does, however, suffer from a number of problems. For example, all maternally related
individuals are likely to have the same mtDNA sequences, so the discriminating powers are
limited compared with autosomal STR analysis. Heteroplasmy can be considered either a
problem or a useful trait depending on the circumstances. It can create problems because the
mixed sequence is also what would be expected if there were more than one individual
contributing to the DNA profile. A difference of only one base pair between the mtDNA profile
of the sample and the suspect is considered insufficient to prove either a match or exclusion
whilst a difference in two or more base pairs is grounds for exclusion. By contrast,
heteroplasmy can provide an identifying characteristic where the suspect expresses the same
heteroplasmy characteristics as the sample.
Other common problems associated with mtDNA analysis are that detecting differences in
sequences is more time consuming and costly than determining differences in lengths – as is
accomplished using STR analysis. In addition, the rarity of mtDNA sequences has to be
determined by empirical studies and the results are not as statistically reliable as those for other
types of analysis. Finally, owing to the high copy number per cell there is always a risk of
contamination and cross-contamination associated with mtDNA sequencing.
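The comparison step described above reduces, in its simplest form, to counting aligned base differences and applying the stated interpretation rule. The Python sketch below does exactly that for two invented, already-aligned stand-in sequences.

```python
# Illustrative sketch only: counting aligned base differences between an mtDNA
# sample sequence and a reference, then applying the interpretation rule given
# above (0 differences = cannot exclude, 1 = inconclusive, 2 or more = exclusion).
# The sequences are invented short stand-ins for real HV1/HV2 data.

def compare_mtdna(sample, reference):
    differences = sum(1 for s, r in zip(sample, reference) if s != r)
    if differences >= 2:
        return differences, "exclusion"
    if differences == 1:
        return differences, "inconclusive"
    return differences, "cannot exclude"

hv1_sample    = "ACCTACGGTTAACGT"
hv1_reference = "ACCTACGATTAACGT"
print(compare_mtdna(hv1_sample, hv1_reference))
```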
4.3.8 DNA databases
The National DNA Database (NDNAD) was established in April 1995 following a
recommendation from the Royal Commission on Criminal Justice in 1993. Scotland and
Northern Ireland have their own DNA databases but export the profiles of all persons they
arrest to the NDNAD. The NDNAD is governed by a combination of the Home Office, the
Association of Chief Police Officers and the Association of Police Authorities, with invited
representatives from the Human Genetics Commission. It was the first national DNA database
to be established in the world and in 2007 it contained over 4 million profiles. This represents
about 6% of the UK population although about 10 – 13% of the profiles are thought to be
replicates (e.g., through people using aliases). The FBI’s DNA database, CODIS, contains
numerically more DNA profiles but represents only about 0.5% of the US population. The
NDNAD is run by the Forensic Science Service (FSS) under contract from the Home Office
and the FSS is the main organization that loads profiles onto the database, undertakes profile
searches and matches and reports back to the police authorities.

Information stored on individuals on the NDNAD in 2008 (a minimal data-structure sketch follows this list):


(1) Name, gender, date of birth and ethnic appearance as described by the arresting officer; this
latter information can be of dubious authenticity.
(2) Type of DNA sample used (e.g., mouth swab, blood, hair, semen)
(3) Type of DNA test employed (e.g., SGMplus™)
(4) DNA profile: in the case of SGMplus™ this would consist of a string of 20 two-digit
numbers and the amelogenin sex indicator
(5) Data on the police force that collected the sample
(6) Arrest summons number: this provides a link to the Police National Computer, which
stores criminal record and police intelligence information
(7) A unique bar code that identifies the record and provides a link to the stored DNA sample
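The sketch below expresses the categories of information listed above as a minimal Python data structure. The field names and example values are invented for illustration and do not represent the database's actual schema.

```python
# Illustrative sketch only: an invented data structure mirroring the categories of
# information listed above for an NDNAD record. Not the real database schema.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class NDNADRecord:
    name: str
    gender: str
    date_of_birth: str
    ethnic_appearance: str                 # as recorded by the arresting officer
    sample_type: str                       # e.g. mouth swab, blood, hair
    test_type: str                         # e.g. SGMplus
    profile: Dict[str, Tuple[str, str]]    # locus -> pair of allele designations
    amelogenin: str                        # sex indicator
    police_force: str
    arrest_summons_number: str             # link to the Police National Computer
    barcode: str                           # link to the stored DNA sample

record = NDNADRecord(
    name="A. N. Other", gender="M", date_of_birth="1980-01-01",
    ethnic_appearance="unknown", sample_type="mouth swab", test_type="SGMplus",
    profile={"FGA": ("22", "23.2"), "TH01": ("6", "9.3")},
    amelogenin="XY", police_force="Example Constabulary",
    arrest_summons_number="0000000000", barcode="EX-0000001",
)
print(record.test_type, len(record.profile), "loci shown")
```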

4.3.8.1 Combined DNA Index System (CODIS) software


As noted previously, CODIS stands for the Combined DNA Index System and represents the
software used to connect laboratories housing U.S. DNA data at the local, state, or national
level in LDIS, SDIS, or NDIS. These U.S. sites are all networked together on the CJIS WAN
(Criminal Justice Information Systems Wide Area Network), a stand-alone law enforcement
computer network that operates in a similar fashion to the Internet. The software is the same at
all sites with various configurations that permit different levels of access (LDIS, SDIS, or
NDIS). Software versions are updated periodically and provided to all CODIS laboratories.
As of December 2008, CODIS software was installed in 180 U.S. laboratories representing all
50 states as well as the FBI Laboratory, U.S. Army Crime Lab, and Puerto Rico. This software
enables NDIS participating laboratories to submit DNA profiles for the 13 CODIS core STR
loci to the U.S. national DNA database. Of the 180 sites where CODIS software is installed,
127 are LDIS and 53 are SDIS (the FBI is the only NDIS site). The FBI has also provided a
stand-alone version of the CODIS software to 44 laboratories in 30 different countries to aid
the DNA database work in these countries. These countries include Belgium, Bosnia,
Botswana, Canada, Cayman Islands, Chile, Colombia, Croatia, Czech Republic, Denmark,
Estonia, Finland, France, Greece, Hong Kong, Hungary, Iceland, Israel, Italy, Latvia,
Lithuania, Netherlands, Norway, Poland, Portugal, Singapore, Slovakia, Spain, Sweden, and
Switzerland.

4.4 Case Law based on DNA evidence

4.4.1 Case Study: The identification of Colin Pitchfork


The first and most famous case of DNA profiling in a forensic investigation involves that of
Colin Pitchfork, who in 1988 was sentenced to life in prison for the rape and murder of two
young girls in the town of Narborough, Leicestershire. The first girl was raped and murdered
in 1983 and a semen sample recovered from her body indicated that the culprit had blood group
A and an enzyme profile shared by only 10% of the male population. However, with no further
leads, the investigation never progressed any further. In 1986, another girl was raped and
murdered in the same area and the semen sample showed the same characteristics. These
findings led the police to believe that both murders were the work of the same man and that he
lived in the vicinity. A local man, Richard Buckland was subsequently arrested and whilst he
confessed to the second girl’s murder, he denied having anything to do with the first girl’s
death. The police were convinced that they had arrested the correct person and asked Dr Alec
Jeffreys of the University of Leicester for help as he had developed a means of creating DNA profiles.
Dr Jeffreys' findings indicated that, whilst both girls were raped and murdered by the same
man, this could not have been Richard Buckland. This emphasizes the need for caution even
when a suspect confesses to a crime. With their chief suspect exonerated (another first for DNA
profiling), the police undertook the world’s first DNA mass intelligence screening in which all
the men in the area, 5000 in all, were asked to provide DNA either as a blood sample or a saliva
swab. Of these samples, only those exhibiting the same blood group and enzyme pattern as the
murderer were subjected to DNA profiling.
This was a major operation, not least because the profiling techniques were much more time
consuming than those in use today, and took 6 months to complete. At the end of this time, the
operation drew a blank – none of the profiles matched those of the murderer. There followed a
lull of about a year until a woman reported that she had overheard a man saying that he had
provided a DNA sample in place of his friend Colin Pitchfork. Colin Pitchfork, who was a local
baker, was therefore arrested and his DNA profile was found to match those of the semen
samples recovered from the two murdered girls.

4.4.2 Case Study: Determination of paternity from aborted chorionic villi


Paternity testing is seldom a problem after a child is born or even before birth if the baby is
well developed; however, if only aborted material from an early stage of pregnancy is available
it is more difficult. This is because the maternal DNA is so much more abundant than the foetal
DNA that it is preferentially replicated in the PCR analysis. Robino et al. (2006) have described
an interesting case in which they successfully overcame this problem. The case began when a
severely mentally and physically disabled 21-year-old woman was brought to the emergency
room of a hospital suffering from vaginal bleeding. Whilst there she passed a large blood clot
and subsequent histological examination of this revealed the presence of chorionic villi – in
short, she had suffered an abortion at an early stage of pregnancy. The chorion is an extra-
embryonic tissue layer that is formed by the embryo and ultimately becomes the main
embryonic part of the placenta. As the chorion develops it forms finger-like projections called
chorionic villi that grow into the mother's decidua basalis – a part of the endometrium of the
uterus that is shed after birth. The woman was mentally incapable of giving consent to sex and
therefore the pregnancy must have resulted from sexual assault. A criminal investigation was
therefore instigated. Her condition meant that she was also unable to indicate who had assaulted
her so the culprit could only be identified from DNA analysis although standard protocols could
not be employed for the reasons outlined above. The authors used laser microdissection to
isolate tissue from the foetal chorionic villi without including the surrounding maternal tissue
and took blood from the woman to act as a reference sample. All the samples were then
subjected to STR analysis for 15 loci and the Amelogenin locus for sex determination. The
results demonstrated that the foetus was female and that its profile was sufficiently different
from the woman’s to indicate that the foetal tissue had been successfully isolated, whilst sharing one
allele in common with the woman at each locus, thereby indicating that she was the mother.
Furthermore, eight loci in the foetus were homozygous and this strongly indicated that the
woman was assaulted by someone genetically closely related to her. Consequently, blood
samples were taken from the woman’s brother, father and maternal and paternal grandfathers
and subjected to the same STR analysis. The results also indicated that there were sufficient
incompatibilities between the alleles of the woman’s father and grandfathers for them to be
excluded as potential culprits. However, the woman’s brother provided an appropriate match
at each of the 15 loci of the foetus’s STR profile, and the probability that he was responsible was
judged to be extremely high.

4.4.3 Case Study: The murder of Lynette White


In the early hours of February 14th 1988, Lynette White, a 20-year-old prostitute, was brutally
murdered in her flat in the Cardiff docklands red light district. Her throat was cut, her wrists
and face slashed, and she was stabbed over 50 times – particularly around her breasts. The
crime scene had all the hallmarks of a frenzied sex attack and resulted in a great deal of media
coverage and a major police operation (the two are not uncommonly connected). The location
and time of Lynette’s death were not in doubt – the bloodstains indicated that she died where
she was found and her watch, which was damaged during the attack, indicated that this occurred
between 1.45 and 1.50 am (concurrence evidence). Furthermore, the stains on the walls and
Lynette’s clothing included blood from someone other than Lynette – it is not unusual for
attackers to injure themselves. DNA profiling was still in its infancy at the time and most of
the analysis was therefore based on serology. Together, these analyses indicated that the blood originated
from a man with a rare combination of five blood group characteristics. Witnesses in the
dockland area at the time of the attack stated that they had seen a white man with bloodstained
clothes acting in a distressed manner. From their descriptions a known paedophile referred to
in the published accounts only as Mr X was identified. He remained the main suspect until
DNA profiling excluded him. With the main suspect exonerated, the police became desperate
for a result. They proceeded to bully a confession out of Lynette’s pimp, Stephen Miller, and
got him to implicate others. Miller was 26 years old at the time but with an IQ of 75 and the
mental age of 11 he was described as being ‘vulnerable’. Unknown to him, he had an alibi that
would have made it physically impossible for him to have killed Lynette – the police were
aware of this but chose not to take it into account. The police also charged Yusef Abdullahi
(who was at work on a ship at the time of the murder), Tony Paris and two other men. All five
men were black and known to the police. The main evidence against them was Miller’s
‘confession’ and the statement of two prostitute friends of Lynette that they had seen the five
men murder her – although it allegedly took 48 hours in police custody for them to volunteer
this information. The trial took place two years after the crime was committed and both it and
the presentation of some of the forensic evidence were flawed. Miller, Abdullahi and Paris
were found guilty of murder and jailed for life whilst the other two men were acquitted.
Subsequently, the convictions of the ‘Cardiff Three’ as they came to be called were quashed at
appeal and the police were left to start their investigation again. Over the following years new
methods of DNA analysis were developed but re-analysis of the stored material failed to yield
usable profiles.
In June 1999, the case was re-opened and passed to an independent forensic science laboratory
– the Forensic Alliance – headed by Dr Angela Gallop. By this time few blood stains were left
for analysis. Fortunately, they found that a scrap of bloodstained cellophane cigarette wrapping
that was retrieved from beside Lynette’s body included one small spot with a different profile
to hers. This was not sufficient to prove that the blood belonged to the murderer – for that they
would need to find the same profile elsewhere in the flat in circumstances that would indicate
an association with the murder. Although the murderer had left a trail of bloody handprints
along the hallway as he left the building the strips of stored stained wallpaper failed to yield a
usable DNA profile – probably because the use of fingerprint sprays on them had degraded the
DNA. It was now over ten years since Lynette was murdered and her flat had been cleaned and
redecorated twice so the chances of recovering evidence from the crime scene appeared remote.
Nevertheless, in 2003 the flat was examined again and layers of paint carefully removed from
the front door and the skirting board close to where her body was discovered. Tiny traces of
blood were found and these included one that had DNA profiles of both Lynette and the man
whose blood was found on the cellophane. Lynette had only used the flat for a week before she
was murdered so there was a low probability of the profiles being deposited on top of one
another at two different dates. Photographic records and knowledge of how bloodstain patterns
are formed helped the investigators to focus their search. For example, many of the wounds
were inflicted whilst Lynette was lying on the floor and the frenzied nature of the attack meant
that numerous cast-off stains would be projected onto the lower part of the walls and leak
beneath the skirting board. In addition, the attacker would be covered in blood and, being
wounded himself, would leave transfer stains of his own and Lynette’s blood on the walls and
doors as he fled the scene. Lynette’s clothing was also re-examined and bloodspots with the
same male DNA profile as that found elsewhere were found. A link was therefore made – the
suspect’s blood had been recovered from Lynette’s clothing, mingled with her blood, and from
near to her body: he must have been bleeding on and around Lynette at or around the same
time that she was killed. All of those originally charged, and also the ‘witnesses’, provided DNA
samples and were quickly excluded from the investigation. The suspect’s DNA profile was
also entered onto the NDNAD but no match was found. The comparison was therefore
restricted to men living in the Cardiff area who expressed particular features of the suspect’s
DNA profile. By concentrating on the rarest component of the profile, the list of candidates
was reduced to 600 names. The list was then reduced to 70 by comparing common components
within profiles and ultimately to one individual whose profile was very similar to that of the
suspect. This individual was a teenager who wasn’t even born at the time of the murder, but there
was a good chance that he was related to the murderer. The police therefore obtained DNA
samples from all the boy’s relatives and ultimately a cousin of his, Jeffrey Gafoor, was found
to have an identical profile to the suspect. He was arrested and admitted that he had initially
paid Lynette £30 for sex but, after arriving at the flat, he changed his mind and demanded his
money back. This resulted in an argument; Gafoor then drew a knife and, in his words, ‘lost it’.
He was jailed for life in 2003. This case indicates how easily miscarriages of justice can arise,
how DNA evidence can be recovered many years after the event and how a person can be
identified using the NDNAD even if their DNA isn’t on it.

4.4.4 Case Study: The manslaughter of Michael Little


Michael Little was a lorry driver who worked for the Ford Motor Company and at about 12.30
am on 21st March 2003 he was driving his truck along the M3 motorway when his windscreen
was shattered by a brick hurled from a footbridge across the road. The brick struck Mr. Little
in the chest but he was able to steer his lorry to the side of the road and switch off the engine
before he died of heart failure. The police believed that whoever threw the brick was probably
local since the footbridge connected Camberley and Frimley (in Surrey, UK), and was often
used by residents after a night out. Forensic analysis of the brick revealed a bloodstain that
yielded a mixed DNA profile for Mr Little and an unknown other man. Because of the small
amount of DNA available from the unknown man it was analysed using low copy number DNA
analysis and this produced a partial DNA profile. This partial profile was then shown to match
the full profile of a DNA sample recovered in a vandalized car in Frimley. Earlier in the
evening, someone had attempted to steal the car by smashing one of its windows and hotwiring
it. The attempted theft failed so the car was pushed into a hedgerow and abandoned. However,
in breaking the window the would - be thief hurt his hand and left blood behind in the vehicle.
It subsequently transpired that the thief had a partner and they had attempted to steal the car on
their way home after a night out drinking. Frustrated in their attempts to steal the car they
proceeded to walk home and along the way they each picked up a house brick from a driveway
and these were subsequently thrown from the footbridge. The DNA profiles recovered from
the brick that struck Mr Little and the car did not match any of those on the NDNAD but from
the nature of the crime and the DNA profile characteristics the police believed they were
looking for a white male who was probably under the age of 35 who lived locally. They
therefore requested men who fitted this description (and who were willing) to donate DNA but
after analysing 350 samples they were no closer to an answer. A familial search was therefore
made on the NDNAD with the criteria limited to persons living in the Surrey/Hampshire
regions who shared common profile characteristics with the unknown man’s DNA. This
yielded a list of 25 names but one of these stood out because it matched the profile of the
unknown man in 16 out of 20 areas, thereby suggesting that they could be related.
Consequently, DNA samples were requested from his relatives – this led to the identification
of 20-year-old Craig Harman and he was arrested on 30 October 2003. Harman was initially
charged with murder, attempted theft of a car and the theft of two house bricks. He was
ultimately jailed for 6 years for manslaughter. This case was somewhat unusual because there
was no direct contact between Little and Harman; it also indicates the speed with which an
identification can be made, even in difficult cases, provided a good DNA profile is obtained.

4.4.5 Applications of DNA Typing


Besides its use in criminal investigations, DNA data plays an important role in other
applications such as parentage and kinship testing, where DNA results from potential relatives
are compared. Different questions are asked in parentage testing than in
criminal casework, where a direct match between evidence and suspect is being considered.
Several applications exist that involve DNA evidence from related individuals. These include
traditional parentage testing that usually involves addressing questions of paternity (i.e., who
is the father?) and missing persons and mass disaster investigations that involve reverse
parentage analysis (i.e., could these sets of remains have come from a child of these reference
samples?). Immigration cases also involve kinship testing to determine if an individual could
have a proposed relationship to reference samples.
4.4.5.1 Parentage Testing
Every year in the United States more than 300,000 paternity tests are performed in cases where the
identity of the father of a child is in dispute. These cases typically involve the mother, the child,
and one or more alleged fathers. In 2008, almost 1 million samples were
analyzed for this purpose in the United States. Several dozen DNA laboratories have been
accredited by the American Association of Blood Banks (AABB) to perform parentage testing.
The determination of parentage is made based on whether or not alleles are shared between the
child and the alleged father when a number of genetic markers are examined. Thus, the outcome
of parentage testing is simply inclusion or exclusion. Paternity testing laboratories often utilize
the same short tandem repeat (STR) multiplexes and commercial kits as employed by forensic
testing laboratories. However, rather than looking for a complete one-to-one match in a DNA
profile, the source of the nonmaternal or ‘obligate paternal allele’ at each genetic locus is under
investigation. The basis of paternity comes down to the fact that, in the absence of mutation, a
child receives one allele matching each parent at every genetic locus examined. Thus, parents
with genotypes 11,14 (father) and 8,12 (mother) may produce offspring with the following
types: 8,11; 8,14; 11,12; and 12,14. Conversely, if the mother’s genotype is known to be 8,12
and the children possess alleles 8, 11, 12, and 14, then we may deduce that their father
contributed alleles 11 and 14 — but this does not necessarily mean that one man fathered all
of the children. In this particular example, the parents had nonoverlapping alleles. Paternity
testing becomes more complicated when mother and father share alleles, but the logic remains
the same in calculating exclusion probability and the paternity index likelihood ratio described
below.
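As a minimal illustration of the allele-sharing logic just described, the following Python sketch uses invented genotypes (they are not taken from any casework standard) to determine the possible paternal alleles at a single locus and to test whether an alleged father can be excluded at that locus.

def obligate_paternal_alleles(mother, child):
    # Alleles the child could not have received from the mother; if both of the
    # child's alleles also occur in the mother, either one could be paternal.
    non_maternal = [a for a in child if a not in mother]
    return set(non_maternal) if non_maternal else set(child)

def locus_excludes(mother, child, alleged_father):
    # True when the alleged father carries none of the possible paternal alleles.
    return not (obligate_paternal_alleles(mother, child) & set(alleged_father))

# Worked example using the genotypes from the text: mother 8,12 and child 8,11.
mother, child, alleged_father = (8, 12), (8, 11), (11, 14)
print(obligate_paternal_alleles(mother, child))       # {11}
print(locus_excludes(mother, child, alleged_father))  # False, i.e. not excluded at this locus

In practice, non-paternity would normally be declared only when the alleged father is incompatible at two or more loci, so that an isolated mutation is not mistaken for an exclusion.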
The paternity index (PI) is the ratio of two conditional probabilities where the numerator
assumes paternity and the denominator assumes a random man of similar ethnic background
was the father. The numerator is the probability of observed genotypes, given the tested man
is the father, while the denominator is the probability of the observed genotypes, given that a
random man is the father. The paternity index then is a likelihood ratio of two probabilities
conditional upon different competing hypotheses. This likelihood ratio reflects how many
times more likely it is to see the evidence (e.g., a particular set of alleles) under the first
hypothesis compared to the second hypothesis. When mating is random, the probability that
the untested alternative father will transmit a specific allele to his child is equal to the allele
frequency of the specific allele under consideration.
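As a minimal numerical sketch of this likelihood ratio, restricted to the simple case in which the obligate paternal allele is unambiguous and mutation is ignored (the allele frequency below is purely illustrative):

def transmission_prob(genotype, allele):
    # Probability that a man with this genotype passes the given allele to a child.
    return genotype.count(allele) / 2.0

def paternity_index(alleged_father, obligate_allele, allele_freq):
    # Numerator: P(evidence | tested man is the father).
    # Denominator: P(evidence | a random man of the population is the father), which
    # under random mating equals the population frequency of the obligate allele.
    return transmission_prob(alleged_father, obligate_allele) / allele_freq[obligate_allele]

freqs = {11: 0.08}                              # hypothetical frequency of allele 11
print(paternity_index((11, 14), 11, freqs))     # 0.5 / 0.08 = 6.25 at this locus

Single-locus indices computed in this way are multiplied across the independent loci examined to give the combined paternity index.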
4.4.5.2 Reverse parentage testing
In identification of remains as part of missing persons investigations or mass disaster victim
identification work, the question under consideration may be whether or not a child belongs to
the mother and father tested or other biological references available. This is essentially the
opposite of the question asked in parentage testing, namely: given a child’s genotype, who are
the parents? The samples examined may be the same family trio as studied in parentage testing:
alleged mother, alleged father, and child. Unfortunately, it is normally a luxury to have samples
from both parents available. Typically, only samples from a single parent or from siblings are available,
which makes reverse parentage analysis more challenging.
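A minimal sketch of this reverse question, using invented STR profiles, is shown below: the test is simply whether the alleles typed from the remains can, at every locus, be split so that one could have come from each reference parent (mutation is ignored).

def consistent_with_parents(child, mother, father):
    # True if, at every locus, the child's two alleles can be split so that one could have
    # come from the mother and the other from the father.
    for locus, (a, b) in child.items():
        m, f = set(mother[locus]), set(father[locus])
        if not ((a in m and b in f) or (b in m and a in f)):
            return False
    return True

remains = {"D8S1179": (12, 14), "TH01": (6, 9.3)}     # hypothetical profile from the remains
mum     = {"D8S1179": (12, 13), "TH01": (6, 7)}       # reference sample, alleged mother
dad     = {"D8S1179": (14, 15), "TH01": (8, 9.3)}     # reference sample, alleged father
print(consistent_with_parents(remains, mum, dad))      # True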

Fig: Illustration of question being asked with (a) parentage testing and (b) reverse
parentage testing

4.4.5.3 Disaster Victim Identification (DVI)


Mass disasters, whether natural or man-made, can involve loss of life for many victims of the
tragedy. Efforts to identify these victims are referred to as disaster victim identification, or
DVI. In the United States, DNA testing has now become routine and expected in disaster victim
identification in the event of a plane crash, large fire, or terrorist attack. Military casualties are
also identified through STR typing or mitochondrial DNA sequencing by the Armed Forces
DNA Identification Laboratory (AFDIL). All airplane crashes within the United States are
examined by the National Transportation Safety Board, which often contracts with AFDIL to
identify the air crash victims through DNA testing as part of the investigation. Often mass
disasters leave human remains that are literally in pieces or burned beyond recognition. In some
cases, it is possible to visually identify a victim, but body parts can be separated from one
another and the remains commingled, making identification without DNA techniques virtually
impossible. The use of fingerprints and dental records (odontology) still plays an important
role in victim identification but these modalities obviously require a finger or an intact skull or
jawbone along with previously archived fingerprint and dental records that can be made
available for comparison purposes. DNA testing has a major advantage in that it can be used
to identify each and every portion of the remains recovered from a disaster site, provided (1)
that there is sufficient intact DNA present to obtain a DNA type and (2) a reference sample is
available for comparison purposes from a surviving family member or some verifiable personal
item containing biological material. Personal items from the deceased including toothbrushes,
combs, razors, or even dirty laundry can provide biological material to generate a reference
DNA type for the victim. The direct comparison of DNA results from disaster victim remains
to DNA recovered from personal items represents the easiest way to obtain a match — and
hence an identification — provided it is possible to verify the source (e.g., the toothbrush was
not used by some other household member). The use of DNA from biological relatives
necessitates the added complexity of kinship analysis similar to that employed for paternity or
reverse parentage testing.
DVI always involves comparison of post-mortem (PM) and ante-mortem (AM) data. PM data
are generated from the recovered human remains, which may be highly fragmented depending
on the type of disaster. AM data come from either direct reference samples (e.g., toothbrushes
or razors known to belong to the victim) or kinship comparisons to biological relatives (e.g.,
parent, child, or sibling).

4.4.5.4 Missing Persons Investigations


An estimated 40,000 unidentified human remains have been recovered and are currently
located in medical examiner and coroners’ offices around the United States. Every year in the
United States, tens of thousands of people become ‘missing,’ often under suspicious
circumstances. While some of these missing persons are later located alive through law
enforcement efforts, many become unidentified human remains that resulted from criminal
activity, such as rape and murder. Knowledge of who the victim is can help solve these crimes
and bring closure to families of the missing. There are three categories of samples associated
with missing persons cases: direct reference samples, family reference samples, and
unidentified human remains (UHR) samples. The UHR samples are generally skeletal remains
(bones), teeth, or tissue. Much of the data from missing person investigations is in the form of
mitochondrial DNA sequences since this information can be successfully recovered from
highly degraded samples. Mitochondrial DNA also enables access to a larger number of
reference samples from maternal relatives of a victim. Some possible direct reference samples
include medical samples from the missing individual, such as a newborn screening bloodspot
or a biopsy sample. Personal effects, such as a toothbrush or hairbrush, may also provide direct
reference samples. Family reference samples can be buccal swabs from close biological
relatives, such as parents, children, or siblings of the missing individual. More distant relatives,
such as maternal aunts, maternal or paternal uncles, or maternal or paternal cousins, can also
be useful if mitochondrial or Y-chromosome DNA testing is performed. A combination of
samples from more than one close relative can help provide greater confidence in this kinship
analysis.
UNIT

5
5.1 Role of Iris Biometric in personal identification

Over the past 15 years, iris recognition has developed rapidly from its first live demonstration
and first actual method patent, to a mainstream field of biometric development with a large
number of active researchers in both academia and industry. To date, some 50 million persons
worldwide have been enrolled in iris recognition systems that use the author's algorithms. But
other systems are also being developed, demonstrated, and tested in Government-sponsored
competitions with good results; and there is no doubt that in the future there will exist a lively
equilibrium of diverse methods and viable products available for deployment, maybe even
interoperable.

The iris is the colored, donut-shaped portion of the eye that lies behind the cornea and surrounds the
pupil. A person’s iris pattern is unique and remains essentially unchanged throughout life. Also, because it is covered
by the cornea, the iris is well protected from damage, making it a suitable body part for
biometric authentication.

Fig: Biometric features of Human Eye

Because iris recognition is designed for use in identification mode ("one-to-many" exhaustive
search, at least with the author's algorithms), so that a user is not even asked to claim or assert
an identity, as opposed to simple verification (a "one-to-one" test of some claimed identity), the
number of iris comparisons done so far is staggering. In one particular deployment linking all
27 air, land, and sea-ports of entry into the United Arab Emirates, that compares the irises of
arriving travellers to all stored in a central database, some 5 trillion iris comparisons have been
performed since 2001. About 10 million arriving travellers have used that system, with 12
billion iris comparisons now being performed daily at a speed of about 1 million comparisons
per second per search engine. The UK has recently launched Project IRIS (Iris Recognition
Immigration System) which allows travellers to enter the UK from abroad without passport
presentation or any other assertion of identity, but just by looking at an iris camera at an
automatic gate. If they have been enrolled in the system and are recognized, then their border-
control formalities are finished and the gate opens. About 200,000 travellers to the UK in recent
months have benefitted from this convenience.
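The operational difference between the two modes can be sketched in a few lines of Python. The example below is only an illustration with synthetic, randomly generated binary templates and an assumed decision threshold of 0.33; it is not the algorithm used in the deployments described above.

import numpy as np

def hamming(a, b):
    # Fraction of disagreeing bits between two binary templates.
    return np.count_nonzero(a != b) / a.size

def verify(probe, claimed_template, threshold=0.33):
    # 1:1 test of a claimed identity.
    return hamming(probe, claimed_template) <= threshold

def identify(probe, database, threshold=0.33):
    # 1:N exhaustive search: return the best-matching enrolled identity, if good enough.
    best_id, best_d = None, 1.0
    for person_id, template in database.items():
        d = hamming(probe, template)
        if d < best_d:
            best_id, best_d = person_id, d
    return (best_id, best_d) if best_d <= threshold else (None, best_d)

rng = np.random.default_rng(0)
db = {i: rng.integers(0, 2, 2048, dtype=np.uint8) for i in range(1000)}   # enrolled templates
probe = db[42].copy()
probe[rng.choice(2048, 200, replace=False)] ^= 1                          # about 10% bit noise
print(verify(probe, db[42]), identify(probe, db))                         # True (42, ~0.098)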

Notwithstanding such existing public deployments at many airports in several countries, basic
research into alternative methods continues. The anticipated large-scale applications of
biometric technologies such as iris recognition are driving innovations at all levels, ranging
from sensors, to user interfaces, to algorithms and decision theory. At the same time as these
good innovations, possibly even outpacing them, the demands on the technology are becoming
greater. In addition to the requirement to function on a national scale to detect any multiple
identities upon issuance of biometric ID cards, expectations are also being raised for
development of more tolerant and fluid user interfaces that aim to replace the "stop and stare"
camera interface with iris recognition "on the move, off-axis, and at a distance". Scientific and
engineering literature about iris recognition grows monthly, with contributions from several
dozen university and industrial laboratories around the world. Many databases of iris images
are available for download, further stimulating research.
An excellent and comprehensive review of the expanding literature about iris recognition, with
141 references, has recently appeared. I will not attempt to duplicate such a literature survey
here. Instead, I will briefly review the historical development of iris recognition, from its
inception as a speculative idea to early efforts at commercialization, and its current drivers; and
then I will present a number of new methods that I have developed and found beneficial, which
I will illustrate here with publicly available iris images.

5.1.1 A Short History of Iris Recognition

5.1.1.1 Early Speculation about its Possibility


Divination of all sorts of things based on iris patterns goes back to ancient Egypt, to Chaldea
in Babylonia, and to ancient Greece, as documented in stone inscriptions, painted ceramic
artefacts, and the writings of Hippocrates. Clinical divination persists today as "iridology". The
idea of using the iris as a distinguishing human identifier was suggested in 1885 by the French
criminologist Alphonse Bertillon, describing both color and pattern type. In 1949 British
ophthalmologist James Doggart commented specifically on the complexity of iris patterns and
suggested that they might be sufficiently unique to serve in the same way as fingerprints. In
1987 American ophthalmologists Flom and Safir managed to patent Doggart's concept, but
they had no algorithm or specific method to make it possible. They acknowledged that they
had encountered the idea in Doggart's book, yet their patent asserted claim over any method
for iris recognition, if any could actually be developed. (Ironically, the only specific method
disclosed in this patent was a conceptual flow chart for controlling an illuminator to drive the
pupil to a pre-determined size; in fact, this proves unnecessary and is not a feature of any actual
iris recognition system.) Although the Flom-Safir patent has now expired worldwide, its broad
if unimplemented claim over any use of the iris for human identification inhibited developers
from trying to create actual methods. In 1989 when I was teaching at Harvard University, Flom
and Safir (who by coincidence was my neighbour in Cambridge MA) asked me to try to create
actual methods for iris recognition, which I did and patented. After live demonstrations, we
formed a company to exploit these algorithms.

5.1.1.2 Commercialization Efforts, 1993–2006


The company that we founded licensed my algorithms to a number of camera developers and
security-related systems integrators. Those algorithms used methods that persist widely today
in this technology, such as multi-scale Gabor wavelet encoding, binarization based on zero-
crossings, Exclusive-OR bit vector comparison logic, and Hamming Distance similarity
metrics. Unfortunately, a new management installed by new investors focused on re-branding
and media positioning tactics more than on technology considerations.
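A highly simplified, single-scale sketch of that style of encoding and matching is shown below; the filter parameters and test images are invented for illustration, and real systems use multiple scales, radial and angular sampling, and occlusion masks.

import numpy as np

def gabor_kernel(length=31, wavelength=8.0, sigma=4.0):
    # A 1-D complex Gabor filter: a Gaussian-windowed complex exponential.
    x = np.arange(length) - length // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * x / wavelength)

def iris_code(norm_iris, kernel=None):
    # Filter each row of the normalized iris and keep only the signs (zero-crossings)
    # of the real and imaginary responses, giving a flat binary code.
    k = gabor_kernel() if kernel is None else kernel
    resp = np.array([np.convolve(row, k, mode="same") for row in norm_iris])
    return np.concatenate([(resp.real > 0).ravel(), (resp.imag > 0).ravel()])

def hamming_distance(code1, code2):
    # Fraction of disagreeing bits (Exclusive-OR followed by a bit count).
    return np.count_nonzero(np.logical_xor(code1, code2)) / code1.size

rng = np.random.default_rng(1)
iris_a = rng.random((16, 256))                         # stand-in for a normalized iris image
iris_b = iris_a + 0.05 * rng.random((16, 256))         # "same eye", slightly noisier capture
print(hamming_distance(iris_code(iris_a), iris_code(iris_b)))                  # small
print(hamming_distance(iris_code(iris_a), iris_code(rng.random((16, 256)))))  # close to 0.5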
During the period 2001-2006, a crucial mistake was advocacy of clearly inferior cameras when
superior ones were available and well proven, only because of their comparative royalty
streams. For example, one camera widely recognized (even internally) as the "gold standard,"
featuring autofocus, autozoom, and superb resolution, was de-licensed and banned from use by
partners, while inferior cameras lacking such features were promoted and even mandated by
an ostensible "certification" program. Consequently, a number of high-profile test deployments
generated unnecessarily poor results, and the inherent powers and potential of iris recognition
were generally clouded if not contradicted by apparently high rates of Failures-to-Enroll or
False Rejections. Disputes over licensing terms with several of the company's own partners,
especially the more successful partners, escalated into a series of lawsuits.
Finally, having expended its funding resources both on aggressive litigation and on re-branding
exercises that tried to portray the algorithms as having been developed in-house, the company
had made enemies of those it most needed as friends, and collapsed. Never having become
profitable, it was acquired in 2006 solely for its IP assets (patents and my core algorithms) by
a multi-biometric holding company. Meanwhile a number of other start-ups offering iris
recognition have appeared; but they too, with the exception of one systems integrator, are all
struggling to survive. None have any public deployments using their own proprietary
algorithms. For nearly all investors in this sector up until the present time, iris recognition has
proved an enticing but perilous seduction, like the song of the Sirens (irresistible sea nymphs
in Greek mythology) who drowned many passing sailors.

5.1.1.3 Current Stimulants


Iris recognition technology research and development today is expanding rapidly, at several
dozen universities and industrial research venues. Enthusiasm for the technology and its
potential is strong, as is the level of innovation in response to its undeniable challenges,
particularly regarding image capture. Among the stimulants that seem to be driving this
creative energy are:

1. Evidence emerging in tests that iris recognition seems the biometric with best performance,
in terms of large database accuracy and search speed.
2. Legislation in several countries for national programs involving biometric ID cards, or
biometrics replacing passports in automated border-crossing.
3. NIST Iris Challenge Evaluation ("large-scale") based on images from 240 subjects; its
training database was downloaded by 42 research groups.
4. Biometric Data Interchange Format Standards, and databases of iris images for algorithm
development and testing.
5. Numerous international conferences and books that include the topic.
6. Popular futurism and movies, from James Bond to Minority Report.
7. Cultural iconography associated with the eye (the "Window to the Soul"; affective
significance of eye contact, and communication through gaze).
8. The intellectual pleasure of solving multi-disciplinary problems combining mathematics,
information theory, computer vision, statistics, biology, ergonomics, decision theory, and
naturally occurring human randomness.
Features of iris recognition
1. Highly accurate and fast, iris recognition offers top-class precision among
different types of biometric authentication technologies.
2. The iris pattern remains essentially unchanged throughout life (although this does not constitute a guarantee).
3. Since the iris is different between the left and right eye, recognition can be performed
separately by each eye.
4. Possible to distinguish twins.
5. As long as the eyes are exposed, iris recognition can be used even when the subject is
wearing a hat, mask, eyeglasses or gloves.
6. Because an infrared camera is used, recognition is possible even at night or in the
dark.
7. Without the need to touch the device, contactless authentication is possible, making it
hygienic to use.

Mechanism of iris recognition

1. First, the location of the pupil is detected, followed by detection of the iris and the
eyelids.
2. Unnecessary parts (noise), such as eyelids and eyelashes, are excluded to clip out only
the iris part, which is then divided into blocks and converted into feature values to quantify
the image.
3. Matching is then performed against feature data previously extracted using the same method.

5.1.2 Active Contours, Flexible Generalized Embedded Coordinates

Iris recognition begins with finding an iris in an image, demarcating its inner and outer
boundaries at pupil and sclera, detecting the upper and lower eyelid boundaries if they occlude,
and detecting and excluding any superimposed eyelashes, or reflections from the cornea or
eyeglasses. These processes may collectively be called segmentation. Precision in assigning
the true inner and outer iris boundaries, even if they are partly invisible, is important because
the mapping of the iris in a dimensionless (size-invariant and pupil dilation invariant)
coordinate system is critically dependent on this. Inaccuracy in the detection, modelling, and
representation of these boundaries can cause different mappings of the iris pattern in its
extracted description, and such differences could cause failures to match. It is natural to start
by thinking of the iris as an annulus. Soon one discovers that the inner and outer boundaries
are usually not concentric. A simple solution is then to create a non-concentric pseudo-polar
coordinate system for mapping the iris, relaxing the assumption that the iris and pupil share a
common center, and requiring only that the pupil is fully contained within the iris. This "doubly-
dimensionless pseudo-polar coordinate system" was the basis of my original paper on iris
recognition and patent, and this iris coordinate system was incorporated into ISO Standard
19794-6 for iris data. But soon one discovers also that often the pupil boundary is non-circular,
and usually the iris outer boundary is non-circular. Performance in iris recognition is
significantly improved by relaxing both of those assumptions, replacing them with more
disciplined methods for faithfully detecting and modelling those boundaries whatever their
shapes, and defining a more flexible and generalized coordinate system on their basis. Because
the iris outer boundary is often partly occluded by eyelids, and the iris inner boundary may be
partly occluded by reflections from illumination, and sometimes both boundaries also by
reflections from eyeglasses, it is necessary to fit flexible contours that can tolerate interruptions
and continue their trajectory under them on a principled basis, driven somehow by the data that
exists elsewhere. A further constraint is that both the inner and outer boundary models must
form closed curves. A final goal is that we would like to impose a constraint on smoothness,
based on the credibility of any evidence for non-smooth curvature.
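A minimal sketch of such a doubly dimensionless pseudo-polar mapping is given below. For brevity the two boundaries are modelled as (possibly non-concentric) circles, which is exactly the simplifying assumption that the active-contour method described next relaxes; all coordinates and sizes are illustrative.

import numpy as np

def normalize_iris(image, pupil_c, pupil_r, iris_c, iris_r, n_radial=16, n_angular=256):
    # Sample the iris at fractions r in [0, 1] of the way from the pupil boundary to the
    # iris boundary along each ray, so the two circles need not share a centre.
    h, w = image.shape
    out = np.zeros((n_radial, n_angular), dtype=image.dtype)
    for j, th in enumerate(np.linspace(0, 2 * np.pi, n_angular, endpoint=False)):
        px = pupil_c[0] + pupil_r * np.cos(th)
        py = pupil_c[1] + pupil_r * np.sin(th)
        ix = iris_c[0] + iris_r * np.cos(th)
        iy = iris_c[1] + iris_r * np.sin(th)
        for i, r in enumerate(np.linspace(0.0, 1.0, n_radial)):
            x = int(round((1 - r) * px + r * ix))
            y = int(round((1 - r) * py + r * iy))
            if 0 <= x < w and 0 <= y < h:
                out[i, j] = image[y, x]
    return out

eye = np.random.default_rng(2).random((240, 320))   # stand-in for an eye image (rows, cols)
norm = normalize_iris(eye, pupil_c=(160, 120), pupil_r=30, iris_c=(158, 122), iris_r=90)
print(norm.shape)                                    # (16, 256), regardless of pupil dilation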

An excellent way to achieve all of these goals is to describe the iris inner and outer boundaries
in terms of “Active Contours" based on discrete Fourier series expansions of the contour data.
By employing Fourier components whose frequencies are integer multiples of 1/(2π), closure,
orthogonality, and completeness are ensured. Selecting the number of frequency components
allows control over the degree of smoothness that is imposed, and over the fidelity of the
approximation. In essence, truncating the discrete Fourier series after a certain number of terms
amounts to low-pass filtering the boundary curvature data in the active contour model. In the
lower left-hand corner of the figure are shown two "snakes", each consisting of a fuzzy ribbon-
like data distribution and a dotted curve which is a discrete Fourier series approximation to the
data, including continuation across gap interruptions. The lower snake in each snake box is the
curvature map for the pupil boundary, and the upper snake is the curvature map for the iris
outer boundary, with the endpoints joining up at the 6-o'clock position. The interruptions
correspond to detected occlusions by eyelids (indicated by separate splines in both images), or
by specular reflections. The data plotted as the grey level for each snake is the image gradient
in the radial direction. Thus, the relative thickness of each snake represents roughly the
sharpness of the corresponding radial edge. If an iris boundary were well-described as a circular
edge, then the corresponding snake in its box should be flat and straight. In general, this is not
the case.
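A minimal numerical sketch of this idea, using synthetic boundary data rather than real iris measurements, is to low-pass filter sampled boundary radii by truncating their discrete Fourier series:

import numpy as np

def fourier_fit_boundary(radii, n_harmonics=5):
    # Truncate the discrete Fourier series of the sampled radii: the result is a closed,
    # smooth approximation, equivalent to low-pass filtering the boundary data.
    coeffs = np.fft.rfft(radii)
    coeffs[n_harmonics + 1:] = 0
    return np.fft.irfft(coeffs, n=radii.size)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
true_r = 80 + 5 * np.cos(2 * theta)                             # a mildly non-circular boundary
noisy_r = true_r + np.random.default_rng(3).normal(0, 2.0, theta.size)
smooth_r = fourier_fit_boundary(noisy_r, n_harmonics=5)
print(float(np.abs(smooth_r - true_r).mean()))                  # small residual error

Choosing the number of retained harmonics plays the same role as the smoothness constraint discussed above: fewer harmonics give a smoother, more circle-like contour, while more harmonics follow the data more faithfully.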

Fig: Active contours enhance iris segmentation, because they allow for non-circular
boundaries and enable flexible coordinate systems. The box in the lower-left shows
curvature maps for the inner and outer iris boundaries, which would be flat and straight
if they were circles. Here the outer boundary (upper plot) is particularly non-circular.
Dotted curves in the box and on the iris are Fourier series approximations
5.1.3 Fourier-based Trigonometry and Correction for Off-Axis Gaze
A limitation of current iris recognition cameras is that they require an on-axis image of an eye,
usually achieved through what may be called a "stop and stare" interface in which a user must
align her optical axis with the camera's optical axis. This is not as flexible or fluid as it might
be. Moreover, sometimes the standard cameras acquire images for which the on-axis
assumption is not true. For example, the NIST iris images that were made available and used
for training in ICE-1 contained several with very deviated gaze, probably because the user's
gaze was distracted by an adjacent monitor. The on-axis requirement can be relaxed by
correcting the projective deformation of the iris when it is imaged off-axis, provided that one
can reliably estimate the actual parameters of gaze. The gaze parameters that we seek include
two spherical angles for eye pose, but the projective geometry depends also on the distance
between eye and camera, which may be unknown, and on the surface curvature of the iris, which is
generally not zero. If simplifying assumptions and approximations are made about the latter
factors, then a simple affine projective transformation may suffice to make the iris recognizable
against itself as imaged in other poses, orthographic or not.

Fig: Active contours enhance iris segmentation, because they allow for non-circular
boundaries and enable flexible coordinate systems. The box in the lower-left shows
curvature maps for the inner and outer iris boundaries, which would be flat and
straight if they were circles. Here the pupil boundary (lower plot) is particularly non-
circular. Dotted curves in the box and on the iris are Fourier series approximations

Fig: Gaze estimation enables transformation of an eye image with deviated gaze into
one apparently looking directly at the camera. Without this transformation, such
images would fail to be matched

The essence of the problem is then
estimating the two angles of gaze relative to the camera. Eye morphology is so variable in terms
of visible sclera and eyelid occlusion that it is unlikely that such factors could support robust
estimation, at least when only one eye is imaged; although it must be noted that humans are
very impressively skilled somehow at monitoring each other's gaze direction. In the absence of
solving that mystery, an obvious alternative approach would be to assume that an orthographic
image of the iris should reveal a circular pupil; therefore, detecting ellipticity of the pupil
indicates off-axis image acquisition, and so estimating the elongation and orientation of that
ellipse would yield the two parameters of gaze deviation, modulo π in direction. We present
here a somewhat more robust variant of this idea, which does not assume that the true pupil
shape is circular when viewed orthographically. This method of estimating gaze (and thus
correcting for off-axis imaging) uses a new approach that may be called "Fourier-based
trigonometry".

Fig: Statistical inference of eyelashes from the iris pixel histogram, and determination
of a threshold for excluding the lashes (labelled in white) from influencing the IrisCode
Application of Iris Biometrics

1. Immigration Control

In response to the increasing threats of terrorism around the world, iris biometrics
contributes to a safe and secure society by enhancing stringency of immigration control.
Iris recognition offers improved security and smooth personal identification amidst the
increasing movement of people between countries.

The iris is photographed, and the image is matched with the government’s immigration
control database during exit or entry procedures at the passport control booth, enabling
rapid and stringent personal authentication.

Fig: Application of Biometric Database in International Terminals

2. National IDs

Iris recognition is used as one of the methods for acquiring biometric data needed for
issuing unique IDs.
Accurate and fast authentication is possible even without an ID card.
Combined use with mutually complementary biometric data, such as fingerprint and
face, enables rigid personal authentication and a robust approach against impersonation.

3. Crime Investigation

A multimodal biometrics database for managing multiple biometric authentication data
is created by taking images of the face, fingerprint, and palm print, as well as the iris.
Combining with fingerprint and other biometric authentication systems enables more
accurate identification of an individual. Combined use of biometric data allows
recognition even when the fingerprint, for example, could not be used for identification
due to injury.

5.2 Role of Veins Biometric in personal identification


The field of hand vascular pattern technology or vein pattern technology uses the subcutaneous
vascular network on the back of the hand to verify the identity of individuals in biometric
applications. The principle of this technology is based on the fact that the pattern of blood
vessels is unique to each individual, even between identical twins. Therefore, the pattern of the
hand blood vessels is a highly distinctive feature that can be used for verifying the identity of
the individual. Hand vascular pattern biometric technology is relatively new and is in the
process of being continuously refined and developed. The hand vascular pattern was first
considered as a potential technology in the biometric security field in the early 1990s. In 1992,
Shimizu brought into focus the potential for use of the hand vascular technology in his
published paper on trans-body imaging. In 1995, Cross and Smith introduced thermographic
imaging technology for acquiring the subcutaneous vascular network on the back of the hand
for biometric applications. Since then, a large number of research efforts have continuously
contributed to hand vascular pattern technology. It was not until 1997 that the first practical
application was developed. The BK-100, introduced in 1997 by Alex Hwansoo Choi, the
co-founder of BK Systems, was one of the first commercial products based on hand vascular
pattern technology. Using near-infrared light, images of blood vessels on the back of the hand
were acquired by a camera sensitive to the near-infrared light range. The deoxidized
hemoglobin in blood vessels absorbs infrared rays and causes the blood vessels to appear as
black patterns in captured images. The vascular patterns were pre-processed and used for
verification. Several improved versions of this device were developed until the end of 1998. In
2000, Techsphere Co. Ltd., founded by members of BK Systems, continued to research and
develop the technology. During this period, they published their first research paper on the use
of hand vascular pattern technology for personal identification, and other investigations were
conducted to further improve the technology. Based on the results of these efforts, a new
commercial product under the name VP-II was released. In this new product, Techsphere
completely redesigned the BK Systems products and applied many advanced digital processing
technologies to make highly reliable and cost-effective devices. These important design
changes have made hand vascular pattern technology popular in a variety of civilian
applications such as airport security, hospital, or finance and banking. Since the introduction
of hand vascular pattern technology, a number of efforts have been made to develop other
vascular pattern technologies utilizing different parts of the hand such as finger veins and palm
veins. In 2003, Fujitsu introduced its first commercial product using vascular pattern
technology to the general market. Fujitsu Palm Vein products employ vascular patterns on
the palm as a means of extracting biometric features. At the same time, Hitachi developed
another identification system that utilizes vascular patterns in the fingers. Its first commercial
product, for finger-vein identification, was also released to the market in 2003.
Although hand vascular pattern technology is still an ongoing area of biometric research, a
large number of units have been deployed in many applications such as access control, time
and attendance, security, and hospitals. The market for hand vascular pattern technology has
been rapidly growing. Compared to other biometric modalities this technology provides
advantages such as higher authentication accuracy and better usability. Moreover, since
vascular patterns lie under the skin, they are not affected by adverse sensing environments
encountered in applications such as factories or construction sites where other biometric
technologies show limitations. Because of these desirable features, vascular pattern technology
is being incorporated into various authentication solutions for use in public places.

5.2.1 Development of Hand Vascular Pattern Technology


The history of development of hand vascular pattern technology goes back to early 1997 when
BK Systems announced its first commercial product, BK-100. This product has been mainly
sold in Korean and Japanese markets. In the early stages, the product was limited to physical
access control applications. In 1998, the first patent on hand vascular pattern technology was
assigned to BK Systems. This invention described and claimed an apparatus and method for
identifying individuals through their subcutaneous vascular patterns [10]. Based on this
invention, new commercial versions, BK-200 and BK-300, were released to the market.
Unfortunately, the development of these products was discontinued at the end of 1998.
In 2000, Techsphere was founded by several former employees of BK Systems and made
significant improvements to the BK-100 system [3-7] including utilizing advanced digital
imaging technologies and low-cost digital circuits to manufacture more reliable and cost-
effective products. This resulted in the commercial product VP-II in 2001, which was more
compact and therefore more suitable for certain applications. The VP-II included a new
guidance handle so that users could easily align their hand in a proper location under the
scanner and it also provided better user interface to make the system highly configurable.

Fig: Prototype of the first hand vascular commercial product BK-100

As biometric technology matures, there will be an increasing interaction among different
technologies and applications. Hand vascular pattern technology should become an open
solution through which other systems or applications can easily access resources or
information. In addition, it should allow other security vendors with their own proprietary
solutions to integrate with it in a standard protocol. In order to satisfy this requirement, new
protocols have been developed to allow other systems access to all the functionalities of VP-
II. It means that hand vascular pattern technology can be used in large-scale security solutions
such as database server solutions or smart card solutions. To make the product more adaptive
to other products from different vendors, hand vascular pattern technology is being adapted to
national and international standards. In January 2007, hand vascular pattern technology was
finally adopted by the International Organization for Standardization (ISO).
5.2.2 General Applications
Typical application of vascular pattern technology can be classified as follows:
1. Physical access control and Time attendance: Physical access control and time
attendance may be the most widely used application of hand vascular pattern technology.
Utilizing hand vascular pattern technology, solutions have been developed to help manage
employee attendance and overtime work at large organizations in an effective and efficient
manner. The time and attendance solution employing hand vascular technology has enabled
many local governments to enhance work productivity through automation, establish a sound
attendance pattern through personal identification, and boost morale through transparent and
precise budget allocation.

2. Finance and Banking: With the rapid growth of ATM services and credit cards,
fraudulent withdrawal of money by using fake or stolen bankcards has become a serious
problem. Hand vascular pattern technology can be integrated into banking solutions by two
different methods. In the first method, vascular patterns of customers are stored in the bank's
database server. The authentication is carried out by comparing a customer's hand vascular
pattern with their enrolled pattern in the database server. In the second method, hand vascular
patterns of customers are stored in biometric ID cards which are kept by customers. During
authentication, the customer's hand vascular pattern is compared with the pattern stored in the
card for verification. Based on various requirements such as timely response or level of
security, banks will decide the appropriate method for their solutions.

3. Travel and Transportation: Since the 9/11 terrorist attack, national security problems
are of great concern in almost every country. Many security fences have been established in
order to avoid the infiltration of terrorists. Access to many sensitive areas such as airports, train
stations, and other public places are being closely monitored. Hand vascular pattern technology
has been chosen to provide a secure physical access control in many of these areas. Due to its
superior authentication performance, ease of use, and user satisfaction, the hand vascular
system was adopted by Incheon International Airport, the largest airport in Korea, and by
several major international airports in Japan for physical access control.
Fig: General applications of hand vascular pattern technology; (a) Door access control,
(b) Banking solutions, (c) Transportation (airport security), (d) Hospitals, (e)
Construction sites, and (f) Schools

4. Hospitals: Many areas of a hospital require tight security, including medicine cabinets
and storage rooms, operating rooms, and data centres where patient records are managed and
stored. Some sensitive data such as those related to research studies on dangerous virus may
be used with dire consequences if it falls into terrorist hands. Consequently, biometric security
methods should be used to protect such sensitive data. Many hospitals have installed hand
vascular systems as means for physical access control.

5. Construction Sites: Unlike other biometric traits which can be adversely affected by
external factors such as dirt or oil, the hand vascular pattern is robust to these sources of noise
because it lies under the skin of human body. Therefore, the hand vascular pattern technology
is appropriate for use in environments such as factories or construction sites.

6. Schools: The commonly used RF ID cards do not offer high levels of security because
people tend to lose them or fail to return their cards. As a result, many universities have adopted
hand vascular pattern recognition systems to enhance security for valuable equipment in
research laboratories and private belongings in dormitories. It is not only more cost-effective
in the long term but also provides an enhanced level of security through individual
identification and managerial convenience.

In recent years, many hand vascular pattern recognition systems have been deployed in civilian
applications in hospitals, schools, banks, or airports. However, the widest use of hand vascular
pattern recognition is for security management in highly secure places like airports. The typical
deployment of hand vascular pattern recognition systems can be found at Incheon International
Airport, Korea. Incheon International Airport opened for business in early 2001 and became
the largest international airport for international civilian air transportation and cargo traffic in
Korea. After the terrorist hijackings of September 11, 2001, the airport’s security
system was upgraded with advanced, state-of-the-art security facilities in response to terrorist
threats and various epidemics in southern Asia. The primary goal in selecting hand vascular
pattern recognition systems was to establish a high security access management system and
ensure a robust and stringent employee identification process throughout their IT system. The
configuration of hand vascular pattern recognition systems at Incheon Airport is divided into 3
major areas: enrollment center, server room and entry gates between air and land sides. The
control tower is also access controlled by vascular biometrics.

5.2.3 Technology
Hand vascular patterns are the representation of blood vessel networks inside the back of hand.
The hand vascular pattern recognition system operates by comparing the hand vascular pattern
of a user being authenticated against a pre-registered pattern already stored in the database.
A camera sensitive to near-infrared light is used to capture images of the hand blood vessels from the back of the hand. The near-infrared rays of the camera illuminate
the back of the hand. The deoxidized hemoglobin in blood vessels absorbs the infrared rays
and causes the vascular patterns to appear as black patterns in resulting images. The vascular
patterns are then extracted by various digital signal processing algorithms. The extracted
vascular pattern is then compared against pre-registered patterns in smart storage devices or
database servers to authenticate the individual. Major steps in a typical hand vascular pattern
recognition system are image acquisition, feature extraction, and pattern matching.
Fig: Operation of a typical vascular biometric identification system

5.2.4 Image Acquisition


Since the hand vascular pattern lies under the skin, it cannot be seen by the human eye.
Therefore, we cannot use visible light, which occupies a very narrow band (approx. 400 -
700nm wavelength), for photographing. Hand vascular patterns can only be captured under the
near-infrared light (approx. 800 - 1000nm wavelength), which can penetrate into the tissues.
Blood vessels absorb more infrared radiation than the surrounding tissue, which causes the
blood vessels to appear as black patterns in the resulting image captured by a charge-coupled
device (CCD) camera.

Fig: The hand image obtained by visible light (left) and infrared light (right)

To capture the image of blood vessels under near-infrared light, the scanner uses an LED array
to emit the light and illuminate the hand. A CCD camera sensitive to near-infrared light is used
to photograph the image. A near-infrared filter attached in front of the CCD camera is used to
block all undesired visible light emitted by external sources. The image of blood vessels can
be acquired by either reflection or transmission.

1. Transmission method: The hand is illuminated by an LED array and the CCD camera
captures the light that passes through the hand. To use this method, the LED array is above the
hand and the CCD camera is placed on the opposite side of the LED array with respect to the
hand.

2. Reflection method: Here the hand is illuminated by an LED array and the CCD camera
captures the light that is reflected back from the hand. So, the illumination LED array and the
CCD camera are positioned in the same location. The reflection method is preferred since the
transmission method is often sensitive to changes in the hand's light transmittance, which is
easily affected by temperature or weather. If the hand's light transmittance is relatively high,
the blood vessels are not very clear in captured images. In contrast, the light transmittance does
not significantly affect the level or contrast of the reflected light. Another reason why the
reflection method is preferred is due to its easy configuration. Since the illumination LED array
and the CCD camera can be located in the same place, the system is easy to embed into small
devices.

5.2.5 Feature Extraction


The hand vascular images captured from the acquisition devices contain not only the vascular
patterns but also undesired noise and irregular effects such as shadow of the hand and hairs on
the skin surface. The captured images should be pre-processed before being used for
verification. The aim of a feature extraction algorithm is to accurately extract the vascular
patterns from raw images. A typical feature extraction algorithm commonly consists of various
image processing steps to remove the noise and irregular effects, enhance the clarity of vascular
patterns, and separate the vascular patterns from the background. The final vascular patterns
obtained by the feature extraction algorithm are represented as binary images.
The noise removal algorithm is based on a low-pass filter. To improve the clarity of vascular
patterns in captured images, an enhancement algorithm is commonly used. A number of
algorithms based on filtering techniques have been proposed for enhancing the clarity of
vascular patterns in captured images. However, these algorithms often enhance the vascular
patterns without considering the directional information that is often present. As a result, there
could be some loss of connectivity in the vascular patterns, which leads to degradation of
verification performance. Consequently, one should use an appropriate filter that is adaptive to
vascular pattern orientations to efficiently remove undesired noise and preserve the true
vascular patterns.
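As an illustrative sketch only (not the specific algorithm referred to above), the following Python snippet shows what such a pre-processing pipeline might look like using OpenCV: low-pass noise removal, local contrast enhancement and adaptive thresholding to obtain a binary vascular pattern. The function name and all parameter values are assumptions chosen for illustration.

```python
import cv2

def extract_vascular_pattern(gray_image):
    """Return a binary vascular pattern from an 8-bit near-infrared hand image.

    A minimal illustrative pipeline: low-pass noise removal, local contrast
    enhancement and adaptive thresholding. Real systems use direction-adaptive
    filters to preserve vessel connectivity.
    """
    # 1. Low-pass (Gaussian) filtering to suppress sensor noise.
    smoothed = cv2.GaussianBlur(gray_image, (5, 5), 1.5)

    # 2. Local contrast enhancement (CLAHE) so dark vessels stand out.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(smoothed)

    # 3. Adaptive thresholding: vessels are darker than the surrounding tissue,
    #    so the inverted threshold marks them as foreground (255).
    binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 5)

    # 4. Morphological opening removes small speckles left by hair or shadows.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```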

Fig: Configuration of transmission-based acquisition method

Fig: Configuration of reflection-based acquisition method


Fig: The flow chart of a typical feature extraction algorithm

Fig: Flow chart of the direction-based vascular pattern extraction algorithm

5.6 Pattern Matching


In the matching step, the extracted vascular pattern from the feature extraction step is compared
against the pre-registered pattern in the database to obtain a matching score. The matching
score is then used to compare with the pre-defined system threshold value to decide whether
the user can be authenticated or not. Typical methods that are commonly used for pattern
matching are structural matching and template matching. Structural matching is based on
comparing locations of feature points such as line endings and bifurcations extracted from two
patterns being compared to obtain the matching score. This method has been used widely in
fingerprint matching. However, unlike the fingerprint patterns, the hand vascular patterns have
fewer minutiae-like feature points. Therefore, it is not appropriate to apply only this method
for good vascular pattern matching results.

Fig: Example of hand vascular pattern obtained by the direction-based vascular pattern extraction algorithm
Template matching is the most popular and widely used method for matching the vascular
patterns. It is based on the comparison of pixel values of two vascular pattern images and has
been commonly used for matching line-shaped patterns. Moreover, use of template matching
does not require any additional steps to calculate the feature points such as line endings and
bifurcations and is robust for vascular pattern matching.
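A minimal sketch of template matching for binary vascular patterns is given below; it compares pixel values over a small range of translations and returns an overlap score. The function name, shift range and the example threshold are assumptions, not values from the literature.

```python
import numpy as np

def template_match_score(probe, template, max_shift=8):
    """Compare two equally sized binary vein images (values 0/1).

    The probe is shifted over a small range of translations and the best
    pixel-overlap ratio is returned; higher means more similar.
    """
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(probe, dy, axis=0), dx, axis=1)
            overlap = np.logical_and(shifted == 1, template == 1).sum()
            union = np.logical_or(shifted == 1, template == 1).sum()
            if union:
                best = max(best, overlap / union)
    return best

# Usage: accept the claimed identity if the score exceeds a system threshold,
# e.g. template_match_score(probe_img, enrolled_img) > 0.4 (threshold assumed).
```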

6. Role of Palm Biometric in personal identification

There has been an ever-growing need to automatically authenticate individuals for various
applications, such as information confidentiality, homeland security, and computer security.
Traditional knowledge-based or token-based personal identification or verification is
unreliable, inconvenient, and inefficient. Knowledge-based approaches use "something that you
know" to make a personal identification, such as a password or personal identity number.
Token-based approaches use "something that you have" to make a personal identification, such
as a passport or ID card. Since those approaches are not based on any inherent attributes of an
individual to make the identification, they cannot differentiate between an authorized person
and an impostor who fraudulently acquires the "token" or "knowledge" of the authorized
person. This is why biometric systems have become prevalent in recent years. Biometrics
involves identifying an individual based on his/her physiological or behavioral characteristics.
Many parts of our body and various behaviours are embedded with information for personal
identification. In fact, using biometrics for person authentication is not new, it has been
implemented for thousands of years. Numerous research efforts have been aimed at this subject
resulting in the development of various techniques related to signal acquisition, feature
extraction, matching and classification. Most importantly, various biometric systems including
fingerprint, iris, hand geometry, voice and face recognition systems have been deployed for
various applications. According to the International Biometric Group (IBG, New York), the
market for biometric technologies was projected to nearly double in size within a single year. Among all
biometrics, hand-based biometrics, including hand geometry and fingerprint, have been the most
popular, holding a 60% market share in 2003.
The palmprint system is a hand-based biometric technology. Palmprint is concerned with the
inner surface of a hand. A palm is covered with the same kind of skin as the fingertips and it is
larger than a fingertip in size. Many features of a palmprint can be used to uniquely identify a
person, including
(a) Geometry Features: according to the palm's shape, the corresponding geometry features, such as width, length and area, can easily be obtained.
(b) Principal Line Features: both the location and form of the principal lines in a palmprint are very important physiological characteristics for identifying individuals because they vary little over time.
(c) Wrinkle Features: a palmprint contains many wrinkles, which differ from the principal lines in that they are thinner and more irregular.
(d) Delta Point Features: the delta point is defined as the center of a delta-like region in the palmprint; delta points are usually located in the finger-root region.
(e) Minutiae Features: a palmprint is basically composed of ridges, allowing the minutiae features to be used as another significant measurement.
It is quite natural to think of using the palmprint to recognize a person, similar to fingerprint, hand geometry and hand vein recognition.

Fig: Different features on a palm


Some companies, including NEC and PRINTRAK, have developed several palmprint systems
for criminal applications. On the basis of fingerprint technology, their systems exploit high
resolution palmprint images to extract detailed features like minutiae for matching the latent
prints. Such an approach is not suitable for developing a palmprint authentication system for
civil applications, which requires a fast, accurate and reliable method for personal
identification.

6.1 System Framework


The palmprint authentication system generally has four major components: User Interface
Module, Acquisition Module, Recognition Module and External Module. The functions of each
component are listed below:
1. User Interface Module provides an interface between the system and users for the smooth
authentication operation. It is crucial to develop a good user interface such that users are happy
to use the device.
2. Acquisition Module is the channel for the palmprints to be acquired for the further
processing.
3. Recognition Module is the key part of our system, which will determine whether a user is
authenticated. It consists of image pre-processing, feature extraction, template creation,
database updating, and matching.
4. External Module receives the signal from the recognition module, to allow some operations
to be performed or deny the operations requested. This module is actually an interfacing
component, which may be connected to other hardware or software components.

6.2 Recognition Engine


After the palmprint images are captured by the Acquisition Module, they are fed into the
recognition engine for palmprint authentication. The recognition engine is the key part of the
palmprint authentication system, consisting of: image pre-processing, feature extraction, and
matching.

6.2.1 Image Pre-processing


When capturing a palmprint, the position, direction and stretching degree may vary from time
to time. As a result, even palmprints from the same palm may show slight rotation and
translation. Also, the sizes of palms differ from one another, so the pre-processing
algorithm is used to align different palmprints and extract the corresponding central part for
feature extraction. In our palmprint system, both rotation and translation are constrained to
some extent by the capture device panel, which positions the palms with several pegs.
Fig: The breakdown of each module of the palmprint authentication system
Fig: (a) The palmprint authentication system installed at BRC. (b) The interface of the
palmprint acquisition device: 1 - keypad, 2 - LCD display, 3 - flat surface for palm
placement and positioning

Fig: Localizing the salient region of the palm. H1 and H2 are the boundary of the gaps
between the two fingers, and T1 and T2 are the tangent points of H1 and H2, respectively.
The central part is extracted at a desired distance from line joining T1 and T2
symmetrically positioned about a perpendicular line passing through the mid-point of T1
and T2
6.2.2 Feature extraction
A single circular zero DC (direct current) Gabor filter is applied to the preprocessed palmprint
images and the phase information is coded as a feature vector called PalmCode.
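The following Python/OpenCV sketch illustrates the idea of phase coding with a single zero-DC Gabor filter, in the spirit of PalmCode; the filter parameters are illustrative assumptions rather than the published values.

```python
import cv2
import numpy as np

def palm_code(roi, ksize=35, sigma=5.6, wavelength=11.0, theta=np.pi / 4):
    """Encode the Gabor phase of a preprocessed palm ROI as two bit planes.

    A single zero-DC Gabor filter (real and imaginary parts) is applied and
    the sign of each response is kept. All parameter values are illustrative.
    """
    roi = roi.astype(np.float32)
    # Real (even) and imaginary (odd) Gabor kernels; gamma=1.0 keeps them circular.
    g_real = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, 1.0, 0)
    g_imag = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, 1.0,
                                np.pi / 2)
    # Remove the DC component so the code is insensitive to overall brightness.
    g_real -= g_real.mean()
    g_imag -= g_imag.mean()

    resp_real = cv2.filter2D(roi, cv2.CV_32F, g_real)
    resp_imag = cv2.filter2D(roi, cv2.CV_32F, g_imag)

    # Two bits per pixel: the quadrant of the filter-response phase.
    return (resp_real >= 0).astype(np.uint8), (resp_imag >= 0).astype(np.uint8)
```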
6.2.3 Feature Matching
Feature matching determines the degree of similarity between the identification template and
the master template. In this work, the normalized Hamming distance is implemented for
comparing two binary feature codes (such as PalmCodes).
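A minimal sketch of the normalized Hamming distance between two binary feature codes is shown below; the optional validity mask and the example threshold are assumptions for illustration.

```python
import numpy as np

def normalized_hamming(code_a, code_b, mask=None):
    """Fraction of disagreeing bits between two binary codes (0 = identical).

    An optional boolean mask marks positions that are valid in both codes
    (e.g., pixels inside the palm region); only those positions are compared.
    """
    diff = code_a != code_b
    if mask is not None:
        return float(diff[mask].sum()) / float(mask.sum())
    return float(diff.sum()) / diff.size

# A claimed identity would be accepted when the distance falls below a system
# threshold, e.g. normalized_hamming(probe_code, master_code) < 0.35 (assumed).
```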
6.3 Robustness
As a practical biometric system, in addition to accuracy and speed, robustness of the system is
important. Here, we present three experiments to illustrate the robustness of our system. The
first tests the effect of jewellery, such as rings, on the accuracy of the preprocessing
algorithm. The second tests the effect of noise on the palmprints, which directly affects the performance
of the system. The third experiment tests the ability of the system to distinguish the palmprints of
identical twins.
A test of identical twins is regarded as an important test for biometric authentication that not
all biometrics, including face and DNA, can pass. However, the palmprints of identical twins
have enough distinctive information to distinguish them.

Fig: Identical twins palmprints. (a), (b) are their left hands, and (c), (d) are their right
hands, respectively
7. Role of Facial Biometric in personal identification

A facial recognition system is a technology capable of identifying or verifying a person from
a digital image or a video frame from a video source. There are multiple methods by which
facial recognition systems work, but in general they work by comparing selected facial
features from a given image with faces within a database. It is also described as a Biometric
Artificial Intelligence based application that can uniquely identify a person by analysing patterns
based on the person's facial textures and shape.

Robust face recognition systems are in great demand to help fight crime and terrorism. Other
applications include providing user authentication for access control to physical and virtual
spaces to ensure higher security. However, the problem of identifying a person by taking an
input face image and matching with the known face images in a database is still a very
challenging problem. This is due to the variability of human faces under different operational
scenario conditions such as illumination, rotations, expressions, camera viewpoints, aging,
makeup, and eyeglasses. Often, these various conditions greatly affect the performance of face
recognition systems especially when the systems need to match against large scale databases.
This low performance on face recognition prevents systems from being widely deployed in real
applications (although many systems have been deployed, their use and accuracy are limited to
particular operational scenarios) where errors like the false acceptance rate (FAR) and the false
rejection rate (FRR) are considered in advance. FAR is the probability that the systems
incorrectly accept an unauthorized person, while FRR is the probability that the systems
wrongly reject an authorized person.
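To make FAR and FRR concrete, the short sketch below estimates both error rates at a given decision threshold from sets of genuine and impostor similarity scores; the variable names and the assumption that higher scores mean greater similarity are illustrative.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Estimate FAR and FRR at a decision threshold.

    Scores are similarities (higher = more alike). FAR is the fraction of
    impostor comparisons wrongly accepted; FRR is the fraction of genuine
    comparisons wrongly rejected.
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.mean(impostor >= threshold)
    frr = np.mean(genuine < threshold)
    return far, frr

# Sweeping the threshold trades FAR against FRR; the point where the two
# error curves cross is the equal error rate (EER) often quoted for such systems.
```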
Recently, 3D face recognition has gained attention in the face recognition community due to
its inherent capability to overcome some of the traditional problems of 2D imagery such as
pose and lighting variation. Commercial 3D acquisition devices can obtain a depth map (3D-
shape) of the face. These usually require the user to be in very close proximity to the camera;
additionally, some devices will require the user to be still for several seconds for a good 3D
model acquisition. In contrast, 2D face acquisition can work from a distance and does not require
significant user co-operation; this is the trade-off involved in working with 3D shape data.
Studies on the fusion of visual and thermal face recognition report that multi-modal
face recognition systems lead to improved performance over single-modality systems.
Fig: Simple Process of Face recognition system
The technology behind face recognition systems can vary, but the steps remain broadly similar
across conditions:
Step 1. A picture of your face is captured from a photo or video. Your face might appear alone
or in a crowd. Your image may show you looking straight ahead or nearly in profile.
Step 2. Facial recognition software reads the geometry of your face. Key factors include the
distance between your eyes and the distance from forehead to chin. The software identifies facial
landmarks — one system identifies 68 of them — that are key to distinguishing your face. The
result: your facial signature.
Step 3. Your facial signature — a mathematical formula — is compared to a database of known
faces. It has been reported that at least 117 million Americans have images of their faces in one or
more police databases.
Step 4. A determination is made. Your faceprint may match that of an image in a facial
recognition system database.
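A toy sketch of Steps 3 and 4 is given below: a facial signature (here simply a numeric feature vector) is compared against a database of known signatures using Euclidean distance. The signature representation, threshold and function names are placeholders, not any particular vendor's method.

```python
import numpy as np

def identify(probe_signature, database, threshold=0.6):
    """Match a facial signature against a database of known signatures.

    `database` maps a person's name to a stored feature vector of the same
    length as `probe_signature`. Returns the best match, or None if no stored
    face is close enough (threshold is an assumed value).
    """
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = np.linalg.norm(np.asarray(probe_signature) - np.asarray(stored))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```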
7.1 Face Recognition Techniques
Face recognition algorithms can be classified into two broad categories according to feature
extraction schemes for face representation: feature-based methods and appearance-based
methods. In feature-based methods, properties and geometric relations such as the areas, distances, and
angles between the facial feature points are used as descriptors for face recognition. On the other hand,
appearance-based methods consider the global properties of the face image intensity pattern.
Typically, appearance-based face recognition algorithms proceed by computing basis vectors
to represent the face data efficiently. In the next step, the faces are projected onto these vectors
and the projection coefficients can be used for representing the face images.
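A minimal appearance-based example, assuming an eigenface-style approach with scikit-learn's PCA, is sketched below: basis vectors are computed from vectorized training faces and matching is done on the projection coefficients. Array shapes and the number of components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_eigenface_model(train_faces, n_components=50):
    """train_faces: (n_samples, height*width) array of vectorized face images."""
    pca = PCA(n_components=n_components, whiten=True)
    pca.fit(train_faces)            # basis vectors are stored in pca.components_
    return pca

def match(pca, probe_face, gallery_faces, gallery_ids):
    """Nearest-neighbour matching in the eigenface subspace."""
    probe = pca.transform(probe_face.reshape(1, -1))
    gallery = pca.transform(gallery_faces)
    dists = np.linalg.norm(gallery - probe, axis=1)
    return gallery_ids[int(np.argmin(dists))]
```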

7.2 Databases
There are several publicly available face databases for the research community to use for
algorithm development, which provide a standard benchmark when reporting results. Different
databases are collected to address a different type of challenge or variations such as
illumination, pose, occlusion, etc.

7.2.1 Face Recognition Grand Challenge (FRGC) database


The Face Recognition Grand Challenge (FRGC) conducted by the NIST is aimed at an
objective and systematic evaluation of face recognition algorithms under different challenging
conditions. Simultaneously, the aim for the FRGC is to push researchers to develop the next
generation face recognition algorithms that can reduce the error rate in face recognition systems
by an order of magnitude over the Face Recognition Vendor Test (FRVT) 2002 results. The
FRGC data is partitioned into three datasets: a generic training set which one can use to train
the face recognition system, the target set, and the probe set, captured under un-controlled
conditions. The FRGC generic training set contains 12,776 images (from 222 subjects) taken
under controlled and uncontrolled illuminations. The gallery set contains 16,028 images (from
466 subjects, with some overlap with subjects in the generic training set) under controlled
illumination while the probe set contains 8,014 images (from 466 subjects) under uncontrolled
illumination.

Fig: Sample images of the FRGC database


7.2.2 FERET database
Prior to the FRGC, the NIST organized the FERET database and evaluation protocol to
facilitate the development of commercial face recognition systems. The FERET database is
designed to measure the performance of face recognition algorithms on a large database in
practical settings. The FERET program provides a large database of facial images taken from
1,199 individuals and collected between August 1993 and July 1996 to support algorithm
development and evaluation. The FERET database consists of 14,126 images of 1,564 sets
(1,199 original sets and 365 duplicate sets). For development purposes, 503 sets of images were
released to the researchers, and the remaining sets were sequestered for independent evaluation.

7.2.3 AR database
The AR face database was created by the Computer Vision Center (CVC) at the Universitat
Autònoma de Barcelona. It contains over 4,000 color images corresponding to 126 people's
faces (70 men and 56 women). The images acquired are frontal view pose with different facial
expressions, illumination conditions, and occlusions (such as people wearing sun glasses and
a scarf) making this database one of the more popular ones for testing face recognition
algorithms in the presence of occlusion. No restrictions on wear (clothes, glasses, etc.), make-
up, hair style, etc. were imposed on the participants. Each person participated in two sessions,
two weeks apart.

7.2.4 Yale Face database


The Yale database contains 165 grayscale images in GIF format of 15 individuals. There are
11 images per subject, one per condition: center-light, with glasses, happy, left-light, without
glasses, normal, right-light, sad, sleepy, surprised, and wink. The Yale Face Database was
extended to the Yale Face Database B, which
contains 5760 single light source images of 10 subjects each seen under 576 viewing conditions
(9 poses x 64 illumination conditions). For every subject in a particular pose, an image with
ambient (background) illumination was also captured.

7.3 Applications

Nowadays, industry integrates cutting-edge face recognition research into the development of
the latest technologies for commercial applications.
7.3.1 Security

Face recognition is one of the most powerful processes in biometric systems and is extensively
used for security purposes in tracking and surveillance, attendance monitoring, passenger
management at airports, passport de‐duplication, border control and high security access
control as developed by companies like Aurora.

AFR (Automated Face Recognition) is applied in forensics for face identification, face retrieval
in still image databases or CCTV sequences, or for facial sketch recognition. It could also help
law enforcement through behaviour and facial expression observation, lie detection, lip
tracking and reading.

Moreover, AFR is now used in the context of ‘Biometrics as a Service’, within cloud‐based,
online technologies requiring face authentication for trustworthy transactions. For
example, MasterCard developed an app which uses selfies to secure payments via mobile
phones. In this MasterCard app, AFR is enhanced by facial expression recognition, as the
application requires the consumer to blink to prove that s/he is human.

7.3.2 Multimedia

Today, AFR engines are embedded in a number of multi-modal applications such
as aids for buying glasses or for digital make‐up and other face sculpting or skin smoothing
technologies, e.g., designed by Anthropics.

In social media, many collaborative applications within Facebook, Google or Yahoo! are
calling upon AFR. Applications such as Snapchat require AFR on mobile. With 200 million
users, half of whom engage on a daily basis, Snapchat is a popular image messaging and
multimedia mobile application, where ‘snaps’, i.e., a photo or a short video, can be edited to
include filters and effects, text caption and drawings. Snapchat has features such as the ‘Lens’,
which allows users to add real‐time effects into their snaps by using AFR technologies, and
‘Memories’ which searches content by date or using local recognition systems.

Other multimedia applications are using AFR, e.g., in face naming to generate automated
headlines in Video Google, in face expression tracking for animations and human‐computer
interfaces (HCI), or in face animation for socially aware robotics. Companies such as Double
Negative Visual Effects or Disney Research also propose AFR solutions for face synthesis and
face morphing for film and game visual effects.
8. Role of Voice Biometrics in personal identification

Recent data on mobile phone users all over the world, the number of telephone landlines in
operation, and recent VoIP (Voice over IP networks) deployments, confirm that voice is the
most accessible biometric trait as no extra acquisition device or transmission system is needed.
This fact gives voice an overwhelming advantage over other biometric traits, especially when
remote users or systems are taken into account. However, the voice trait is not only related with
personal characteristics, but also with many environmental and sociolinguistic variables, as
voice generation is the result of an extremely complex process.
Thus, the transmitted voice will embed a degraded version of speaker specificities and will be
influenced by many contextual variables that are difficult to deal with. Fortunately, state-of-
the-art technologies and applications are presently able to compensate for all those sources of
variability, allowing for efficient and reliable value-added applications that enable remote
authentication or voice detection based just on telephone-transmitted voice signals.

8.1 Applications
Due to the pervasiveness of voice signals, the range of possible applications of voice biometrics
is wider than for other usual biometric traits. We can distinguish three major types of
applications which take advantage of the biometric information present in the speech signal:

1. Voice authentication (access control, typically remote by phone) and background
recognition (natural voice checking).
2. Speaker detection (e.g., blacklisting detection in call centres or wiretapping and
surveillance), also known as speaker spotting.
3. Forensic speaker recognition (use of the voice as evidence in courts of law or as intelligence
in police investigations).

8.2 Technology
The main source of information encoded in the voice signal is undoubtedly the linguistic
content. For that reason, it is not surprising that depending on how the linguistic content is used
or controlled, we can distinguish two very different types of speaker recognition technologies
with different potential applications.
Firstly, text-dependent technologies, where the user is required to utter a specific key-phrase
(e.g., "Open, Sesame") or sequence, have been the major subject of biometric access control and
voice authentication applications. The security level of password-based systems can then be
enhanced by requiring knowledge of the password, and also requiring the true owner of the
password to utter it. In order to guard against possible theft of recordings of true passwords, text-
dependent systems can be enhanced to ask for random prompts, unexpected to the caller, which
cannot be easily fabricated by an impostor.
The second type of speaker recognition technologies are those known as text-independent.
They are the driving factor of the remaining two types of applications, namely speaker
detection and forensic speaker recognition. Since the linguistic content is the main source of
information encoded in the speech, text-independence has been a major challenge and the main
subject of research in the speaker recognition community over the last two decades. The NIST
SRE (Speaker Recognition Evaluations), conducted yearly since 1996, have fostered excellence
in research in this area, with extraordinary progress obtained year by year based on blind
evaluation with common databases and protocols, and especially the sharing of information
among participants in the follow-up workshop after each evaluation.

8.2.1 Identity information in the speech signal


This section deals with how the speaker specificities are embedded into the speech signal.
Speech production is an extremely complex process whose result depends on many variables
at different levels, including from sociolinguistic factors (e.g., level of education, linguistic
context and dialectal differences) to physiological issues (e.g., vocal tract length, shape and
tissues and the dynamic configuration of the articulatory organs). These multiple influences
will be simultaneously present in each speech act, and some or all of them will contain
specificities of the speaker. For that reason, we need to clarify and clearly distinguish the
different levels and sources of speaker information that we should be able to extract in order to
model speaker individualities.

8.2.1.1 Language generation and speech production


The process by which humans are able to construct a language-coded message has been the
subject of study for years in the area of psycholinguistics. But once the message has been coded
in the human brain, a complex physiological and articulatory process is still needed to finally
produce a speech waveform (the voice) that contains the linguistic message (as well as many
other sources of information, one of which is the speaker identity) encoded as a combination
of temporal-spectral characteristics. This process is the subject of study of phoneticians and
some other speech analysis related areas (engineers, physicians, etc.). Details on language
generation and speech production can be found in [50], [27], [41]. In both stages of voice
production (language generation and speech production), speaker specificities are introduced.
In the field of voice biometrics, also known as speaker recognition, these two components
correspond to what are usually known as high-level (linguistic) and low-level (acoustic)
characteristics.

8.2.1.2 Multiple information levels


Experiments with human listeners have shown, as our own experience tells us, that humans
recognize speakers by a combination of different information levels and, what is especially
important, with different weights for different speakers (e.g., one speaker can show very
characteristic pitch contours, and another one can have a strong nasalization which makes them
"sound" different). Automatic systems intend to take advantage of the different sources of
information available, combining them in the best possible way for every speaker.
Idiolectal characteristics of a speaker are at the highest level usually taken into account
by the technology to date, and describe how a speaker uses a specific linguistic system. This
"use" is determined by a multitude of factors, some of them quite stable in adults, such as level
of education, sociological and family conditions and town of origin. But there are also some
high-level factors which are highly dependent on the environment: e.g., a male doctor does
not use language in the same way when talking with his colleagues at the hospital (sociolects),
with his family at home, or with his friends playing cards. As a second major group of
characteristics, going down towards lower information levels in the speech signal, we find
phonotactics, which describe the use by each speaker of the phone units and possible
realizations available. Phonotactics are essential for the correct use of a language, and key to
foreign language learning, but when we look into phonotactic speaker specificities we can find
certain usage patterns distinctive from other users.
In a third group we find prosody, which is the combination of instantaneous energy, intonation,
speech rate and unit durations that provides speech with naturalness, full sense, and emotional
tone. Prosody determines prosodic objectives at the phrase and discourse level, and defines
instantaneous actions to comply with those objectives. It helps to clarify the message, the type
of message (declarative, interrogative, imperative), or the state of mind of the speaker. But in
the way each speaker uses the different prosodic elements, many speaker specificities are
included, such as, for example, characteristic pitch contours in start and end of phrase or accent
group.
Finally, at the lower level, we find the short-term spectral characteristics of the speech signals,
directly related to the individual articulatory actions related with each phone being produced
and also to the individual physiological configuration of the speech production apparatus. This
spectral information has been the main source of individuality in speech used in actual
applications, and the main focus of research for almost twenty years. Spectral information
aims to capture the peculiarities of speakers' vocal tracts and their respective articulation
dynamics. Two types of low-level information have been typically used: static information
related to each analysis frame and dynamic information related to how this information evolves
in adjacent frames, taking into account the strongly speaker-dependent phenomenon of co-
articulation, the process by which an individual dynamically moves from one articulation
position to the next one.

8.2.1.3 Feature Extraction and Tokenization


The first step in the construction of automatic speaker recognition systems is the reliable
extraction of features and tokens that contain identifying information of interest. This section
briefly shows the procedures used to extract both short-term feature vectors (spectral
information, energy, pitch) and mid-term and long-term tokens as phones, syllables and words.

8.2.1.4 Short-term analysis


In order to perform reliable spectral analysis, signals must show stationary properties that are
not easy to observe in constantly-changing speech signals. However, if we restrict our analysis
window to short lengths between 20 and 40 ms, our articulatory system is not able to
change significantly in such a short time frame, yielding what are usually called pseudo-
stationary signals per frame. Those windowed signals can be assumed, due to pseudo-
stationarity, to come from a specific LTI (linear time-invariant) system for that frame, and then
we can perform, usually after applying some kind of cosine-like window such as the Hamming window,
spectral analysis over this short-term window, obtaining spectral envelopes that change frame by frame.
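A short NumPy sketch of this short-term analysis is given below: the signal is split into 25 ms frames with a 10 ms hop, each frame is Hamming-windowed, and a magnitude spectrum is computed per frame; the frame and hop lengths are typical values assumed for illustration.

```python
import numpy as np

def short_term_spectra(signal, sample_rate, frame_ms=25, hop_ms=10):
    """Split a speech signal into pseudo-stationary frames and return the
    per-frame magnitude spectrum (one spectral envelope estimate per frame)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    window = np.hamming(frame_len)

    spectra = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len] * window
        spectra.append(np.abs(np.fft.rfft(frame)))
    return np.array(spectra)   # shape: (n_frames, frame_len // 2 + 1)
```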

8.2.1.5 Parameterization
These short-time Hamming-windowed signals have all of the desired temporal/spectral
information, albeit at a high bit rate (e.g., telephone speech digitized with a sampling frequency of
8 kHz in a 32 ms window means 256 samples x 16 bits/sample = 4096 bits = 512 bytes per
frame). Linear Predictive Coding (LPC) of speech has proved to be a valid way to compress
the spectral envelope in an all-pole model (valid for all non-nasal sounds, and still a good
approximation for nasal sounds) with just 10 to 16 coefficients, which means that the spectral
information in a frame can be represented in about 50 bytes, which is 10% of the original bit
rate. Instead of LPC coefficients, highly correlated among them (covariance matrix far from
diagonal), pseudo-orthogonal cepstral coefficients are usually used, either directly derived as
in LPCC (LPC-derived Cepstral vectors) from LPC coefficients, or directly obtained from a
perceptually-based mel-filter spectral analysis as in MFCC (Mel-Frequency based Cepstral
Coefficients). Some other related forms are described in the literature, such as PLP (Perceptually
based Linear Prediction), LSF (Line Spectral Frequencies) and many others, not detailed here
for simplicity. By far, one of the main factors of speech variability comes from the use of
different transmission channels (e.g., testing telephone speech with microphone-recorded
speaker models). Cepstral representation has also the advantage that invariant channels add a
constant cepstral offset that can be easily subtracted (CMS-Cepstral Mean Subtraction), and
non-speech cepstral components can also be eliminated as done in RASTA filtering of cepstral
instantaneous vectors. In order to take coarticulation into account, delta (velocity) and delta-
delta (acceleration) coefficients are obtained from the static window-based information,
computing an estimate of how each frame coefficient varies across adjacent windows (typically
over a span of ±3 frames, no more than ±5).
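Assuming the librosa library, the sketch below illustrates this parameterization: MFCC extraction, cepstral mean subtraction and delta / delta-delta coefficients; frame settings are the library defaults rather than the exact figures quoted above.

```python
import numpy as np
import librosa

def mfcc_features(wav_path, n_mfcc=13):
    """MFCCs with cepstral mean subtraction and delta / delta-delta appended."""
    y, sr = librosa.load(wav_path, sr=8000)          # telephone-band speech
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

    # Cepstral Mean Subtraction: remove the constant offset introduced by an
    # (approximately) invariant transmission channel.
    mfcc = mfcc - mfcc.mean(axis=1, keepdims=True)

    # Velocity and acceleration coefficients capture co-articulation dynamics.
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2])          # (3 * n_mfcc, n_frames)
```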

8.2.1.6 Phonetic and word tokenization


Hidden Markov Models (HMM) are the most successful and widely used tool (with the
exception of some ANN architectures) for phonetic, syllable and word tokenization, that is, the
translation from sampled speech into a time-aligned sequence of linguistic units. Left-to-Right
HMMs are state machines which statistically model pseudo-stationary pieces of speech (states)
and the transitions (left-to-right forced, keeping a temporal sense) between states, trying to
imitate somehow the movements of our articulatory organs, which tend to rest (in all non-
plosive sounds) in articulatory positions (assumed as pseudo stationary states) and
continuously move (transition) from one state to the following. Presently, most HMMs model
the information in each state with continuous probability density functions, typically mixtures
of gaussians. This particular kind of models are usually known as CDHMM (Continuous
Density HMM, as opposite to the former VQ-based Discrete Density HMMs). HMM training
is usually done through Baum-Welch estimation, while decoding and time alignment is usually
performed through Viterbi decoding. The performance of those spectral-only HMMs is
improved by the use of language models, which impose some linguistic or grammatical
constraints on the almost infinite combination of all possible units. To allow for increased
efficiency, pruning of the beam search is also a generalized mechanism to significantly
accelerate the recognition process with no or little degradation on the performance.

8.2.1.7 Prosodic tokenization


Basic prosodic features as pitch and energy are also obtained at a frame level. The window
energy is very easily obtained through Parseval's theorem, either in temporal or spectral form,
and the instantaneous pitch can be determined by, e.g., autocorrelation or cepstral-
decomposition based methods, usually smoothed with some time filtering. Other important
prosodic features are those related with linguistic units’ duration, speech rate, and all those
related with accent. In all those cases, precise segmentation is required, marking the syllable
positions and the energy and pitch contours to detect accent positions and phrase or speech turn
markers. Phonetic and syllabic segmentation of speech is a complex issue that is far from solved
and although it can be useful for speaker recognition, prosodic systems do not always require
such a detailed segmentation.
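A deliberately simplified sketch of prosodic tokenization is shown below: pitch and energy contours are extracted (here with librosa) and the frame-to-frame slopes are quantized into a small symbol alphabet that can later be modelled with n-grams per speaker. The four-symbol alphabet and the pitch range are assumptions chosen for illustration.

```python
import librosa

def prosodic_tokens(y, sr):
    """Very simplified prosodic tokenizer: label each frame by the joint
    rising/falling behaviour of pitch and energy (a 4-symbol alphabet)."""
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    energy = librosa.feature.rms(y=y)[0]                 # energy contour
    n = min(len(f0), len(energy)) - 1                    # align contour lengths

    labels = {(True, True): "PU_EU", (True, False): "PU_ED",
              (False, True): "PD_EU", (False, False): "PD_ED"}
    tokens = []
    for i in range(n):
        pitch_up = f0[i + 1] >= f0[i]
        energy_up = energy[i + 1] >= energy[i]
        tokens.append(labels[(pitch_up, energy_up)])
    return tokens   # token sequences are then modelled per speaker (e.g., n-grams)
```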

8.3 Text-dependent speaker recognition


Speaker recognition systems can be classified into two broad subtypes: text- dependent and
text-independent. The former uses the lexical content of the speech for speaker recognition,
while the latter tries to minimize the influence of the lexical content, which is considered
unknown, for the recognition of the speaker. This distinction makes these two subtypes of
speaker recognition systems very different in terms both of techniques used and of potential
applications. This section is devoted to text-dependent speaker recognition systems, which find
their main application in interactive systems where collaboration from the users is required in
order to authenticate their identities. The typical example of these applications is voice
authentication over the telephone for interactive voice response systems that require some level
of security like banking applications or password reset. The use of a text-dependent speaker
recognition system requires, similarly to other biometric modalities, an enrollment phase in
which the user provides several templates to build a user model and a recognition phase in
which a new voice sample is matched against the user model.

8.4 Classification of systems and techniques


We can classify text-dependent speaker recognition systems from an application point of view
into two types: fixed-text and variable-text systems. In fixed-text systems, the lexical content
in the enrollment and the recognition samples is always the same. In variable-text systems, the
lexical content in the recognition sample is different in every access trial from the lexical
content of the enrollment samples. Variable-text systems are more flexible and more robust
against attacks that use recordings of a user or imitations produced after hearing the true speaker
utter the correct password. An interesting possibility is the generation of a randomly
generated password prompt that is different each time the user is verified (text-prompted
system), thus making it almost impossible to use a recording. With respect to the techniques
used for text-dependent speaker recognition, it has been demonstrated that information present
at different levels of the speech signal (glottal excitation, spectral and suprasegmental features)
can be used effectively to detect the user's identity. However, the most widely used information
is the spectral content of the speech signal, determined by the physical configuration and
dynamics of the vocal tract. This information is typically summarized as a temporal sequence
of MFCC vectors, each of which represents a window of 20-40ms of speech. In this way, the
problem of text-dependent speaker recognition is reduced to a problem of comparing a
sequence of MFCC vectors to a model of the user. For this comparison there are two methods
that have been widely used: template-based methods and statistical methods. In template-based
methods the model of the speaker consists of several sequences of vectors corresponding to the
enrollment utterances, and recognition is performed by comparing the verification utterance
against the enrollment utterances. This comparison is performed using Dynamic Time Warping
(DTW) as an effective way to compensate for time misalignments between the different
utterances. While these methods are still used, particularly for embedded systems with very
limited resources, statistical methods, and in particular Hidden Markov Models (HMMs), tend
to be used more often than template-based models. HMMs provide more flexibility, allow speech
units to be chosen, from sub-phoneme units to words, and enable the design of text-prompted
systems.
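A minimal sketch of the DTW comparison mentioned above is given below for two MFCC sequences; the Euclidean local cost and the length normalization are common choices assumed here for illustration.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping cost between two feature sequences
    (arrays of shape (n_frames, n_coeffs)), compensating for differences in
    speaking rate between enrollment and verification utterances."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = local + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)   # length-normalized alignment cost
```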

8.4.2 Databases and benchmarks


The first databases used for text-dependent speaker verification were databases not specifically
designed for this task like the TI-DIGITS and TIMIT databases. One of the first databases
specifically designed for text-dependent speaker recognition research is YOHO. It consists of
96 utterances for enrollment collected in 4 different sessions and 40 utterances for test collected
over 10 sessions for each of a total of 138 speakers. Each utterance consists of a set of
three two-digit numbers. This is probably the most widely used and well-known benchmark for
comparison and is frequently used to assess text-dependent systems. However, the YOHO
database has several limitations. For instance, it only contains speech recorded on a single
microphone in a quiet environment and was not designed to simulate informed forgeries (i.e.,
impostors uttering the password of a user). More recently the MIT Mobile Device Speaker
Verification Corpus has been designed to allow research on text-dependent speaker verification
on realistic noisy conditions, while the BIOSEC Baseline Corpus has been designed to simulate
informed forgeries (including also bilingual material and several biometric modalities besides
voice). One of the main difficulties of the comparison of different text-dependent speaker
verification systems is that these systems tend to be language dependent, and therefore many
researchers tend to present their results on their own custom databases, making direct
comparisons impossible. The comparison of different commercial systems is even more difficult.
Fortunately, a recent publication compares the technical performance of a few commercial
systems. However, as with other biometric modalities, technical performance is not the only
dimension to evaluate and other measures related to the usability of the systems should be
evaluated as well.

8.5 Text-independent speaker recognition


Text-independent speaker recognition was largely dominated, from the 1970s to the end of the
20th century, by short-term spectral-based systems. Since 2000, higher-level systems started to
be developed, with good results in the same highly challenging tasks (the NIST SR
evaluations). However, spectral systems have continued to outperform high-level systems
(NIST 2006 SRE was the latest benchmark at the time of writing), with the best detection
results due to recent advanced channel compensation mechanisms.

8.5.1 Short-term spectral systems


When short-time spectral analysis is used to model the speaker specificities, we are modelling
the different “sounds" a person can produce, especially due to his/her own vocal tract and
articulatory organs. As humans need multiple sounds (or acoustically different symbols) to
speak in any common language, we are clearly facing a multiclass space of characteristics.
Vector Quantization techniques are efficient in such multiclass problems, and have been used
for speaker identification, typically obtaining a specific VQ model per speaker, and computing
the distance from any utterance to any model as the weighted sum of the minimum per frame
distances to the closest code vector of the codebook. The use of boundaries and centroids
instead of probability densities yields poorer performance for VQ than for fully-connected
Continuous Density HMMs, known as ergodic HMMs (E-HMM). However, the critical
performance factor in E-HMM is the product number of states times number of Gaussians per
state, which strongly cancels the influence of transitions in those fully-connected models. Then,
a 5-state 4-Gaussian per state E-HMM system will perform similarly than a 4-state 5-
Gaussian/state, a 2- state 10-Gaussian/state, or even, what is especially interesting, a 1-state 20
Gaussian/state system, which is generally known as GMM or Gaussian Mixture Model. Those
one-state E HMMs, or GMMs, have the large advantage that avoids both Baum-Welch
estimation for training, as no alignment between speech and states is necessary (all speech is
aligned with the same single state), and Viterbi decoding for testing (again no need for time
alignment), which accelerates computation times with no degradation of performance. GMM
is a generative technique where a mixture of multidimensional gaussians tries to model the
underlying unknown statistical distribution of the speaker data. GMM became the state-of-the-
art technique in the 1990's, whether maximum likelihood (through Expectation-
Maximization, EM) or discriminative training (Maximum Mutual Information, MMI) was
used. However, it was the use of MAP adaptation of the means from a Universal Background
Model (UBM) which gave GMMs a major advantage over other techniques, especially when
used with compensation techniques such as Z-norm (impostor score normalization), T-norm
(utterance compensation), H-norm (handset-dependent Z-norm), HT-norm (H+T-norm) or
Feature Mapping (channel identification and compensation). Discriminative techniques such
as Artificial Neural Networks have been used for years, but their performance never
approached that of GMMs.
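A simplified GMM-based verification score can be sketched with scikit-learn as below: a UBM is fitted on background speech features, a speaker model on enrollment features, and a test utterance is scored by the average log-likelihood ratio. For brevity the sketch trains the speaker GMM directly instead of MAP-adapting the UBM means, which is what the systems described above actually do.

```python
from sklearn.mixture import GaussianMixture

def train_models(background_feats, speaker_feats, n_components=64):
    """background_feats / speaker_feats: (n_frames, n_coeffs) MFCC arrays."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(background_feats)
    # Simplification: the speaker GMM is trained directly rather than obtained
    # by MAP adaptation of the UBM means, as state-of-the-art GMM-UBM systems do.
    spk = GaussianMixture(n_components=n_components, covariance_type="diag")
    spk.fit(speaker_feats)
    return ubm, spk

def llr_score(test_feats, spk, ubm):
    """Average per-frame log-likelihood ratio; accept if above a threshold."""
    return spk.score(test_feats) - ubm.score(test_feats)
```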
However, the availability in the late 90's of Support Vector Machines (SVM) as an efficient
discriminatively trained classifier has given GMM its major competitor, as equivalent
performance is obtained using SVM in a much higher dimensional space when appropriate
kernels such as the GLDS (Generalized Linear Discriminant Sequence) kernel are used. A recent development is
the use of SuperVectors, a mixed GMM-SVM technique that considers the means of the GMM
for every utterance (both in training and testing) as points in a very high dimensional space
(whose dimension equals the number of mixtures of the GMM times the dimension of the
parameterized vectors), using an SVM per speaker to classify unknown utterances with respect to the
trained speaker hyperplane.
A main advantage of SuperVectors is that they fit perfectly into new channel compensation
methods based on detecting those directions with maximum variability between different
recordings from the same speaker, trying to cancel or minimize their effect. Several related
techniques of this family have emerged, as Factor Analysis (channel and speaker factors),
Nuisance Attribute Projection (NAP) or Within Class Covariance Normalization (WCCN), all
of them showing significant enhancements over their respective baseline systems.
8.5.2 Idiolectal systems
Most text-independent speaker recognition systems were based on short-term spectral features
until the work of Doddington opened a new world of possibilities for improving text-
independent speaker recognition systems. Doddington realized and proved that speech from
different speakers differ not only on the acoustics, but also on other characteristics like the
word usage. In particular, in his work he modelled the word usage of each particular speaker
using an n-gram that modelled word sequences and their probabilities and demonstrated that
using those models could improve the performance of a baseline acoustic/spectral GMM
system. More important than this particular result is the fact that this work boosted research in
the use of higher levels of information (idiolectal, phonotactic, prosodic, etc.) for text-
independent speaker recognition. After the publication of this work a number of researchers
met at the summer workshop SuperSID where these ideas were further developed and tested
on a common testbed.

8.5.3 Phonotactic systems


A typical phonotactic speaker recognition system consists of two main building blocks: the
phonetic decoders, which transform speech into a sequence of phonetic labels and the n-gram
statistical language modelling stage, which models the frequencies of phones and phone
sequences for each particular speaker. The phonetic decoders, typically based on Hidden
Markov Models (HMMs), can either be taken from a pre-existing speech recognizer or trained
ad hoc. For the purpose of speaker recognition, it is not very important to have very accurate
phonetic decoders, and it is not even important to have a phonetic decoder in the language of
the speakers to be recognized. This somewhat surprising fact has been analyzed in the literature,
showing that the phonetic errors made by the decoder seem to be speaker specific, and
therefore useful information for speaker recognition as long as these errors are consistent for
each particular speaker. Once a phonetic decoder is available, the phonetic decoding of many
sentences from many different speakers can be used to train a Universal Background Phone
Model (UBPM) representing all the possible speakers. Speaker Phone Models (SPMi) are
trained using several phonetic decodings of each particular speaker. Since the speech available
to train a speaker model is often limited, speaker models are interpolated with the UBPM to
increase robustness in parameter estimation. Once the statistical language models are trained,
the procedure to verify a test utterance against a speaker model SPMi is as follows. The first step is to
produce its phonetic decoding, X, in the same way as the decoding used to train SPMi and
UBPM. Then, the phonetic decoding of the test utterance, X, and the statistical models (SPMi,
UBPM) are used to compute the likelihoods of the phonetic decoding, X, given the speaker
model SPMi and the background model UBPM. The recognition score is the log of the ratio of
both likelihoods. This process, which is usually described as Phone Recognition followed by
Language Modelling (PRLM) may be repeated for different phonetic decoders (e.g., different
languages or complexities) and the different recognition scores simply added or fused for better
performance, yielding a method known as Parallel PRLM or PPRLM. Recently, several
improvements have been proposed on the baseline PPRLM systems. One of the most important
in terms of performance improvement is the use of the whole phone recognition lattice instead
of the one-best decoding hypothesis. The recognition lattice is a directed acyclic graph
containing the most likely hypotheses along with their probabilities. This much richer
information allows for a better estimation of the n-grams on limited speech materials, and
therefore for much better results. Another important improvement is the use of SVMs for
classifying the whole n-grams, trained with either the one-best hypotheses or with lattices,
instead of using them in a statistical classification framework.
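A toy sketch of the PRLM scoring step is given below: phone-bigram models are estimated for the speaker and the background, the speaker model is interpolated with the UBPM, and the score is the average log-likelihood ratio over the test decoding. The add-one smoothing scheme and the interpolation weight are placeholders.

```python
import math
from collections import Counter

def bigram_model(phone_sequences, vocab):
    """Maximum-likelihood phone-bigram probabilities with add-one smoothing."""
    pair_counts, ctx_counts = Counter(), Counter()
    for seq in phone_sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            ctx_counts[a] += 1
    V = len(vocab)
    return lambda a, b: (pair_counts[(a, b)] + 1) / (ctx_counts[a] + V)

def prlm_score(test_decoding, spm, ubpm, alpha=0.5):
    """log P(X | interpolated SPM) - log P(X | UBPM), averaged over bigrams."""
    score = 0.0
    for a, b in zip(test_decoding, test_decoding[1:]):
        p_spk = alpha * spm(a, b) + (1 - alpha) * ubpm(a, b)   # interpolation
        score += math.log(p_spk) - math.log(ubpm(a, b))
    return score / max(1, len(test_decoding) - 1)
```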

8.5.4 Prosodic systems


One of the pioneering and most successful prosodic systems in text-independent speaker
recognition is the work of Adami. The system consists of two main building blocks: the
prosodic tokenizer, which analyzes the prosody, and represents it as a sequence of prosodic
labels or tokens and the n-gram statistical language modelling stage, which models the
frequencies of prosodic tokens and their sequences for each particular speaker. Some other
possibilities for modelling the prosodic information that have also proved to be quite successful
are the use of Non-uniform Extraction Region Features (NERFs) delimited by long-enough
pauses or NERFs defined by the syllabic structure of the sentence (SNERFs).
The authors have implemented a prosodic system based on Adami's work in which the second
block is exactly the same for phonotactic and prosodic speaker recognition with only minor
adjustments to improve performance. The tokenization process consists of two stages. Firstly,
for each speech utterance, temporal trajectories of the prosodic features (fundamental
frequency, or pitch, and energy) are extracted. Secondly, both contours are segmented and
labelled by means of a slope quantification process.
8.5.5 Databases and Benchmarks
In the early 1990s, text-independent speaker recognition was a major challenge, with a future
difficult to foresee. By that time, modest research initiatives were developed with very limited
databases, resulting in non-homogenous publications with no way to compare and improve
systems in similar tasks. Fortunately, in 1996 NIST started the yearly Speaker Recognition
Evaluations, which have been undoubtedly the driving force of significant advances. Present
state-of-the-art performance was totally unexpected just 10 years ago. This success has been
driven by two factors. Firstly, the use of common databases and protocols in blind evaluation
of systems has permitted fair comparison between systems on exactly the same task. Secondly,
the post evaluation workshops have allowed participants to share their experiences,
improvements, failures, etc. in a highly cooperative environment. The role of the LDC
(Linguistic Data Consortium) in providing new and challenging speech material is also notable, as
the needs have been continuously increasing (both in the amount of speech and in the recording
requirements). From the different phases of Switchboard to the latest Fisher-style databases, much
progress has been made. Past evaluation sets (development, train and test audio and keys, i.e., the
solutions) are available through the LDC for new researchers to evaluate their systems without
competitive pressures. Even though "official" results have been restricted to participants, it is
extremely easy to follow the progress of the technology as participants often present their new
developments in Speaker ID sessions in international conferences as ICASSP or InterSpeech
(formerly EuroSpeech), or the series of ISCA/IEEE Odyssey workshops.
UNIT

5

NEW AND DEVELOPING FORMS OF
BIOMETRIC IDENTIFICATION

Introduction
Scientific advancement and ongoing security requirements have led to the development
of new forms of biometric identification. Biometric identification technologies
are improving and becoming less expensive, allowing for wider adoption and
increased accuracy. This chapter describes the introduction of new and emerging
biometric modalities, including advancements in physiological (first generation) and
behavioural (second generation) biometric modalities. First, new developments in
physiological forms of identification are considered, including ear, vascular, ocular
(retina and iris) and voice recognition. Subsequently, the developing field of
behavioural biometrics is reviewed, including a discussion of gait recognition,
keystroke dynamics and cognitive biometrics. The principles, application and issues
associated with each new biometric modality are outlined, demonstrating a range
of possible applications in crime and security, including advantages and disadvantages
that should be considered. Concerns associated with the security of new
biometric systems are examined, along with other related issues.

First generation biometrics


The first generation of biometrics that have been discussed so far in this book are
derived from purely physiological traits: fingerprints, DNA, facial structure. Physiological
traits used for the purposes of biometric identification are known as ‘first
generation’ biometrics. The first generation biometrics examined throughout this
section include those based on the ears, blood vessels, eyes and voice. Second
generation biometrics, also known as behavioural biometrics, measure learned
behaviours. The second generation biometrics considered in this chapter include
gait, keystroke and cognitive biometrics. Voice recognition is a unique biometric
identifier that combines both physiological and behavioural characteristics. In contrast
with first generation biometrics, second generation biometrics are easier to change
and mimic, presenting additional issues relating to accuracy, security and reliability.

Ear recognition

Principles
Ear recognition involves the automated extraction and comparison of the anatomical
features of the human ear for the purposes of identification and verification
(Pun & Moon, 2004). The use of the human ear to identify individuals was first
suggested by French criminologist Alphonse Bertillon (1853–1914), who used
measurements of the ear in his Bertillonage system to identify recidivists, and the
first system of ear recognition was developed in 1949, integrating 12 measurements
of the outer ear (Abaza et al., 2013).
Ear recognition involves the extraction and comparison of the unique features of
the outer ears. Human ears can be used as unique identifiers because human ear
growth is proportional to age, does not change radically across the lifespan and is not
influenced by changes in expression (Anwar et al., 2015). Human ears are
unique among individuals, including identical twins, making them a suitable
biometric (Pflug & Busch, 2012). Researchers have obtained 98 per cent accuracy in
identification using ear recognition in controlled environments (Anwar et al., 2015).

Application and issues


Ear recognition is a developing field and has not been as widely implemented as
other emerging biometric modalities (Pun & Moon, 2004), and research into new
methods of ear recognition, for example, three-dimensional imaging is ongoing
(Ali & Islam, 2013). Ear recognition can be integrated with other forms of biometric
identification such as facial recognition in order to address accuracy issues in
a single biometric modality (Wang et al., 2012). As ear recognition can identify
subjects at a distance, it can be implemented into smart closed circuit television
(CCTV) surveillance systems or used for the purposes of border control (Pflug &
Busch, 2012). It has been reported that the US Immigration and Naturalization
Service (USINS) specifies the right ear should be visible in identification photographs.
This may indicate that ear recognition is currently used for border control identification
in the United States, although this is unclear from open source literature (Kumar
& Srinivasan, 2014).
Ear recognition shares many of the issues associated with facial recognition,
including the potential impact of lighting, head rotation and the potential for a
subject to cover their features. As ears may easily be hidden by hair or headwear,
ear recognition would be more suited to the identification of cooperative subjects.
As ears have a smaller surface area, head rotation is likely to have greater impact on
the accuracy of identification (Pun & Moon, 2004).

One of the main advantages of ear recognition over facial recognition is that ear
recognition requires a smaller image at a similar resolution,
meaning that it requires less memory for image storage and processing (Pun &
Moon, 2004). Further, in comparison with faces, ears have greater uniformity in
colour distribution, and less variability as a result of changes in expression (Pun &
Moon, 2004). It is believed that ear recognition is the most promising biometric
modality to be combined with facial recognition systems, as it can provide addi-
tional information on both sides of the face (Abaza et al., 2013). When combined
with facial recognition systems, ear recognition can provide further contextual
information to offset some of the adverse impacts and barriers to facial recognition
accuracy such as illumination, pose and change of expression (Wang et al., 2012).
In comparison with other modalities of biometric identification, ear recognition
does not require specialist imaging equipment, is contactless, less invasive and stable
over time (Pun & Moon, 2004).

Vascular pattern recognition

Principles
Vein or vascular pattern recognition (VPR) involves the imaging, extraction and
comparison of subcutaneous vascular networks, usually in the hands and fingers, for the purposes of verifying identity. Vein recognition differs
from other forms of first generation biometrics as it uses a non-visible physiological
characteristic for the purposes of authentication. An infrared light source and
infrared camera are used to identify the vein pattern concealed under the skin. The
haemoglobin present in blood reflects the infrared light and provides visibility of
the blood vessels (Smorawa & Kubanek, 2015). After the structure of the veins is
obtained, vein recognition involves the same comparison methods as biometric
fingerprinting (Smorawa & Kubanek, 2015). Like fingerprints, the pattern of blood
vessels is unique and stable across an individual’s life. The use of finger and palm veins
consistently demonstrates very high rates of identification accuracy in comparison with
fingerprint identification (Benziane & Benyettou, 2016).
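
A minimal sketch of this pipeline is set out below. It assumes an infrared capture is already available as a two-dimensional array of pixel intensities; the percentile threshold, the Jaccard overlap score and the decision threshold are illustrative stand-ins for the production feature extraction and matching methods described in the literature cited above.

    import numpy as np

    def vein_mask(ir_image, percentile=30):
        # Crude segmentation: the vein pattern shows up as a distinct intensity
        # band against surrounding tissue in the infrared image; here pixels
        # below a brightness percentile are treated as vein.
        threshold = np.percentile(ir_image, percentile)
        return ir_image < threshold

    def overlap_score(mask_a, mask_b):
        # Jaccard overlap between two vein masks (1.0 = identical patterns).
        intersection = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return intersection / union if union else 0.0

    # Two synthetic 64 x 64 infrared captures of the same finger.
    rng = np.random.default_rng(0)
    enrolled_scan = rng.integers(100, 255, size=(64, 64)).astype(float)
    enrolled_scan[20:25, :] = 40  # a dark horizontal "vein"
    probe_scan = enrolled_scan + rng.normal(0, 5, size=(64, 64))  # same finger, sensor noise

    score = overlap_score(vein_mask(enrolled_scan), vein_mask(probe_scan))
    print("overlap:", round(float(score), 2), "verified:", score > 0.7)  # illustrative threshold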

Application and issues


The main application of vein recognition is identity verification, particularly in
access control situations, such as border security and banking at automatic teller
machines. Collinson (2014) reports that the use of vein patterns in fingers is a common
method of identification for Japanese bank customers. Infrared vein recognition
cameras can be mobile and integrated into other biometric systems. In the future, it is
expected that vein recognition technology would be suitable for integration with
computer, mobile telephone and vehicle entry applications (Sandhu & Kaur, 2015).
As vein networks are not externally visible, they are very difficult to change or copy
(Sandhu & Kaur, 2015). Further, vein recognition is contactless and non-invasive,
and given the high level of security it offers, and its convenience, it is expected that
its applications will increase over time. Disadvantages of vein recognition include the
necessity of infrared cameras and the potential for the ambient environmental tem-
perature and medical conditions to affect its accuracy (Benziane & Benyettou, 2016).

Ocular biometrics

Principles
Ocular biometrics involve the extraction and comparison of the anatomical features of
the eye. The main structures of the eye include the cornea, lens, optic nerve, retina,
pupil and iris. To date, iris recognition is the most common application; however,
increasing attention is being paid to retina recognition, particularly as it is considered to
be one of the most secure biometric modalities (Nigam, Vatsa & Singh, 2015).
Retina recognition utilises the vascular pattern of the retina. A retinal scanner is used
to illuminate a region of the retina through the pupil, capturing the vascular pattern of
the retina (Borgen, Bours & Wolthusen, 2009). Retina recognition is believed to be
the most secure form of biometrics, due to its stability, uniqueness and the fact that it is
very difficult to copy and replicate the vascular pattern of the retina (Waheed et al.,
2016). However, retina recognition has not been widely adopted to date because of
the cost of the highly specialised equipment and high levels of cooperation required of
users. Due to the high level of security it offers, retina recognition has most commonly
been implemented in military and nuclear facilities (Nigam et al., 2015).
The iris is the coloured part of the eye situated between the pupil and the sclera
(the white part of the eye) (Ives et al., 2013). It consists of a series of layers of blood
vessels that form distinct and complex patterns. The unique lattice of the iris forms
at eight months gestation, and remains stable throughout an individual’s life, with
the exception of disease or trauma. The iris is therefore more stable than other
forms of biometric modalities, such as faces and fingerprints. Iridial patterns are not
only unique for each individual, but also between each eye (Pierscionek, Crawford
& Scotney, 2008). Iris recognition involves the capture, extraction and comparison
of these patterns. As is the case with other forms of biometric identification, the
main stages of iris recognition include image acquisition, feature or pattern extraction,
template generation and comparison (Ives et al., 2013).
Iris recognition is a non-invasive procedure: multiple frames of high-resolution
grey scale images are required, illuminated with infrared or, in some cases, visible
light (Borgen et al., 2009). Ongoing development of sensor technology has enabled
more flexible iris recognition systems; however, current iris recognition technology
limits collection distances to approximately 30 centimetres (Nigam et al., 2015).
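
Iris systems commonly reduce the extracted texture to a binary template and compare templates by their Hamming distance, that is, the fraction of bits on which they disagree. The Python sketch below illustrates only that comparison step: randomly generated bits stand in for real iris texture, and the bit-flip rate and decision threshold are illustrative assumptions.

    import numpy as np

    def hamming_distance(code_a, code_b, mask=None):
        # Fraction of disagreeing bits between two binary iris templates.
        # `mask` marks bits usable in both templates (e.g. not hidden by eyelids).
        code_a = np.asarray(code_a, dtype=bool)
        code_b = np.asarray(code_b, dtype=bool)
        if mask is None:
            mask = np.ones_like(code_a, dtype=bool)
        return np.logical_xor(code_a, code_b)[mask].sum() / mask.sum()

    rng = np.random.default_rng(1)
    enrolled = rng.integers(0, 2, 2048).astype(bool)  # stand-in for a 2048-bit iris code

    # The same eye re-imaged: a small fraction of bits flip due to noise.
    probe_same = np.logical_xor(enrolled, rng.random(2048) < 0.05)

    # A different eye: statistically independent bits, distance close to 0.5.
    probe_other = rng.integers(0, 2, 2048).astype(bool)

    print("same eye:", round(hamming_distance(enrolled, probe_same), 3))
    print("other eye:", round(hamming_distance(enrolled, probe_other), 3))
    print("match:", hamming_distance(enrolled, probe_same) < 0.32)  # illustrative threshold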

Application and issues


The adoption of biometric iris recognition is increasing with developments in
technology and increased consumer demand both in public and commercial
sectors. There has been a large-scale adoption of iris recognition technology for
security applications, particularly in the areas of border control (Borgen et al.,
2009). Iris scanners are currently deployed in many major airports around the
world (Pierscionek et al., 2008). The United Arab Emirates (UAE) uses iris recog-
nition at land, sea and air border points, and maintains a database of 1.1 million iris
templates, one of the largest databases of its kind in the world. Between 2010 and
2013, the UAE conducted iris searches for 10.5 million individuals, identifying
124,435 people who were attempting to return to the UAE with forged identifi-
cation documents (Ives et al., 2013). It has also been suggested that iris recognition
could be used to reduce electoral fraud. There have been trials of iris-based
recognition for voter registration systems in African countries (Bowyer, Ortiz &
Sgroi, 2015). Lecher & Brandom (2016) report that a Federal Bureau of Investi-
gation (FBI) pilot programme, beginning in 2013, has collected iris scans from over
434,000 residents of the United States. This information is obtained via information-
sharing arrangements with local law enforcement agencies, US Border Patrol and
the US military.
Iris and retina recognition have the advantage of high levels of accuracy; how-
ever, there are some questions about the stability of iris patterns over time, due, for
example, to medication, surgery, disease and aging (Pierscionek et al., 2008). Relevant conditions include glaucoma, macular degeneration, cataracts and pathological angiogenesis
(Borgen et al., 2009). An area of potential future development in iris recognition
technology is the capture of iris images while a subject is in motion or is unco-
operative (Colores et al. 2011), and the development and use of mobile platforms
to capture iris images (Barra et al., 2015). Recent developments in this area include
walk-through systems that can capture iris images without the subject stopping or
removing glasses (Ives et al., 2013). Studies have even shown some success in
capturing iris patterns while subjects were wearing sunglasses (Latman & Herb, 2013).
It is anticipated that in the future there will be greater adoption of walk-through iris
recognition in transportation, immigration and government facilities, as well as the
development of mobile or portable iris recognition systems (Ives et al., 2013).
In mid-2017, it was reported that the iris recognition system on widely used
smartphones could easily be circumvented by using the night mode of a digital
camera to take an infrared picture of the phone user’s eyes from a moderate dis-
tance, printing out a life-size picture and holding it in front of the phone (Meyer, 2017).
Examples such as this highlight the importance of further investment in measures
to counter circumvention, otherwise the substantial amounts spent on research and
development may be undermined upon release of the technology.

Voice recognition

Principles
Voice recognition applies the individual characteristics of the human voice as a
biometric identifier through the extraction and comparison of voice samples
(Galka, Masior & Salasa, 2014). Unlike other forms of biometric analysis and
identification, this biometric involves a combination of both physiological and
behavioural characteristics. One of the main advantages of speaker recognition is
that there are several voice characteristics that can be analysed and compared,
enabling a high level of accuracy in identification (Morgen, 2012).
The physiological characteristics of human voices relate to anatomical differences
in the biological structure of the vocal tract. Three main areas of the vocal tract influence voice production: the infraglottic, glottal and supraglottic areas. When a person speaks, the effects of air pressure, muscle tension
and elasticity of the vocal folds are modulated to create different sounds. The
frequencies of the resulting sound pressure patterns are analysed for biometric identification
purposes. The behavioural features of voices are influenced by how an individual
has learned to speak, including their vocabulary, accent, intonation, pitch, pro-
nunciation and conversational patterns (Mazaira-Fernandez, Alvarez-Marquina &
Gomez-Vilda, 2015).
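
The sketch below illustrates the frequency-analysis step in a highly simplified form, assuming each utterance is already available as a one-dimensional array of audio samples. Operational speaker recognition systems use much richer features (for example, cepstral coefficients) and statistical models; the synthetic signals, band count and similarity measure used here are illustrative assumptions only.

    import numpy as np

    def band_energy_profile(samples, bands=64):
        # Summarise an utterance as the mean energy in `bands` equal-width
        # frequency bands of its magnitude spectrum, normalised to unit length.
        spectrum = np.abs(np.fft.rfft(samples))
        profile = np.array([chunk.mean() for chunk in np.array_split(spectrum, bands)])
        return profile / np.linalg.norm(profile)

    def cosine_similarity(a, b):
        # Both profiles are unit length, so the dot product is the cosine similarity.
        return float(np.dot(a, b))

    # Two synthetic "voices": one second of audio built from different
    # fundamental frequencies (a crude stand-in for different vocal tracts).
    t = np.linspace(0, 1, 16000, endpoint=False)
    speaker_a_enrol = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
    speaker_a_probe = np.sin(2 * np.pi * 122 * t) + 0.5 * np.sin(2 * np.pi * 244 * t)
    speaker_b_probe = np.sin(2 * np.pi * 210 * t) + 0.5 * np.sin(2 * np.pi * 420 * t)

    enrolled = band_energy_profile(speaker_a_enrol)
    print("same speaker:", round(cosine_similarity(enrolled, band_energy_profile(speaker_a_probe)), 3))
    print("other speaker:", round(cosine_similarity(enrolled, band_energy_profile(speaker_b_probe)), 3))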

Application and issues


As with the other physiological forms of biometrics discussed throughout the
text, applications for voice recognition include the identification of unknown
individuals, and verification of identity. Voice recognition has applications in
banking and government service provision via telephone, or access control
(Kaman et al., 2013). It also has applications in audio forensics, where other types
of biometric identification are not available, such as a case where a suspect is
wearing a mask. Voice recognition can also be used to identify individuals on
social media videos or intercepted phone conversations (Mazaira-Fernandez et al.,
2015).
There are a wide range of applications and technologies utilising voice recognition,
such as personal computers, mobile devices and social robotics. For example, the
Australian Taxation Office introduced voluntary ‘voiceprint’ technology for callers
to verify their identities when contacting the agency. By 2016, more than 1.5
million Australians had been enrolled within the voiceprint programme (Nuance
Communications, 2016).
The main issues associated with voice recognition relate to the variability in an
individual’s voice as a consequence of their mood, health or the aging process,
which can all impact on various characteristics of the voice (Mazaira-Fernandez
et al., 2015). There is also the potential for issues associated with ambient or
environmental noise, and distortions (Chenafa et al., 2008). In response to concerns
that individuals may be able to disguise their voices through electronic means,
researchers are currently developing new methods to successfully identify electro-
nically disguised voices (Wu, Wang & Huang, 2014). Key advantages of voice
recognition include its non-invasive nature, the fact that it does not require
specialised hardware, aside from a traditional microphone, and that it can be conducted
remotely via a telephone or the Internet (Kaman et al., 2013).

Second generation biometrics


In comparison with first generation physiological biometrics, second generation biometrics concern the analysis of learned behaviour and are therefore described as behavioural biometrics. Their development is more recent than that of physiological biometrics and, because they measure learned behaviour, they are more likely to change over time than physiological identifiers. Notwith-
standing issues in relation to stability, accuracy and reliability, behavioural bio-
metrics have a wider range of applications in comparison with physiological
biometrics. The biometrics that will be considered here include gait recognition,
keystroke dynamics and cognitive biometrics.

Gait recognition

Principles
Gait recognition is situated within the broader field of human motion analysis,
involving the examination and comparison of human kinesiology (Neves et al.,
2016). Everyone has a unique and regular pattern of motion when walking, relating
to the movement of their limbs. Gait recognition involves the measurement, analysis
and comparison of human movement made by an individual when they walk
(Chaurasia et al., 2015).
Gait recognition is one of the more recent forms of biometric identification to
be developed and coincides with computer processing advancements (Nixon &
Carter, 2006). There are a number of stages involved in gait recognition. These involve capturing a walking sequence from video input, creating a movement silhouette and extracting static and dynamic features across a
sufficient period of time. Movement silhouettes are transformed into a gait cycle,
depicting a sufficient walking period to be used for the purposes of comparison and
identification (Indumathi & Pushparani, 2016). In addition to walking patterns, gait
recognition systems can also collect and analyse the physical appearance of indivi-
duals such as the height, length of limbs, shape and size of torso (Zhang, Hu &
Wang, 2011). Identification therefore occurs through both shape (physiological
features) and motion (behavioural features) (Nixon & Carter, 2006; Choudhury &
Tjahjadi, 2013).
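
A toy version of the silhouette-averaging step is sketched below, assuming that binary silhouettes for one gait cycle have already been segmented and aligned from video. Averaging the silhouettes over a cycle, often called a gait energy image, yields a template that can be compared between sequences; the tiny grids and random patterns used here are illustrative stand-ins for real silhouettes.

    import numpy as np

    def gait_energy_image(silhouettes):
        # Average a sequence of aligned binary silhouettes (frames x height x width)
        # over one gait cycle into a single grey-scale template.
        return np.mean(np.asarray(silhouettes, dtype=float), axis=0)

    def template_distance(gei_a, gei_b):
        # Euclidean distance between two gait templates (lower = more similar).
        return float(np.linalg.norm(gei_a - gei_b))

    # Toy four-frame "walking cycles" on a 6 x 4 grid: True = person pixel.
    rng = np.random.default_rng(2)
    enrolled_cycle = rng.random((4, 6, 4)) > 0.5
    probe_same = enrolled_cycle.copy()
    probe_same[0, 0, 0] ^= True  # one pixel differs (segmentation noise)
    probe_other = rng.random((4, 6, 4)) > 0.5

    enrolled_gei = gait_energy_image(enrolled_cycle)
    print("same walker:", round(template_distance(enrolled_gei, gait_energy_image(probe_same)), 3))
    print("other walker:", round(template_distance(enrolled_gei, gait_energy_image(probe_other)), 3))
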
Gait recognition technology has reached 90 per cent accuracy in identification,
provided there are analogous environmental conditions in the comparison footage.
However, the walking surface and clothing can influence the recognition rates
(Nixon & Carter, 2006). Combining different camera viewpoints can also improve the rate of
identification accuracy. For example, in a study of gait recognition published in
2016, Bouchrika and colleagues obtained an identification accuracy rate of 73 per
cent for gait features extracted from individual camera viewpoints, which could be
increased to an identification accuracy rate of 92 per cent with cross-camera
matching. Other research groups have achieved a similar average identification
accuracy rate utilising several camera viewpoints (see Goffredo et al., 2010).
The most widely implemented method of gait analysis involves integration with
video surveillance (Chaurasia et al., 2015). This facilitates automatic analysis of
routinely collected surveillance footage (Zhang et al., 2011). This is significant,
because gait analysis can not only be used to identify individuals, but also to identify
and automatically alert police and security to abnormal movement and behaviour
(Zhang et al., 2011). Gait analysis can also be used to identify individuals and track
their movement through public spaces in real-time (Zhang et al., 2011).
There are several different types of gait analysis and recognition, depending on
the specific technology used (De Marsico & Mecca, 2016). In addition to vision-
based approaches, floor-based technology can capture walking pattern and weight,
where specialist equipment has been installed. Gait recognition can be conducted
through the use of wearable sensors used to capture baseline data about the way an
individual moves (Zhang et al., 2011). Current developments in the area of gait
recognition include the use of infrared image sequences from video footage taken
at night (Lee, Belkhatir & Sanei, 2014), and identification on the basis of the sound
of footsteps, known as acoustic analysis (Altaf, Butko & Juang, 2015).

Application and issues


The fact that gait analysis can be conducted from a distance means that a key
application of this biometric is integration with video surveillance systems. Gait
analysis can enable automated identification at a distance, in contrast with
most other biometric modalities (Lee et al., 2014). Other applications include
automatic door opening in security-sensitive environments such as banks and airports
(Indumathi & Pushparani, 2016). Makihara et al. (2015) discuss the application of
gait analysis to secure a criminal conviction in a 2004 bank robbery case in Denmark,
solved using gait analysis comparison of video footage from the crime scene and
subsequent recordings of suspects.
Gait recognition differs from other biometric modalities in a number of important
ways. As has been discussed, gait recognition and face recognition are the only
biometric modalities that can currently be used to identify and monitor or track
people unobtrusively without their cooperation or knowledge (Katiyar, Pathak &
Arya, 2013). Gait recognition can occur at distances of 10 metres or more (Lee
et al., 2014). In contrast, facial recognition is more suitable at short range, and
requires higher-resolution images (Lee et al., 2014). If a complete facial image
cannot be obtained, or an individual hides their face, or distance and image quality
are unsuitable for facial recognition, gait recognition may provide a suitable alter-
native. The combination of gait recognition and facial recognition can enable
improved accuracy (Zhang et al., 2011).
One concern with gait recognition is the potential for learned replication of an
individual’s walking style to defeat the recognition system (Hadid et al., 2015).
Recent research has attempted to devise new methods of analysis to overcome
these issues; however, in general, gait recognition can provide an important addi-
tional layer of security when used in combination with other modalities, such as
face or footprint biometrics (Chaurasia et al., 2015; Katiyar et al., 2013).

Keystroke dynamics

Principles
Keystroke dynamic recognition enables authentication via the identification of
individual typing characteristics and patterns, including key press durations (Revett,
2009). Although keystroke dynamic recognition was first developed in the 1980s, it
is now being used more frequently, in line with the increased use of computers and
the expansion of the Internet (Rudrapal, Das & Debbarma, 2014). Like other
forms of behavioural biometrics, keystroke dynamics are generally considered less
reliable than physiological biometrics due to the variability of this type of human
behaviour (Revett, 2009).
At the enrolment stage of keystroke dynamic recognition, individual typing
characteristics are extracted to create a digital typing signature (Revett, 2009). At
enrolment the user is typically asked to repeatedly enter their details to extract the
typing profile (Revett, 2009). These characteristics are used to develop a profile of
an individual user that forms a reference for future verification (Revett, 2009).
However, some researchers have argued that keystroke latency and duration are not
sufficient for authentication, and proposed other combinations of typing char-
acteristic metrics (Rudrapal et al., 2014; Ngugi, Tarasewich & Reece, 2012). A
combination of different metrics results in higher authentication accuracy (Ngugi
et al., 2012).
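
The sketch below illustrates this enrolment-and-verification cycle in miniature, assuming that key press and release timestamps have already been captured. It combines two commonly used metrics, dwell time (how long each key is held down) and flight time (the gap between releasing one key and pressing the next); the sample timings and acceptance tolerance are illustrative assumptions.

    def timing_features(events):
        # `events` is a list of (key, press_time, release_time) tuples in seconds,
        # in typing order. Features are dwell times per key plus flight times
        # between consecutive keys, returned as one flat list.
        dwells = [release - press for _, press, release in events]
        flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
        return dwells + flights

    def enrol(samples):
        # Average several typing samples of the same phrase into a profile.
        feature_sets = [timing_features(sample) for sample in samples]
        return [sum(values) / len(values) for values in zip(*feature_sets)]

    def verify(profile, attempt, tolerance=0.05):
        # Accept if the mean absolute timing deviation is within `tolerance` seconds.
        features = timing_features(attempt)
        deviation = sum(abs(p - f) for p, f in zip(profile, features)) / len(profile)
        return deviation <= tolerance

    # Enrolment: the user types "abc" three times (times in seconds).
    samples = [
        [("a", 0.00, 0.09), ("b", 0.25, 0.33), ("c", 0.50, 0.58)],
        [("a", 0.00, 0.10), ("b", 0.27, 0.34), ("c", 0.52, 0.60)],
        [("a", 0.00, 0.08), ("b", 0.24, 0.32), ("c", 0.49, 0.57)],
    ]
    profile = enrol(samples)

    genuine = [("a", 0.00, 0.09), ("b", 0.26, 0.33), ("c", 0.51, 0.59)]
    impostor = [("a", 0.00, 0.20), ("b", 0.60, 0.75), ("c", 1.10, 1.30)]
    print("genuine accepted:", verify(profile, genuine))
    print("impostor accepted:", verify(profile, impostor))
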
Keystroke dynamic recognition is less accurate than other forms of biometric
recognition; however, it is difficult to compare accuracy rates for keystroke
dynamic recognition across the literature, as different studies use a variety of
metrics. Reliability is directly related to the length of text typed. For example, in a
study by Bergadano, Gunetti & Picardi (2002) a false negative rate of 4 per cent
and a false positive rate of less than 0.01 per cent were obtained. However, in this
study, the participants were required to type 683 characters, a length that would be
too long for a password, and may inhibit wide-scale adoption, depending on the
context. If keystroke dynamics are used for short passwords, this raises questions
about the accuracy of the authentication (Ngugi et al., 2012).

Application and issues


With increasing threats to computer systems and information security, keystroke
dynamics could play a key role in strengthening computer security. There are a
range of applications for keystroke dynamic recognition, including providing
stronger authentication, identity confirmation, user identification and tracking over
the Internet (see discussion in Bergadano et al., 2002). Keystroke dynamics can
enhance computer security by adding an additional layer of authentication in
addition to passwords. Keystroke dynamic recognition is used in this way to
strengthen passwords, but it is not typically used alone as a single factor for
authentication, due to the issues of accuracy and variability that have been raised
(Rudrapal et al., 2014).
Keystroke recognition can be static, occurring at login, or continuous, as a
person is typing and interacting with a computer (Monrose & Rubin, 2000).
Software has been developed for use in academic settings to continuously monitor
student typing to help prevent plagiarism. Keystroke dynamic recognition can also
be conducted over the Internet, opening possibilities for remote authentication.
A further application of keystroke dynamic recognition is the use of behavioural
biometric keypads with pressure sensors that are integrated with access points or
automatic teller machines. Current research is applying keystroke recognition
features to smart phone touchscreens (Kambourakis et al., 2016).
Some of the advantages of keystroke dynamic recognition include that it is
software-based, unobtrusive, can be conducted over the Internet and has a low
implementation cost (Ngugi et al., 2012). Users are already familiar with authen-
tication of their identity with logins and passwords, and, from this perspective, it
may be one of the more acceptable forms of biometrics (Revett, 2009; Karnan,
Akila & Krishnaraj, 2011). It is expected that further use of keystroke dynamic
recognition will occur, with increased accuracy in authentication, as recently developed pressure-sensitive keyboards are more widely adopted and marketed (Ngugi et al., 2012).

Cognitive biometrics

Principles
Cognitive biometrics are defined as ‘methods and technologies for the recognition
of humans, based on the measurement of signals generated directly or indirectly
from their thought processes’ (Revett, Deravi & Sirlantzis, 2010, p. 71). These
systems establish authentication via biosignals that reflect the mental states of indi-
viduals, as measured by brain-computer interfaces (BCIs) (Jolfaei, Wu & Muthuk-
kumarasamy, 2013). The use of cognitive biometric systems has become the subject
of increasing attention as the technology has continued to develop in recent years
(Jolfaei et al., 2013; Armstrong et al., 2015).
Neural activity can be used as a biometric signature that reflects individual
mental activities or cognitive processes (Tsuru & Pfurtscheller, 2012). Cognitive
biometrics involve the use of an electroencephalogram (EEG) which is non-invasive
and captures electrical signals produced by the firing of neurons within the brain; it
is used in medicine to measure brain function. This can be undertaken when an
individual performs a certain cognitive task, such as visual perception, memory or
language tasks that activate specific regions of the brain and lead to specific patterns
in EEG activity (Revett et al., 2010). When electrical signals are associated with a
specific stimulus, an event-related potential (ERP) can be obtained (Armstrong
et al., 2015). ERPs therefore correspond to specific cognitive events, for example,
thinking of a specific password (Armstrong et al., 2015). Empirical evidence indicates
that humans ‘generate recordable and reproducible signals that can be captured using
EEG technology when we think of something as a password’ (Revett et al., 2010,
p. 74). Instead of using a password, humans may be able to authenticate by simply thinking of a specific thing (Revett et al., 2010). The use of the EEG
has provided promising results in classification accuracy approaching physiological-
based approaches such as fingerprinting (Revett et al., 2010). Armstrong et al.
(2015) were able to label ERPs as belonging to specific individuals with an accuracy
rate ranging from 82 to 97 per cent, and found the results to be stable over time, using a
technique that requires three electrodes to be placed on the scalp.
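
A toy sketch of the template idea is given below, assuming that stimulus-locked EEG epochs are already available as arrays of samples. Averaging the epochs yields an ERP template, and a probe recording is compared against it by correlation; the simulated signal shapes, noise levels and number of trials are illustrative assumptions rather than parameters taken from the studies cited above.

    import numpy as np

    def erp_template(epochs):
        # Average stimulus-locked EEG epochs (trials x samples) into an ERP.
        return np.mean(np.asarray(epochs, dtype=float), axis=0)

    def erp_similarity(template, probe):
        # Pearson correlation between an enrolled ERP template and a probe ERP.
        return float(np.corrcoef(template, probe)[0, 1])

    rng = np.random.default_rng(3)
    t = np.linspace(0, 0.8, 200)  # an 800 ms epoch sampled at 200 points

    def simulate_epochs(peak_latency, n_trials=30):
        # Synthetic ERP: a Gaussian "component" at an individual-specific latency,
        # buried in random background EEG activity.
        component = np.exp(-((t - peak_latency) ** 2) / (2 * 0.05 ** 2))
        return [component + rng.normal(0, 0.8, t.size) for _ in range(n_trials)]

    enrolled = erp_template(simulate_epochs(peak_latency=0.30))  # user A, enrolment
    probe_a = erp_template(simulate_epochs(peak_latency=0.30))   # user A, later session
    probe_b = erp_template(simulate_epochs(peak_latency=0.45))   # a different user

    print("same user:", round(erp_similarity(enrolled, probe_a), 2))
    print("other user:", round(erp_similarity(enrolled, probe_b), 2))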

Application and issues


As ERPs can be used as cognitive passwords, there are possible applications in user
identification; however, cognitive biometrics are not currently widely adopted.
Cognitive biometrics are considered to be highly resistant to circumvention
(Revett et al., 2010). Research supports the proposition that an individual’s brain-
wave patterns are unique and ‘nearly impossible to forge or duplicate as the neural
activity of people are distinctive even when they think about the same thing’
(Bajwa & Dantu, 2016, p. 95).
The main disadvantage of cognitive biometrics is that they require the use of
sensitive EEG equipment, including electrodes and conductive gels, and, for this
reason, their use in everyday settings may not be realistic (Revett et al., 2010). The
use of EEG equipment is currently prohibitively expensive on a wide scale, but this
may change in accordance with the development of new technology (Bajwa &
Dantu, 2016).

References
Abaza, A., Ross, A., Hebert, C., Harrison, M. & Nixon, M. (2013). A survey on ear biometrics.
ACM Computing Surveys 45(2), 1.
Ali, A. & Islam, M. (2013). A biometric based 3D ear recognition system combining local
and holistic features. International Journal of Modern Education and Computer Science 11, 36.
Altaf, M., Butko, T. & Juang, B. (2015). Acoustic gaits: Gait analysis with footstep sounds.
IEEE Transactions on Biomedical Engineering 62(8), 2001.
Anwar, A., Ghany, K. & Elmahdy, H. (2015). Human ear recognition using geometrical
features extraction. Procedia Computer Science 65, 529.
Armstrong, B., Ruiz-Blondet, M., Kahalifian, N., Kurtz, K., Jun, Z. & Laszlo, S. (2015).
Brainprint: Assessing the uniqueness, collectability, and permanence of a novel method
for ERP biometrics. Neurocomputing 166, 59.
Australian Taxation Office. (2014, 8 September). ATO launches voice authentication:
Australians can save time on the phone to the ATO. Retrieved from https://www.ato.gov.au/media-centre/media-releases/ato-launches-voice-authentication
6
BIOMETRICS IN CRIMINAL TRIALS

Introduction
This chapter explores the ways in which criminal courts have dealt with the
emergence of biometrics as a source of evidence. The main purpose in considering
this kind of evidence in criminal trials and appeals is to establish the identity of
either the offender or the victim. As discussed in previous chapters, fingerprint and
DNA analysis have long been admitted as evidence aiding identification, and these
have been supplemented more recently by facial and body mapping, voice analysis
and other types of biometrics. However, each of these has faced challenges to
acceptance as a form of evidence, based mainly on concerns about their reliability,
regulatory control and the manner of their presentation in legal proceedings. This
chapter provides an insight into the trend of accepting biometric identification as a
source of evidence, but with some judicial reservations about the application of
particular kinds of biometrics in the criminal justice system. These concerns are
largely based on whether certain forms of identification have achieved the degree
of scientific reliability that is required for legal admissibility.

Identification evidence
Before discussing the main forms of biometrics that courts deal with, it is useful to
consider the context of such evidence.1

1 In this chapter, the focus is on criminal proceedings. However, biometrics can also play
a role in civil or administrative proceedings. An example is the resolution of paternity
claims in family law: see, for example, the case of Magill v Magill [2006] HCA 51; (2006)
231 ALR 277; (2006) 81 ALJR 254 (9 November 2006) in which DNA testing after
the end of a marriage revealed that two children were not biological offspring of the
father, leading to tortious claims of deceit. Biometrics are also used in migration cases to help in establishing or verifying identity: see, for example, SZSZM v Secretary, Department of Immigration and Border Protection [2017] FCA 458 (27 April 2017).
In criminal trials, the prosecution is required to prove its case against the defendant (also referred to as the accused)
beyond reasonable doubt, unless there is a guilty plea and the matter proceeds
directly to sentencing. Where the prosecution is required to prove the identity of
the person who allegedly committed the crime, there will usually be some form of
‘identification evidence’ adduced. An example of this form of evidence is the
following:2

“identification evidence” means evidence that is:


(a) an assertion by a person to the effect that a defendant was, or resembles
(visually, aurally or otherwise) a person who was, present at or near a
place where:
(i) the offence for which the defendant is being prosecuted was
committed; or
(ii) an act connected to that offence was done;
at or about the time at which the offence was committed or the act was done,
being an assertion that is based wholly or partly on what the person making
the assertion saw, heard or otherwise perceived at that place and time; or
(b) a report (whether oral or in writing) of such an assertion.

In most cases, the assertion of identity will be made by a person who was an ‘eye
witness’ at the scene of a crime, and has made such a report to police or is able to
do so in court testimony.3 The provisions dealing with identification evidence
impose as a general pre-condition to the admissibility of such evidence that
the defendant participated in an ‘identification parade’ or, as it is also known, a
‘police line-up’.4 This requirement is due to the fact that eye witness identification
has historically been seen as unreliable and has led in some instances to wrongful
convictions, so that more controlled and supervised identification procedures are
preferred.5

2 This example is from the uniform evidence law (UEL) that operates in several Australian
jurisdictions.
3 The expression ‘visually, aurally or otherwise’ allows other senses to form the basis of a
witness identification, such as recognising a distinctive voice, accent, posture or gait. A
recent case in which a ‘voice identification parade’ was used is Miller v R [2015]
NSWCCA 206 (3 August 2015).
4 Section 113 provides that the identification evidence provisions only apply in criminal
proceedings. Sections 114 and 115 refer to the use of an ‘identification parade’ but do
not define the term. Section 114 deals with ‘visual identification evidence’ while s115
deals with ‘picture identification evidence’. The conduct of identification parades is
governed by other legislation such as the Crimes Act 1914 (Cth), ss3ZM and 3ZN.
Section 116 imposes requirements for warnings to the jury in relation to identification
evidence.
5 The High Court of Australia summarised the problems with eye witness identification
almost a century ago, and noted requirements for identification procedures that are still followed today: Davies (and Cody) v The King [1937] HCA 27; (1937) 57 CLR 170; see also Alexander v The Queen [1981] HCA 17; (1981) 145 CLR 395.
Biometrics provide an alternative to traditional eye witness identification. Not
quite falling within the definition of ‘identification evidence’ set out above, bio-
metric forms of identification may nonetheless be admissible as a type of circum-
stantial evidence.6 Almost invariably, such evidence is presented in the form of
expert opinion evidence, either through a forensic analyst report or by means of
witness testimony from the specialist who conducted biometric analysis.
General requirements for admissibility of biometric identification are that the
evidence must be:

a relevant in the proceeding, meaning that it has the capacity to help resolve
factual issues in the trial, such as the identity of an offender;7
b based on specialised knowledge, meaning that it must be presented by an
expert who has previous training, study or experience in the applicable field of
expertise;8
c not unfairly prejudicial, in which case it may be ruled inadmissible by the
judge.9

Techniques of biometric identification that have been considered by courts include facial and body mapping, fingerprinting and DNA matching. These are discussed in turn below. However, as discussed in Chapter 5, new techniques are always evolving and this list is not exhaustive.

6 Biometric identification such as a fingerprint or DNA match is not classed as ‘identifi-
cation evidence’ because it is usually not based on what ‘the person making the assertion
saw, heard or otherwise perceived at that place and time’ but on later forensic analysis
by a person who was not a witness to the events in question: see Australian Law
Reform Commission, Uniform Evidence Law (ALRC 102), [13.25]. This means that Part
3.9 does not apply, and biometric evidence is treated as a form of circumstantial evi-
dence; for example, the judge in R v Pfennig (No. 2) [2016] SASC 171 (11 November
2016), [31] stated: ‘I point out, however, that the DNA evidence is not direct evidence
going to the guilt of the accused. I treat it as circumstantial evidence to be considered
alongside all of the other evidence in the case’.
7 Section 55(1) of the UEL legislation provides: ‘The evidence that is relevant in a pro-
ceeding is evidence that, if it were accepted, could rationally affect (directly or indir-
ectly) the assessment of the probability of the existence of a fact in issue in the
proceeding’. Relevant evidence is admissible subject to other provisions: s56.
8 Section 79(1) provides an exception from the exclusionary opinion rule in s76 as fol-
lows: ‘If a person has specialised knowledge based on the person’s training, study or
experience, the opinion rule does not apply to evidence of an opinion of that person
that is wholly or substantially based on that knowledge’. An expert may give this evi-
dence in the form of affidavit under s177, or may be called to give the evidence
through testimony.
9 In particular, s137 provides: ‘In a criminal proceeding, the court must refuse to admit
evidence adduced by the prosecutor if its probative value is outweighed by the danger
of unfair prejudice to the defendant’. Additionally, s138(1) provides: ‘Evidence that was
obtained: (a) improperly or in contravention of an Australian law; or (b) in consequence
of an impropriety or of a contravention of an Australian law; is not to be admitted
unless the desirability of admitting the evidence outweighs the undesirability of admit-
ting evidence that has been obtained in the way in which the evidence was obtained’.
Thus, investigative practices may also affect admissibility.

Facial and body mapping


The biometrics of facial mapping and body mapping primarily involve the com-
parison of still images in order to determine likely identity, typically between an
image taken from a crime scene and a comparable image depicting a criminal
defendant (discussed in Chapter 4). For example, photographs developed from a
CCTV camera recording can be compared with photographs of the defendant, or
even directly with the defendant’s appearance in the courtroom. This kind of visual
comparison can in some instances be made by a jury without assistance, and indeed
courts have sometimes stressed that to allow police or other witnesses to offer their
opinions of similarity as evidence can usurp the proper role of the jury.10 However,
facial and body mapping techniques usually involve some level of technical skill
and measurement that goes beyond what a lay jury can do, thus making it properly
the subject of expert evidence. These techniques are closest to being a scientific
analogue of traditional eye witness identification, which is notoriously susceptible
to inaccuracy. However, the scientific reliability of facial and body mapping has
been questioned, including in criminal proceedings.

Significant cases
A noteworthy early case involving this form of evidence that was widely publicised
in the United Kingdom and Australia was the murder trial arising from the dis-
appearance in the Northern Territory of British tourist Peter Falconio, whose body
was never found (Gans, 2007c). Part of the evidence was a photographic image
developed from security camera footage at a highway truck stop, which was com-
pared by a facial mapping expert called by the prosecution with images of the
defendant. This evidence was allowed by the trial judge in the case, along with
DNA evidence linking the defendant to the crime:11

10 Smith v The Queen [2001] HCA 50; (2001) 206 CLR 650, in which a High Court
majority observed: ‘Because the witness’s assertion of identity was founded on material
no different from the material available to the jury from its own observation, the wit-
ness’s assertion that he recognised the appellant is not evidence that could rationally
affect the assessment by the jury of the question we have identified. The fact that
someone else has reached a conclusion about the identity of the accused and the person
in the picture does not provide any logical basis for affecting the jury’s assessment of the
probability of the existence of that fact when the conclusion is based only on material
that is not different in any substantial way from what is available to the jury.’
11 The Queen v Murdoch [2005] NTSC 78 (15 December 2005), (Martin CJ), [207]-[208].
DNA aspects of the case are discussed in two articles by Jeremy Gans, ‘The Peter Fal-
conio Investigation: Needles, Hay and DNA’ (2007c) and ‘Catching Bradley Murdoch:
Tweezers, Pitchforks and the Limits of DNA Sampling’ (2007a).
The image of the person entering the shop at the truck stop taken from the
security film is far from clear. This is not a case of comparing clear photo-
graphs where it could be said with considerable force that the jury could reach
its own conclusion without help. In addition, there is evidence that the
accused has changed his appearance since July 2001. The comparison between
the image from the security film and photographs of the accused is far from
straightforward and, in my opinion, the jury would be assisted by the evidence
of Dr Sutisno.
Further, in my view, it is not appropriate to limit the assistance to merely
identifying the relevant characteristics. When regard is had to the nature and
detail of the characteristics and the methodology employed by Dr Sutisno, it is
readily apparent that her knowledge and expertise in the area of anatomy give
Dr Sutisno a significant advantage in the assessment of the significance of the
features of comparison both individually and in their combination. Dr Sutisno
possesses scientific knowledge, expertise and experience outside the ordinary
knowledge, expertise and experience of the jury. This is not a case in which
the jury, having been informed of the relevant features, would not be assisted
by the expert evidence of Dr Sutisno as to her opinion of the significance of
the features individually and in their combination.

The court was also prepared to accept body mapping, a more recent technique
involving superimposition of images, as an extension of facial mapping:12

Body mapping has received limited attention within the scientific community.
For that reason it may be regarded as a new technique, but as Dr Sutisno
explained it is merely an extension of the well recognised and accepted prin-
ciples of facial mapping to the remainder of the body. I am satisfied that the
technique has ‘a sufficient scientific basis to render results arrived at by that
means part of a field of knowledge which is a proper subject of expert
evidence’.

However, on appeal it was held that the facial and body mapping evidence should
not have been allowed beyond the expert assisting the jury to ascertain physical
similarities, rather than in the expert reaching conclusions about identity:13

This Court has found that the technique employed by Dr Sutisno did not
have a sufficient scientific basis to render the results arrived at by that means
part of a field of knowledge which is a proper subject of expert evidence.
However the evidence given by Dr Sutisno was capable of assisting the jury in terms of similarities between the person depicted in the truck stop footage and the appellant. It was evidence that related to, and was admissible as, demonstrating similarities but was not admissible as to positive identity. Dr Sutisno was not qualified to give evidence, as she did, based on “face and body mapping” as to whether the two men were, indeed, the same man. Her evidence in this regard should not have been received.

12 The Queen v Murdoch [2005] NTSC 78 (15 December 2005), [110].


13 Murdoch v The Queen [2007] NTCCA 1 (10 January 2007), [300]. Despite this ruling,
however, the conviction was upheld as it was amply supported by other evidence. A
special leave application to the High Court was unsuccessful: Murdoch v The Queen
[2007] HCATrans 321 (21 June 2007).

Facial and body mapping evidence can therefore be admitted in criminal proceed-
ings, but its use must be managed so as not to usurp the function of the jury as
decider of the facts. Two other cases decided at around the same time reached a
similar conclusion, though with some additional differentiation between facial and
body mapping. In the case of Tang, the Court of Criminal Appeal considered the
expert’s use of biometric methods:14

Dr Sutisno compared measurements and dimensions of faces (photo-
anthropometry) and individual facial and body features (morphological
analysis). She magnified photographs of the offender to the same size as the
suspect, changed the opacity of one before putting it on top of the other,
in order to see whether the features aligned or one could be overlayed
over the other (photograph superimposition). Furthermore, she identified
distinctive individual characteristic and habits, which she called “unique
identifiers”.
Dr Sutisno used photo-anthropometry as a first step in facial and body
mapping, but did not rely solely on the findings of this procedure because of
the possibility of two or more people having the same dimensions. She regards
morphological analysis as more accurate than photo-anthropometry, because it
compares individual facial and body features and takes into account distinctive
characteristic habits of the individual. She asserts that morphological analysis
provides results sufficient to show whether two sets of photographs [of] people
were of the same person or not.

The court went on to consider similarities between these techniques and other
biometrics such as fingerprint comparison, a more established method of forensic
identification. By analogy, it was accepted that expert evidence of similarities derived from comparison of facial or body photographs could also provide assistance to the jury, including on the basis of acquired or ‘ad hoc’ expertise:15

14 R v Hien Puoc Tang [2006] NSWCCA 167 (24 May 2006), [19]-[20] (Spigelman CJ).
15 R v Hien Puoc Tang [2006] NSWCCA 167 (24 May 2006), [120] (Spigelman CJ). The
concept of ad hoc expertise in relation to voice identification has been applied in cases
such as Butera v Director of Public Prosecutions (Vic) [1987] HCA 58; (1987) 164 CLR 180;
R v Leung and Wong [1999] NSWCCA 287; and more recently, Morgan v R [2016]
NSWCCA 25 (26 February 2016); and Nasrallah v R; R v Nasrallah [2015] NSWCCA
188 (17 July 2015).
The process of identification and magnification of stills from the videotape was
a process that had to be conducted by Dr Sutisno out of court. Furthermore,
the quality of the photographs derived from the videotape was such that the
comparison of those stills with the photographs of the Appellant could not be
left for the jury to undertake for itself. The identification of points of similarity
by Dr Sutisno was based on her skill and training, particularly with respect to
facial anatomy. It was also based on her experience with conducting such
comparisons on a number of other occasions. Indeed, it could be supported by
the experience gained with respect to the videotape itself through the course
of multiple viewing, detailed selection, identification and magnification of
images. By this process she had become what is sometimes referred to as an
“ad hoc expert”.

However, in order for the opinions of identity offered by the expert in this case to
be admissible, compliance with the specialised knowledge requirements of evi-
dence law had to be demonstrated. The court ruled that there was an inadequate
connection between the body mapping techniques being applied and the ‘training,
study or experience’ of the expert, and thus the opinions on offer did not pass the
requirements of the relevant evidence law:16

In the case of the Appellant the relevant evidence about posture was expressed
in terms of “upright posture of the upper torso” or similar words. The only
links to any form of “training, study or experience” was the witnesses’ study of
anatomy and some experience, entirely unspecified in terms of quality or
extent, in comparing photographs for the purpose of comparing “posture”.
The evidence in this trial did not disclose, and did not permit a finding, that
Dr Sutisno’s evidence was based on a study of anatomy. That evidence barely,
if at all, rose above a subjective belief and it did not, in my opinion, manifest
anything of a “specialised” character. It was not, in my opinion, shown to be
“specialised knowledge” within the meaning of s79.

In the Jung case a month later, a judge was again required to rule on the admissi-
bility of Dr Sutisno’s facial and body mapping analysis in a murder trial. In this
case, the defence called its own expert witnesses, who cast doubt on the claims
made for the techniques, referring to the quality of the photographs used. None-
theless, the judge ruled the evidence admissible, with questions of the quality of the
analysis going to its weight rather than admissibility:17

However adequate or inadequate the photographic materials utilised by Sutisno for the purpose of her analysis, the evidence on the voir dire does not establish that she has failed to disclose the factual material she has utilised (the photographic images), the nature of the methodology that she has employed and the type of analysis described in her reports (morphological analysis). I have carefully reviewed the reports and her evidence in order to determine whether it may properly be said that, having regard to the specific principles governing admissibility of expert evidence as identified by Heydon, JA in Makita … Dr. Sutisno’s evidence complies with the requirements for admissibility.
Insofar as she has identified the relevant factual matters that she has taken into account (the particular photographic images) the particular facial features which she maintains are examinable by reference to such images and the nature of the methodology employed by her, the tests of admissibility in those respects are satisfied. The question of the weight, including the reliability, of the opinion is, of course, a quite different matter and it is anticipated at trial that attention will be given to the quality of the photographic images, their alleged deficiencies and the significance that arises from those matters.

16 R v Hien Puoc Tang [2006] NSWCCA 167 (24 May 2006), [140] (Spigelman CJ,
Simpson and Adams JJ agreeing).
17 R v Jung [2006] NSWSC 658 (29 June 2006), [62]-[64].

Another case to examine the scientific reliability of the technique of body mapping
was based on a comparison of both moving and still images of an offender and the defendant by an expert in the field of anthropology and comparative anatomy. In
overturning the conviction in this case on appeal, the court expressed its concern
about the ‘lack of research into the validity, reliability and error rate of the pro-
cess’.18 Thus, the scientific reliability of body mapping has not been definitively
resolved.
Three years later in 2014, similar evidence was considered in the Honeysett case
(discussed in Buckland, 2014; Edmond & San Roque, 2014). The opinion of the
expert, based on body mapping analysis in an armed robbery case, identified the
appellant:19

He is an adult male of ectomorphic (thin, ‘skinny’) body build. His shoulders
are approximately the same width as his hips. His body height is medium
compared to other persons, and to familiar objects (eg doorways) visible in the
images from the [offence]. He carries himself very straight, so that his hips are
standing forward while his back has a very clearly visible lumbar lordosis
(the small of his back is bent forward) overhung by the shoulder area.
Although the offender covers his head and face with a cloth (what looks like a
T-shirt) … the knitted fabric is elastic and adheres closely to the vault of his
skull (= braincase). This shows that his hair is short and does not distort the
layout of the fabric. The shape of the head is clearly dolichocephalic (= long
head, elongated oval when viewed from the top) as opposed to brachycephalic
(= short head, nearly spherical). The offender is right-handed in his actions …

18 Morgan v R [2011] NSWCCA 257 (1 December 2011), [138] (Hidden J).


19 Honeysett v The Queen [2014] HCA 29 (13 August 2014), [14]-[17].
Although most of the body of the offender is covered by clothing, head wrap
and gloves, an area of naked skin above his wrist (between the glove and the
sleeve) in images … is visible and can be compared to the skin colour of a
female hotel employee on the same images.
[The appellant] is an adult male of ectomorphic (= slim) body buil[d]. His
hips and shoulders are of approximately the same width. His stance is very
straight with well marked lumbar lordosis and pelvis shifted forward. His
skull vault is dolichocephalic when viewed from the top. Comparison of
lateral (side) and front views of his head also indicates the head … is long but
narrow. His skin is dark, darker than that of persons of European extraction,
but not ‘black’ … He is right-handed – uses his right hand to sign
documents.

The expert concluded that there was a ‘high degree of anatomical similarity’
between the offender and the appellant, and this opinion was ‘strengthened by the
fact that he was unable to discern any anatomical dissimilarity between the two
individuals’. This evidence was allowed to be heard by the jury, which convicted
the appellant. On appeal, the court held that the evidence fell within the ‘training,
study or experience’ of the expert witness.20 The appeal was dismissed and the matter went before the High Court for further consideration.
The High Court unanimously agreed that, whatever the scientific merits of body
mapping as a reliable and validated field of study, the expert’s opinion in this case
was simply not sufficiently based on his expertise in anatomy:21

Professor Henneberg’s opinion was not based on his undoubted knowledge of
anatomy. Professor Henneberg’s knowledge as an anatomist, that the human
population includes individuals who have oval shaped heads and individuals
who have round shaped heads (when viewed from above), did not form the
basis of his conclusion that Offender One and the appellant each have oval
shaped heads. That conclusion was based on Professor Henneberg’s subjective
impression of what he saw when he looked at the images. This observation
applies to the evidence of each of the characteristics of which Professor
Henneberg gave evidence.

Professor Henneberg’s evidence gave the unwarranted appearance of science
to the prosecution case that the appellant and Offender One share a number of
physical characteristics. Among other things, the use of technical terms to
describe those characteristics – Offender One and the appellant are both
ectomorphic – was apt to suggest the existence of more telling similarity than to observe that each appeared to be skinny. Professor Henneberg’s opinion was not based wholly or substantially on his specialised knowledge within s 79(1). It was an error of law to admit the evidence.

20 Honeysett v R [2013] NSWCCA 135 (5 June 2013) (Macfarlan JA, Campbell J and Barr
AJ agreeing).
21 Honeysett v The Queen [2014] HCA 29 (13 August 2014), [43]-[46] (French CJ, Kiefel,
Bell, Gageler and Keane JJ). The appellant’s conviction was ordered to be quashed and a
new trial allowed.

Facial mapping has been accepted as a form of biometric evidence, though with
some reservations about the strength of expert opinions in particular cases. Body
mapping has not been definitively accepted as scientifically reliable, and the few
cases in which it has been considered in depth have cast doubt on the reasoning
processes involved.
The courts’ treatment of facial and body mapping as fields of ‘specialised
knowledge’ has been criticised. In relation to the Honeysett case, Edmond and San
Roque (2014, p. 324) have argued:

We contend that too much weak, speculative and unreliable opinion is
allowed into criminal proceedings, particularly in New South Wales. The
problems with the contested image comparison evidence in Honeysett
are representative of widespread problems with forensic science evidence
more broadly. Following an extended review of the forensic sciences,
involving submissions and hearings, a committee of the National Research
Council of the United States National Academy of Sciences concluded
that:
With the exception of nuclear DNA analysis … no forensic method has
been rigorously shown to have the capacity to consistently, and with a high
degree of certainty, demonstrate a connection between evidence and a specific
individual or source. … The simple reality is that the interpretation of forensic
evidence is not always based on scientific studies to determine its validity. This
is a serious problem.

Nonetheless, there is merit in challenging the scientific basis of new forensic tech-
niques such as facial and body mapping, in order to ensure that the best evidence is
presented before the courts. While this may not result in exclusion of expert opi-
nion evidence, it may affect the weight it is given in the overall context of criminal
proceedings.

Fingerprinting
As discussed in Chapter 2, fingerprinting has been routinely used by police in
criminal investigations since the beginning of the 1900s (Coyle, Field & Wender-
oth, 2009; Gans, 2011). Crime scene examiners may find ‘latent’ fingerprints or
palm prints on objects, which can be visualised using laboratory processes. The
prints can then be compared with those taken from a suspect or by searching for a
match against a database of prints. This can be done in an automated way, for
example, using the IDENT1 national fingerprint database that operates in the
United Kingdom.
Courts around the world have routinely admitted fingerprint evidence in crim-
inal proceedings for over a century.22 Typically, the expert witness in such cases is
an investigating police officer with specialised knowledge of fingerprinting techni-
ques, or a forensic analyst, who was involved in the fingerprint collection and
comparison process used in the investigation.23

Collection and comparison


The collection of fingerprints at a crime scene and their comparison to those
taken from a suspect or found on a forensic database are regulated by forensic
procedures legislation. The following criminal procedure legislation provides an
example:24

3ZJ Taking fingerprints, recordings, samples of handwriting or photographs


(1) In this section and in sections 3ZK and 3ZL:
“identification material”, in relation to a person, means prints of the
person’s hands, fingers, feet or toes, recordings of the person’s voice,
samples of the person’s handwriting or photographs (including video
recordings) of the person, but does not include tape recordings made for
the purposes of section 23U or 23V.
(2) A constable must not:
(a) take identification material from a person who is in lawful custody
in respect of an offence except in accordance with this section; or
(b) require any other person to submit to the taking of identification
material, but nothing in this paragraph prevents such a person con-
senting to the taking of identification material.
(3) If a person is in lawful custody in respect of an offence, a constable who
is of the rank of sergeant or higher or who is for the time being in charge
of a police station may take identification material from the person, or
cause identification material from the person to be taken, if:

22 In a 1912 case, it was observed: ‘Signatures have been accepted as evidence of identity
as long as they have been used. The fact of the individuality of the corrugations of the
skin on the fingers of the human hand is now so generally recognised as to require very
little, if any, evidence of it, although it seems to be still the practice to offer some expert
evidence on the point. A finger print is therefore in reality an unforgeable signature’:
Parker v R [1912] HCA 29; (1912) 14 CLR 681, Griffith CJ at 683, cited in R v Mitchell
[1997] ACTSC 93; (1997) 130 ACTR 48 (18 November 1997).
23 See, for example, the cases of R v Regan [2014] NSWDC 118 (16 June 2014); and DPP
v Watts [2016] VCC 1726 (23 November 2016).
24 Part ID of the Crimes Act 1914 (Cth). Taking a fingerprint is classified as a 'non-intimate
forensic procedure' which can be carried out with consent or by order of a senior police
officer or magistrate on a person in custody where other conditions are satisfied.

(a) the person consents in writing; or
(b) the constable believes on reasonable grounds that it is necessary to
do so to:
(i) establish who the person is; or
(ii) identify the person as the person who committed the offence; or
(iii) provide evidence of, or relating to, the offence; or
(ba) both of the following apply:
(i) the identification material taken, or caused to be taken, is finger-
prints or photographs (including video recordings) of the person;
(ii) the offence is punishable by imprisonment for a period of 12
months or more; or
(c) the constable suspects on reasonable grounds that the person has
committed another offence and the identification material is to be
taken for the purpose of identifying the person as the person who
committed the other offence or of providing evidence of, or relating
to, the other offence.
(4) A constable may use such force as is necessary and reasonable in the cir-
cumstances to take identification material from a person under this sec-
tion ….25

Police taking fingerprints under this provision may do so with or without the
consent of the suspect. However, if there is a failure of compliance with the
requirements of this section, or others that relate to the treatment of persons in
custody and the taking of forensic samples, the defence is entitled to challenge the
admissibility of the evidence based on the manner in which it was obtained. An
example is a case involving the North Korean transport ship Pong Su, in which
fingerprints of a suspect were taken by police officers. The defence argued that the
circumstances in which the fingerprints were taken were oppressive in that the
suspect ‘had been exposed to the elements for two days prior to being taken into
custody during which time he had no access to food and limited access to water
and was found by police to be tired’. It was also submitted that the fingerprints had
been illegally obtained due to non-compliance with the above provision (s3ZJ).
The judge, however, found that the police officers had acted reasonably and in good
faith, and that ‘[a]t the most any breach was a failure to comply with a procedural
requirement’ that did not require exclusion of the evidence.26

25 Subsections dealing with persons under the age of 18 years are not reproduced here.
Note that more restrictive conditions may apply to minors: R v SA, DD and ES [2011]
NSWCCA 60 (28 March 2011); Hibble v B [2012] TASSC 59 (20 September 2012).
See also Watkins v The State of Victoria & Ors [2010] VSCA 138 (11 June 2010), which
considered whether police had used excessive force in taking fingerprints from a suspect.
26 Pong Su (No. 2) [2004] VSC 492 (6 December 2004), per Kellam J at [31].

The process of comparing fingerprints may occur manually or by automated
means via a database. The proposition that each individual’s fingerprints are unique
appears to be a basic assumption behind forensic uses of fingerprint matching, and
has not been displaced by scientific advancement to date. Fingerprint comparison
differs from other forms of biometrics, such as DNA matching, in that it does not
rely on match probabilities. This means its presentation as evidence of identity is
considerably simpler. However, there is still a degree of judgment required in
making visual comparisons. Some critics point out that this inevitably introduces a
capacity for error (Edmond, 2015).
The following extract provides an example of the use of fingerprint comparison
adduced by the prosecution in a criminal case involving burglary:27

The real strength of the Crown case lay in the fingerprint evidence. Ms Lam, a
crime scene investigator, attended the scene at about 4.45 pm. She found a
number of fingerprints, including some left on the television set, and both
photographed them and took tape lifts from them. Mr Comber, a fingerprint
expert, gave evidence that he had compared a fingerprint lifted from the
television set with a fingerprint identified as that of the accused on the
National Automated Fingerprint Identification System (‘NAFIS’). He found
that the two prints had both been made by the middle finger of the same left
hand. There was no challenge to Mr Comber’s methodology or as to the
accuracy of this conclusion. I found him to be an impressive witness and
accepted his evidence. It was not suggested that the fingerprint obtained from
NAFIS had been incorrectly attributed to the accused and I was satisfied
beyond reasonable doubt that the print had been left on the television set
when touched by the accused.

The probative value of a fingerprint or palm print match must be assessed in the
context of all other evidence in a criminal trial, and it will be of greatest sig-
nificance if there is no apparently innocent explanation for how it came to be left
at a crime scene.28 This kind of evidence therefore operates as part of a circum-
stantial case against the defendant.
The following example of a fingerprint comparison report tendered during
police testimony relates to a match of prints left during a burglary, and a young
defendant identified as ‘JP’:29

27 R v Millard [2006] ACTSC 56 (6 June 2006), [15]. See also R v Fitzgerald [2005] SADC
118 (25 August 2005).
28 An unusual case where the defence sought to have fingerprint evidence excluded
entirely was an appeal in which the defence alleged that police had forged the defen-
dant’s fingerprint on a cheque: Mickelberg v The Queen [2004] WASCA 145 (2 July
2004).
29 JP v Director of Public Prosecutions (NSW) [2015] NSWSC 1669 (11 November 2015),
[7]. The police witness had prepared a ‘Certificate of Expert Evidence’ under s 177 of
the UEL legislation, stating his qualifications as an examiner and presenting his
conclusions.

During the course of my daily duties, I carefully compared all the finger and
palm impressions appearing in the photographs bearing Forensic Case Number
2819499 with the finger and palm impressions of [JP] born … as appearing on
the fingerprint form by placing those photographs one at a time side by side
with those finger and palm impressions and referring backwards and forwards
between them. I compared pattern type and ridge flow, friction ridge char-
acteristics, their relative positions to each other and the number of intervening
ridges between those characteristics, that is the finger or palm prints appearing
in the photographs bearing Forensic Case Number 2819499 against the finger
or palm impressions of [JP] born … as appearing on the fingerprint form. The
comparison process was carried out systematically and sequentially until all
available friction ridge detail had been compared between the finger and palm
impressions appearing in the photographs bearing Forensic case Number
2819499 and the finger and palm impressions of [JP] born … as appearing on
the fingerprint form.

Based wholly or substantially on my specialised knowledge and belief I am of
the following opinion:
• Graph W1 is identified to another person
• Graph W2 is identified to another person
• Graph W3 is identified to the Left Thumb of [JP] …

That is to say the impressions appearing in the photographs bearing
Forensic case Number 2819499 and labelled W3 are made by one of the
same [JP] born …

Although match probabilities are not involved in fingerprint comparisons, the
process of comparing two prints and arriving at a conclusion does involve the
identification of numerous points of comparison, sometimes referred to as ‘char-
acteristic points’.30 The more points that are compared, and the more similarity
between the compared points, the more persuasive will be any conclusion drawn
regarding identity. Although it is not strictly necessary for an expert witness to
explicitly describe all of the details of the matching process in court testimony,
cross-examination may be used by the defence to test the basis for concluding that
fingerprints are the same.
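
The point-counting logic that underlies such a comparison can be illustrated, in very simplified form, with the following Python sketch. The minutiae coordinates, the 'tolerance' threshold and the function name are invented for illustration only; operational systems such as NAFIS or IDENT1 also align, rotate and score prints in far more sophisticated ways.

    # Minimal sketch of characteristic-point (minutiae) comparison.
    # Coordinates, point types and the tolerance threshold are invented;
    # real systems also align, rotate and scale the prints before comparing.

    def matching_points(scene_print, suspect_print, tolerance=4.0):
        """Count scene minutiae with a nearby minutia of the same type
        (ridge ending or bifurcation) in the suspect's print."""
        matches = 0
        for (x1, y1, kind1) in scene_print:
            for (x2, y2, kind2) in suspect_print:
                close = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= tolerance
                if close and kind1 == kind2:
                    matches += 1
                    break
        return matches

    scene = [(10, 12, "ending"), (25, 40, "bifurcation"), (33, 8, "ending")]
    suspect = [(11, 13, "ending"), (24, 41, "bifurcation"), (70, 70, "ending")]
    print(matching_points(scene, suspect))   # 2 of the 3 scene points correspond

The higher the count of corresponding points, the more persuasive the examiner's conclusion of identity, which is why cross-examination often probes how many points were compared and how many actually matched.
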
The requirements for expert evidence mean that a witness with purported spe-
cialised knowledge should be able to explain how this provides a sound basis for

30 JP v Director of Public Prosecutions (NSW) [2015] NSWSC 1669 (11 November 2015),
[36] referring to ‘the case of Bennett v Police [2005] SASC 167 (4 May 2005) in which
“more than 20 characteristics … were common and identical”. In JP, the police witness
claimed to have examined 35 comparison points but did not specify how many were
considered to be a match with the defendant’s prints, as opposed to the overall con-
clusion of identity.

the opinions arrived at in the case.31 Where there are gaps in the explanations
offered by the prosecution’s experts, defence counsel may seek to have the opinion
evidence excluded entirely, or ask for the jury to be cautioned in giving it weight
as evidence.32 It is also possible at the appeal stage for an appellant to argue that the
fingerprint or other biometric evidence was not properly summarised by the judge
in instructing the jury.33
It may also be possible to challenge inferences drawn from physical evidence, such
as the estimated age of fingerprints. The time at which fingerprints were deposited
through contact with an object may be of importance in assessing its relevance in a
particular case. This will ordinarily involve additional forensic evidence.34
Another issue that judges must consider carefully is that a jury hearing that the
defendant’s fingerprints were matched to a crime scene using a police database may
infer that the defendant has a criminal history, which explains the inclusion on the
database. In such cases, the defence may seek to exclude evidence as unfairly pre-
judicial, or seek that the jury be discharged. A remedy is for the judge to warn the
jury against making an adverse inference of this kind.35

DNA identification
Identification using DNA is generally regarded as more discriminating than any other
biometric method. However, because it relies on the generation of a DNA profile from
a biological sample, it can be susceptible to court challenges, for example, on the basis
of sample integrity and the possibility of transference. Further complexities include the
scientific processes and statistical interpretations involved (Gans & Urbas, 2002).
Because DNA profiles are stored in increasingly large databases, new matching techni-
ques allowing ‘cold hits’ and partial match searches are now used routinely (Smith &
Mann, 2015). These issues will be discussed in turn, drawing on illustrative cases.

31 Leading authorities on specialized knowledge under UEL s79(1) are Makita (Australia)
Pty Ltd v Sprowles [2001] NSWCA 305 (14 September 2001); HG v The Queen [1999]
HCA 2; 197 CLR 414; and Honeysett v The Queen [2014] HCA 29; 253 CLR 122.
32 In JP v Director of Public Prosecutions (NSW) [2015] NSWSC 1669 (11 November 2015),
defence arguments seeking exclusion of the fingerprint comparison evidence on the
basis that the expert had insufficiently explained his reasoning process were unsuccessful.
The judge noted at [43] that ‘with fingerprint evidence it will often be the case that
“little explicit articulation or amplification” of how the stated methodology warrants the
conclusion that two fingerprints are identical will be required before it can be con-
cluded that the second condition of admissibility under s 79(1) has been satisfied’
(emphasis original), citing Dasreef Pty Ltd v Hawchar [2011] HCA 21; 243 CLR 588.
33 Ghebrat v The Queen [2011] VSCA 299 (12 October 2011).
34 See R v SMR [2002] NSWCCA 258 (1 July 2002).
35 See, for example, the defence submission in R v Ahola (No. 6) [2013] NSWSC 703 (14
May 2013), [3]: ‘The submission is that the jury would inevitably infer from the [police
officer’s testimony] that the accused is a person with a criminal record whose finger-
prints were held by the police, prior to them being taken from him with regard to this
matter. It is that inference that forms the foundation for the application of the discharge
of the whole jury’. The submission was unsuccessful in this case.

Collection and analysis


Compliance with forensic procedures legislation is a general pre-condition to the
admissibility of DNA evidence. This applies to the taking of forensic material such
as hair samples or buccal swabs from suspects, arrested persons and others.36 Alter-
natively, the defence can challenge the integrity of samples collected and stored by
police, on the basis that ‘chain of custody’ requirements have not been observed.
This can support a hypothesis of ‘contamination’ (Edwards, 2006; Findlay & Grix,
2003).
As was discussed in Chapter 3, one of the most striking contamination cases
worldwide is that of Farah Jama, who was wrongly convicted of rape on the basis
of a DNA match. This was subsequently found to have been most likely a result of
accidental contamination of the alleged rape evidence with a sample of Jama’s
DNA that was present in the same forensic laboratory, having been taken the day before
during an unrelated investigation (Rayment, 2010; Cashman & Henning, 2012;
Krone, 2012). Cases such as this reinforce the need for compliance with forensic
collection, storage and analysis (‘chain of custody’) protocols, as errors can be very
hard to identify and correct at trial.
In addition to accidental contamination, it is possible for DNA evidence to be
manipulated by deliberate interference. In another case, defence lawyers suggested
that the presence of an assault victim’s blood on the clothing of the defendant may
have been the result of either contamination or deliberate interference in a police
facility, as one of the defence experts discerned 'post-transfusion' artefacts in a tested clothing
sample, indicating that the blood involved may have come from the victim after
police had taken a blood sample in the hospital rather than as a result of the alleged
attack (Haesler, 2006).37
It should also be recalled (as discussed in Chapter 3) that a DNA match will
ordinarily only have legal significance if there is no innocent explanation for the
DNA being found where it was. Finding the defendant’s DNA at the crime scene
will normally be of little or no relevance if that happens to be the defendant’s own
home or workplace. However, even if it is a location where the defendant’s DNA
might not be expected to be found, there may be an innocent explanation for its
presence. One explanation, often favoured by defence lawyers, is ‘transference’.

36 See, for example, Walker v Budgen [2005] NSWSC 898 (7 September 2005); and Hibble
v B [2012] TASSC 59 (20 September 2012) dealing with a DNA sample taken from a
13-year old suspect.
37 R v Lisoff [1999] NSWCCA 364 (22 November 1999). The court ruled that the matter
was one that could be put before a jury for resolution, rather than require exclusion on
the grounds of unfair prejudice to the defendant: “There is nothing so extraordinary
about the conflict in the evidence presented in this case which would justify the con-
clusion that a careful and sensible jury, properly directed as to the relevant law and as to
the relevant evidence, could not decide in a reasoned and responsible way whether or
not the Crown had demonstrated beyond reasonable doubt that the body of evidence
supporting the Crown case should be preferred to the opposed body of evidence” [64].

Because DNA is found in even small biological samples, it may be transferred
through physical contact between persons or objects, and then onto other persons
or objects. A person’s DNA may be found at a location where he or she has never,
or not recently, been. In order for the prosecution to be able to use the presence of
DNA as proof of involvement in a crime, it may then be necessary to negate,
beyond reasonable doubt, the possibility of transference. This situation has arisen in
several noteworthy cases, including Hillier. The defendant in that case was charged
with the murder of his estranged partner. Part of the prosecution’s evidence was
that his DNA was found on the deceased’s pyjamas. However, none of the expert
witnesses at the trial were able to rule out the possibility of transference, through
the couple’s children:38

There is nothing in the evidence to exclude the possibility that the children
may have had some of the appellant’s DNA transferred to their sleeves or
other parts of their clothing when they hugged him at the end of a week spent
in his care, and then subsequently hugged their mother in a similar manner.
Nor, is there any reason to suppose that DNA left on their clothing after
contact with the appellant might not have been transferred to the deceased’s
pyjamas at some later stage when she had been handling that clothing.

This was the basis on which the murder conviction was quashed. However, on
appeal by the prosecution that decision was overturned and a re-hearing of the appeal
was ordered. The re-heard appeal ordered a re-trial, at which the defendant elected to be
tried by judge alone, rather than before a jury as in the first trial, and he was
acquitted.39
In a more recent case, Fitzgerald, the transference problem was again raised by
the defence in a murder trial. On the prosecution’s case, the defendant’s DNA was
found on an object, a didgeridoo, in the house at which a fatal assault took place.
Because the possibility of secondary transfer could not be ruled out, the court
ultimately allowed an appeal and ordered that a verdict of acquittal be entered.40
In the 2013 case Maryland v King, the US Supreme Court upheld the use of
DNA sampling in the criminal justice system against a challenge under the Fourth
Amendment to the US Constitution, which prohibits unreasonable searches and
seizures and requires warrants to be issued by a judge and supported by probable
cause. King argued that the Maryland DNA collection legislation violated the
Fourth Amendment. Although the Maryland Court of Appeals found the legislation
was unconstitutional, the US Supreme Court held that taking DNA is a legitimate
procedure to identify arrestees.

38 Hillier v R [2005] ACTCA 48 (15 December 2005), (Higgins CJ and Crispin P), [60].
39 The High Court appeal was R v Hillier [2007] HCA 13 (22 March 2007); which was
followed by re-heard appeal in Hillier v R [2008] ACTCA 3 (6 March 2008); the final
acquittal is unreported.
40 Fitzgerald v The Queen [2014] HCA 28 (13 August 2014).

The majority opinion considered that the Fourth Amendment permits police to
undertake ‘routine identification processes’ in relation to arrestees,41 including
photographing and fingerprinting arrestees as part of the associated administrative
process.42 Further, it considered that this is part of a legitimate 'need for law enforcement officers
in a safe and accurate way to process and identify the persons and possessions they
must take into custody’43 and that DNA sampling is an extension of these more
established methods.44 Further, it considered that the cheek swab used to collect bio-
logical material was ‘quick’, ‘painless’ and ‘no more invasive than fingerprinting’.45
According to the dissenting view in the case, the Supreme Court’s finding pro-
motes the collection of DNA by police from individuals who have not committed
serious offences, or are even arrestees. Justice Scalia opined that the approach is a
shift towards a ‘genetic panopticon’46 and ‘[n]o matter the degree of invasiveness,
suspicionless searches are never allowed if their principal end is ordinary crime-
solving’.47
Roth (2013, p. 298) argues that running an arrestee's DNA profile against a
database of DNA collected at the scenes of unsolved crimes, seeking a 'cold hit',
rather than against a database of known offenders to establish his identity, 'suggests
that the state contemplates the arrest as a proxy for criminality rather than as a
means of covering all those in custody whose identification needs confirmation’.

Scientific basis
The science underlying DNA identification has been extensively assessed in crim-
inal proceedings around the world since the late 1990s. In a 2001 case that provides
a representative example, the Profiler Plus DNA matching technology, based on
Polymerase Chain Reaction (PCR) analysis that had been in use for over a decade,
was found to be sufficiently accepted within the scientific community to be a valid
means of identification in criminal trials. The judge stated:48

The evidence in the present case was clear and, in my view, overwhelming.
Whilst the Profiler Plus system is relatively new, it utilizes familiar technology
for amplification and inspection of STR loci which technology is widely,
almost universally, accepted in the relevant scientific community as reliable

41 Maryland v. King, 133 S. Ct. 1958, 1966 (2013), at 1976.
42 Ibid, quoting Cnty. of Riverside v. McLaughlin, 500 U.S. 44, 58 (1991).
43 Ibid at 1970.
44 Ibid at 1977.
45 Ibid at 1968.
46 Ibid at 1990.
47 Ibid at 1982.
48 R v Karger [2001] SASC 64 (29 March 2001), [229], [614] (Mullighan J) within a long
and highly detailed judgment in which virtually every aspect of the technology was
judicially considered. Although some of the primer sequences used in Profiler Plus had
not been disclosed by the manufacturer, this was not regarded as an impediment to
establishing reliability (Whiley & Hocking, 2003).

and accurate. The variations fundamental to the Profiler Plus system, namely
the particular loci and the number of them, the new primer sequences if they
are new, and the use of Genotyper, have clearly been shown to have been
accepted by the relevant scientific community as accurate and reliable. … The
evidence overwhelmingly established that the Profiler Plus system is generally
accepted throughout the forensic science community as reliable and accurate
in DNA analysis for the purposes of human identification, including with low
levels of DNA.

In many countries around the world, the main criteria for admissibility of opinion
evidence from experts are those found in the ‘specialised knowledge’ provisions of
evidence law rather than scientific criteria of reliability, the latter approach having
developed from the US case Daubert v Merrell Dow Pharmaceuticals, Inc.49 The 'specialised
knowledge' standard is that there be a field of specialised knowledge, that the witness
have such knowledge based on training, study or experience, and that the opinions of
the witness be wholly or substantially based on that knowledge.
tigations is now routinely accepted as a field of specialised knowledge (Gans &
Urbas, 2002; Smith & Mann, 2015).
The first stage of DNA identification involves the generation of a profile from a
crime scene and its comparison with the defendant’s profile. The legal significance
of the presence or absence of a match has been explained as follows:50

A DNA profile taken from an evidence sample is compared to a sample pro-
vided by an individual. If the DNA profile taken from an evidence sample
does not match the DNA profile of a person, then that individual can be
conclusively excluded as being the source of the DNA from the evidence. If
the profiles of the evidence sample and an individual do match, then there are
two competing possibilities to explain the matching DNA profiles. The first
possibility is that the DNA profile match has occurred because the DNA has
originated from the person in question. The second possibility is that the DNA
match has occurred by chance. That is, that there is someone else in the
population who just happens to have the same DNA profile as the person in
question. The probability of the evidence (i.e. probability of the sample
matching the known or unknown person) given each of these scenarios is
calculated using statistical analysis. A population database is used to provide an
indication of the relevant prevalence of each of the alleles that were observed
in the population. The given ratio of the two probabilities is called the like-
lihood ratio.

49 Honeysett v The Queen [2014] HCA 29 (13 August 2014). The Daubert case is the US
Supreme Court decision of Daubert v Merrell Dow Pharmaceuticals, Inc [1993] USSC 99;
509 U.S. 579; 113 S.Ct. 2786; 125 L.Ed.2d 469; No. 92–102 (28 June 1993).
50 Aytugrul v R [2010] NSWCCA 272 (3 December 2010), [80] (McClellan CJ at CL)
citing Sulan J in R v Carroll [2010] SASC 156 (28 May 2010), [28].

In other words, a DNA match only provides a link between the defendant and a
crime on a probabilistic basis, whereas a non-match will exclude identification
conclusively (Gans & Urbas, 2002). The significance of such a match will depend
on the context and other evidence. For the match to carry weight against the defendant,
any innocent explanation for the presence of the defendant's DNA at the crime scene, or
on the body of a sexual assault victim, must be excluded (Julian & Kelty, 2012; Julian et al.,
2012). The analysis may be complicated further where the crime scene sample
contains ‘mixed profiles’ indicating that it contains the DNA of more than one
individual.51
There is, however, considerable room for interpretation in the significance and
proper presentation of the statistical analysis that accompanies a DNA match
(Goodman-Delahunty & Tait, 2006). This is because tools such as Profiler Plus
only use a fixed number of markers or loci from the non-coding genetic
sequences that are used in generating DNA profiles, meaning that two different
individuals could have the same profile within these parameters. This then
allows analysts to say that a match was found, such as between a crime scene
sample and one taken from the defendant during investigation, and to state the
approximate probability of this match being a result not of commonality of
origin but of a random match. This is typically expressed as a random match
probability relative to the general population or a subset of it, based on a
representative sample.
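
A simplified illustration of how such a figure is built up follows. The per-locus frequencies below are invented, and treating the loci as independent is a simplification; accredited laboratories work from validated population data and apply correction factors.

    # Hypothetical per-locus genotype frequencies for the profile observed in the
    # crime scene sample (invented numbers for illustration only).
    locus_frequencies = [0.12, 0.08, 0.05, 0.10, 0.07, 0.09]

    random_match_probability = 1.0
    for f in locus_frequencies:
        random_match_probability *= f   # simplifying assumption: independent loci

    print(f"Random match probability: about 1 in {round(1 / random_match_probability):,}")
    # With these invented figures, roughly 1 in 3.3 million; typing more loci makes
    # the profile rarer, which is why kits that examine more loci are more
    # discriminating.
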
The composition and size of the sample used may be significant in supporting
the inferences to be drawn. In practice, population sample databases of only a few
hundred are accepted by the courts as being sufficiently discriminating to allow
valid statistical inferences to be drawn:52

Databases have been built up by which the probability that the DNA of
another person within the general population would match the DNA of the
deceased at particular genetic markers may be estimated … It is accepted that
the precision of the figures produced from any data base is dependent upon
the size of the sample; the larger the sample, the greater the precision in the
figures produced. The database for the RFLP results was compiled from the
testing of 500 people who had donated blood at the Red Cross Blood
Bank … The statistical validity of databases compiled from as low as 100 to
150 people is supported by a number of eminent scientists and scientific
bodies.
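
The relationship between sample size and precision can be illustrated with a simple, purely hypothetical calculation of the confidence interval around an allele frequency estimated at 10 per cent (the frequency and the sample sizes are invented for the purpose of the illustration):

    # Approximate 95% confidence interval for an allele frequency estimated at 0.10,
    # from population samples of different sizes (hypothetical illustration only).
    import math

    p = 0.10                                     # observed allele frequency
    for n in (100, 150, 500, 5000):
        se = math.sqrt(p * (1 - p) / n)          # standard error of a proportion
        low, high = p - 1.96 * se, p + 1.96 * se
        print(f"n = {n:5d}: 95% CI roughly {low:.3f} to {high:.3f}")
    # The interval narrows from about 0.04-0.16 at n = 100 to about 0.09-0.11 at
    # n = 5000: larger databases give more precise, though not different, figures.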

In the Pantoja case, a question arose about the appropriateness of using a general
database of profiles taken from a multicultural society where a majority of the

51 Tuite v The Queen [2015] VSCA 148 (12 June 2015); and R v Xie (No. 18) [2015]
NSWSC 2129 (28 July 2015).
52 R v Milat (1996) 87 A Crim R 446; see also R v To [2002] NSWCCA 247 (26 June
2002).

adults were of white European ethnicity, when the defendant was a member of a
distinctive group (identified as South American Quechua Indians).53 The court
held that this did not matter, as it was the racial characteristics of the (unknown)
offender that were relevant to the appropriateness of the statistical database, rather
than the ethnicity of the defendant. However, the statistical validity of the database
still had to be established, which led to a successful appeal, a re-trial and a second
appeal in which the conviction was finally affirmed.54

Juror comprehension
A frequently discussed question is whether juries are capable of understanding
complex scientific information such as biometric identification technology, and if they
are to be required to evaluate such evidence, the forms in which it should be presented
so as to best facilitate comprehension (Goodman-Delahunty & Wakabayashi, 2012). A
starting point is the view that complexity alone should not preclude scientific evidence
from being heard by a jury:55

Juries are frequently called upon to resolve conflicts between experts. They
have done so from the inception of jury trials. Expert evidence does not, as a
matter of law, fall into two categories: difficult and sophisticated expert evi-
dence giving rise to conflicts which a jury may not and should not be allowed
to resolve; and simple and unsophisticated expert evidence which they can.
Nor is it the law, that simply because there is a conflict in respect of difficult
and sophisticated expert evidence, even with respect to an important, indeed
critical matter, its resolution should for that reason alone be regarded by an
appellate court as having been beyond the capacity of the jury to resolve.

However, there are recognised dangers in the presentation of statistical identifica-
tion evidence, such as the ‘prosecutor’s fallacy’, which courts have had to consider
(discussed in Chapter 3). This fallacy, so called because it tends to assist the prose-
cution rather than the defence, involves misstating the estimated frequency of the
defendant’s DNA profile (for example, 1 in 1 million) as the likelihood that the
defendant left the crime scene DNA (which in a population of 23 million will be
very different from a million to one). The case of Keir resulted in a quashed

53 R v Pantoja [1996] NSWSC 57 (1 April 1996).
54 R v Pantoja [1998] NSWSC 565 (5 November 1998).
55 Velevski v The Queen (2002) 76 ALJR 402; [2002] HCA 4, [182] (Callinan and
Gummow JJ). This case did not concern DNA evidence but rather knife wounds and
expert opinion as to how they could have been inflicted. Although expert opinion
evidence is largely governed by s79 of the UEL, s80 does allow specialised knowledge
to be supplemented by ‘common knowledge’ as part of the expert’s reasoning, as it
provides: ‘Evidence of an opinion is not inadmissible only because it is about: (a) a fact
in issue or an ultimate issue, or (b) a matter of common knowledge’. Thus, an expert
witness may ‘have regard to matters that are within the knowledge of ordinary persons
in formulating his or her opinion’ (Gaudron J, [82]).

conviction on appeal, on the basis that the judge had committed the prosecutor’s
fallacy in summing up the evidence to the jury.56
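
The fallacy can be illustrated with round, invented figures: a profile frequency of 1 in 1 million is not the same thing as odds of a million to one that the defendant is the source, because in a population of 23 million around 23 people would be expected to share the profile.

    # Illustration of the prosecutor's fallacy with invented round numbers.
    profile_frequency = 1 / 1_000_000      # estimated frequency of the DNA profile
    population = 23_000_000                # size of the relevant population

    expected_sharers = population * profile_frequency
    print(f"People expected to share this profile: about {expected_sharers:.0f}")
    # About 23 people share the profile, so, absent other evidence, the chance that
    # a particular one of them (the defendant) left the sample is closer to 1 in 23
    # than to 999,999 in 1,000,000. Equating the profile frequency with the
    # probability of innocence is the fallacy.
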
With regard to DNA match probabilities, it has also been argued that mathe-
matically equivalent ways of expressing the same information can have different
levels of persuasiveness to a jury. In the case of Aytugrul, the following evidence
was at issue (Urbas, 2012):57

A hair found on the deceased's thumbnail had been subjected to mitochon-
drial DNA testing. The results of that testing showed two things: first, that the
appellant could have been the donor of the hair and, second, how common
the DNA profile found in the hair was in the community. This second aspect
of the results was expressed in evidence both as a frequency ratio and as an
exclusion percentage. The expert who had conducted the test gave evidence
to the effect that one in 1,600 people in the general population (which is to
say the whole world) would be expected to share the DNA profile that was
found in the hair (a frequency ratio) and that 99.9 per cent of people would
not be expected to have a DNA profile matching that of the hair (an exclusion
percentage).

The defence sought to argue that there was unfair prejudice in putting the per-
centage before the jury, as this was overly persuasive and invited a subconscious
‘rounding up’ to 100 per cent certainty. However, this was not accepted by the
relevant court given the expert’s explanations:58

The unfair prejudice said to arise in this case was alleged to flow from the use
of a percentage figure, which carried a “residual risk of unfairness deriving
from the subliminal impact of the raw percentage figures” by way of rounding
up the percentage figure to 100. If the exclusion percentage were to be
examined in isolation, the appellant’s arguments appear to take on some force.
But to carry out the relevant inquiry in that way would be erroneous. In this
case, both the frequency ratio and the manner in which the exclusion per-
centage had been derived from the frequency ratio were to be explained in
evidence to the jury. The risk of unfair prejudice – described by the appellant
as the jury giving the exclusion percentage “more weight … than it
deserved” – was all but eliminated by the explanation.

56 R v Keir [2002] NSWCCA 30 (28 February 2002). The defendant was convicted on the
re-trial, and an appeal against that conviction was unsuccessful: Keir v R [2007]
NSWCCA 149 (6 June 2007). The prosecutor’s fallacy was also discussed in Aytugrul v
R [2010] NSWCCA 272 (3 December 2010).
57 Aytugrul v The Queen [2012] HCA 15 (18 April 2012), (French CJ, Hayne, Crennan and
Bell JJ), [2] (note omitted after the words ‘frequency ratio’, as follows: ‘Sometimes called
a “random occurrence ratio” or a “frequency estimate”’). Heydon J agreed with the
majority in a separate judgment.
58 Aytugrul v The Queen [2012] HCA 15 (18 April 2012), (French CJ, Hayne, Crennan and
Bell JJ) [30] (note omitted).

The concurring judgment in Aytugrul suggested that the jury could be trusted to
work out the statistical issues, even though they were difficult:59

No doubt both the "frequency estimate" and the "exclusion percentage"
evidence, like many other aspects of the expert evidence, were difficult for the
jury to deal with. The field is arcane. But any criminal jury of 12 is likely to
contain at least one juror capable of realising, and demonstrating to the other
jurors, that the frequency estimate was the same as the exclusion percentage.
Further, detailed evidence was given about how the “exclusion percentage”
evidence was derived from the concededly admissible “frequency estimate”
evidence, and how their significance was identical.
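
The arithmetic equivalence referred to in the concurring judgment can be shown in a single line of calculation, using the figures given in the case:

    # Converting a frequency ratio into an exclusion percentage (Aytugrul figures).
    frequency_ratio = 1 / 1600                    # 1 in 1,600 people share the profile
    exclusion_percentage = (1 - frequency_ratio) * 100

    print(f"Exclusion percentage: {exclusion_percentage:.4f}%")   # 99.9375%
    # The two figures convey the same information; the dispute in Aytugrul was
    # about which presentation a jury might find more persuasive.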

In a 2015 case, the question arose whether the scientific reliability of a new statistical
technique applied to the analysis of small amounts of DNA used in matching was a
matter going to the admissibility of expert evidence. The court held that it does
not, but rather affects the probative value of the evidence and the potential pre-
judicial effect that its presentation to the jury may have. In this case, the appeal
judges agreed with the conclusions reached by the trial judge in relation to both
aspects of the evidence, holding that the probative value of the evidence was not
outweighed by the alleged prejudicial effect:60

In my view, the DNA evidence viewed as a whole is highly probative. It may
be used by a jury to put the accused both inside and outside the house on the
night in question. This is so, notwithstanding only small amounts of DNA
matching that of the accused were found on the relevant items inside the
house, and that other people also contributed to the DNA found on these
items. The limitations in the STRmix methodology acknowledged by
the prosecution witnesses must have some effect on the quality of the DNA
evidence. However, I am not persuaded that they erode its probative value to
any significant degree. Whilst the amounts of DNA may be small in some
cases, the fact that DNA matching the accused’s was found on a number of
items both inside and outside the house in my view fortifies the overall probative
value of the DNA evidence, which I assess to be high.
In this case, the danger of unfair prejudice is said to arise from a particular
issue identified by Ms Taupin in the STRmix analysis of Item 1–2, although it
has wider consequences as it is a product of the way in which STRmix works
generally. Ms Taupin identified, having closely examined the STRmix case-
notes, that at two of the 10 markers the probability of the evidence given the
prosecution hypothesis was very low, yet the likelihood ratios for the markers
favoured the prosecution hypothesis. Ms Taupin pointed out that this means
that STRmix produces likelihood ratios strongly favouring the prosecution

59 Aytugrul v The Queen [2012] HCA 15 (18 April 2012), [75] (Heydon J).
60 Tuite v The Queen [2015] VSCA 148 (12 June 2015), [122]-[124].

hypothesis in circumstances where there is only very weak evidence to support
that hypothesis. That, in combination with the very high likelihood ratios
generated by STRmix, is said to be unfairly prejudicial to the accused and not
something that should be allowed in a criminal trial.
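
The concern described in this passage can be illustrated with invented numbers. These are not the figures from Tuite, and the calculation is a deliberately simplified caricature of how any likelihood-ratio approach, not STRmix specifically, combines per-marker results: a marker can return a likelihood ratio well above one even though the probability of the evidence under the prosecution hypothesis is itself very low, provided the probability under the defence hypothesis is lower still, and multiplying across markers can then produce a very large overall figure.

    # Hypothetical per-marker probabilities of the evidence under the prosecution
    # hypothesis (Hp) and the defence hypothesis (Hd). Invented figures; a simplified
    # caricature of how per-marker likelihood ratios are combined.
    markers = [
        (0.80, 0.0010),   # strong support for Hp at this marker
        (0.60, 0.0020),
        (0.02, 0.0001),   # P(E|Hp) is very low, yet the LR of 200 still favours Hp
        (0.01, 0.0001),   # likewise: weak evidence, but a large likelihood ratio
    ]

    overall_lr = 1.0
    for p_hp, p_hd in markers:
        lr = p_hp / p_hd
        overall_lr *= lr
        print(f"P(E|Hp) = {p_hp:.2f}  P(E|Hd) = {p_hd:.4f}  per-marker LR = {lr:,.0f}")

    print(f"Combined likelihood ratio: about {overall_lr:,.0f}")   # about 4.8 billion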

A more subtle problem relating to juror comprehension is sometimes referred to


as the ‘CSI effect’ by reference to a popular television series depicting forensic
science in investigations (Wise, 2010). The perceived problem is that, even where
experts provide accurate information about the limitations and confidence levels
of their analysis, the jury may still be overwhelmed by the scientific nature of the
evidence and give it more weight than it deserves.61 The same may be true
where evidence such as a DNA match is of marginal probative value in a
prosecution:62

Moreover, one of the dangers associated with DNA evidence, is what has
come to be known as the ‘CSI effect’. The ‘CSI effect’ is a reference to the
atmosphere of scientific confidence evoked in the imagination of the average
juror by descriptions of DNA findings. As we have explained, as a matter of
pure logic, the DNA evidence has little or no probative value. By virtue of its
scientific pedigree, however, a jury will likely regard it as being cloaked in an
unwarranted mantle of legitimacy – no matter the directions of a trial
judge – and give it weight that it simply does not deserve. The danger of
unfair prejudice is thus marked, and any legitimate probative value is, at best,
small.

Conversely, jurors exposed to fictional representations of forensic science may
unrealistically expect to be presented with DNA or other biometrics in every case,
and when this does not eventuate, may wrongly view this as a defect in the pro-
secution’s case (Roux et al., 2012; Whiley & Hocking, 2003; Meyers, 2007).63

DNA databases
As increasing numbers of DNA profiles have been collected during criminal
investigations, these have been stored on police databases, leading to the possibility
of repeated use including by searching for a ‘cold hit’ between a crime scene
sample and a profile already added to the database.64 Forensic databases have

61 The ‘CSI effect’ has also been referred to as the ‘white coat’ effect: Morgan v R [2011]
NSWCCA 257 (1 December 2011), [145], cited in R v MK [2012] NSWCCA 110 (4
June 2012).
62 DPP v Wise (a pseudonym) [2016] VSCA 173 (21 July 2016), [70]; DPP v Massey (a
pseudonym) [2017] VSCA 38 (6 March 2017), (Weinberg JA), [24].
63 R v Drummond (No. 2) [2015] SASCFC 82 (5 June 2015).
64 See, for example, Sleiman v Murray [2009] ACTSC 82 (15 July 2009); and R v Smith [No
1] [2011] NSWSC 725 (26 May 2011).

become a powerful investigative tool (Smith, 2016). Not surprisingly, then, the
conditions under which a defendant’s profile may have been obtained, stored
and retained on a DNA database have become the subject of scrutiny in
criminal trials.
Forensic procedures legislation governs how forensic samples stored in DNA
databases may be used. The legislation distinguishes between volunteers, arrested
persons and convicted persons, with different requirements applying to each class.
Permissible matching is regulated by matching tables in the legislation. Finally,
requirements relating to use and removal of profiles from forensic databases are set
out in detail. The failure of police or other officials to comply with the require-
ments of forensic procedures legislation can readily lead to exclusion of evidence
obtained from a stored DNA profile.65
The unlawful retention of DNA profiles on databases was at issue in the landmark
Marper case in the United Kingdom.66 Two individuals, one of them a 12-year-
old, whose profiles had been entered on the database when they were arrested for a
reportable offence, sought to have them removed when they were not convicted.
The House of Lords found in favour of the police, holding that the retention was
lawful under the applicable legislation, but the European Court of Human Rights
ruled otherwise, holding that the ‘blanket and indiscriminate nature’ of the retention
regime under the legislation did not strike a proper balance between public and
private interests (Smith, 2016).
A further consideration regarding the use of forensic DNA databases is the possibi-
lity of searching for partial matches, also known as ‘familial searching’. This involves
recording and investigating matches that nearly but do not fully coincide, and so
cannot be from the same individual. However, it may be that the crime scene
sample came from a close relative of someone who is on the DNA database, which
provides an investigative lead even when the actual offender’s profile is not on the
database. This significantly extends the scope of ‘cold hit’ matching processes, and
has been used to solve serious crimes in other countries. In many jurisdictions,
forensic procedures legislation does not specifically regulate partial matching, but
appears to allow its use (Smith & Urbas, 2012).
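
The idea behind partial or familial matching can be sketched as follows (the profiles, loci and scoring are invented): rather than requiring every allele to coincide, the search counts how many alleles two profiles share, and a high but incomplete overlap is flagged as a possible close relative of the person whose profile is on the database.

    # Sketch of partial (familial) matching: count shared alleles across loci.
    # Profiles map locus names to allele pairs; all data here is invented.

    def shared_alleles(profile_a, profile_b):
        total = 0
        for locus, alleles_a in profile_a.items():
            remaining = list(profile_b.get(locus, ()))
            for allele in alleles_a:
                if allele in remaining:
                    remaining.remove(allele)   # each allele can only be matched once
                    total += 1
        return total

    crime_scene = {"D3": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}
    database_entry = {"D3": (15, 16), "vWA": (16, 18), "FGA": (22, 24)}

    score = shared_alleles(crime_scene, database_entry)
    print(f"Shared alleles: {score} of 6")
    # 4 of 6: not the same person, but enough overlap to be investigated as a
    # possible close relative of the database profile's donor.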

65 Examples include Hibble v B [2012] TASSC 59 (20 September 2012); R v Dean [2006]
SADC 54 (25 May 2006).
66 R v Marper and S (2002) EWCA Civ 1275 (Court of Appeal); R v Marper and S (2004)
UKHL 39 (House of Lords); Case of S and Marper and the United Kingdom [2008] ECHR
1581.

References
Buckland, P. (2014). Honeysett v The Queen (2014): Opinion evidence and reliability: A
sticking point. Adelaide Law Review 35(2), 449.
Cashman, K. & Henning, T. (2012). Lawyers and DNA: Issues in understanding and chal-
lenging the evidence. Current Issues in Criminal Justice 24, 69.
Coyle, I., Field, D. & Wenderoth, P. (2009). Pattern recognition and forensic identification:
The presumption of scientific accuracy and other falsehoods. Criminal Law Journal 33, 214.
Edmond, G. (2015). What lawyers should know about the forensic “sciences”. Adelaide Law
Review 37, 33.
Edmond, G. & San Roque, M. (2012). The cool crucible: Forensic science and the frailty of
the criminal trial. Current Issues in Criminal Justice 24(1), 51.
Edmond, G. & San Roque, M. (2014). Honeysett v The Queen: Forensic science, ‘specialised
knowledge’ and the uniform evidence law. Sydney Law Review 36(2), 323.
Edwards, K. (2006). Cold hit complacency: The dangers of DNA databases re-examined.
Current Issues in Criminal Justice 18(1), 92.
Findlay, M. & Grix, J. (2003). Challenging forensic evidence? Observations on the use of
DNA in certain criminal trials. Current Issues in Criminal Justice 14(3), 269.
Gans, J. (2005). DNA identification and rape victims. University of New South Wales Law
Journal 28(1), 272.
Gans, J. (2007a). Much repented: Consent to DNA sampling. University of New South Wales
Law Journal 30(3), 579.
Gans, J. (2007b). Catching Bradley Murdoch: Tweezers, pitchforks and the limits of DNA
sampling. Current Issues in Criminal Justice 19, 34.
Gans, J. (2007c). The Peter Falconio investigation: Needles, hay and DNA. Current Issues in
Criminal Justice 18(3), 415.
Gans, J. (2011). A tale of two High Court forensic cases. Sydney Law Review 33(3), 515.
Gans, J. & Urbas, G. (2002). DNA evidence in the criminal justice system. Trends and
Issues in Crime and Criminal Justice No. 226. 3. Canberra: Australian Institute of
Criminology.
Goodman-Delahunty, J. & Tait, D. (2006). DNA and the changing face of justice. Australian
Journal of Forensic Sciences 38, 97.
Goodman-Delahunty, J. & Wakabayashi, K. (2012). Adversarial forensic science experts: An
empirical study of jury deliberation. Current Issues in Criminal Justice 24(1), 85.
Haesler, A. (2006). DNA in court. Journal of the Judicial Commission of New South Wales
8(1), 121.
Julian, R. & Kelty, S. (2012). Forensic science and justice: From crime scene to court and
beyond. Current Issues in Criminal Justice 24, 1.
Julian, R., Kelty, S. & Robertson, J. (2012). Get it right the first time: Critical issues at the
crime scene. Current Issues in Criminal Justice 24(1), 25.
Krone, T. (2012). Raising the alarm? Role definition for prosecutors in criminal cases. Australian
Journal of Forensic Sciences 44(1), 15.
Meyers, L. (2007). The problem with DNA. Monitor on Psychology 38, 52.
Rayment, K. (2010). Faith in DNA: The Vincent Report. Journal of Law, Information and
Science 20(1), 238.
Roth, A. (2013). Maryland v. King and the wonderful, horrible DNA revolution in law
enforcement. Ohio State Journal of Criminal Law 11, 295.
Roux, C., Crispino, F. & Ribaux, O. (2012). From forensics to forensic science. Current
Issues in Criminal Justice 24(1), 7.
Smith, M. (2016). DNA Evidence in the Australian Legal System. Chatswood, NSW:
LexisNexis Butterworths.
Smith, M. & Mann, M. (2015). Recent developments in DNA evidence. Trends and Issues
in Crime and Criminal Justice No. 506. 1. Canberra: Australian Institute of Criminology.
Smith, M. & Urbas, G. (2012). Regulating new forms of forensic DNA profiling under
Australian legislation: Familial matching and DNA phenotyping. Australian Journal of Forensic
Sciences 44, 63.
Urbas, G. (2012). The High Court and the admissibility of DNA evidence: Aytugrul v The
Queen [2012] HCA 15. Canberra Law Review 11(1), 89.
Whiley, D. & Hocking, B. (2003). DNA: Crime, law and public policy. University of Notre
Dame Australia Law Review 5, 37.
Wise, J. (2010). Providing the CSI treatment: Criminal justice practitioners and the CSI
effect. Current Issues in Criminal Justice 21(3), 383.
7
BIOMETRICS IN CRIMINAL APPEALS
AND POST-CONVICTION REVIEWS

Introduction
This chapter discusses the ways in which biometric identification has featured in
criminal appeals and other reviews of criminal convictions. Appellate review allows
the criminal justice system to recognise and correct errors, including wrongful
convictions and other miscarriages of justice. However, as appeal rights are limited,
other forms of review also play an important role. Discussed in this chapter are
innocence projects, judicial inquiries and review commissions that have used bio-
metrics in their efforts to uncover the truth about past crimes.

Criminal appeals
The first avenue of redress for most convicted offenders who claim that they have
been the subject of a miscarriage of justice is to lodge an appeal. Depending on
conditions imposed on such applications, including time limits and leave require-
ments, this may result in a conviction being overturned. In general, appellate courts
can then either order a re-trial, or order a different verdict. In those rare cases
where actual innocence can be established, the only appropriate outcome is
quashing of the conviction and its replacement with a verdict of acquittal.1
Criminal appeals took on a distinctive form in the early twentieth century with
the establishment in the United Kingdom of the Court of Criminal Appeal in 1907
(Corns & Urbas, 2008). Many jurisdictions empower a court of appeal to overturn

1 Actual innocence need not be established in order for an appeal to succeed, nor is this
often possible. Rather, it is sufficient that enough doubt is cast on the conviction that it
must be regarded as ‘unsafe and unsatisfactory’: M v R (1994) 181 CLR 487; see also
Gipp v R (1998) 194 CLR 106 and Chidiac v R (1991) 171 CLR 432.

a conviction according to the following 'common form' grounds (Urbas, 2002;
Corns & Urbas, 2008):2

i that the verdict was unreasonable or unsupportable having regard to the
evidence;
ii that there was an error of law; or
iii that on any other ground there was a miscarriage of justice.

Although biometric evidence can be involved in any of these grounds of appeal,
the possibility of using new evidence to cast doubt on a criminal conviction is best
supported under the third limb. Evidence will exculpate the appellant if it tends to
show that someone else committed the crime.3 The courts of appeal have the
power to receive new evidence in an appeal against conviction, including
appointing a person with special expert knowledge as an assessor.4 If the evidence
on appeal differs from that admitted at the trial, the appellate judges must make an
independent assessment of the case against the appellant based on the new
evidence.

Criminal appeals and biometrics


The most obvious way in which biometrics can feature in a criminal appeal is by
way of linking someone other than the appellant to the crime. For example, where
a conviction was based largely on eye witness identification rather than forensic
analysis, new evidence such as DNA testing of samples retained from the investi-
gation may yield powerful exculpatory evidence (ALRC, 2003: [45.1]). Even
where forensic analysis was involved at the trial phase, later testing using improved
techniques may yield different results by the time of a later appeal.5 A court of
appeal may remedy a miscarriage of justice, but requirements of obtaining leave
and time limits may make this a difficult option for convicted persons to pursue.
Additionally, the fact that only one appeal to a court of appeal is usually possible
leaves unsuccessful appellants no other option but an appeal to the Supreme Court
of the United States, the Supreme Court of the United Kingdom or the High
Court of Australia. However, the High Court of Australia, for example, has con-
sistently ruled that it is not a Court of Criminal Appeal and has no power to

2 Subject to the ‘proviso’ that the conviction may be allowed to stand if the court is of
the opinion that notwithstanding that the appellant has made out one or more of these
grounds, no substantial miscarriage of justice has occurred (Penhallurick, 2003): see, for
example, Criminal Appeal Act 1912 (NSW), s6(1).
3 Button v The Queen [2002] WASCA 35 (25 February 2002), discussed in Goldingham
(2002).
4 See for example, Criminal Appeal Act 1912 (NSW), s12. Appeals against sentence are not
discussed here, but note that questions of finality and double jeopardy also arise in
relation to re-sentencing (Urbas, 2012).
5 An example is the Queensland case involving Frank Button discussed later in this
chapter.

receive new evidence, including DNA evidence (Hamer, 2015; Milne, 2015; see
also Urbas, 2002).6

Second and subsequent appeals


In view of the limits on criminal appeals, some jurisdictions have engaged in law
reform to allow second or subsequent appeals to their courts of criminal appeal
(Sangha, 2015). Such a provision allows a higher court to hear an appeal against
conviction even where there has already been a previous appeal, if satisfied that
there is fresh and compelling evidence that should, in the interests of justice, be
considered. These requirements are defined as follows:7

Evidence relating to an offence is –

(a) "fresh" if –
    (i) it was not adduced at the trial of the offence; and
    (ii) it could not, even with the exercise of reasonable diligence, have been
    adduced at the trial; and
(b) "compelling" if –
    (i) it is reliable; and
    (ii) it is substantial; and
    (iii) it is highly probative in the context of the issues in dispute at the trial of
    the offence.

The form that such evidence might take includes fresh and compelling biometric
analysis. For example, a crime scene sample collected before the trial may not have
been tested, or testing may not have yielded results, due to the limitations of for-
ensic analysis at the time. With advances in techniques, such as testing using small
or degraded biological samples (as discussed in Chapter 3), testing may become
possible years afterwards. This could show that the convicted person is not the
offender. By this time, the convicted person may have already appealed unsuc-
cessfully. The new legislation allows a second or subsequent appeal using the
exonerating biometric evidence (Sangha & Moles, 2015). This is the model used
by some innocence projects, discussed later in this chapter.

Double jeopardy and appeals against acquittal


The criminal law has for centuries restricted the ability of the prosecution to appeal
against acquittals, based on the precept that it is unjust to expose a person to

6 Mickelberg v The Queen (1989) 167 CLR 259; Eastman v The Queen (2000) 203 CLR 1;
Re Sinanovic’s Application [2001] HCA 40; (2001) 180 ALR 448.
7 Criminal Law Consolidation Act 1935 (SA), s353A inserted by the Statutes Amendment
(Appeals) Act 2013 (SA); and Criminal Code Amendment (Second or Subsequent Appeal for
Fresh and Compelling Evidence) Act 2015 (Tas).

punishment more than once in relation to the same crime. This is encapsulated in the
rule against double jeopardy (MCCOC, 2003; Burton, 2004; Cowdery, 2005; Griffith
& Roth, 2006) operating through the pleas of autrefois convict and autrefois acquit.8
Historically, the impetus for reform of double jeopardy laws has arisen from
specific high profile cases. In Australia, for example, these arose largely in response
to a child murder case in Queensland. Convicted of the murder of a 17-month-old
baby in 1985, partly on the basis that his distinctive teeth were matched to a bite
mark on the body of the victim, Raymond Carroll appealed successfully, so that
the Queensland Court of Appeal quashed the conviction and entered a verdict of
acquittal. This meant that a second prosecution for murder was precluded by
double jeopardy rules. However, Carroll had given evidence at his trial denying
involvement in the abduction and killing of the child, and on the basis of improved
forensic odontological methods, the prosecution brought a charge of perjury. He
was convicted on that second charge in 2000, on a jury verdict, and again appealed
successfully, with the Court of Appeal accepting that the perjury conviction was in
essence a re-trial of the murder case under a different charge. The High Court
agreed, meaning that Carroll could never be re-convicted.9 Public dissatisfaction
with this outcome together with some academic and political support for a change
in the law led to the enactment of legislation allowing appeals against acquittals in
limited circumstances (Corns, 2003; Burton, 2004).
In Queensland, the provision applies only to a re-trial for murder where there is
fresh and compelling evidence against the acquitted person and it is in the interests
of justice to overturn the acquittal and order a re-trial.10 In New South Wales, an
application may be made in relation to any life sentence offence, including murder
and certain drugs and sexual offences. However, there have been no murder
re-convictions following an overturned acquittal to date.11
Several jurisdictions have adopted similar double jeopardy reforms, preceded by
changes to double jeopardy laws in the United Kingdom (MCCOC, 2003),
allowing re-trials after Crown appeals against acquittal.12 The basis for these

8 See, for example, Criminal Procedure Act 1986 (NSW), s156.


9 The Queen v Carroll (2002) 213 CLR 635. This was a prosecution appeal following the
Court of Appeal decision.
10 Criminal Code, Chapter 68, added by the Criminal Code (Double Jeopardy) Amendment Act
2007 (Qld).
11 Crimes (Appeal and Review) Act 2001 (NSW), Part 8, added by the Crimes (Appeal and
Review) Amendment (Double Jeopardy) Act 2006 (NSW). These provisions have been
considered in R v PL [2009] NSWCCA 256 (8 October 2009); Atkins v Attorney General
of New South Wales [2016] NSWSC 1412 (12 October 2016). There has also been
public disquiet about the so-called ‘Bowraville murders’ case, with political pressure to
use the double jeopardy reforms in NSW to re-open the acquittal of a key suspect,
based on a novel argument that evidence is to be considered fresh due to a change in its
admissibility after amendments to the Evidence Act 1995 (NSW): http://www.ruleoflaw.
org.au/double-jeopardy-bowraville-murders
12 By contrast, in the United States the rule against double jeopardy is a constitutional
safeguard that cannot be abrogated by federal or state legislation (Thomas, 1998; Rud-
stein, 2004).

reforms was explained by Lord Justice Auld, who conducted a review of
that country’s court system (Auld, 2001) and posed the following questions:

If there is compelling evidence … that an acquitted person is after all guilty of
a serious offence, then, subject to stringent safeguards …, what basis in logic or
justice can there be for preventing proof of that criminality? And what of the
public confidence in a system that allows it to happen?

Similar to the laws allowing second and subsequent appeals against convictions in
some jurisdictions, appeals against acquittal are generally limited to those cases in
which there is fresh and compelling evidence of guilt, which could be in the form
of new or improved biometric identification. For example, the evidence in an
initial prosecution case may be insufficient to identify the accused as the offender
beyond reasonable doubt. Later biometric analysis might provide a more conclusive
link which, together with the other available evidence, might then be sufficient to
safely convict the accused.13
However, post-conviction or post-acquittal testing depends on the preservation
of evidence that can be tested (Urbas, 2002; Weathered, 2003; Weathered &
Blewer, 2009; Hamer, 2014). As noted later in relation to the Chamberlain case, the
destruction of forensic samples during or after laboratory testing can deny access to
post-trial testing. This has led to calls for legislative requirements for sample pre-
servation (ALRC, 2003). Despite the possibility of appeals based on fresh and
compelling evidence, either against a conviction or an acquittal, these legal
mechanisms appear to be rarely exercised in practice (Hamer, 2014).

Post-conviction reviews

Innocence projects
The potential for remedying wrongful convictions with the help of biometrics such
as DNA identification has been the impetus for the establishment of many inno-
cence projects, which are usually based in universities, as discussed in Chapter 3
(Hamer, 2014):14

Fortunately, DNA profiling technology can provide strong proof of factual
innocence. If biological material believed to be that of the perpetrator is
available, and a DNA profile from that material does not match the DNA
profile of the defendant, this provides practical certainty that the defendant is
not the perpetrator. The strength of DNA profiling evidence in such cases is

13 Though note that there is some doubt about whether DNA evidence on its own could
ever be sufficient for a conviction: see Ligertwood (2011) and the case of Forbes v The
Queen [2010] HCATrans 120 (18 May 2010).
14 Notes omitted. See also Christian (2001); De Foore (2002); Urbas (2002) and Weath-
ered (2004).

quite exceptional. Generally it is just as difficult achieving certainty about
innocence as it is about guilt. For this reason, innocence projects generally
limit themselves to cases where DNA may be available.
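
The exclusionary reasoning described in this passage can be illustrated with a deliberately
simplified sketch in Python. It is not how forensic laboratories actually compare profiles,
since real casework relies on validated typing kits, mixture interpretation and statistical
evaluation; it only shows the underlying principle that a single reliable mismatch between
the crime scene profile and the defendant's profile excludes the defendant as the source.
The profile representation, locus selection and allele values below are illustrative
assumptions.

    # A minimal, illustrative sketch only (assumed representation, not a forensic tool):
    # STR profiles are modelled as a mapping from locus name to a set of alleles.
    # The locus names are common STR loci, but the allele values are hypothetical.

    def is_excluded(crime_scene_profile, suspect_profile):
        """Return True if the suspect can be excluded as the source of the sample.

        Exclusion follows from any locus, typed in both profiles, at which the
        suspect's alleles are not all present in the crime scene profile.
        """
        for locus, scene_alleles in crime_scene_profile.items():
            suspect_alleles = suspect_profile.get(locus)
            if suspect_alleles is None:
                continue  # locus not typed for the suspect; no comparison possible
            if not suspect_alleles.issubset(scene_alleles):
                return True  # a reliable mismatch at one locus is enough to exclude
        return False  # consistent at all compared loci (inclusion still needs statistics)

    # Hypothetical single-source profiles at three loci
    scene_profile = {"D3S1358": {15, 17}, "vWA": {16, 18}, "FGA": {21, 24}}
    defendant_profile = {"D3S1358": {15, 16}, "vWA": {16, 18}, "FGA": {21, 24}}

    print(is_excluded(scene_profile, defendant_profile))  # True: mismatch at D3S1358

By contrast, a match at every compared locus does not by itself establish identity; its
weight depends on statistical evaluation, which is why the quoted passage describes
exclusion, rather than inclusion, as providing practical certainty.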

The original innocence project was established in the United States. Based at the
Cardozo Law School in New York, it has secured over 350 exonerations using DNA
evidence. In addition, the work of this and similar bodies has been instrumental in
identifying and addressing the causes of wrongful convictions, with inaccurate
eyewitness identification (discussed in Chapter 6) being the leading cause:15

Eyewitness misidentification is the greatest contributing factor to wrongful
convictions proven by DNA testing, playing a role in more than 70 per cent
of convictions overturned through DNA testing throughout the United States.

The University of Bristol hosted the leading innocence project in the United
Kingdom, operating a specialist pro bono clinic from 2005 to 2015. Since that
time there have been over 30 other innocence projects established at universities
throughout England and Wales.16 In Australia, similar bodies have been set up at
Griffith University, Edith Cowan University and the University of Technology
Sydney.17 Most of these follow the emphasis on post-conviction DNA testing shown
to have been successful in overseas jurisdictions such as the United States.
The model of the innocence project has been followed in some cases by the
establishment of an administrative body by governments to review claimed mis-
carriages of justice. The following provides an example of the functions of one
such body, set out in legislation:18

a to consider any application under this Division from an eligible convicted person
and to assess whether the person’s claim of innocence will be affected by DNA
information obtained from biological material specified in the application,
b to arrange, if appropriate, searches for that biological material and the DNA
testing of that biological material,

15 See, for example, https://www.innocenceproject.org/causes/eyewitness-misidentification.
Other causes of wrongful conviction include false confessions, investigation or
prosecution misconduct, poor defence representation, and forensic errors.
16 University of Bristol Law School. Retrieved from http://www.bristol.ac.uk/law/study/
law-activities/innocenceproject
17 See, for example, https://www.griffith.edu.au/criminology-law/innocence-project;
http://www.ecu.edu.au/schools/arts-and-humanities/research-and-creative-activity/sell
enger-centre-for-research-in-law-justice-and-social-change/criminal-justice-review-p
roject/overview; https://www.uts.edu.au/research-and-teaching/our-research/law-resea
rch-centre/about-us/history
18 Crimes (Appeal and Review) Amendment (DNA Review Panel) Act 2006 (NSW), since
repealed, added provisions establishing the panel to the Crimes (Appeal and Review) Act
2001 (NSW). These were then removed by the Crimes (Appeal and Review) Amendment
(DNA Review Panel) Act 2013 (NSW) with effect from 23 February 2014.

c to refer, if appropriate, a case to the Court of Criminal Appeal under this
Division for review of a conviction following the receipt of DNA test results,
and
d to make reports and recommendations to the Minister on systems, policies and
strategies for using DNA technology to assist in the assessment of claims of
innocence (including an annual report of its work and activities, and of statis-
tical information relating to the applications it received).

Persons meeting the description of ‘eligible convicted offender’ were provided
with the opportunity to apply to the review panel to have their cases considered.
This term was defined to include those serving sentences of 20 years or more,
whether still in custody or on parole, or those whose ‘special circumstances’ war-
ranted the application. Importantly, the application had to make a case that DNA
information would assist in exonerating the person:19

A convicted person is eligible to make an application to the Panel if, and only
if, the person’s claim of innocence may be affected by DNA information
obtained from biological material specified in the application.

This application was to be assessed by the six-member panel, which included a
former judicial officer, a representative of the Attorney-General’s Department, a
victims’ representative, a police representative and prosecution and defence law-
yers. This review panel ceased operations in 2014, apparently not having referred
any cases to a court of criminal appeal for review (Hamer & Edmond, 2013).

Judicial inquiries and commissions


Historically, the task of correcting miscarriages of justice fell to the executive rather
than the judiciary, at least until the creation of the Court of Criminal Appeal
(Spencer, 1982). The main mechanism used was the pardon. The prerogative of
mercy is generally preserved under statute, which often also contains provisions
allowing the establishment of reviews such as judicial enquiries into suspected mis-
carriages of justice (Caruso & Crawford, 2014). The long-standing institution of the
Royal Commission can also be used to investigate alleged wrongful convictions. The
result of an inquiry may be the pardon and release of the imprisoned person.
Although these are powerful mechanisms for the correction of miscarriages of
justice, they are established on an ad hoc basis, often only after years of public
agitation, and there is thus no certainty that such a body will be available in every
case. This has led some observers to call for a Criminal Cases Review Commission
(CCRC), based on models developed in the United Kingdom, as discussed in

19 Crimes (Appeal and Review) Amendment (DNA Review Panel) Act 2006 (NSW), inserting
s89 (since repealed) into the Crimes (Local Courts Appeal and Review) Act 2001 (NSW).

Chapter 3 (Weathered & Blewer, 2009; Hamer, 2014). This has been discussed as
follows (Weathered, 2013, p. 450):

The most comprehensive body created to correct wrongful convictions is the
Criminal Cases Review Commission (‘CCRC’) based in Birmingham, UK,
which operates for England, Wales and Northern Ireland (for relevant legisla-
tion in England, Wales and Northern Ireland, see the Criminal Appeal Act 1995
(UK) c 35, s 8; see also the Ministry of Justice, Criminal Cases Review
Commission website at <http://www.ccrc.gov.uk>).
Scotland and Norway have also each established their own CCRC, while
other countries including Australia are still considering whether to create such
a body. The CCRC is an independent, government-funded body that inves-
tigates claims of miscarriages of justice with the ability to refer cases to their
courts of appeal. DNA innocence testing is incorporated within its broad and
extensive powers of investigative review.

The establishment of a CCRC in the United States or Australia would need to
navigate the federal system of criminal laws and courts, so that cases arising in each
state would be referred to that particular jurisdiction’s relevant appellate court.

Miscarriage of justice cases


The remainder of this chapter reviews some of the most significant miscarriage of
justice cases in Australia where forensic evidence, such as biometrics, played a
substantial role either in the initial prosecution, or in the appeal or other review
that followed it. The expression ‘miscarriage of justice’ is used to refer to ‘a false
attribution of guilt, that is, finding someone guilty who was actually innocent’
(Young, 2010). In serious cases, this leads to wrongful imprisonment (Zdenkowski,
1993). Miscarriage of justice is to be distinguished from a conviction that is over-
turned because of some procedural error at trial, such as a wrong decision on a
question of admissibility of evidence (Spencer, 1992). The cases discussed below
are the relatively few exonerations in Australia based on extensive review, either by
a court or another review mechanism.20

Colin Campbell Ross


In 1922, Colin Campbell Ross was convicted, after a jury trial, of the murder of
12-year-old Alma Tirtschke and sentenced to death. He then appealed
unsuccessfully to higher courts. The case was as follows:21

20 Further literature on miscarriages of justice in the United Kingdom and the United
States is discussed by Roach (2015).
21 Ross v The King (1922) 30 CLR 246 (Knox C.J., Gavan Duffy and Starke JJ, with
Higgins J concurring). Isaacs J delivered a dissenting judgment.

In the present case, the nude body of a young girl, twelve years of age,
was found lying dead in an alley off Little Collins Street, Melbourne.
Medical evidence disclosed that the cause of death was strangulation from
throttling, that there were bruises and abrasions which indicated violence,
and that there was a recent tear at the lower border of the hymen which
passed completely through the hymen into the tissue of the vaginal wall.
Evidence was also adduced by the Crown from which a jury might infer
that this child had gone into an arcade known as the Eastern Arcade, in
which the prisoner had a wine saloon, that she was there enticed by the
prisoner into his wine saloon and was carnally known and killed by him.
The prisoner, who gave evidence on his own behalf, did not suggest that
he had killed the child in circumstances that might reduce the act from
one of murder to one of manslaughter. He admitted that he had noticed a
young girl, similar in appearance to the dead child, in the Arcade; but he
denied that he had spoken to her or that she had been in his wine saloon,
and he denied that he had anything to do directly or indirectly with the
death of the murdered child. The jury found the prisoner guilty of the
murder of the child.

On appeal, reference was made to ‘evidence which went to identify the hair of the
dead child with that found on certain blankets’, but this was not pivotal in the
Court’s decision. Rather, the majority accepted that the trial judge had given cor-
rect directions to the jury on issues including an alleged confession by the accused.
Special leave to appeal was therefore rejected by the High Court, and the sentence
of execution was carried out a few weeks later.22
However, that was not the last of the legal proceedings arising from the case. A
researcher in the 1990s made the surprising discovery that the hair samples
collected at the time of the girl’s death were still in the police archives, and
re-testing was done by both the Victorian Institute of Forensic Medicine and the
Australian Federal Police laboratory. This confirmed that the hair found on
blankets in the defendant’s home did not match the scalp sample of the dead girl.
The Victorian Attorney-General referred the matter to the Supreme Court in
2007, which concluded unanimously that the conviction could not stand.23
Crucial to this finding was a report by Dr James Robertson, then Director of
Forensic Services at the Australian Federal Police, and an expert in forensic hair
comparisons, whose analysis concluded that ‘the hairs recovered from the brown-
grey blanket could not have come from the deceased, Tirtschke’.24 Relatives of
both the prisoner and the victim signed a petition for mercy, and the Governor

22 The High Court decision is dated 5 April 1922, and Ross was hanged on 24 April 1922.
23 Re Colin Campbell Ross [2007] VSC 572 (20 December 2007) (Teague, Cummins and
Coldrey JJ).
24 The forensic report of Dr James Robertson is included in full in the Supreme Court’s
judgment (at [80]), in recognition of its importance in resolving the case.

of Victoria posthumously pardoned Colin Campbell Ross in May 2008, some 86
years after his hanging.25

The Chamberlains
The Chamberlain case has been highly influential on the role of forensics in criminal
proceedings. After two coronial inquiries into the 1980 disappearance of their baby
daughter, Azaria, from a camping ground near Uluru in central Australia, Lindy
and Michael Chamberlain were committed to stand trial in the Supreme Court of
the Northern Territory. The prosecution case was that Lindy had killed Azaria in
the family car and the couple had disposed of the body. The defence argued at trial
that Azaria had been taken by a dingo. The prosecution case relied heavily on
forensics, and after a highly publicised jury trial, Lindy was convicted of murder
with Michael convicted as an accessory.
Appeals to higher courts were unsuccessful.26 Continuing public disquiet with
the convictions led to the establishment of a Royal Commission in 1987, which
found profound flaws in the forensic evidence adduced by the prosecution. This
included an alleged bloody handprint on Azaria’s clothing, an expert’s purported
identification of damage to the clothing as caused by scissors rather than dingo
teeth and, most critically, the identification of supposed foetal blood under the
dashboard of the car. This was systematically discredited by the Commissioner,
who observed that:27 ‘evidence was given at trial by experts who did not have the
experience, facilities or resources necessary to enable them to express reliable
opinions on some of the novel and complex scientific issues which arose for
consideration’.
The Northern Territory Supreme Court, sitting as a Court of Criminal Appeal
and acting on recommendations of the Morling Commission, quashed the con-
victions in 1988.28 However, the cause of death was not officially determined to be
due to a dingo taking Azaria until a fourth coronial inquest was completed in
2012.29 The legacy of the Chamberlain saga is arguably that courts have become
more willing to scrutinise forensic evidence, that forensic experts have improved

25 J. Silvester. (2008). Ross cleared of murder nearly 90 years ago. The Age. Retrieved
from http://www.theage.com.au/news/national/bcrimeb-man-cleared-of-murder-86-
years-after-he-was-executed/2008/05/26/1211653938453.html
26 Re Alice Lynne Chamberlain and Michael Leigh Chamberlain v R (1983) 72 FLR 1; Cham-
berlain v R (No. 2) (1984) 153 CLR 521. The High Court appeal failed with a 3:2
majority upholding the conviction.
27 Report of the Commissioner the Hon. Mr. Justice T.R. Morling / Royal Commission of Inquiry
into Chamberlain Convictions (1987), 340–1, cited by Warren (2009).
28 Reference under s.433A of the Criminal Code by the Attorney-General for the Northern Terri-
tory of Australia of Convictions of Alice Lynne Chamberlain and Michael Leigh Chamberlain
[1988] NTSC 64 (15 September 1988). Both Lindy and Michael Chamberlain were
pardoned in 1987, though this did not legally overturn the convictions.
29 Inquest into the death of Azaria Chantel Loren Chamberlain [2012] NTMC 020. The third
inquest, held in 1995, had returned an open finding.

their processes and clarified that they must act impartially in assisting the court
rather than the prosecution, and that both the judicial system and extra-judicial
means of review are now more willing to re-examine past cases to identify and
correct miscarriages of justice.

Edward Splatt
Edward Splatt was convicted of murder in 1978 and spent six and a half years in
prison before being released on the recommendation of a Royal Commission,
which was followed by an ex gratia payment of $300,000. The case against him was
circumstantial and largely based on scientific analysis of paint, wood, birdseed and
biscuit particles collected at the crime scene. Upon reviewing the case, the Royal
Commissioner concluded that it would be ‘unjust and dangerous for the verdict to
stand’ (ALRC, 1985; Dioso-Villa, 2014). The main reasons for this conclusion
were that the investigation and forensic analysis were conducted by the same police
officers, so that there was a lack of scientific objectivity and a reluctance to consider
exculpating rather than incriminating interpretations of the evidence.30 Following
this case, and the Chamberlain case in which South Australian forensic technicians
were also involved, forensic procedures were significantly reviewed and reformed.
In particular, expert guidelines now emphasise that:31

The role of the expert witness is to provide relevant and impartial evidence in
his or her area of expertise. An expert should never mislead the Court or
become an advocate for the cause of the party that has retained the expert.

This requirement for impartiality is supported by modern best practice in forensic
laboratories, including blind testing of samples identified only by numbers, where
the analyst has no details of the police investigation or prosecution involved.

Alexander McLeod-Lindsay
Alexander McLeod-Lindsay came home from his work one day in 1964 to find his
wife and son severely beaten. Both survived, and the wife described the attacker.
However, police suspected McLeod-Lindsay, and developed a theory that he had
slipped away from the hotel and returned there unnoticed after attacking his
family. Blood on his jacket was said to be ‘impact splatter’ that was deposited
during the attack. McLeod-Lindsay was convicted of attempted murder and served
almost ten years in prison before being released. Despite appeals to higher courts
for review, the convictions stood, even though expert scientists argued that the blood
on the jacket displayed clotting and therefore was most likely deposited when

30 B. Littley. (2012). Someone got away with murder. Adelaide Advertiser, 27 January.
31 See, for example, Federal Court of Australia, Expert Evidence Practice Note (GPN-EXPT),
25 October 2016.

McLeod-Lindsay held his wife in his arms upon coming home to the horrific
scene.32 It was not until a further inquiry in 1990 that he was finally exonerated
and awarded compensation by the state.33

Frank Button
The leading example in Australia of a DNA-based exoneration is the case of Frank
Button, in which the Queensland Court of Appeal quashed the defendant’s rape
conviction when presented with post-trial DNA analysis indicating that someone
other than Button was the rapist. The lead judgment stated:34

As I said in the course of argument, today is a black day in the history of the
administration of criminal justice in Queensland. The appellant was convicted
of rape by a jury and has spent some approximate 10 months in custody in
consequence of that conviction. DNA testing carried out at the insistence of
his lawyers after that jury verdict has now established that he was not the
perpetrator of the crime in question, and indeed the recent DNA testing
would appear to have identified some other person as the perpetrator of that
crime. What is of major concern to this Court is the fact that that evidence
was not available at the trial.
What is disturbing is that the investigating authorities had also taken pos-
session of bedding from the bed on which the offence occurred, and delivered
those exhibits to the John Tonge Centre. No testing of that bedding was
carried out prior to trial. The explanation given was that it would not be of
material assistance in identifying the appellant as the perpetrator of the crime.

The Director of Public Prosecutions referred to a lack of adequate resourcing of
the State’s main forensic facility. However, the Court of Appeal observed:

It may well be that laboratory testing is expensive, particularly if it is to be as
extensive as in my view it should be, but the cost to the community of that
testing is far less than the cost to the community of having miscarriages of
justice such as occurred here. The cost to the community in a case like this
includes not only the costs of both sides of the aborted trial, but the costs to
the appellant of the fact that he has been in custody for the length of time …

32 Report of the Inquiry held under Section 475 of the Crimes Act 1900 into the Conviction of
Alexander McLeod-Lindsay, 1969.
33 M. Brown, ‘Exonerated 26 years after his conviction’ (Sydney Morning Herald, 21 Sep-
tember 2009), written on the death of Alexander McLeod-Lindsay two days earlier.
34 R v Button [2001] QCA 133 (10 April 2001), (Williams JA, White and Holmes JJ con-
curring), perhaps Australia’s only DNA-based exoneration appeal (Roach 2015). The
judge’s words were adopted by an Australian Broadcasting Corporation documentary
about the case, ‘A Black Day for Justice’ (see discussion in Chapter 3 and in Smith
2015).

This case illustrates that forensic science, including biometrics, can only be of
consistent and reliable assistance in criminal proceedings if analysis is conducted in a
comprehensive and scientifically robust manner. The risk otherwise is that mis-
carriages of justice will occur, and they may not always be amenable to remedial
justice through criminal appeals or other forms of post-conviction review.35

Ensuring the reliability of biometrics


A recurring theme in the cases discussed in this and the preceding chapter is the
need for the use of biometric identification to be premised on reliable scientific
techniques, applied in a consistent and verifiable manner in investigations. If sub-
standard techniques or even ‘junk science’ are allowed into the process, then the
results that follow may well be miscarriages of justice (Dioso-Villa et al., 2016).
Historically, the role of dubious forensic analysis has been highlighted, as well as
the fact that some processes and practices have been improved. This last topic
explores systematic reforms that relate specifically to biometric
identification.
Drawing on a landmark report into forensic science in the United States (NAS,
2009), commentators have identified the following as key problems affecting the
use of biometrics (Ross, 2012; Roux, Crispino & Ribaux, 2012; Edmond, Martire
& San Roque, 2011; Edmond, 2014, 2015):

•  Validation of scientific techniques: While some new areas such as DNA identification
   have been reasonably well validated through court cases assessing their
   scientific basis, this is less true for newer techniques such as facial or body
   mapping;
•  Standard protocols: Not all types of biometric identification operate according to
   clear and agreed processes for the collection and analysis of material e.g. voice
   identification may be based on ad hoc expertise rather than a standard approach
   across cases;
•  Inaccuracy and bias: Conclusions that appear to be based on scientific analysis
   may not disclose matters affecting their accuracy, the language used may be
   highly technical without adding to the accuracy of the analysis, and sample
   biases may not be disclosed where they are known.

Reforms have tended to focus on the accreditation of scientific laboratories and
training, with peer-reviewed research and validation required to be systematically
employed for quality assurance (Ross, 2012). Some legal academics have argued for
a greater judicial focus on reliability as a threshold requirement for the admissibility

35 Not discussed here are other noteworthy miscarriage of justice cases involving forensics,
such as those involving John Button and Andrew Mallard in Western Australia, and
Gordon Wood in New South Wales, as these cases did not rely on biometrics as the
principal means of identification relied on by the prosecution.

of scientific evidence generally (Edmond, 2014; Ligertwood, 2015). In the context
of the rules governing the admission or exclusion of evidence, this means ensuring
that the relevance or probative value requirement, the rules allowing expert opinion
and the discretionary exclusion of evidence based on unfair prejudice are applied
carefully. The following assessment indicates that this is possible within
existing rules when appropriately interpreted (Ligertwood, 2015):

First, the admissibility rules relating to relevance, opinion and discretion are
open to interpretations permitting the rigorous consideration of forensic evi-
dence, to ensure that it is based on theoretical and/or empirical grounds and
that it is expressed transparently in a way that enables the trier of fact, with
appropriate directions from the trial judge, to take it rationally into account
when considering the criminal standard of proof.
Secondly, standards governing appellate review (including post-conviction
review) are open to interpretations that could ensure that forensic evidence is
carefully scrutinised on appeal, not only to determine its admissibility and use
but also in determining whether the criminal standard of proof has been
satisfied. Thirdly, the adversary process may be limited by time and resources
but it undoubtedly has the potential to provide a powerful scrutiny of forensic
evidence.
And finally, as far as the common lack of scientific expertise among the
judges and lawyers who must try to comprehend and evaluate forensic evi-
dence is concerned, one might argue that in many cases it is not necessary for
laypersons (judges and juries) to follow all the technicalities of a forensic pro-
cess and it is enough to appreciate the possibilities of error in determining
admissibility and proof. It is only where the very basis of scientific evidence is
being disputed that persons with a background in that area of science may be
required to adjudicate the dispute.

This suggests that it is within the capacity of the legal system, assisted by the for-
ensic sciences, to make the best use of biometrics in the courtroom, in criminal
trials and appeals, and in other forms of post-conviction review.

References
Auld, R. (2001). Review of the Criminal Courts of England and Wales. London: UK Stationery
Office.
Australian Capital Territory Department of Justice and Community Safety (ACT JACS).
(2015). Double Jeopardy Information Paper. Retrieved from http://www.justice.act.gov.au/
review/view/38/title/double-jeopardy-information-paper
Australian Law Reform Commission (ALRC). (2003). Essentially Yours: The Protection of
Human Genetic Information in Australia Report 96. Retrieved from http://www.austlii.edu.
au/au/other/lawreform/ALRC/2003/96.html
Australian Law Reform Commission (ALRC). (1985). Compensation for imprisonment.
Australian Law Reform Commission Reform Journal 39, 105.
