Forensic Science International 356 (2024) 111935
The Doppelgänger effect? A comparative study of forensic facial
depiction methods
Kathryn Smith a, b, c, Caroline Wilkinson a, c, *
a Centre for Anatomy & Human Identification, University of Dundee, DD1 4HN, UK
b Department of Visual Arts, Stellenbosch University, Victoria Street, Stellenbosch 7600, South Africa
c Face Lab, G05 Aquinas Building, Liverpool John Moores University, L1 5DE, UK
ARTICLE INFO

Keywords: Facial depiction; Comparative; Forensic; Reproducibility

ABSTRACT
This study attempted to assess the reproducibility of 2D and 3D forensic methods for facial depiction from
skeletal remains (2D sketch, 3D manual, 3D automated, 3D computer-assisted). In a blind study, thirteen
practitioners produced fourteen facial depictions, using the same skull model derived from CT data of a living
donor, a biological profile and relevant soft tissue data. The facial depictions were compared to the donor subject
using three different evaluation methods: 3D geometric, 2D face recognition ranking and familiar resemblance
ratings. Five of the 3D facial depictions (all 3D methods) demonstrated a deviation error within ± 2 mm for ≥
50% of the total face surface. Overall, no single 3D method (manual, computer assisted, automated) produced
consistently high results across all three evaluations. 2D comparisons with a facial photograph of the donor were
carried out for all the 2D and 3D facial depictions using four freely available face recognition algorithms
(Toolpie; Photomyne; Face ++; Amazon). The 2D sketch method produced the highest ranked matches to the
donor photograph, with overall ranking in the top six. Only one 3D facial depiction was ranked highly in both the
3D geometric and 2D face recognition comparisons. The majority (67%) of the facial depictions were rated as
limited or moderate resemblance by the familiar examiner. Only one 2D facial depiction was rated as strong
resemblance, whilst two 2D sketches and two 3D facial depictions were rated as good resemblances by the
familiar examiner. The four most geometrically accurate 3D facial depictions were only rated as limited or
moderate resemblance to the donor by the familiar examiner. The results suggest that where a consistent facial
depiction method is utilised, we can expect relatively consistent metric reliability between practitioners. However, presentation standards for practitioners would greatly enhance the possibility of recognition in forensic
scenarios.
1. Introduction
Facial approximation/reconstruction is described as the depiction of
the living face of an individual through interpretation of skeletal
morphology from human remains. For the purposes of this article the
authors will use the term ‘facial depiction’ to describe all 3D and 2D
facial approximation/reconstruction methods. The accuracy and
reproducibility of forensic facial depiction techniques continues to be
debated and accuracy studies vary in their research design and relevance
to forensic application [1–5]. The demands of facial depiction differ
somewhat when applied to a forensic or historical case.
In the forensic context, facial depiction is usually attempted when
other identification leads have failed and is predicated on producing a
facial depiction that would be recognisable to someone familiar with the
individual in life. In these cases, facial depiction is not an identification
method, but rather an investigative method. It is assumed that creative
interpretation by the artist will remain sensitive to the limits of accepted
feature prediction standards. ‘Success’ is generally based on achieving
identification in forensic investigation, yet ‘success’ and ‘accuracy’ can
be mutually exclusive, as other factors may contribute to whether an
investigation leads to identification or not, and by their nature, unsuccessful cases cannot be evaluated. In addition, it is unclear what elements of a forensic facial depiction are relevant to familiar recognition
and, based on previous research [6], it is possible that correct facial
morphology is not the most important factor for successful identification.

* Corresponding author at: Face Lab, G05 Aquinas Building, Liverpool John Moores University, L1 5DE, UK.
E-mail address: c.m.wilkinson@ljmu.ac.uk (C. Wilkinson).
https://doi.org/10.1016/j.forsciint.2024.111935
Received 13 June 2023; Received in revised form 11 September 2023; Accepted 14 January 2024
Available online 18 January 2024
0379-0738/© 2024 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).

Current psychology literature suggests that we process faces holistically, rather than by individual features, and that we process internal and external features of familiar and unfamiliar faces differently
[7]. That is to say, we are as able to recognise someone we know well by
their external features (ears, hair etc.) as we are by their internal features. For someone we have not seen before, or we have just met, our
focus rests on the external features of the face. Barring alteration through
surgery, temporary disguise, disease, lifestyle or aging over time, our
internal features are relative constants, whereas an external feature like
hair, or contextual factors like lighting, can radically alter physical
appearance. Some practitioner guidelines for presentation have been
published [8], consolidating factors and principles gleaned from a
wide-range of face-based studies.
Currently, practitioners use a range of facial depiction techniques
that can broadly be grouped as either 2D or 3D, and then further specified as manual, computer-assisted or automated [9]. Further, practitioners may employ one or more methods, whether working two- or three-dimensionally. Most manual and computer-assisted practitioners utilise anatomical standards and anthropometry, and all
methods utilise average soft tissue data [10–14]. All methods rely on the
assessment of a biological profile (sex, age, ancestry) and any material
found with the remains (clothing, jewellery, hair and so on) may provide
additional information (e.g., body mass index, hair colour), which can
be included in the final depiction to aid recognition. Several studies have
attempted to identify best-practice protocols [15] and others have made
a significant contribution in collecting soft tissue data for a wide range of
population groups [16–18]. Subjective interpretation is cautioned
against [19], and several researchers call for standardisation of soft
tissue depth data [20]. There has been a shift towards full automation in the 21st century, to try to mitigate subjective interpretation and improve effectiveness. However, automated systems are only as good at making faces as the database from which they are derived [21]. For systems based on clinical imaging databases [22], computer-assisted faces may have closed eyes, imaging artefacts and postural/equipment
deformations. These factors can reduce morphological accuracy,
decrease recognition and diminish believability (looking like a mask
rather than a real face). Automated and practitioner-led methods each
have their pros and cons, and these are outlined in the literature [23,24].
Historically, attempts to assess accuracy have suffered from poor
donor data (e.g., cadavers, death masks, low quality ante-mortem images) [25]. Advances in 3D clinical imaging technology have provided
novel and detailed ways to evaluate the accuracy of facial depiction
methods by utilising living donor material (craniofacial data) and researchers have quantitatively evaluated morphological accuracy using
3D geometric comparison software [26,27]. The superimposition of 3D
face models has been studied for identification of the living [28] and
results show high matching ability and high repeatability.
Other studies have assessed facial depiction accuracy using qualitative methods, such as face pool recognition, resemblance ratings and face
recognition software [29–31]. Whilst accuracy studies have been
numerous, there has been a lack of evaluation of the reproducibility of
facial depiction practice and the variation in facial depiction outcomes.
Some previous comparative studies evaluated the use of different tissue
depth datasets on the same skull [32–34] rather than practitioner or
method variation. A large-scale practitioner comparative study, from the
RSFP2005 conference [35,36], was disappointing, as it utilised unidentified remains and therefore merely evaluated the level of believability of the faces produced. A more recent comparative study [37] utilised CT data from a donor, two manual practitioners and two computerised systems (FaceIT and ReFace), and compared each facial depiction to the donor face visually and geometrically. This study showed variability in accuracy, with 61–76% of the surfaces within ± 5 mm error, and while
each depiction demonstrated aspects of the face correctly, inaccuracies
were exhibited at the chin area, ears, and nasal region. However,
although this study evaluated the level of accuracy of each method, it
did not attempt to evaluate the resemblance of the depictions to the
donor.
The ideal reproducibility study should set out to compare quantitative morphological accuracy alongside qualitative visual likeness/
resemblance, since morphological accuracy and physical likeness may
not be directly correlated.
2. Materials and methods
The authors invited a number of experienced practitioners based
around the world to participate in a comparative analysis of facial
depiction methods. This was a double-blind study, with photographs of
the donor face only revealed to researchers and practitioners once all
depictions were submitted. Thirteen practitioners from seven countries
(UK, USA, Netherlands, South Africa, New Zealand, Hungary, Belgium)
took part in the study and fourteen facial depictions were produced
using 2D sketch (n = 3), 3D manual (n = 8), 3D computer-assisted (n =
2) and 3D automated (n = 1) methods.
A 3D skull model was produced from computed tomography (CT) data in DICOM format, donated by a living individual (middle-aged US male of European ancestry). This biological profile was supplied to practitioners
along with an appropriate set of average soft tissue depths (Helmer,
1984) and instructions to produce a facial depiction (head and neck)
without hair or expression (in frontal view if 2D). All thirteen practitioners returned at least one facial depiction, with one practitioner
contributing two depictions (2D and 3D). Of these, eleven were 3D
(manual, automated and computer-assisted) and three were 2D (manual
sketch). Facial depictions were received either as digital images of 2D
sketches, 3D scans of sculptures, 3D computer-generated models, or
physical sculptures. Physical sculptures were scanned by the authors
using a Polhemus Scorpion hand-held laser scanner. Scans were viewed
in FastScan and converted to .obj files for use in Geomagic Freeform
Modelling Plus.
3. Analysis
The morphological accuracy of each facial depiction was evaluated
using two methods: a 3D geometric comparison and a 2D image
comparison.
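For orientation, the second of these evaluations ultimately reduces to rank aggregation: each face recognition system returns a percentage match, the depictions are ranked within each system, and the per-system ranks are averaged into an overall rank (see Section 4.2). The following is a minimal sketch of that aggregation with invented scores, not the study's actual data or software:

```python
# Hypothetical sketch of the overall 2D face recognition ranking:
# percentage matches are ranked per system (1 = best), then averaged.
import numpy as np

def overall_ranks(match_pct):
    """match_pct: (n_images, n_systems) array of percentage matches.
    Returns the average per-system rank (1 = best) for each image."""
    order = np.argsort(-match_pct, axis=0)  # highest percentage first
    ranks = np.empty_like(order)
    n, m = match_pct.shape
    for j in range(m):
        ranks[order[:, j], j] = np.arange(1, n + 1)
    return ranks.mean(axis=1)

# Invented example scores for three depictions across four systems:
scores = np.array([[43., 38., 0.2, 54.],
                   [28., 30., 0.6, 36.],
                   [ 9.,  0., 0.0, 26.]])
print(overall_ranks(scores))  # [1.25 1.75 3.  ]
```

This sketch breaks ties arbitrarily; the published rankings report explicit ties (marked "=" in Table 1), so any faithful reimplementation would need a tie-aware ranking.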
3.1. 3D geometric comparison
The 3D facial depictions were metrically compared to the 3D donor
CT model. The CT data for the donor was collected with the donor lying
in a supine position, and therefore we can assume that his face shape was
affected by gravity. Previous research suggests that postural changes to
the face in this position can affect all features of the face, except the
nose, and these effects are most marked at the lateral cheeks and jawline
[38] and are greater in older (>50 years) than younger (20–30 years)
individuals [39]. In addition, the donor CT model demonstrated closed
eyes, whilst all but one (O) of the 3D facial depictions demonstrated
open eyes. These factors will affect the 3D geometric comparisons and
must be considered when interpreting the results, as 3D forensic facial
depictions are usually produced with the face in an upright position and
with open eyes, to optimise recognition.
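The percentage-within-tolerance metric reported below is straightforward to reproduce outside commercial software. The following is an illustrative stand-in for a shell-to-shell deviation check (not the Geomagic pipeline used in the study), computing unsigned nearest-neighbour distances between two vertex clouds with numpy and scipy:

```python
# Illustrative stand-in for a shell-to-shell deviation map:
# percentage of depiction vertices within a given tolerance of the
# donor surface (hypothetical data; the study used Geomagic Qualify).
import numpy as np
from scipy.spatial import cKDTree

def pct_within_tolerance(depiction_vts, donor_vts, tol_mm=2.0):
    """Percentage of depiction vertices whose nearest-point distance
    to the donor vertex cloud is within tol_mm."""
    tree = cKDTree(donor_vts)
    dists, _ = tree.query(depiction_vts)  # unsigned nearest distances
    return 100.0 * np.mean(dists <= tol_mm)

# Toy example: a flat 10 x 10 grid offset vertically by 1 mm.
g = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
donor = np.c_[g, np.zeros(len(g))]
depiction = np.c_[g, np.ones(len(g))]          # 1 mm above the donor shell
print(pct_within_tolerance(depiction, donor))  # 100.0
```

A production comparison would use point-to-surface distances on the full triangulated meshes and exclude the flat cropping planes, as described in Section 3.1.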
The computer-assisted and automated facial depictions were presented as 3D files including the skull model as a separate digital layer.
Therefore, these facial depictions could be aligned to the donor CT
model in Freeform Modelling Plus using the skull for common registration. The scans of the 3D manual facial depictions were visually
aligned to the pegged skull by the researcher using the eyeballs, external
auditory meati and nasal root as registration points (see Fig. 1). Once
achieved, the 3D manual facial depictions could then be aligned to the
donor CT model using the skull for common registration (see Fig. 2).
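Landmark-based rigid alignment of this kind can be sketched with the Kabsch algorithm, which finds the least-squares rotation and translation between two corresponding point sets. This is an illustrative stand-in with invented coordinates, not the manual Freeform workflow itself:

```python
# Hypothetical sketch of landmark-based rigid registration (Kabsch),
# standing in for manual skull-to-CT alignment. Coordinates are invented.
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) minimising ||R @ src_i + t - dst_i||."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Registration points analogous to eyeballs, auditory meati, nasal root:
landmarks = np.array([[30., 0., 0.], [-30., 0., 0.],
                      [60., -40., -20.], [-60., -40., -20.], [0., 10., 30.]])
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
moved = landmarks @ Rz.T + np.array([5., -2., 3.])  # misaligned copy
R, t = kabsch(moved, landmarks)
aligned = moved @ R.T + t
print(np.abs(aligned - landmarks).max())  # residual is effectively zero
```

With only a handful of anatomical landmarks, registration accuracy depends heavily on how precisely those points can be located, which is why the study used the skull itself for common registration wherever possible.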
In Freeform Modelling Plus, the donor CT model was cropped to three planes superiorly, posteriorly and inferiorly. Once aligned, each 3D facial depiction was cropped to the same planes as the donor model. Therefore, each 3D model consisted of three flat sides and a facial surface, rendering the entire dataset more practically comparable (see Fig. 3).

Fig. 1. Screenshots of a 3D facial depiction (C) aligned with the pegged skull.

Fig. 2. Screenshots of a 3D facial depiction (F - light) aligned to the donor CT model (dark).

Fig. 3. Screenshot of 3D donor model cropped to planes.

The 3D geometric comparison was carried out using Geomagic Qualify 2013 software, which compares two 3D surfaces using shell-to-shell deviation maps. The geometric accuracy of each facial depiction was assessed by mapping its deviation (error measurement) from the surface of the donor CT model and calculating the percentage of the surface of the facial depiction that demonstrated a maximum deviation (error) of ± 2 mm. The flat planes were excluded from the surface comparison so that only the facial surfaces were compared.

3.2. 2D facial image comparison

A facial depiction image dataset (see Fig. 4) was created including frontal views of the fourteen facial depictions on black backgrounds, and a frontal photograph of the donor (see Fig. 5) was compared to the facial depiction image dataset. The black and white, frontal photograph of the donor demonstrated a non-smiling face with a beard. Facial hair will reduce the ability to accurately locate landmarks around the mouth and jawline.

2D image comparisons were carried out between the donor face and each facial depiction using four freely available face recognition algorithms (Toolpie¹; Photomyne²; Face ++³; Amazon⁴). Each software delivers a percentage match between two faces. Two different facial images of the donor (one smiling and one non-smiling) were also compared to each other using each face recognition software.

¹ https://www.toolpie.com/
² https://photomyne.com/
³ https://www.faceplusplus.com/
⁴ https://aws.amazon.com/rekognition/the-facts-on-facial-recognition-with-artificial-intelligence/

Fig. 4. The facial depiction image dataset, including the donor CT model (L).

Fig. 5. The non-smiling facial photograph of the donor.

3.3. Resemblance assessment

The likeness/resemblance to the donor of each facial depiction was also evaluated using qualitative familiar face assessment. The facial depiction dataset (including the donor CT model) was sent to an examiner who was very familiar with the facial appearance of the donor. A familiar examiner was utilised in order to replicate a typical forensic scenario where the target recognisers are family members and/or friends of the deceased. The examiner scored each facial depiction according to a six-tier rating system:

0 = no resemblance.
1 = limited resemblance.
2 = moderate resemblance.
3 = good resemblance.
4 = strong resemblance.
5 = very strong resemblance.

4. Results

4.1. 3D geometric comparison

Five 3D facial depictions (B, F, H, N, O) showed at least 50% of the surface within ± 2 mm deviation from the donor CT model (see Fig. 6 and Table 1). Of these, three were produced using a manual method, one using a computer-assisted method and one using an automated system. All 3D facial depictions demonstrated the highest accuracy at the forehead, cheek and chin regions. The mouth area and sides of the face demonstrated the most error for the majority of 3D facial depictions.

Fig. 6. Geometric deviation maps comparing each 3D facial depiction with the donor CT model. Green and yellow areas/paler areas = within ± 2 mm deviation error; blue and red areas/darker areas = more than ± 5 mm deviation error.

4.2. 2D face recognition comparison

All facial depictions were compared to the non-smiling donor photograph using four different face recognition systems (Toolpie, Photomyne, Amazon, Face ++) that produced a percentage match to the donor and a ranking from best to worst (1−15) match. An overall ranking was calculated for each facial depiction as an average of the four face recognition system rankings (see Table 1).

Each face recognition system matched the non-smiling donor photograph to itself by > 97%, and the smiling donor photograph to the non-smiling donor photograph by > 83%. The donor CT model was ranked the overall highest match to the donor photograph, and two facial depictions (D, M – both 2D sketches) received the same or greater match percentage than the donor CT model.

No facial depiction was matched to the donor at more than 54% by any face recognition system. The Amazon face recognition system consistently matched the facial depictions to the donor at a lower rate (0−22%) than the other face recognition systems, rating only the donor CT model (L) at more than 1% match to the donor (22%).

The 2D sketches (A, D, M) and three 3D facial depictions (C, J, N) were ranked the overall highest matches to the donor photograph using the 2D face recognition systems. Only one 3D facial depiction (N) was ranked highly in both the 3D geometric and 2D face recognition comparisons.

4.3. Resemblance assessment

The examiner did not rate any of the facial depiction dataset (including the donor CT model) at the highest or lowest resemblance tiers (see Table 1). The donor's own CT model was only rated as limited resemblance to the donor by the familiar examiner. Only one 2D sketch (D) was rated as strong resemblance, and this depiction also ranked highly (#2) in the 2D face recognition comparison. Two 2D sketches (A, M) and two 3D facial depictions (H, J) were rated as good resemblances, and three of these (A, J, M) also ranked highly in the 2D face recognition comparisons. The majority (67%) of the facial depictions were rated as limited or moderate resemblance. One 3D facial depiction (H - manual) with low geometric error was rated as a good resemblance, and one 3D facial depiction (J – manual) with high geometric error was rated as a good resemblance. The four most geometrically accurate 3D facial depictions were only rated as limited or moderate resemblance to the donor.

5. Discussion

The facial depiction image database demonstrated variation with respect to individual facial features and, in some cases, face shape. This is similar to other comparative studies where practitioners demonstrated varying degrees of sculptural ability, anatomical modelling and facial feature prediction [32–34].

Five (45%) of the 3D depictions recorded the majority of their facial surface within ± 2 mm error when compared to the donor CT model, and no 3D depiction recorded less than a third of its surface at ± 2 mm error when compared to the donor CT model. This suggests that the 3D facial depictions ranged around moderate morphological accuracy, and that facial depiction methods are more reproducible than demonstrated in the other quantitative comparative study [37], where the facial depictions deviated from the donor face at more than double this level ( ± 5 mm).

Most 3D facial depictions demonstrated an underestimation of tissue at the lateral cheeks and an overestimation around the mouth, and this is likely due to postural changes to the donor's face in the CT scanner. This result was also seen in the previous quantitative comparative study [37], where CT scan data from a supine donor was also utilised.

Most 3D facial depictions demonstrated higher error at the eyes due to the closed eyes on the donor CT model. However, the 3D automated system produced a facial depiction (O) with closed eyes demonstrating low error in this region. The 3D automated facial depiction (O) demonstrated the lowest deviation error to the donor CT model. The database on which this system was designed was derived from CT data of living donors, so this facial depiction is likely to be least affected by the postural changes associated with a supine position. However, when compared to the donor photograph this facial depiction (O) performed poorly (#9) and was also rated as only limited resemblance to the donor by the familiar examiner.

One 3D facial depiction (N) was ranked highly in both 3D geometric and 2D face recognition comparisons, but this was also only rated as limited resemblance by the familiar examiner. The donor CT model ranked the highest in both 3D geometric and 2D face recognition comparisons, but this was also only rated as limited resemblance by the familiar examiner. These results highlight the challenges associated with clinical image data and the complex correlation between craniofacial morphology and resemblance.

Overall, no single 3D method (manual, computer-assisted, automated) produced consistently high results across all three evaluations (3D geometric, 2D face recognition, familiar resemblance).

It is worth noting that the 2D facial depictions (unlike the 3D facial depictions) were presented with skin/hair textures (these details cannot be predicted from skeletal remains), and research [6] shows that people respond more positively to textured faces, creating a bias towards 2D sketches due to their innate ability to appear more 'realistic'. This is demonstrated in these results, as all the 2D sketches recorded consistently high 2D face recognition rankings and resemblance ratings.

The use of a familiar examiner mitigated the confounding effect of the beard on the donor images, as the examiner was very familiar with the donor's face in multiple different scenarios and presentations. This would be similar to a forensic investigation where the recognition targets are family and friends.

Whilst the number of donor data in this study was limited (n = 1), the number of practitioners was large (n = 13) and a variety of modes of production (n = 4) and evaluation (n = 3) were considered. The inclusion of experienced practitioners was key to this research, as previous comparative studies had utilised inexperienced practitioners or an
Table 1
Summary of facial depiction evaluation as compared to the donor using 3D geometric analysis, 2D face recognition (FR) match and familiar resemblance assessment.

| Facial depiction | Method | ±2 mm 3D surface error (%/rank) | Toolpie FR match (%/rank) | Photomyne FR match (%/rank) | Amazon FR match (%/rank) | Face ++ FR match (%/rank) | Overall 2D FR match rank | Resemblance assessment |
|---|---|---|---|---|---|---|---|---|
| A | 2D sketch | – | 28/6= | 30/4= | 0.6/3= | 36/9 | 6= | 3-good |
| B | 3D manual | 50/4= | 20/11= | 6/11 | 0/14= | 27/13 | 12= | 1-limited |
| C | 3D manual | 45/8 | 30/5 | 28/7 | 0.8/2 | 42/5 | 5 | 2-moderate |
| D | 2D sketch | – | 43/1 | 38/1 | 0.2/8= | 54/1 | 2= | 4-strong |
| E | 3D manual | 35/12 | 9/15 | 0/12= | 0/14= | 26/14 | 14 | 2-moderate |
| F | 3D manual | 55/3 | 28/6= | 20/9 | 0.1/11= | 35/10 | 10 | 2-moderate |
| G | 3D computer | 37/11 | 20/11= | 16/10 | 0.1/11= | 34/11 | 11 | 2-moderate |
| H | 3D manual | 50/4= | 20/11= | 27/8 | 0.3/6= | 39/7= | 9 | 3-good |
| I | 3D computer | 49/7 | 21/10 | 0/12= | 0.1/11= | 21/15 | 12= | 1-limited |
| J | 3D manual | 42/9 | 39/3= | 29/6 | 0.6/3= | 47/4 | 4 | 3-good |
| K | 3D manual | 41/10 | 39/3= | 0/12= | 0.5/5 | 39/7= | 8 | 1-limited |
| L | donor CT model | 100/1 | 40/2 | 34/2= | 22/1 | 53/2 | 1 | 1-limited |
| M | 2D sketch | – | 28/6= | 34/2= | 0.2/8= | 49/3 | 2= | 3-good |
| N | 3D computer | 50/4= | 28/6= | 33/4= | 0.2/8= | 33/12 | 6= | 1-limited |
| O | 3D automated | 59/2 | 17/14 | 0/12= | 0.3/6= | 41/6 | 9 | 1-limited |
| Donor neutral | – | – | 100 | 100 | 100 | 97 | – | – |
| Donor smiling | – | – | 90 | 83 | 100 | 88 | – | – |
unknown donor [35–37], therefore, the researchers prioritised the contribution of the practitioners over the number of donor data. The limited donor sample may mean that this study is not wholly representative of the forensic application of facial depiction, and further comparative studies would be valuable.

CRediT authorship contribution statement

Kathryn Smith: Conceptualization, Methodology, Contributor as participant, Evaluation, Writing - Reviewing and Editing. Caroline Wilkinson: Co-design, Evaluation, Writing - Original draft preparation, Writing - Reviewing and Editing.
6. Conclusion

This study appears to be the first large-scale comparative study where three different assessment protocols were utilised to compare the facial depictions across practitioners, across modes and to the donor face.

The results suggest that where a consistent method and application of soft tissue data is utilised, we can expect relatively consistent metric reliability between practitioners.

However, a visual assessment of the facial depictions reveals significant differences in the interpretation of facial features and, in some cases, overall face shape, across the group.

The addition of textures, such as hair, skin detail, open eyes and facial hair, has a significant effect on the resemblance of a facial depiction to a living individual, leading to enhanced face recognition ranking and resemblance ratings for the 2D sketches. These results suggest that practitioner presentation standards would greatly enhance the possibility of recognition in forensic scenarios. Suggested presentation guidance includes:

1. Open eyes with realistic iris presentation.
2. Addition of textures (skin detail, hair, facial hair, etc.) to a 2D image of the 3D model – multiple versions or blurred external textures may be preferable where these details are unknown.
3. Include eyebrows.
4. Orthogonal frontal view with the head in the Frankfurt horizontal plane (FHP) - additional views may be presented where characteristic features are present.

The relationship between metric accuracy and visual resemblance continues to be a challenge for this field. The most faithful face shape is not necessarily the most effective, and the character of individual features and the overall texture of the face can have an alarmingly strong effect upon recognition. Further research is necessary to determine the optimal presentation methods for forensic facial depictions.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Thanks to Dr Vassilios Raptopoulos (Beth Israel Deaconess Medical Centre/Harvard Medical School Teaching Hospital) for providing his CT scan data. Special thanks to Dr Chris Rynn, Greg Mahoney, Ronn Taylor and all the practitioners for their contributions and for subjecting their work to analysis. Additional thanks to Jason Pallett and Dr Jessica Ching Liu for face recognition software access.

References

[1] C.M. Wilkinson, Forensic Facial Depiction, Cambridge University Press, Cambridge, 2004.
[2] J. Prag, R.A.H. Neave, Making Faces, British Museum Press, London, 1999.
[3] C.M. Wilkinson, Facial depiction – anatomical art or artistic anatomy? J. Anat. 216 (2010) 235–250, https://doi.org/10.1111/j.1469-7580.2009.01182.x.
[4] W.D. Haglund, D.T. Reay, Use of facial depiction techniques in identification of Green River serial murder victims, Am. J. Forensic Med. Pathol. 12 (2) (1991) 132.
[5] C.N. Stephan, M. Henneberg, Building faces from dry skulls: are they recognized above chance rates? J. Forensic Sci. 46 (3) (2001) 432–440.
[6] V. Bruce, P. Healey, M. Burton, T. Doyle, A. Coombes, A. Linney, Recognising facial surfaces, Perception 20 (6) (1991) 755–769.
[7] R.A. Johnston, A.J. Edmonds, Familiar and unfamiliar face recognition: a review, Memory 17 (5) (2009) 577–596, https://doi.org/10.1080/0965821090297696.
[8] S. Davy-Jow, The devil is in the details: a synthesis of psychology of facial perception and its applications in forensic facial depiction, Sci. Justice 53 (2013) 230–235, https://doi.org/10.1016/j.scijus.2013.01.004.
[9] C.M. Wilkinson, Computerized forensic facial depiction: a review of current systems, Forensic Sci. Med. Pathol. 1 (3) (2005) 173.
[10] M.M. Gerasimov, The Face Finder, Hutchinson, London, 1971.
[11] K.T. Taylor, Forensic Art and Illustration, CRC, Boca Raton, 2001.
[12] C.C. Snow, B.P. Gatliff, K.C. McWilliams, Depiction of facial features from the skull: an evaluation of its usefulness in forensic anthropology, Am. J. Phys. Anthr. 33 (1970) 221–228.
[13] W.M. Krogman, M. Iscan, The Human Skeleton in Forensic Medicine, Thomas, Illinois, 1986.
[14] G. Mahoney, C.M. Wilkinson, Computer-generated facial depiction, in: C. Wilkinson, C. Rynn (Eds.), Craniofacial Identification, Cambridge University Press, Cambridge, 2012.
[15] C.N. Stephan, M. Henneberg, Recognition by forensic facial depiction: case specific
examples and empirical tests, Forensic Sci. Int. 156 (2006) 182–191.
[16] C.M.S. Fernandes, M. da Costa Serra, J.V.L. Da Silva, P.Y. Noritomi, F.D.A. de Sena
Pereira, R.F.H. Melani, Tests of one Brazilian facial depiction method using three
soft tissue depth sets and familiar assessors, Forensic Sci. Int. 214 (2012) 211.
e1–211.e7.
[17] S. De Greef, P. Claes, D. Vandermeulen, W. Mollemans, P. Suetens, G. Willems,
Large-scale in-vivo Caucasian facial soft tissue thickness database for craniofacial
depiction, Forensic Sci. lnternational 159S (2006) S126–S146, https://doi.org/
10.1016/j.forsciint.2006.02.034.
[18] D. Cavanagh, M. Steyn, Facial depiction: Soft tissue thickness values for South
African black females, Forensic Sci. Int. 206 (2011) 215.e1–215.e7.
[19] C. Stephan, I.S. Penton-Voak, J.G. Clement, M. Henneberg, Ceiling recognition limits of two-dimensional facial depictions constructed using averages, in: J.G. Clement, M.K. Marks (Eds.), Computer-Graphic Facial Depiction, Elsevier, Massachusetts, 2005, pp. 199–219.
[20] C.N. Stephan, E.K. Simpson, Facial soft tissue depths in craniofacial identification
(part I): an analytical review of the published adult data, J. Forensic Sci. 53 (6)
(2008) 1257–1271, https://doi.org/10.1111/j.1556-4029.2008.00852.x.
[21] P. Claes, D. Vandermeulen, S. De Greef, G. Willems, P. Suetens, Craniofacial
depiction using a combined statistical model of face shape and soft tissue depths:
methodology and validation, Forensic Sci. Int. 159S (2006) S147–S158, https://
doi.org/10.1016/j.forsciint.2006.02.035.
[22] J.G. Clement, M.K. Marks, Computer-Graphic Facial Depiction ([Kindle edition]),
Elsevier, Massachusetts, 2005.
[23] S. Davy, T. Gilbert, D. Schofield, M. Evison, Forensic facial depiction using
computer modeling software. Chapter 10 ([Kindle edition]), in: J.G. Clement, M.
K. Marks (Eds.), Computer-Graphic Facial Depiction, Elsevier, Massachusetts,
2005.
[24] R.P. Helmer, S. Rohricht, D. Petersen, F. Mohr, Assessment of the reliability of
facial depiction, in: M.Y. Iscan, R.P. Helmer (Eds.), Forensic analysis of the skull,
Wiley-Liss, New York, 1993.
[25] H. Stratomeier, J. Spee, U. Wittwer-Backofen, R. Bakker, Methods of forensic facial reconstruction, in: Proceedings of the 2nd International Conference on Reconstruction of Soft Facial Parts (RSFP), Remagen, Germany, 2005, pp. 19–20.
[26] C.M. Wilkinson, C. Rynn, H. Peters, M. Taister, C.H. Kau, S. Richmond, A blind
accuracy assessment of computer-modeled forensic facial depiction using CT data
from live subjects, Forensic Sci., Med. Pathol. 2 (3) (2006) 179–187.
[27] W.J. Lee, C.M. Wilkinson, H.S. Hwang, An accuracy assessment of forensic computerized facial depiction employing cone-beam computed tomography from live subjects, J. Forensic Sci. 57 (2) (2012), https://doi.org/10.1111/j.1556-4029.2011.01971.x.
[28] D. Gibelli, A. Palamenghi, P. Poppa, C. Sforza, C. Cattaneo, D. De Angelis, Improving 3D–3D facial registration methods: potential role of three-dimensional models in personal identification of the living, Int. J. Leg. Med. 135 (6) (2021) 2501–2507.
[29] C.M. Wilkinson, D.K. Whittaker, Juvenile forensic facial reconstruction—a detailed
accuracy study, Proc. 10th Conf. Int. Assoc. Craniofacial Identif. (2002) 11–14.
[30] G. Quatrehomme, T. Balaguer, P. Staccini, V. Alunni-Pettret, Assessment of the
accuracy of a three-dimensional manual craniofacial depiction: a series of 25
controlled cases, Int J. Leg. Med 121 (2007) 469–475.
[31] C.M.S. Fernandes, M. da Costa Serra, J.V.L. Da Silva, P.Y. Noritomi, F.D.A. de Sena
Pereira, R.F.H. Melani, Tests of one Brazilian facial depiction method using three
soft tissue depth sets and familiar assessors, Forensic Sci. Int. 214 (2012) 211.
e1–211.e7.
[32] C.M. Wilkinson, R.A.H. Neave, D. Smith, How important to facial reconstruction are the correct ethnic group tissue depths? in: Proceedings of the 10th Meeting of the International Association for Craniofacial Identification, September 2002, pp. 11–14.
[33] D. Nilendu, A. Johnson, Role of soft-tissue thickness on the reproducibility in
forensic facial approximation: a comparative case study, Forensic Sci. Int.: Rep. 7
(2023) 100293.
[34] N. Briers, M. Steyn, Re-assessment of South African juvenile facial soft tissue
thickness data for craniofacial approximation: a comparative analysis using central
tendency statistics, Forensic Sci. Int. 291 (2018) 280–e1.
[35] J. Bongartz, T.M. Buzug, R. Helmer, P. Hering, H. Seitz, C. Tille, Introduction to the comparative study of facial depictions, in: Book of Abstracts: 2nd International Conference on Reconstruction of Soft Facial Parts, 2005, p. 56.
[36] J. Bongartz, T. Buzug, Evaluation of the RSFP 2005 comparative study using facial recognition systems, in: Book of Abstracts: 3rd International Conference on Reconstruction of Soft Facial Parts, 2006, pp. 2–3.
[37] S. Decker, J. Ford, S. Davy-Jow, P. Faraut, W. Neville, D. Hilbelink, Who is this
person? A comparison study of current three-dimensional facial depiction methods,
Forensic Sci. Int. 229 (2013) 161.e1–161.e8.
[38] U. Ozsoy, R. Sekerci, E. Ogut, Effect of sitting, standing, and supine body positions
on facial soft tissue: detailed 3D analysis, Int. J. Oral. Maxillofac. Surg. 44 (2015)
1309–1316.
[39] J.W. Park, M. Lee, J. Kim, E. Kim, Quantitative evaluation of facial sagging in
different body postures using a three-dimensional imaging technique, J. Cosmet.
Dermatol. 20 (8) (2021) 2583–2592, https://doi.org/10.1111/jocd.13880.