Symmetry is an important cue in face perception.
Given the neuroimaging evidence for symmetry processing (Sasaki and others 2005; Tyler and others 2005a) and the behavioral evidence for facial symmetry mechanisms (Rhodes and others 2005), we are interested in the role of symmetry in brain areas responding to faces. We may conceptualize 2 types of face symmetry. The first type, "image symmetry," occurs when one image component is a mirrored transform of another image component about some axis of transformation. This type of symmetry specifies the direct spatial relations between the parts of the 2-dimensional (2D) projections of the faces and is not concerned with the interpretation of the image as an object in space. The second type of symmetry is "object symmetry," which specifies the spatial relationships among the components of the image interpreted as the representation of a 3D object.

4. To examine whether the differential activation patterns from (3) are due to face-specific symmetry or to early image symmetry processing, we also contrasted front-view upright faces with their phase-scrambled versions that were vertically symmetric (this manipulation is sketched below). A contrast between faces and vertically symmetric scrambled images should reveal all relevant areas supporting face processing that are insensitive to symmetry. A comparison of the last 2 experiments should thus uncover whether areas that are attributed to face processing are actually responding to the symmetry of the faces rather than more informative cues.

5. Finally, to study the role of object versus image symmetry (under the assumption that faces are symmetric objects), we contrasted upright frontal-view faces with 3/4-view faces.
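One way to construct such a control stimulus can be sketched as follows (a minimal illustration in Python with NumPy; the helper names are ours and a grayscale face image loaded as an array is assumed, since the actual stimulus-generation code is not reproduced here). The sketch keeps the Fourier amplitude spectrum of a face, randomizes its phase, and then imposes mirror symmetry about the vertical midline:

```python
import numpy as np

def phase_scramble(img, rng=None):
    # Keep the Fourier amplitude spectrum but replace the phase spectrum
    # with uniformly random phases (taking the real part afterwards is a
    # common simplification).
    rng = np.random.default_rng(rng)
    spectrum = np.fft.fft2(img)
    random_phase = rng.uniform(-np.pi, np.pi, size=img.shape)
    return np.real(np.fft.ifft2(np.abs(spectrum) * np.exp(1j * random_phase)))

def symmetrize(img):
    # Impose image (mirror) symmetry about the vertical midline by
    # averaging the image with its left-right flip.
    return 0.5 * (img + np.fliplr(img))

def is_mirror_symmetric(img, tol=1e-8):
    # Image symmetry as defined above: one half is the mirrored
    # transform of the other about the vertical axis.
    return np.allclose(img, np.fliplr(img), atol=tol)

# Hypothetical usage with a grayscale face array `face`:
# control = symmetrize(phase_scramble(face, rng=0))
# is_mirror_symmetric(control)   # True: symmetric, but carries no face structure
```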
Figure 1. Examples of the stimulus types used. (a) Frontal face, (b) inverted face, (c) 3/4-view face, (d) symmetricized phase-zeroed face image, (e) phase-scrambled face image.
Figure 4. Activation maps for the face localizer (vs. Fourier-equated random images) averaged across observers (a) on 3 views of the inflated brain and (b) occipital-pole flatmaps. Yellow borders: V1–V3; magenta borders: LOC; red borders: hMT+; green borders: FFA; cyan borders: OFA; black borders: IOS; blue borders: symmetry activation. Colored symbols indicate locations of activations in individual subjects from other studies (see key). Activation colors as in Figure 2.

Figure 5. Activation maps for the face responses versus symmetry-equated scrambled images averaged across observers (a) on 3 views of the inflated brain and (b) occipital-pole flatmaps. FFA: green borders; OFA: cyan borders; IOS: black borders; blue borders: symmetry activation. Other details as in Figure 2.
Object Structure
The upright frontal-view faces and the 3/4-view faces share the
same 3D object structure but not the same image symmetry.
Hence, a brain area for image symmetry would show differential
activation for these 2 types of images, whereas an area
processing symmetry based on the 3D object structure inferred
from the image information should respond equally to these 2
types of faces. Perceptually, we understand that the face is still
symmetric although it is 3D rotated, so there must be some
brain circuitry that carries this perceptual equivalence. This
analysis is supported by the finding that sparsely sampled
information about the positions of object features relies on an
exclusively 3D coding of object properties (Likova and Tyler
2003). The fact that the face retains the same object structure
when 3D rotated relative to the plane of projection can be
appreciated only by a neural circuit that has encoded the 3D
structure of the projected images.
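The difference between image and object symmetry under 3D rotation can be made concrete with a small numerical sketch (an illustration in Python with NumPy; the bilaterally symmetric landmark coordinates are arbitrary and are not taken from the stimuli). Rotating a symmetric 3D configuration about its vertical axis, as in the 3/4 view, destroys the mirror symmetry of its 2D projection while leaving the 3D object symmetry intact:

```python
import numpy as np

# Arbitrary bilaterally symmetric 3D "face" landmarks (x, y, z):
# every point (x, y, z) has a mirror partner at (-x, y, z).
right_half = np.array([[0.5, 0.4, 0.2],    # right eye
                       [0.3, -0.2, 0.5],   # right mouth corner
                       [0.0, 0.0, 0.8]])   # nose tip (on the midline)
landmarks = np.vstack([right_half, right_half * [-1, 1, 1]])

def rotate_about_vertical(points, degrees):
    # Rotate 3D points about the vertical (y) axis.
    t = np.radians(degrees)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return points @ R.T

def image_symmetric(points_2d, tol=1e-9):
    # Is the projected point set mirror symmetric about the vertical axis (x = 0)?
    mirrored = points_2d * [-1, 1]
    return all(np.any(np.all(np.isclose(p, points_2d, atol=tol), axis=1))
               for p in mirrored)

frontal_view = rotate_about_vertical(landmarks, 0)[:, :2]        # orthographic: drop z
three_quarter_view = rotate_about_vertical(landmarks, 45)[:, :2]

print(image_symmetric(frontal_view))        # True: image symmetry present
print(image_symmetric(three_quarter_view))  # False: image symmetry lost, although
                                            # the 3D object symmetry is unchanged
```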
The upright frontal-view faces and the 3/4-view faces showed less differential activation (Fig. 3, row 4; also see Fig. 7 for inflated and flat versions) than the face template areas identified by faces versus inverted faces. The pseudocolor patches in Figure 7(c) identify the regions showing significant differential activation (F(5,10) = 8.95, P = 0.03 < 0.05) between faces and inverted faces but not between faces and 3/4-view faces. This reduction is quite pronounced in the FFA, as more than 61% of voxels showing significant differential activation for faces versus inverted faces (52% for the right and 87% for the left) did not show significant activation in this condition. The activation modulation reduces from 0.49% for faces versus inverted faces to 0.20% for faces versus 3/4-view faces in the left FFA (t(5) = 2.43, P = 0.029 < 0.05). A similar reduction was observed in the right FFA as well (from 0.43% to 0.25%), though the difference is not statistically significant (t(5) = 0.79, P = 0.23 > 0.05, NS).

Figure 7. Activation maps for frontal-view faces versus 3/4-view faces averaged across observers (a) on 3 views of the inflated brain and (b) occipital-pole flatmaps. (c) Flatmaps showing the regions of differential activation between faces and inverted faces but not between faces and 3/4-view faces (yellow-orange coloration). FFA: green borders; OFA: cyan borders; IOS: black borders; blue borders: symmetry activation. Other details as in Figure 2.

The activated area in the OFA was also substantially reduced (see Figs 6 vs. 7), although there were some spots still showing activation. The activation modulation was reduced from 0.49% to 0.20% (t(5) = 2.43, P = 0.0296 < 0.05) for the left OFA and from 0.47% to 0.26% for the right OFA (t(5) = 2.28, P = 0.036 < 0.05).

The implication of these results is that much of the coding in the FFA and OFA is based on the 3D representation of the faces, which "look the same" to these neural circuits despite the radical change in image symmetry. The areas near the IOS and the MOG, however, still showed robust activation to the 3D rotation of the faces, as 65% (right) to 90% (left) of IOS voxels showed significant activation in this condition. This strong response implies that they are coding image symmetry rather than object symmetry.

To understand this activation, we may consider the factors that change versus those that are invariant in the image of a 3D-rotated face (Fig. 1a vs. 1c). The invariant factors are low-level ones such as the local edge structure and feature properties, midlevel ones such as the 3D representation of the face structure, and high-level ones such as the emotional expression, social position, and personal identity of the individual depicted.
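For reference, the t(5) values quoted above are consistent with paired comparisons of percent-signal-change modulation across the 6 observers. A minimal sketch of one plausible form of such a test is given below (Python with SciPy; the per-observer values are placeholders chosen only so that their means match the reported 0.49% and 0.20%, since the individual data are not listed here):

```python
import numpy as np
from scipy import stats

# Placeholder per-observer percent-signal-change modulations (n = 6 observers);
# only the condition means are reported in the text, not the individual values.
faces_vs_inverted = np.array([0.62, 0.41, 0.55, 0.38, 0.47, 0.51])   # mean ~0.49%
faces_vs_34view   = np.array([0.28, 0.15, 0.22, 0.19, 0.14, 0.25])   # mean ~0.20%

# Paired, one-tailed t-test across observers; with 6 observers the test has
# 5 degrees of freedom, matching the t(5) notation in the text.
t, p = stats.ttest_rel(faces_vs_inverted, faces_vs_34view, alternative='greater')
print(f"modulation {faces_vs_inverted.mean():.2f}% -> {faces_vs_34view.mean():.2f}%, "
      f"t(5) = {t:.2f}, P = {p:.3f}")
```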