The following is a pre-proof of a chapter that is scheduled to appear in
June 2021 as
Forceville, Charles (forthc. 2021) “Multimodality” (= chapter 26). In: Xu Wen & John R. Taylor (eds), The Routledge Handbook of Cognitive Linguistics (pp. 676-687).
https://www.routledge.com/The-Routledge-Handbook-ofCognitive-Linguistics/Xu-Taylor/p/book/9781138490710
Charles Forceville
Orcid ID: orcid.org/0000-0002-6365-500X
Dept. of Media Studies
University of Amsterdam
Turfdraagsterpad 9
1012 XT Amsterdam
The Netherlands
Tel. +31 20 525 4596/2980
E-mail: c.j.forceville@uva.nl
Forceville – Multimodality 2
Abstract. While it is crystal clear that communication can draw on many
semiotic resources, research in the humanities has hitherto strongly
focused on its verbal manifestations. “Multimodality” labels a variety of
approaches and theories trying to remedy this bias by investigating how for
instance visuals, music, and sound contribute to meaning-making. The
contours of what is developing into a new discipline begin to be discernible.
This chapter provides a brief survey of various perspectives on
multimodality, addresses the thorny issue of what should count as a mode,
and makes suggestions for further development of the fledgling discipline.
1 Introduction
Multimodality research is on the rise, but what exactly it is, what should count as a mode, and whether it is desirable or even possible to decide on an exhaustive list of modes remain hotly debated issues.
Consequently, this chapter cannot but have a highly provisional, rather un-handbook-like status.
What is uncontroversial is that multimodality, however defined, gained
scholarly currency because of the growing awareness that the emphasis on
language as traditionally studied in linguistics fails to do justice to the fact
that communication often is not, or not exclusively, verbal. Even
supposedly monomodal spoken and written language, after all, manifests
meaningful dimensions that have long been ignored or downplayed. Written
texts are characterized by visually reinforced markers such as CAPITALIZED
HEADINGS, bolding, italicizing, indentation, and white lines. Nowadays,
Word-wielding scholars realize they need to choose fonts and font sizes for
their texts, i.e., they need to consider the visual dimension of their
predominantly or exclusively monomodal verbal discourses (see e.g., Van
Leeuwen 2006; Stöckl 2005, 2014a). But in some types of texts, these
visual features assume much more weight. Dada art and Concrete Poetry
have played with the visual qualities of letters, and sometimes turned them
into “things.” Moreover, lay-out undoubtedly imposes a visual dimension
on the written-text mode (e.g., Bateman 2008; Hiippala 2015), creatively
exploited already in George Herbert’s poems “The Altar” and “Easter-Wings” (both from 1633). Spoken language invites attention to a voice’s
timbre, pitch, and loudness, to facial expressions and body postures, and
to accompanying gestures. Some scholars would therefore claim that all
communication is multimodal – monomodal communication constituting, at
best, a convenient theoretical construct. Consequently, there is a need for
systematically researching non-verbal ways for conveying information, and
for investigating information that consists of an ensemble of more than one
meaning-making semiotic resource.
But that is about where agreement ends. Some of the studies currently
presented under the banner of multimodality would in an earlier era have
been labeled “semiotics” – and much of this work remains pertinent (see
Chandler 2017; Stöckl 2014b). Semiotic approaches deserve credit for
having initiated the study of non-artistic static visuals that take pride of
place in multimodality studies. In fact, Barthes (1986 [1964]) anticipated
multimodality research when he claimed that a written text in a static word
& image discourse either draws attention to aspects of meaning that are,
albeit perhaps latently, already present in the image that it accompanies
(i.e., the language anchors the image); or it presents information that
complements dimensions of meaning in the image (i.e., the language relays
the image). Refinements and alternatives have been proposed since (e.g.,
Unsworth and Clérigh 2014; see also Bateman 2014). Forceville (1996: 73)
suggests that images can anchor written text as well as vice versa, and
that the borders between anchoring and relaying are fuzzy. Moreover, since
more than two modes can interact in meaning-making, the concepts of
anchoring and relay deserve to be expanded beyond word & image
relationships.
Most work on multimodality produced hitherto has been inspired by Hallidayan Systemic Functional Linguistics/Grammar (SFL/SFG);
multimodality is only beginning to be discovered by scholars working within
cognitivist paradigms, specifically Conceptual Metaphor Theory (CMT). It
thus makes sense to first present a short overview of SFL/SFG-related
approaches (section 2), and then discuss CMT perspectives (section 3).
Section 4 presents some further thoughts and challenges for future
multimodality research.
2 Non-cognitivist approaches to multimodality
Kress and Van Leeuwen’s “social semiotics” approach is the best-known
paradigm in multimodality research. Its most widely quoted distinction is
that between the three “metafunctions” of communication: the ideational
(pertaining to the relation between representations and their referents);
the interpersonal (concerning how a sign is used communicatively between
sender and receiver); and the textual (governing the rules that ensure
coherence between the various textual elements) (2006: 42-43). In her
handbook, Jewitt (2014a) discusses two other major paradigms to
multimodality. The first is a “systemic functional grammar (SFG)
multimodal approach to discourse analysis” (Jewitt 2014b: 32); and the
second is an interactional approach, evident in work by Sigrid Norris
(e.g., Norris 2012) and in “geosemiotics,” whose essence can be captured
by the view that “the sign only has meaning because of where it is placed
in the world” (Scollon and Wong Scollon 2003: 29, emphasis in original;
see also Scollon and Wong-Scollon 2014). The chapters in Jewitt’s
handbook cover a wide range of topics, including color effects in video and
film, the form of IKEA tables, pedagogical uses of multimodality, gesturing,
the layout of (online) newspapers and websites, music and soundscapes,
and the spatial design of museums.
Despite multimodality’s popularity, the jury is still out on the thorny
issue of what kind of entity or phenomenon should be accorded “mode”
status. Several (hand)books on multimodality have been published (e.g.,
Kress and van Leeuwen 2001, 2006; Baldry and Thibault 2006; Royce and
Bowcher 2007; Ventola and Moya Guijarro 2009; Djonov and Zhao 2013;
Jewitt 2014a; Jewitt et al. 2016; Machin 2014; Archer and Breuer 2015),
but none of them comes up with an operationalizable definition; and they
usually downplay the need to do so. Kress and Van Leeuwen, for instance,
describe multimodality as “the use of several semiotic modes in the design
of a semiotic product or event” (2001: 20). Kress (2010) comments that
“instances of commonly used modes are speech; still image, moving image;
writing; gesture; music; 3D models; action; colour” (2010: 28, emphases
in original). Jewitt observes:
Within social semiotics, a mode, its organizing principles and resources,
is understood as an outcome of the cultural shaping of a material. The
resources come to display regularities through the ways in which people
use them. In other words in a specific context (time and place) modes
are shaped by the daily social interaction of people. It is these that
multimodal analysts call modes. […] What is considered a mode and
interaction between modes is inextricably shaped and construed by
social, cultural and historical factors (2014c: 22-23, emphasis in
original).
The “mode-status” problem is rooted in conflicting ideas about what
theoretical work a mode should be able to do. Social semiotics approaches
seem to take for granted that all aspects of meaning-generating discourse
need to be categorized as being, or belonging to, a mode. But accepting,
say, visual elements such as color, size, angle, and lighting as modes runs
the risk of ending up with an unwieldy list that is moreover open-ended. Should we add gaze to the list of modes? Degree of realism?
Materiality of the medium? And why not accord mode status to timbre,
pitch, and loudness within the spoken-text mode? Or should we consider these
“submodes” of the spoken-text mode? What about page layout?
Interactivity?
Another characteristic of social semiotics approaches is that they
strongly urge addressing political and ideological problems in the real
world: “Social semiotics is interested in unveiling ideologies, social values,
power roles, and identities as expressed in texts, together with how
individuals actively maintain, reinforce, contest, and challenge them
through their sign-making choices” (Adami 2017: 455). While it is
undoubtedly important for humanities scholars no less than for (social)
scientists to ensure that society at large optimally benefits from their
insights, they need to be alert to the danger of achieving social relevance
at the expense of methodological precision (Forceville 1999; Bateman et
al. 2004).
Although it shares some assumptions and ideas with the approaches described above, a body of work that tries to incorporate insights from both semiotics and cognitivism has recently been gaining prominence. Exponents
of this development are Bateman et al. (2017), Wildfeuer et al. (2019), and
Klug and Stöckl (2016). The case studies in Bateman et al. (2017) are
structured according to what the authors call “canvases” (rather than
media), grouped together depending on the affordances and constraints of
these canvases: (1) temporal, unscripted; (2) temporal, scripted; (3)
spatial, static; (4) spatial, dynamic; and (5) spatiotemporal, interactive.
Klug and Stöckl’s volume focuses on the role of language in multimodal
discourses in various genres. Wildfeuer et al. (2019) present contributions
analyzing a wide range of data with reference to the question whether, and
if so how, “multimodality” can/should develop into a discipline in its own
right.
3 Conceptual Metaphor Theory approaches to multimodality
The main reason that CMT studies of multimodality have not spawned much
discussion about the mode-problem is that hitherto they have by and large
investigated only two types of multimodal discourse: discourses combining
static visuals and written language; and spoken language accompanied by
gestures. The mode-status of visuals, written language, spoken language,
and gestures is relatively uncontested – but there has been little theoretical
scrutiny of the issue. What seems clear, though, is that unlike the
approaches briefly discussed in section 2, CMT-inspired work links mode as
closely as possible to sensory perception. From a theoretical perspective, it
would be convenient to postulate a one-to-one relationship between a
mode and a sensory organ. But unfortunately we do not only see images,
we also see written and signed language as well as gestures – and in Braille
language is felt. We hear spoken language as well as music and sounds.
Therefore a neat correspondence between mode and sensory perception is
untenable. Unable to come up with a satisfactory definition of mode,
Forceville (2006) nonetheless tries to optimally maintain the link between
mode and sensory perception, suggesting the following modes: (1) spoken
language; (2) written language; (3) visuals; (4) music; (5) sound; (6)
taste; (7) smell; (8) touch; (9) gestures. Admittedly, this proposal, too,
has drawbacks. For one thing, this is a bit of an odd list. For instance, my
reason to grant gestures mode-status was that in CMT the study of co-speech gesturing played a pioneering role in broadening scholarship beyond
an almost exclusive focus on conceptual metaphors’ verbal manifestations.
But, one can justifiably ask, if gestures are accorded mode-status, then
why not the facial expressions that, like gestures, often accompany and
reinforce emotional language? (McCloud 1993: Chapter 2; Stamenković et
al 2018). The same could be argued for body postures. In time-based
representations such as (animation) film, moreover, “manner of
movement” may convey vital information about a person’s or fictional
protagonist’s character or mood. Consequently, I hereby tentatively
propose to replace Forceville’s (2006) gesture mode by a more inclusive
mode that could be labeled “bodily behavior,” and that comprises gestures,
postures, facial expressions, and (manner of) movement – these latter
possibly with “sub-mode” status. Apart from this modification, I still counsel
that the list of modes remain limited. This should be no problem if the
scholarly community is content to consider mode as one meaning-generating aspect that always needs to be complemented by others. That
being said, even a relatively short list is subject to debate. It remains to be
decided, for instance, whether static images and moving images should be
considered as constituting one or rather two modes, as Kress (2010: 79)
advocates.
An advantage of a more or less finite list of modes would be that it helps
contrast multimodality with monomodality. A discourse exclusively
consisting of writing, or spoken language, or visuals, would thereby be
monomodal. Even if we accept that the written-language mode always has
visual features, too, these will be of less importance to linguists than to
graphic designers. Similarly, there are graphic novels whose creators
dispense almost completely with verbal text. Shaun Tan’s graphic novel The
Arrival, for instance, is (virtually) monomodally visual. Cohn (2013, 2016)
pushes the communicative affordances of visuals as far as one can by
claiming there are entire visual languages. Lyric-less music would also
qualify as monomodal by the criteria of the cognitivist perspective on
modes sketched above.
The interest in multimodality among linguists can largely be accounted
for by Lakoff and Johnson’s trail-blazing view that “metaphor is primarily a
matter of thought and action and only derivatively a matter of language”
(Lakoff and Johnson 1980: 153). Emphasizing that verbal metaphors are
actually manifestations of underlying conceptual metaphors, Lakoff and
Johnson pioneered CMT, and thereby paved the way for research projects
focusing on non-verbal and partly-verbal expressions of conceptual
metaphors. This initially led to two strands of multimodality in CMT
research: co-speech gesturing (e.g., Müller 2008; Cienki 2017; see also
McNeill 1992) and combinations of static visuals and written language (e.g.,
Forceville 1996; most contributions in Forceville and Urios-Aparisi 2009;
see also Bolognesi’s VISMET project at http://www.vismet.org/VisMet/).
But recent innovations in CMT expand the analysis of multimodality in
various directions. In the first place, more attention is paid to metonymy,
and to its interactions with metaphor (e.g., Mittelberg and Waugh 2009;
Pérez-Sobrino 2017; Kashanizadeh and Forceville 2020). Secondly, in the
written-verbal-cum-static-visual realm, other genres besides advertising
(e.g., Forceville 1996; Indurkhya and Ojha 2013) have begun to be
researched, such as political cartoons (e.g., El Refaie 2003; Teng 2009;
Schilperoord and Maes 2009; Bounegru and Forceville 2011; Negro
Alousque 2014; Domínguez 2015; Lin and Chiang 2015; Abdel-Raheem
2019; Forceville and Van de Laar 2019); comics and graphic novels
(Forceville 2005a; Szawerna 2017; El Refaie 2019). Metaphors used
explicitly or implicitly in architectural design are investigated by Caballero
(2006) and Plowright (2017). Van Rompay (2005) and Cila (2013) theorize
metaphors in product design. Hidalgo-Downing and Kraljevic-Mujic (2020)
explore how multimodal metaphors and other tropes are deployed
creatively.
Moreover, some studies rooted in CMT have focused on discourses
involving the musical mode (e.g., Johnson and Larson 2003; Zbikowski
2009; Górska 2010). A more rapidly growing body of work straddles CMT
and cognitive film studies (e.g., Forceville 1999; Fahlenbrach 2016;
Hidalgo-Downing and Kraljevic-Mujic 2013; Rewiś-Łętkowska 2015;
Lankjær 2016; Coëgnarts and Kravanja 2012, 2015; Ortiz 2011, 2015;
Müller and Kappelhoff 2018; Greifenstein et al. 2018; see also Forceville
2018), whereas within film the medium-specific affordances of animation
are beginning to be explored (e.g., Forceville and Jeulink 2011;
Fahlenbrach 2017; Forceville and Paling 2018).
4 Other thoughts on multimodality and avenues for further
research
Since “multimodality” is still very much in the process of inventing itself,
let me end by sharing some ideas for research that may help its further
maturation.
Mode, medium, genre, rhetoric. If one sticks to the relatively short, cognitively oriented list of modes presented above, it is clear that while taste
and olfaction also generate meaning, these modes have hitherto received
little attention. Exceptions are work on wine-tasting notes by Caballero
(2009) and Hommerberg (2011), and Plümacher and Holz (2007), who
have dared to enter the field of olfaction. But it is virtually impossible to
discuss mode without simultaneously taking into account the medium in
which a discourse occurs. Of course medium is as knotty a concept as
mode. While it is often restricted to the physical, material dimension of a
discourse, a medium also evokes socio-culturally, institutionally, and
practically determined manners of use. Elleström (2010) exhorts analysts
to distinguish between two aspects of media: “the origin, delimitation and
use of media in specific historical, cultural and social circumstances,” which
he calls the “contextual qualifying aspect” (2010: 24), and its “aesthetic
and communicative characteristics … the operational qualifying aspect”
(2010: 25, emphases in original). Finally, meaning is strongly generated
and constrained by genre. Arguably, genre is the single most important
factor governing meaning-making in mass-communication (see Frow
2015). Some genres have been studied from a multimodal perspective,
such as experimental literature (e.g., Gibbons 2006), children’s literature
(e.g., Moya Guijarro 2014), murals (e.g., Poppi and Kravanja 2019; Asenjo
2018), Mayan inscriptions (e.g., Hamann 2018) – but many others hitherto
constitute unexplored territory. The triad mode-medium-genre, in
turn, requires analysis from the broader perspective of rhetorical goals that
any multimodal discourse aims to realize (Stöckl 2019/forthc.: 82).
Multimodal tropes. Cognitive multimodalists with an interest in rhetoric
would do well to broaden their interest in metaphor to other tropes. A good
project would be to revisit classical lists of tropes by Aristotle, Cicero, and
Quintilian to help re-define them on a conceptual level – as Lakoff and
Johnson (1980) did for metaphor. For metonymy such work is well under
way (e.g., Barcelona 2000; Dirven and Pörings 2002; Ruiz de Mendoza
2002; Littlemore 2015; Kövecses and Radden 1998), and proposals for
other tropes have been made by Gibbs (1993), while Burgers et al. (2016)
demonstrate how tropes can co-occur. Clearly the expertise of rhetoric and
argumentation scholars will be needed here as well. Thoughtful descriptions
and definitions of the verbal varieties of such conceptual tropes could be
the starting point for theorizing and analyzing them in other modes, and
multimodally. In the visual and multimodal realm, some tropes besides
metaphor and metonymy have already been subject to examination
(e.g., Abed 1994; Teng and Sun 2002; Pérez-Sobrino 2017; Cornevin and
Forceville 2017; Tseronis and Forceville 2017, Poppi and Kravanja 2019).
Multimodality and interdisciplinarity. Irrespective of one’s theoretical
perspective on multimodality, analyzing multimodal discourse requires
expertise in at least two different modes, as expressed in a more or less
specific medium, and belonging to a more or less specific genre. For many
scholars this means that they have to acquire expertise in a new field of
study or discipline. It is an acceptable, even recommendable strategy to
take whatever one has learned in one’s own discipline about analyzing
discourses or data (which in linguistics are typically verbal) as a starting
point for approaching discourses and data in (an)other mode(s); but it is
crucial that one also learns how the data in that other mode have been
traditionally studied within their own discipline. In short, credibly studying
multimodality requires true interdisciplinarity, and this costs blood, sweat,
and tears.
Film as multimodal medium par excellence. Given that most post-silent
films recruit at least six of the modes suggested above (visuals, written
language, spoken language, non-verbal sound, music, and bodily
behavior), one would expect film to be at the centre of multimodality
research. But it is not. There are several reasons for this. In the first place,
with the exception of metaphor-oriented research, the disciplines of film
and multimodality have hitherto rarely crossed (but cf. Tseng 2013;
Bateman and Schmidt 2014). Secondly, film scholars have mainly focused
on the visual mode in their medium. Book-length studies on language in
film, for instance, are relatively rare in film scholarship (but cf. Kozloff
1998, 2000), as are monographs on music and sound in film and other
media (but cf. Cook 1998; Van Leeuwen 1999; Chion 2009; Mildorf and
Kinzel 2016). Thirdly, given that modes usually operate conjointly,
accounting for meanings that may be triggered by as many as seven
different modes is no simple endeavor. Addressing this challenge clearly
requires both multimodality and film scholarship expertise.
Opportunities for translation studies. A field which increasingly touches on
multimodal issues is that of translation studies, specifically where the latter
focus on discourses in which one or more modes need to be communicated in
another mode, as in subtitling and in audio description (AD) of paintings
and films for blind people (e.g., Tuominen et al. 2018; Taylor 2019). The
decision to expand or reduce certain elements in subtitles vis-à-vis the
verbal source text may be influenced by the presence or absence of
contextual information in other modes (visuals, sound, music). In AD,
furthermore, other modes, such as touch, can be recruited (e.g., feeling
sculptures or relief-versions of paintings). It is to be noted that dubbing
(although strictly speaking pertaining to one mode, namely spoken
language), too, has multimodal implications, since dubbing professionals
need to aim for simulating (visually accessed) “lip-synchronicity.”
Tools for qualitative and quantitative multimodality research. How does one
model multimodal discourses so as to understand which modes partake in
meaning-making, and how? Blending Theory/Conceptual Integration
Theory (e.g., Fauconnier and Turner 2002) can help visualize how
information in different modes is relayed (to use Barthes’ term): each
pertinent mode can fill an “input space” and the relevant meanings of these
input spaces are then integrated in the “blended space” (for an application,
see Forceville 2013). A bigger challenge is the development of tools that
enable the systematic annotation of, and thus make compatible, information in
different modes, both within a single discourse and across discourses. This
is particularly difficult if a discourse features modes other than language,
because annotating tends to strongly favor the written mode, thereby
forcing analysts to present their findings in artificial, laborious, and
sometimes counter-intuitive formats (e.g., “translating” visuals, gestures,
and/or music into written language). Moreover, motivated decisions must
be taken as to what constitutes a good unit of analysis in a given mode.
Robust annotation systems are also crucial for corpus research involving
non-verbal modes. Several software tools have been developed for this
purpose. One such (free) tool is Kaleidographic (Caple et al. 2018; see
http://www.kaleidographic.org/); another, specifically developed for film,
is Yuri Tsivian’s Cinemetrics (http://www.cinemetrics.lv/). Promoters of the
“Red Hen” project encourage scholars to use ELAN and share their
annotations on the Red Hen website (e.g., Steen et al. 2018; see
http://www.redhenlab.org/). A relatively young strand within multimodality
research is testing people’s eye movements when they look at visual or
multimodal representations (e.g., Holšánová 2008, 2014).
Publication venues. There are more and more opportunities for publishing
about multimodality. Most major publishers nowadays are likely to accept
books that have multimodal content, but Routledge has a specific series
devoted to it. In 2012 the journal Multimodal Communication was founded,
whereas Visual Communication (since 2002) regularly features papers in
this field – unsurprisingly, as discourses that are monomodally visual are
much rarer than discourses in which visual information interacts with
information in other modes. Metaphor and Symbol (since 1986) and
Metaphor and the Social World (since 2011) regularly sport multimodal
articles, and even the traditionally language-oriented Journal of Pragmatics
(since 1977) nowadays carries multimodal studies. The art-oriented Word
& Image (since 1985) also deserves to be mentioned. An added difficulty
for all forms of publication is that, given the dominance of printed formats,
only the written-verbal and the static-visual mode can be more or less
unproblematically reproduced. It is to be expected that academic publishing
will in the future further facilitate the inclusion of original multimodal
material (specifically: audiovisual fragments) via links to journals’ URLs
and/or authors’ websites.
The need for an inclusive communication model. The manifold dimensions
of multimodality outlined in this chapter may seem discouraging – whereas
the influence of (sub)cultural identities on the interpretation of multimodal
discourses has not even been addressed (see e.g., Ibarretxe-Antuñano
2013; Adami 2015). It is important to bear in mind that, of course, well-formulated research questions usefully limit which dimensions need to be
taken into account. It is furthermore vital that multimodalists take
cognizance of more than one model, understand the strengths and
weaknesses of each of them, and see through model-specific terms and
categories that may hide underlying, data-oriented similarities. The
possible confluence of insights emanating from different models would be
very much helped by the existence of an all-encompassing communication
theory. Arguably, the contours of such a theory exist in the form of
relevance theory/RT (e.g., Sperber and Wilson 1995; Wilson and Sperber
2012; Clark 2013). RT has been developed from Grice’s maxims of
communication (e.g., Grice 1975), and is ultimately rooted in Darwinian
principles of survival and reproduction. Its central tenet is that every act of
communication comes with the presumption of optimal relevance to its
envisaged addressee(s) (Sperber and Wilson 1995: 155-163). “Classic” RT
almost exclusively pertains to spoken communication between two persons
talking to each other in a live situation. In order to be useful for multimodal
scholarship, RT needs to be expanded and refined so as to be capable of
accommodating communication that is non-verbal and multimodal and
addresses large audiences. The beginnings of this project are there
(Forceville 1996: Chapter 5, 2005b, 2014, 2020; Forceville and Clark 2014;
Wharton 2009; Wharton and Strey 2019; Origgi 2013; Yus 2011, 2014,
2016).
5 Further reading
Bateman, John, Janina Wildfeuer, and Tuomo Hiippala (2017).
Multimodality: Foundations, Research and Analysis – A Problem-Oriented
Introduction. Berlin: De Gruyter Mouton. The most balanced textbook on
multimodality hitherto written.
Forceville, Charles, and Eduardo Urios-Aparisi, eds (2009). Multimodal
Metaphor. Berlin: Mouton de Gruyter. Presents the first ventures into
multimodality in CMT.
Jewitt, Carey, ed. (2014). The Routledge Handbook of Multimodal Analysis.
2nd edn. London: Routledge. Showcases a wide range of social semiotics
and SFL-oriented studies in multimodality.
Klug, Nina-Maria, and Hartmut Stöckl, eds (2016). Handbuch Sprache im
multimodalen Kontext/The Language in Multimodal Contexts Handbook.
Berlin: De Gruyter Mouton. State of the art of semiotics-cum-cognitive
analyses on discourses combining the verbal with other modes.
Kress, Gunther, and Theo van Leeuwen (2006). Reading Images: The
Grammar of Visual Design. 2nd edn. London: Routledge. The most
influential monograph in the field. Idea-rich but methodologically
problematic.
6 Related topics
“Cognitive semantics” (Dirk Geeraerts);“Mental space and conceptual
integration theory” (Anders Hougaard); “Usage-based models of language”
(Michael Tomasello); “Conceptual metaphor” (Zoltán Kövecses);
“Conceptual metonymy” (Jeannette Littlemore); “Concepts and
conceptualization” (Xu Wen); “Cognitive pragmatics” (Bruno G. Bara);
“Cognitive poetics/stylistics” (Mark Turner); “Cognitive discourse analysis”
(Ulrike Schröder); “Cognitive linguistics and cultural studies” (Chris Sinha);
“Cognitive linguistics and translation studies”(Isabel Lacruz); “Cognitive
linguistics and rhetoric” (Herbert L. Colston); “Cognitive linguistics and
biolinguistics” (Kleanthes K. Grohmann); “Cognitive linguistics and
semiotics (gesture and sign language)” (Alan Cienki); “Cognitive linguistics
and philosophy” (Mark Johnson); “Cognitive sociolinguistics” (Gitte
Kristiansen).
7 References
Abdel-Raheem, Ahmed (2019). Pictorial Framing in Moral Politics: A Corpus-Based Experimental
Study. London: Routledge.
Abed, Farough (1994). “Visual puns as interactive illustrations: Their effects on recognition
memory.” Metaphor and Symbolic Activity 9: 45-60. Doi: 10.1207/s15327868ms0901_3
Adami, Elisabetta (2015). “Aesthetics in digital texts beyond writing: A social semiotic multimodal
framework.” In: Arlene Archer and Esther Breuer (eds), Multimodality in Writing: The State of the
Art in Theory, Methodology and Pedagogy (43-62). Leiden: Brill. Doi: 10.1163/9789004297197_004
Adami, Elisabetta (2017). “Multimodality.” In: Ofelia García, Nelson Flores, and Massimiliano Spotti
(eds), The Oxford Handbook of Language and Society (451-472). Oxford: Oxford University Press.
Doi: 10.1093/oxfordhb/9780190212896.013.23
Archer, Arlene, and Esther Breuer, eds (2015). Multimodality in Writing: The State of the Art in
Theory, Methodology and Pedagogy. Leiden: Brill. Doi: 10.1163/9789004297197
Asenjo, Roberto (2018). “The influence of culture in the multimodal murals of Northern Ireland.”
Warsaw Multimodality Workshop and Masterclass, 7-9 June 2018, University of Warsaw, Poland.
Baldry, Anthony, and Paul J. Thibault (2006). Multimodal Transcription and Text Analysis: A
Multimedia Toolkit and Coursebook. London: Equinox.
Barcelona, Antonio, ed. (2000). Metaphor and Metonymy at the Crossroads: A Cognitive Perspective.
Berlin: Mouton de Gruyter.
Bateman, John (2008). Multimodality and Genre: A Foundation for the Systematic Analysis of
Multimodal Documents. Basingstoke: Palgrave Macmillan. Doi: 10.1057/9780230582323
Bateman, John (2014). Text and Image: A Critical Introduction to the Visual/Verbal Divide. London:
Routledge.
Bateman, John, Judy Delin, and Renate Henschel (2004). “Multimodality and empiricism: Preparing
for a corpus-based approach to the study of multimodal meaning-making.” In: Eija Ventola,
Cassily Charles, and Martin Kaltenbacher (eds), Perspectives on Multimodality (65-87).
Amsterdam: Benjamins. Doi: 10.1075/ddcs.6.06bat
Bateman, John, and Karl-Heinrich Schmidt (2014). Multimodal Film Analysis: How Films Mean.
London: Routledge.
Bateman, John, Janina Wildfeuer, and Tuomo Hiippala (2017). Multimodality: Foundations, Research
and Analysis – A Problem-Oriented Introduction. Berlin: De Gruyter Mouton.
Bounegru, Liliana, and Charles Forceville (2011). “Metaphors in editorial cartoons representing the
global financial crisis.” Visual Communication 10: 209-229. Doi: 10.1177/1470357211398446
Burgers, Christiaan, Elly Konijn, and Gerard Steen (2016). “Figurative framing: Shaping public
discourse through metaphor, hyperbole, and irony.” Communication Theory 26: 410–430. Doi:
10.1111/comt.12096
Caballero, Rosario (2006). Re-viewing Space: Figurative Language in Architects’ Assessment of Built
Space. Berlin: Mouton de Gruyter. Doi: 10.1080/10926480701357687
Caballero, Rosario (2009). “Cutting across the senses: imagery in winespeak and audiovisual
promotion.” In: Forceville and Urios-Aparisi (eds), Multimodal Metaphor (73-94). Doi:
10.1515/9783110215366.2.73
Caple, Helen, Monika Bednarek, and Laurence Anthony (2018). “Using Kaleidographic to visualize
multimodal relations within and across texts.” Visual Communication 17(4): 461-474. Doi:
10.1177/1470357218789287
Chandler, Daniel (2017). Semiotics: The Basics (3rd edn.). London: Routledge.
Chion, Michel (2009). Film, a Sound Art. Transl. by Claudia Gorbman. New York: Columbia University
Press.
Cienki, Alan (2017). Ten Lectures on Spoken Language and Gesture from the Perspective of Cognitive
Linguistics: Issues of Dynamicity and Multimodality. Leiden: Brill. Doi: 10.1163/9789004336230
Cila, Nazli (2013). Metaphors We Design By: The Use of Metaphors in Product Design. PhD thesis
Technical University Delft, NL. ISBN: 978-94-6191-890-1.
Clark, Billy (2013). Relevance Theory. Cambridge: Cambridge University Press. Doi:
10.1017/CBO9781139034104
Coëgnarts, Maarten, and Peter Kravanja (2012). “From thought to modality: A theoretical framework
for analysing structural-conceptual metaphor and image metaphor in film.” Image [&] Narrative
13(1): 96-113.
Coëgnarts, Maarten, and Peter Kravanja, eds (2015). Embodied Cognition and Cinema. Leuven:
Leuven University Press.
Cohn, Neil (2013). The Visual Language of Comics: Introduction to the Structure and Cognition of
Sequential Images. London: Bloomsbury.
Cohn, Neil, ed. (2016). The Visual Narrative Reader. London: Bloomsbury.
Cook, Nicholas (1998). Analysing Musical Multimedia. Oxford: Clarendon.
Cornevin, Vanessa, and Charles Forceville (2017). “From metaphor to allegory: The Japanese manga
Afuganisu-tan.” Metaphor and the Social World 7(2): 236-252. Doi: 10.1075/msw.7.2.04cor
Dirven, René, and Ralf Pörings, eds (2002). Metaphor and Metonymy in Comparison and Contrast.
Berlin: Mouton de Gruyter.
Djonov, Emilia, and Sumin Zhao, eds (2013). Critical Multimodal Studies of Popular Culture. New
York: Routledge.
Domínguez, Martí (2015). “The metaphorical species: Evolution, adaptation and speciation of
metaphors.” Discourse Studies 17(4): 433-448.
El Refaie, Elisabeth (2003). “Understanding visual metaphors: The example of newspaper cartoons.”
Visual Communication 2(1): 75-95. Doi: 10.1177/1470357203002001755
El Refaie, Elisabeth (2019). Visual Metaphor and Embodiment in Graphic Illness Narratives. Oxford:
Oxford University Press.
Elleström, Lars (2010). “Introduction.” In: Lars Elleström (ed.), Media Borders, Multimodality and
Intermediality (1-48). Basingstoke: Palgrave MacMillan.
Fahlenbrach, Kathrin, ed. (2016). Embodied Metaphors in Film, Television, and Video Games. London:
Routledge.
Fahlenbrach, Kathrin (2017). “Audiovisual metaphors and metonymies of emotions and depression
in moving images.” In: Ervas Francesca, Elisabetta Gola, and Maria Grazia Rossi (eds), Metaphor in
Communication, Science and Education (95-117). Berlin: De Gruyter Mouton. Doi:
10.1515/9783110549928-006
Fauconnier, Gilles, and Mark Turner (2002). The Way We Think: Conceptual Blending and the Mind’s
Hidden Complexities. New York: Basic Books.
Forceville, Charles (1996). Pictorial Metaphor in Advertising. London: Routledge.
Forceville, Charles (1999). “Educating the eye? Kress and Van Leeuwen's Reading Images: The
Grammar of Visual Design (1996).” Language and Literature 8(2): 163-178. Doi:
10.1177/096394709900800204
Forceville, Charles (2005a). “Visual representations of the Idealized Cognitive Model of anger in the
Asterix album La Zizanie.” Journal of Pragmatics 37: 69-88. Doi: 10.1016/j.pragma.2003.10.002
Forceville, Charles (2005b). “Addressing an audience: Time, place, and genre in Peter van Straaten’s
calendar cartoons.” Humor: International Journal of Humor Research 18(3): 247-278. Doi:
10.1515/humr.2005.18.3.247
Forceville, Charles (2006). “Non-verbal and multimodal metaphor in a cognitivist framework:
Agendas for research.” In: Gitte Kristiansen, Michel Achard, René Dirven, and Francisco Ruiz de
Mendoza (eds.), Cognitive Linguistics: Current Applications and Future Perspectives (379-402).
Berlin: Mouton de Gruyter.
Forceville, Charles (2013). “Creative visual duality in comics balloons.” In: Tony Veale, Kurt Feyaerts,
and Charles Forceville (eds), Creativity and the Agile Mind: A Multi-Disciplinary Exploration of a
Multi-Faceted Phenomenon (253-273). Berlin: De Gruyter Mouton.
Forceville, Charles (2014). “Relevance Theory as model for analysing multimodal communication.” In:
David Machin (ed.), Visual Communication (51-70). Berlin: De Gruyter Mouton.
Forceville, Charles (2018). “Multimodality, film, and cinematic metaphor: An evaluation of Müller
and Kappelhoff (2018).” Punctum: International Journal of Semiotics (Greece) 4(2): 90-108. Doi:
10.18680/hss.2018.0021
Forceville, Charles (2020). Visual and Multimodal Mass-Communication: Applying the Relevance
Principle. Oxford: Oxford University Press.
Forceville, Charles, and Eduardo Urios-Aparisi, eds (2009). Multimodal Metaphor. Berlin: Mouton de
Gruyter.
Forceville, Charles, and Billy Clark (2014). “Can pictures have explicatures?” Linguagem em
(Dis)curso 14(3): 451-472.
Forceville, Charles, and Marloes Jeulink (2011). “The flesh and blood of embodied understanding:
The source-path-goal schema in animation film.” Pragmatics & Cognition 19(1): 37-59. Doi:
10.1075/pc.19.1.02for
Forceville, Charles, and Sissy Paling (2018). “The metaphorical representation of DEPRESSION in short,
wordless animation films.” Visual Communication (ahead of print, via open access at
http://journals.sagepub.com/doi/10.1177/1470357218797994).
Forceville, Charles, and Nataša van de Laar (2019). “Metaphors portraying right-wing politician Geert
Wilders in Dutch political cartoons.” In: Encarnación Hidalgo Tenorio, Miguel Ángel Benitez
Castro, and Francesca De Cesare (eds), Populist Discourse: Critical Approaches to Contemporary
Politics (292-307). London: Routledge.
Frow, John (2015). Genre (2nd edn.). London: Routledge.
Gibbons, Alison (2012). Multimodality, Cognition, and Experimental Literature. London: Routledge.
Gibbs, Raymond W., Jr (1993). “Process and products in making sense of tropes.” In: Andrew Ortony
(ed.), Metaphor and Thought, 2nd edn. (252-276). Cambridge: Cambridge University Press. Doi:
10.1017/CBO9781139173865.014
Górska, Elżbieta (2010). “Life is music: A case study of a novel metaphor and its use in discourse.”
English Text Construction 3(2): 275-293. Doi: 10.1075/bct.40.08gor
Greifenstein, Sarah, Dorothea Horst, Thomas Scherer, Christina Schmitt, Hermann Kappelhoff, and
Cornelia Müller, eds (2018). Cinematic Metaphor in Perspective: Reflections on a Transdisciplinary
Framework. Berlin: De Gruyter Mouton.
Grice, H. Paul (1975). “Logic and conversation.” In: Peter Cole and Jerry L. Morgan (eds), Syntax and
Semantics 3: Speech Acts (41-58). New York: Academic Press.
Hamann, Agnieszka (2018). “Multimodal meaning-making in Classic Maya inscriptions.” LaMiCuS 2:
132-146. Doi: 10.32058/LAMICUS-2018-005
Hidalgo-Downing, Laura, and Blanca Kraljevic-Mujic, eds (2013). “Metaphorical creativity across
modes.” Special issue of Metaphor and the Social World 3(2). Doi: 10.1075/msw.3.2.01int
Hidalgo-Downing, Laura, and Blanca Kraljevic-Mujic, eds (2020). Performing Metaphoric Creativity
across Modes and Contexts. Berlin: De Gruyter Mouton.
Hiippala, Tuomo (2015). The Structure of Multimodal Documents: An Empirical Approach. London:
Routledge.
Holšánová, Jana (2008). Discourse, Vision, and Cognition. Amsterdam: Benjamins.
Holšánová, Jana (2014). “In the eye of the beholder: Visual communication from a recipient
perspective.” In: David Machin (ed.), Visual Communication (331-355). Berlin: De Gruyter
Mouton.
Hommerberg, Charlotte (2011). Persuasiveness in the Discourse of Wine: The Rhetoric of Robert
Parker. Linnaeus University Dissertations no 71/2011. ISBN: 978-91-86983-18-5
Ibarretxe-Antuñano, Iraide (2013). “The relationship between conceptual metaphor and culture.”
Intercultural Pragmatics 10(2): 315-339.
Indurkhya, Bipin, and Amitash Ojha (2013). “An empirical study on the role of perceptual similarity in
visual metaphors and creativity.” Metaphor and Symbol 28: 233-253. Doi:
10.1080/10926488.2013.826554
Jewitt, Carey, ed. (2014a). The Routledge Handbook of Multimodal Analysis (2nd edn.). London:
Routledge.
Jewitt, Carey (2014b). “Different approaches to multimodality.” In: Carey Jewitt (ed.), The Routledge
Handbook of Multimodal Analysis (31-43). London: Routledge.
Jewitt, Carey (2014c). “An introduction to multimodality.” In: Carey Jewitt (ed.), The Routledge
Handbook of Multimodal Analysis (15-30). London: Routledge.
Jewitt, Carey, Jeff Bezemer, and Kay O’Halloran (2016). Introducing Multimodality. London:
Routledge.
Johnson, Mark, and Steve Larson (2003). “’Something in the way she moves’ – metaphors of musical
motion.” Metaphor and Symbol 18: 63-84. Doi: 10.1207/S15327868MS1802_1
Kashanizadeh, Zahra, and Charles Forceville (2020). “Visual and multimodal interaction of metaphor
and metonymy in print advertising.” Cognitive Linguistics Studies 7(1): 78-110.
Klug, Nina-Maria, and Hartmut Stöckl, eds (2016). Handbuch Sprache im multimodalen Kontext/The
Language in Multimodal Contexts Handbook. Berlin: De Gruyter Mouton.
Kövecses, Zoltán, and Günther Radden (1998). “Metonymy: Developing a cognitive linguistic view.”
Cognitive Linguistics 9(1): 37-78. Doi: 10.1515/cogl.1998.9.1.37
Kozloff, Sarah (1988). Invisible Storytellers: Voice-Over Narration in American Fiction Film. Berkeley:
University of California Press.
Kozloff, Sarah (2000). Overhearing Film Dialogue. Berkeley: University of California Press.
Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication.
London: Routledge.
Kress, Gunther, and Theo van Leeuwen (2001). Multimodal Discourse. London: Arnold.
Kress, Gunther, and Theo van Leeuwen (2006). Reading Images: The Grammar of Visual Design (2nd
edn.). London: Routledge.
Lakoff, George, and Mark Johnson (1980). Metaphors We Live By. Chicago: University of Chicago
Press.
Lankjær, Birger (2016). “Problems of metaphor, film, and visual perception.” In: Kathrin Fahlenbrach
(ed.), Embodied Metaphors in Film, Television, and Video Games (115-128). London: Routledge.
Lin, Tiffany Ying-Yu, and Wen-Yu Chiang (2015). “Multimodal fusion in analyzing political cartoons:
Debates on U.S. beef imports into Taiwan.” Metaphor and Symbol 30: 137-161. Doi:
10.1080/10926488.2015.1016859
Littlemore, Jeanne (2015). Metonymy: Hidden Shortcuts in Language, Thought and Communication.
Cambridge: Cambridge University Press.
Machin, David, ed. (2014). Visual Communication. Berlin: De Gruyter Mouton.
McCloud, Scott (1993). Understanding Comics. New York: Paradox Press.
McNeill, David (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago: University of
Chicago Press.
Mildorf, Jarmila, and Till Kinzel, eds (2016). Audionarratology: Interfaces of Sound and Narrative.
Berlin: De Gruyter Mouton.
Mittelberg, Irene, and Linda R. Waugh (2009). “Metonymy first, metaphor second: A cognitive-semiotic approach to multimodal figures of thought in co-speech gesture.” In: Forceville and
Urios-Aparisi (eds), Multimodal Metaphor (329-356).
Müller, Cornelia (2008). Metaphors Dead and Alive, Sleeping and Waking: A Dynamic View. Chicago:
University of Chicago Press.
Müller, Cornelia, and Hermann Kappelhoff (2018). Cinematic Metaphor: Experience – Affectivity –
Temporality. Berlin: De Gruyter Mouton.
Negro Alousque, Isabel (2014). “Pictorial and verbo-pictorial metaphor in Spanish political
cartooning.” Círculo de Lingüística Aplicada a la Comunicación 57: 59-84. Doi:
10.5209/rev_CLAC.2014.v57.44515
Norris, Sigrid, ed. (2012). Multimodality in Practice: Investigating Theory-in-Practice through
Methodology. London: Routledge.
Origgi, Gloria (2013). “Democracy and trust in the age of the Social Web.” Teoria Politica. Nova Serie,
Annali II: 23-38.
Ortiz, María J. (2011). “Primary metaphors and monomodal visual metaphors.” Journal of
Pragmatics 43: 1568-1580. Doi: 10.1016/j.pragma.2010.12.003
Ortiz, María J. (2015). “Films and embodied metaphors of emotion.” In: Maarten Coëgnarts and
Peter Kravanja (eds), Embodied Cognition and Cinema (203-220). Leuven: Leuven University
Press.
Parrill, Fey (2012). “Interactions between discourse status and viewpoint in co-speech gesture.” In:
Barbara Dancygier and Eve Sweetser (eds), Viewpoint in Language: A Multimodal Perspective (97-112). Cambridge: Cambridge University Press. Doi: 10.1017/CBO9781139084727.008
Pérez-Sobrino, Paula (2017). Multimodal Metaphor and Metonymy in Advertising. Amsterdam:
Benjamins.
Pinár-Sanz, Maria Jesús, ed. (2013). “Multimodality and cognitive linguistics,” special issue of Review
of Cognitive Linguistics 11(2). Doi: 10.1075/bct.78
Plowright, Philip (2017). Qualitative Embodiment in Architectural Discourse: Conceptual Metaphors
and the Value Judgement of Space. PhD thesis, Facultad de Letras Universidad de Castilla-La
Mancha Ciudad Real, Spain.
Plümacher, Martina, and Peter Holz, eds (2007). Speaking of Colors and Odors. Amsterdam:
Benjamins.
Poppi, Fabio Indio Massimo, and Peter Kravanja (2019). “Actiones secundum fidei: Antithesis and
metaphoric conceptualization in Banksy’s graffiti art.” Metaphor and the Social World 9(1): 84-107. Doi: 10.1075/msw.17021.pop
Rewiś-Łętkowska, Anna (2015). “Multimodal representations of fear metaphors in television
commercials.” In: Dorota Brzozowska and Władysław Chłopicki (eds), Culture’s Software:
Communication Styles (381-404). Newcastle upon Tyne: Cambridge Scholars.
Royce, Terry, and Wendy L. Bowcher, eds (2007). New Directions in the Analysis of Multimodal
Discourse. Mahwah: Lawrence Erlbaum.
Ruiz de Mendoza, Francisco (2000). “The role of mappings and domains in understanding
metonymy.” In: Antonio Barcelona (ed.), Metaphor and Metonymy at the Crossroads (109-132).
Berlin: Mouton de Gruyter. Doi: 10.1515/9783110894677.109
Schilperoord, Joost, and Alfons Maes (2009). “Visual metaphoric conceptualization in editorial
cartoons.” In: Forceville and Urios-Aparisi (eds), Multimodal Metaphor (213-240).
Scollon, Ron, and Suzie Wong Scollon (2003). Discourses in Place: Language in the Material World.
London: Routledge.
Scollon, Ron, and Suzie Wong Scollon (2014). “Multimodality and language: A retrospective and
prospective view.” In: Carey Jewitt (ed.), The Routledge Handbook of Multimodal Analysis (205-216). London: Routledge.
Sperber, Dan, and Deirdre Wilson (1995). Relevance: Communication and Cognition (2nd edn.). Oxford:
Blackwell.
Stamenković, Dušan, Miloš Tasić, and Charles Forceville (2018). “Facial expressions in comics: An
empirical consideration of McCloud’s proposal.” Visual Communication 17(4): 407–432. Doi:
10.1177/1470357218784075
Steen, Francis F., Anders Hougaard, Jungseock Joo, Inés Olza, Cristóbal Pagán Cánovas, Anna
Pleshakova, Soumya Ray, Peter Uhrig, Javier Valenzuela, Jacek Woźny, and Mark Turner (2018).
“Toward an infrastructure for data-driven multimodal communication research.” LaMiCuS 2: 208-222.
Stöckl, Hartmut (2014a). “Typography.” In: Sigrid Norris and Carmen Daniela Maier
(eds), Interactions, Images and Texts. A Reader in Multimodality (281-294). Berlin: de Gruyter
Mouton.
Stöckl, Hartmut (2014b). “Semiotic paradigms and multimodality.” In: Carey Jewitt (ed.), The
Routledge Handbook of Multimodal Analysis (2nd edn.) (274-286). London: Routledge.
Stöckl, Hartmut (2019). “Linguistic multimodality – multimodal linguistics: a state-of-the-art sketch.”
In: Wildfeuer et al. (eds), Multimodality: Disciplinary Thoughts and the Challenge of Diversity (41-68).
Szawerna, Michał (2017). Metaphoricity of Conventionalized Diegetic Images in Comics: A Study in
Multimodal Cognitive Linguistics. Frankfurt am Main: Peter Lang.
Taylor, Christopher (2019). “Audio description: A multimodal practice in expansion.” In: Wildfeuer et
al. (eds), Multimodality: Disciplinary Thoughts and the Challenge of Diversity (195-218).
Teng, Norman Y. (2009). “Image alignment in multimodal metaphor.” In: Forceville and
Urios-Aparisi (eds), Multimodal Metaphor (197-211).
Teng, Norman Y., and Sewen Sun (2002). “Grouping, simile, and oxymoron in pictures: A design-based cognitive approach.” Metaphor and Symbol 17: 295-316. Doi:
10.1207/S15327868MS1704_3
Tseng, Chiao-i (2013). Cohesion in Film: Tracking Film Elements. Basingstoke: Palgrave Macmillan.
Tseronis, Assimakis, and Charles Forceville (2017). “The argumentative relevance of visual and
multimodal antithesis in Frederick Wiseman’s documentaries.” In: Assimakis Tseronis and Charles
Forceville (eds), Multimodal Argumentation and Rhetoric in Media Genres (165-188). Amsterdam:
Benjamins. Doi: 10.1075/aic.14.07tse
Tuominen, Tiina, Catalina Jiménez Hurtado, and Anne Ketola (2018). “Why methods matter:
Approaching multimodality in translation research.” Linguistica Antverpiensia, New Series:
Themes in Translation Studies 17: 1–21.
Unsworth, Len, and Chris Clérigh (2009). “Multimodality and reading: The construction of meaning
through image-text interaction.” In: Carey Jewitt (ed.), The Routledge Handbook of Multimodal
Analysis (2nd edn.) (151-163). London: Routledge.
Van Leeuwen, Theo (1999). Speech, Music, Sound. Basingstoke: Macmillan.
Van Leeuwen, Theo (2006). “Towards a semiotics of typography.” Information Design Journal 14(2):
139-155. Doi: 10.1075/idj.14.2.06lee
Van Leeuwen, Theo, and Carey Jewitt, eds (2001). Handbook of Visual Analysis. London: Sage.
Van Rompay, Thomas (2005). Expressions: Embodiment in the Experience of Design. PhD thesis,
Technical University Delft, NL. ISBN: 90-9019316-2
Ventola, Eija, and Arsenio Jesús Moya Guijarro, eds (2009). The World Told and the World Shown:
Multisemiotic Issues. Basingstoke: Palgrave Macmillan.
Wharton, Tim (2009). Pragmatics and Non-Verbal Communication. Cambridge: Cambridge University
Press.
Wharton, Tim, and Claudia Strey (2019). “Slave of the passions: Making emotions relevant.” In: Kate
Scott, Billy Clark, and Robyn Carston (eds), Relevance, Pragmatics and Interpretation (253-266).
Cambridge: Cambridge University Press.
Wildfeuer, Janina, Jana Pflaeging, John Bateman, Chiao-I Tseng, and Ognyan Seizov, eds (2019).
Multimodality: Disciplinary Thoughts and the Challenge of Diversity. Berlin: De Gruyter Mouton.
Wilson, Deirdre, and Dan Sperber, eds (2012). Meaning and Relevance. Cambridge: Cambridge
University Press.
Yus, Francisco (2011). Cyberpragmatics: Internet-mediated Communication in Context. Amsterdam:
Benjamins.
Yus, Francisco (2014). “Not all emoticons are created equal.” Linguagem em (Dis)curso 14(3): 511-529.
Yus, Francisco (2016). Humour and Relevance. Amsterdam: Benjamins.
Zbikowski, Lawrence (2009). “Music, language, and multimodal metaphor.” In: Forceville and Urios-Aparisi (eds), Multimodal Metaphor (359-381).
Biographical note
Charles Forceville works in Media Studies, University of Amsterdam. His key research question is
how visuals convey meaning, alone or in combination with other modes. A cognitivist, he publishes
on multimodal narration and rhetoric in documentary, animation, advertising, comics, and cartoons.