Papers by Mihaela Popa-Wyatt
Philosophical Studies, 2017
In the original publication of the article, the Acknowledgement section was incorrectly published. The corrected version is given below.
Toronto Working Papers in Linguistics, Nov 19, 2011
This paper examines the mechanisms involved in the interpretation of utterances that are both metaphorical and ironical. For example, when uttering 'He's a real number-cruncher' about someone entirely illiterate in maths, the speaker uses a metaphor with an ironic intent. I argue that in such cases the metaphor is prior to the irony, both logically and psychologically. The phenomenon is thus one of ironic metaphor, which puts a metaphorical meaning to ironic use, rather than an irony used metaphorically (§1). This result is then used to argue that in metaphor it is metaphorical, not literal, meaning that determines the utterance's truth conditions. Gricean accounts, which exclude metaphorical meaning from truth-conditional content and rely entirely on conversational implicature, are found unsatisfactory. Five contextualist arguments are briefly discussed, leading to the conclusion that metaphorical content is part of truth-conditional content rather than implicated (§2).

* I thank Philip Percival for helpful comments.
1 Katz & Pexman (1997), comparing the preferred use of metaphor or irony with respect to the speaker's occupation, found that, for example, cab/truck drivers, students, political critics, and mechanics are more likely to use irony, whereas salesmen, scientists, lawyers, and cooks are viewed as using metaphor more often than irony.
Policy@Manchester, 2023
Hate speech or harmful speech is any expression (speech, text, images) that demeans, threatens, or harms members of groups with protected characteristics. It includes slurs, name-calling, discriminatory and exclusionary speech, incitement to hatred and violence, and harassment. Online communities are a particularly fast way to spread hate. In this article, Dr Mihaela Popa-Wyatt explores the main questions regulators and policymakers must address, including the rights and protections to be balanced, and questions of practical enforcement.
Journal of Applied Philosophy, 2023
Our time is marked by a resurgence of hate that threatens to increase oppression. Social media has contributed to this by acting as a medium through which hate speech is spread. How should we model the spread of hate? This article considers two models. First, I consider a simple contagion model, in which hate spreads like a virus through a social network. This model, however, fails to capture the fact that people do not acquire hatred from a single infectious contact. Instead, it builds up in a person's beliefs and attitudes over time until the infection reaches a level where the subject themselves becomes a generator of hate speech. Second, to accommodate this, I consider an alternative model known as complex contagion. I argue not only that a complex contagion model is more explanatory and predictive, but also that it can be used to explain why certain features of social media make it a promoter of hate. I conclude by sketching some mitigation strategies.
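The distinction the abstract draws can be made concrete with a toy simulation. The following is an illustrative sketch only, not the paper's own model: the graph, seed sets, and thresholds are hypothetical. The single parameter `threshold` separates the two regimes: with `threshold=1` a single infectious contact suffices (simple contagion), while with `threshold>1` a node adopts only after reinforcement from multiple adopting neighbours (complex contagion).

```python
# Minimal sketch (hypothetical): simple vs complex contagion on a toy network.
def spread(graph, seeds, threshold):
    """Spread to a fixpoint: a node adopts once at least `threshold`
    of its neighbours have adopted. threshold=1 models simple
    contagion; threshold>1 models complex contagion."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbours in graph.items():
            if node not in adopted:
                exposure = sum(1 for n in neighbours if n in adopted)
                if exposure >= threshold:
                    adopted.add(node)
                    changed = True
    return adopted

# A toy network: a chain ("a"-"b"-"c") attached to a tight cluster.
graph = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d", "e"],
    "d": ["c", "e", "f"], "e": ["c", "d", "f"], "f": ["d", "e"],
}

simple = spread(graph, seeds={"a"}, threshold=1)    # reaches every node
complex_ = spread(graph, seeds={"a"}, threshold=2)  # stalls at the seed
clustered = spread(graph, seeds={"d", "e"}, threshold=2)  # spreads inside the cluster
```

The toy runs illustrate the abstract's point: under complex contagion a lone contact cannot transmit the behaviour, but densely clustered communities (like tightly connected online groups) supply the repeated reinforcement that lets it take hold.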
Topoi-an International Review of Philosophy, 2021
Philosophical views of language have traditionally been focused on notions of truth. This is a reconstructive view in that we try to extract from an utterance in context what the sentence and speaker meaning are. This focus on meaning extraction from word sequences alone, however, is challenged by utterances which combine different types of figures. This paper argues that what appears to be a special case of ironic utterances (ironic metaphorical compounds) sheds light on the requirements for psychological plausibility of a theory of communication, and thus presents a different view of communication and language to that dominant in philosophy of language. In the view presented here, the hearer does not extract the speaker's communicative intention from the sequence of words in the utterance, but from other channels (gesture, intonation, facial expression), so as to constrain the inferential space for the sentence and speaker meaning. Specifically, we examine an example of ironic metaphor...
Phenomenology and Mind, 2016
Slurs are typically defined as conveying contempt based on group-membership. However, here I argue that they are not a unitary group. First, I describe two dimensions of variation among derogatives: how targets are identified, and how offensive the term is. This supports the typical definition of slurs as opposed to other derogatives. I then highlight problems with this definition, mainly caused by variable offence across slur words. In the process I discuss how major theories of slurs can account for variable offence, and conclude that contempt based on group-membership doesn't cover all the data. I finish by noting that the most offensive slurs are those that target oppressed groups. I claim it is oppression that underpins most offence, and that beyond this offensive property, some slurs are actively used to oppress.
Reclamation is the phenomenon of an oppressed group repurposing language to its own ends. A case study is reclamation of slur words. Popa-Wyatt and Wyatt (2018) argued that a slurring utterance is a speech act which performs a discourse role assignment. It assigns a subordinate role to the target, while the speaker assumes a dominant role. This pair of role assignments is used to oppress the target. Here the author focuses on how reclamation works and under what conditions its benefits can stabilise. She starts by reviewing the data and describing preconditions and motivations for reclamation. Can reclamation be explained in the same basic framework as regular slurring utterances? She argues that it can. The author also identifies some features that must be a prediction of any theory of reclamation. She concludes that reclamation is an instance of a much broader class of acts we do with words to change the distribution of power: it begets power, but it also requires it.
Philosophy, 2020
Slurring is a type of hate speech meant to harm individuals simply because of their group membership. It not only offends but also causes oppression. Slurs have some strange properties. Target groups can reclaim slurs, so as to express solidarity and pride. Slurs are noted for their "offensive autonomy" (they offend regardless of speakers' intentions, attitudes, and beliefs) and for their "offensive persistence," as well as for their resistance to cancellation (they offend across a range of contexts and utterances). They are also noted for their "offense variation" (not all slurs offend equally) and for the complicity they may induce in listeners. Slurs signal identity affiliations; they cue and re-entrench ideologies. They subordinate and silence target members and are sometimes used non-derogatorily. Slurs raise interesting issues in the philosophy of language and linguistics, social and political philosophy, moral psychology, and social epistemology. The literature on slurs also...
Philosophical Studies, 2016
International Review of Pragmatics, 2014
Two rival accounts of irony claim, respectively, that pretence and echo are independently sufficient to explain central cases. After highlighting the strengths and weaknesses of these accounts, I argue that an account in which both pretence and echo play an essential role better explains these cases and serves to explain peripheral cases as well. I distinguish between "weak" and "strong" hybrid theories, and advocate an "integrated strong hybrid" account in which elements of both pretence and echo are seen as complementary in a unified mechanism. I argue that the allegedly mutually exclusive elements of pretence and echo are in fact complementary aspects enriching a core-structure as follows: by pretending to have a perspective/thought F, an ironic speaker U echoes a perspective/thought G. F is merely pretended, perhaps caricaturised or exaggerated, while G is real/possible.
Journal of Open Humanities Data, 2021
Journal of Open Humanities Data, 2021
Harmful language is frequent in social media, in particular in spaces which are considered anonymous and/or allow free participation. In this paper, we analyze the language in a Telegram channel populated by followers of former US President Donald Trump. We seek to identify the ways in which harmful language is used to create a specific narrative in a group of mostly like-minded discussants. Our research has several aims. First, we create an extended taxonomy of potentially harmful language that includes not only hate speech and direct insults (which have been the focus of existing computational methods), but also other forms of harmful speech discussed in the literature. We manually apply this taxonomy to a large portion of the corpus, including the time period leading up to and the aftermath of the January 2021 US Capitol riot. Our data gives empirical evidence for harmful speech, such as in/outgroup divisive language and the use of codes within certain communities, that have not often been investigated before. Second, we compare our manual annotations of harmful speech to several automatic methods for classifying hate speech and offensive language, namely list-based and machine-learning-based approaches. We find that the Telegram data sets still pose particular challenges for these automatic methods. Finally, we argue for the value of studying such naturally-occurring, coherent data sets for research on online harm and how to address it in linguistics and philosophy.
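The "list-based" baseline the abstract compares against can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the lexicon entries and messages below are invented placeholders, whereas the actual study uses curated lexicons and a manually annotated Telegram corpus.

```python
# Hypothetical sketch of a list-based (lexicon) flagger for offensive language.
import re

# Placeholder lexicon; real studies use curated hate-speech word lists.
LEXICON = {"traitor", "scum", "vermin"}

def flag_message(text, lexicon=LEXICON):
    """Return True if any lexicon term appears in the text as a whole word."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(tok in lexicon for tok in tokens)

flag_message("pure scum, all of them")   # True: exact match on "scum"
flag_message("They are all traitors")    # False: "traitors" != "traitor"
```

The second call illustrates why such data "poses particular challenges" for list-based methods: exact-token matching misses morphological variants, and it cannot catch coded terms, in/out-group divisive framing, or other indirect harms at all, which is precisely what the manual taxonomy in the paper is designed to capture.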
Philosophical Studies
Slurring is a kind of hate speech that has various effects. Notable among these is variable offence. Slurs vary in offence across words, uses, and the reactions of audience members. Patterns of offence aren't adequately explained by current theories. We propose an explanation based on the unjust power imbalance that a slur seeks to achieve. Our starting observation is that in discourse participants take on discourse roles. These are typically inherited from social roles, but only exist during a discourse. A slurring act is a speech-act that alters the discourse roles of the target and speaker. By assigning discourse roles the speaker unjustly changes the power balance in the dialogue. This has a variety of effects on the target and audience. We show how these notions explain all three types of offence variation. We also briefly sketch how a role and power theory can help explain silencing and appropriation. Explanatory power lies in the fact that offence is correlated with the perceived unjustness of the power imbalance created.
Philosophical views of language have traditionally been focused on notions of truth. This is a reconstructive view in that we try to extract from an utterance in context what the sentence and speaker meaning are. This focus on meaning extraction from word sequences alone, however, is challenged by utterances which combine different types of figures. This paper argues that what appears to be a special case of ironic utterances (ironic metaphorical compounds) sheds light on the requirements for psychological plausibility of a theory of communication, and thus presents a different view of communication and language to that dominant in philosophy of language. In the view presented here, the hearer does not extract the speaker's communicative intention from the sequence of words in the utterance, but from other channels (gesture, intonation, facial expression), so as to constrain the inferential space for the sentence and speaker meaning. Specifically, we examine an example of ironic metaphor discussed by Stern (2000). He argues that ironic content is logically dependent on metaphorical content, but makes no claims about how psychologically plausible this is in terms of the processing order. We argue that a straightforward translation of logical order into temporal order makes little sense. The primary sticking point is that without a prior understanding of the speaker's communicative intentions, it is computationally more challenging to process the sub-component meanings. An alternative solution based on communicative channels leads us to a more psychologically plausible account of the structure of communicative acts and intentions. This provides support for the psychological realism of a richer theory of communicative intent.
Harmful and dangerous language is frequent in social media, in particular in spaces which are considered anonymous and/or allow free participation. In this paper, we analyse the language in a Telegram channel populated by followers of Donald Trump, in order to identify the ways in which harmful language is used to create a specific narrative in a group of mostly like-minded discussants. Our research has several aims. First, we create an extended taxonomy of potentially harmful language that includes not only hate speech and direct insults, but also more indirect ways of poisoning online discourse, such as divisive speech and the glorification of violence. We apply this taxonomy to a large portion of the corpus. Our data gives empirical evidence for harmful speech such as in/out-group divisive language and the use of codes within certain communities which have not often been investigated before. Second, we compare our manual annotations to several automatic methods of classifying hate speech and offensive language, namely list-based and machine-learning-based approaches. We find that the Telegram data set still poses particular challenges for these automatic methods. Finally, we argue for the value of studying such naturally occurring, coherent data sets for research on online harm and how to address it in linguistics and philosophy.
We provide a new text corpus from the social medium Telegram, which is rich in indirect forms of divisive speech. We scraped all messages from one channel of supporters of Donald Trump, covering a large part of his presidency from late 2016 until January 2021. The discussion among the group members over this long time period includes the spread of disinformation, disparaging of out-group members, and other forms of offensive speech. To encourage research into such practices of poisoning public political discourse, we added automatic annotations of offensive language to all messages. We further added manual annotations of harmful language to a portion of the posts in order to enable the analysis of more implicit forms of online harm.
Speech can be used to change societies in bad ways. It supports institutional oppression, establishes new oppressive norms, silences opponents, spreads disinformation and propagates feelings of hate. Online communities magnify the effects of individual speech acts. We'll look at social norms and institutions, silencing and free speech, social meaning, norm-shifting and disinformation. We'll seek answers to how oppressive speech works and how to defend against it.
"The social institution of discursive norms" L. Townsend, P. Stovall, and H. B. Schmid (Ed.). Routledge/Taylor & Francis., 2021