
Introduction to the Science of Meaning

2018, Oxford Scholarship Online

This Introduction aims to acquaint the reader with some of the main views on the foundations of natural language semantics, to discuss the type of phenomena semanticists study, and to give some basic technical background in compositional model-theoretic semantics necessary to understand the chapters in this collection. Topics discussed include truth conditions, compositionality, context-sensitivity, dynamic semantics, the relation of formal semantic theories to the theoretical apparatus of reference and propositions current in much philosophy of language, what semantic theories aim to explain, realism, the metaphysics of language and different views of the relation between languages and speakers, and the epistemology of semantics.

OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi

The Science of Meaning
Essays on the Metatheory of Natural Language Semantics
Edited by Derek Ball and Brian Rabern

Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© the several contributors 2018
The moral rights of the authors have been asserted
First Edition published in 2018
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data: Data available
Library of Congress Control Number: 2017961365
ISBN 978–0–19–873954–8
Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.

Contents

Introduction to the Science of Meaning (Derek Ball and Brian Rabern)
1. What is—or, for that matter, isn’t—‘experimental’ semantics? (Pauline Jacobson)
2. Axiomatization in the Meaning Sciences (Wesley H. Holliday and Thomas F. Icard, III)
3. David Lewis on Context (Robert Stalnaker)
4. From Meaning to Content: Issues in Meta-Semantics (François Récanati)
5. Reviving the Parameter Revolution in Semantics (Bryan Pickel, Brian Rabern, and Josh Dever)
6. Changing Notions of Linguistic Competence in the History of Formal Semantics (Barbara H. Partee)
7. Lexical Meaning, Concepts, and the Metasemantics of Predicates (Michael Glanzberg)
8. Interpretation and the Interpreter: On the Role of the Interpreter in Davidsonian Foundational Semantics (Kathrin Glüer)
9. Expressing Expectations (Inés Crespo, Hadil Karawani, and Frank Veltman)
10. Fregean Compositionality (Thomas Ede Zimmermann)
11. Semantic Typology and Composition (Paul M. Pietroski)
12. Semantics as Model-Based Science (Seth Yalcin)
13. Semantic Possibility (Wolfgang Schwarz)
14. Semantics as Measurement (Derek Ball)
Index

Contributors

Derek Ball, University of St Andrews
Inés Crespo, Institut Jean-Nicod and New York University, Paris
Josh Dever, University of Texas, Austin
Michael Glanzberg, Northwestern University
Kathrin Glüer, Stockholm University
Wesley H. Holliday, University of California, Berkeley
Thomas F. Icard, III, Stanford University
Pauline Jacobson, Brown University
Hadil Karawani, ZAS Berlin
Barbara H. Partee, University of Massachusetts, Amherst
Bryan Pickel, University of Edinburgh
Paul Pietroski, Rutgers, The State University of New Jersey
Brian Rabern, University of Edinburgh
François Récanati, Institut Jean-Nicod, Paris
Wolfgang Schwarz, University of Edinburgh
Robert Stalnaker, Massachusetts Institute of Technology
Frank Veltman, University of Amsterdam
Seth Yalcin, University of California, Berkeley
Thomas Ede Zimmermann, Goethe-Universität Frankfurt

Introduction to the Science of Meaning
Derek Ball and Brian Rabern

By creating certain marks on paper, or by making certain sounds—breathing past a moving tongue—or by articulation of hands and bodies, language users can give expression to their mental lives. With language we command, assert, query, emote, insult, and inspire. Language has meaning. This fact can be quite mystifying, yet a science of linguistic meaning—semantics—has emerged at the intersection of a variety of disciplines: philosophy, linguistics, computer science, and psychology.

Semantics is the study of meaning. But what exactly is “meaning”? What is the target of semantic theory? There is a very wide range of views on the issue, ranging from the claim that semantics studies something psychological (e.g. a semantic faculty or brain processing), to the claim that semantics studies something about conventions of the linguistic community, to the claim that semantics studies abstract mathematical structures—among many other possibilities. And even if we knew the target, we would face a range of further questions: how should we try to characterize the target? What would a theory of the target look like?
The aims of this Introduction are to acquaint the reader with some of the main views different theorists have taken on these difficult issues; to discuss the type of phenomena semanticists study; and to give some basic technical background in compositional model-theoretic semantics necessary to understand the chapters in this collection. We begin with the last of these tasks.

I.1 Basics of Formal Semantics

While foundational issues remain highly controversial, there is a dominant approach to first-order semantic theorizing, which takes the form of a compositional, model-theoretic, and truth-conditional (or interpretive) semantics. We first present some of the important pillars of this style of approach, before turning to the metatheoretical issues.

I.1.1 Truth conditions

Consider the alarm calls of certain primates (e.g. West African guenons). These monkeys produce different vocalizations to represent the presence of different predators (see Seyfarth et al. 1980; Schlenker et al. 2014). For example, one such group tends to make the sound “hok” when there is an eagle (or an eagle-like disturbance) present, and they tend to make the sound “krak” when there is a leopard (or a leopard-like disturbance) present. Upon hearing “hok” and “krak” the monkeys respond in certain appropriate ways to the different threats, by hiding in the bushes or by climbing up the trees. Here it is very natural to describe the situation by saying that “hok” means there is an eagle nearby, while “krak” means there is a leopard nearby. The different expressions “hok” and “krak” (call these sentences, if you like) correctly apply to different types of threat situations—the conditions required for their truth are different: “hok” is true in a situation just in case there is an eagle nearby, while “krak” is true in a situation just in case there is a leopard nearby.
The situations relative to which a call would be true are its truth conditions. Using denotation brackets, ⟦·⟧, to map a call type to the set of situations in which it is true, we can write the following:1

• ⟦hok⟧ = {v | there is an eagle nearby in v}
• ⟦krak⟧ = {v | there is a leopard nearby in v}

Given this systematic connection between call types and threat situations, “hok” or “krak” can be used to communicate different information.2 Human communication is much more complicated in various respects, but in the same way that the monkey calls make a division among various threat situations, our declarative utterances make a division among various situations.3 If a speaker utters a declarative sentence, then there is a specific way things would have to be in order to be as the speaker said they were. And hearers who understand the language, then, know which way the speaker said things were. For example, imagine that I’m

1 Here we are loosely following the common notational convention initiated by Dana Scott, where we let the equation ⟦φ⟧^A_i = 1 mean “φ is true at point i relative to model A” (Scott 1970: 150–1). See Rabern (2016) for a brief history of this notational convention.
2 Some might insist that such calls are too involuntary and/or stimulus-dependent to have genuine “meaning”: the type of information they encode or the type of “meaning” they have is, one might insist, merely akin to “natural meaning” in the sense of Grice (1957) (e.g. the meaning of the rings of a tree). This is an important issue but we must gloss over it here. See Skyrms (2010) for a detailed discussion of these issues.
3 The idea that informational content or representational content is best understood in terms of dividing up possibilities of some sort is associated with Wittgenstein (1922), and developed by, for example, Stalnaker (1984) and Lewis (1986: §1.4).
The view is expressed succinctly by Frank Jackson (2001: 129) as follows:

[C]ontent is, somehow or other, construable in terms of a division among possibilities. For to represent how things are is to divide what accords with how things are being represented as being from what does not accord with how things are being represented as being. In slogan form: no division, no representation.

about to throw two dice, thus displaying two numbers face up.4 As a competent speaker of English you know in which of the following situations (1) would be true.

(1) The dice add up to eleven.

[figure: four depictions of the two dice, labelled a.–d.]

That is, you know that—out of the thirty-six possible states of the pair of dice—(1) is only true in situations b and d. The set of possible situations provides the truth conditions for the sentence, which we can indicate (ignoring everything about situations except for what the two dice show) as follows:

• ⟦The dice add up to eleven⟧ = {b, d}

For any such sentence concerning the numbers displayed on the dice a competent speaker could divide the space of scenarios in the relevant ways. In general, competent speakers of a language have the ability to judge whether sentences are true or false relative to various actual or counterfactual scenarios. Cresswell (1982) went so far as to say that this is the most certain thing we know about “meaning”.

Cresswell’s principle: If there is a situation relative to which a sentence φ is true but a sentence ψ is false, then φ and ψ differ in meaning.

The idea that the meaning (or sense) of a sentence is intimately connected to its truth conditions is historically associated with both Frege and Wittgenstein:

it is determined under which conditions [a sentence] refers to the True. The sense of this [sentence], the thought, is: that these conditions are fulfilled. (Frege 1893: §32)

Einen Satz verstehen, heißt, wissen was der Fall ist, wenn er wahr ist.
(Wittgenstein 1922: §4.024)5

Frege, Russell, and early Wittgenstein were primarily concerned with formal or ideal languages, which could serve certain purposes in the foundations of science and mathematics.6 This focus on formal languages was propagated by Carnap and the logical positivists. An important moment in this history was Tarski’s presentation at the 1935 Scientific Philosophy congress in Paris called “Foundations of Scientific Semantics”—where he first aired his formal theory of truth (Tarski 1936). This work planted the seeds for a semantic revolution.

4 This type of illustration is inspired by Kripke (1980: 15–17) (see the passage quoted in footnote 10) and the discussion in Portner (2005: 12–20).
5 In English: “To understand a sentence means to know what is the case, if it is true.” The passage continues, “(One can understand it, therefore, without knowing whether it is true.)” See also Wittgenstein (1922: §4.26–4.5).
6 For some of the history here see, for example, Partee (2011) and Harris (2017), and the references therein.

Consider the formal language of propositional logic. First we define the formal syntax. Let S be an infinite set of sentence letters = {p, q, r, . . . }, such that for α ∈ S the language is generated by the following grammar:7

φ ::= α | ¬φ | (φ ∧ φ)

For this formal language we then provide a semantics. A truth-value (0 or 1) for a sentence is given relative to a valuation or an interpretation—an interpretation is a function v : S → {0, 1}. For every sentence of the language we can define its truth-value relative to an interpretation v as follows:

• ⟦α⟧^v = 1 iff v(α) = 1
• ⟦¬φ⟧^v = 1 iff ⟦φ⟧^v = 0
• ⟦(φ ∧ ψ)⟧^v = 1 iff ⟦φ⟧^v = 1 and ⟦ψ⟧^v = 1

We can use this framework to pair each sentence with its truth conditions. For example, the truth conditions of (p ∧ q) are given by the set of interpretations that make it true (cf. Wittgenstein 1922: §4.26–4.5):8

⟦(p ∧ q)⟧ = {v | v(p) = 1 and v(q) = 1}

Notice that here the “way things would have to be” in order for (p ∧ q) to be true is cashed out in terms of constraints on the interpretations (or models). The different interpretations can be—and have been, for example by Carnap (1947)—understood as different “possible worlds”.9

In the 1960s, Donald Davidson (1967) and Richard Montague (1968, 1970b, 1973) developed strategies for transplanting Tarski’s (1936) definition of truth and the related semantical and logical notions from formal languages to natural languages.10 Montague’s strategy is model-theoretic: it aims to characterize meaning by associating linguistic elements with elements of mathematical structures. Davidson’s strategy, on the other hand, eschews relativization to model-theoretic structures, and instead characterizes meaning in terms of disquotational T-sentences.11 But the general idea on either approach is that a key aim of a semantic theory is to pair declarative sentences with their truth conditions. This is the approach developed in contemporary semantics textbooks such as Larson and Segal (1995); Heim and Kratzer (1998); Chierchia and McConnell-Ginet (2000); and Jacobson (2014). Yet, the aim is not simply to pair sentences with the conditions in which they’d be true; a simple list could not be a semantic theory for any interesting language. The aim is to pair sentences with truth conditions in a particularly systematic way, which we develop in more detail in section I.1.2.

7 We provide the formation rules using a version of the convenient Backus-Naur notation (Backus et al. 1963). The interpretation is clear, but for purists note that we use α and φ for metavariables instead of defining, say, ⟨atomic sentence⟩ and ⟨sentence⟩.
8 Tautologies are true relative to every interpretation, while contradictions are false relative to every interpretation.
9 Carnap says his “state-descriptions represent Leibniz’s possible worlds or Wittgenstein’s possible states of affairs” (Carnap 1947: 9). There are difficult issues here concerning the distinction between different ways things could have been versus different things expressions could have meant, which we will gloss over. Moving towards contemporary possible worlds semantics we would enrich the models with classes of interpretations and relations between such classes, or with indices and a binary accessibility relation between the indices. The genesis of possible world semantics took place in the mid-twentieth century, in works such as Carnap (1947); Prior (1956, 1957); Hintikka (1957); Kripke (1959, 1963); and Montague (1960). See Copeland (2002) for a detailed account of this history. We also won’t worry here over the nature of possible worlds (for that, see Lewis 1986 or Menzel 2016) except to quote this helpful passage from Kripke:

Two ordinary dice (call them A and B) are thrown, displaying two numbers face up. For each die, there are six possible results. Hence there are thirty-six possible states of the pair of dice, as far as the numbers shown face-up are concerned . . . We all learned in school how to compute the probabilities of various events . . . Now in doing these school exercises in probability, we were in fact introduced at a tender age to a set of (miniature) ‘possible worlds’. The thirty-six possible states of the dice are literally thirty-six ‘possible worlds’ . . . . (Kripke 1980: 16)

I.1.2 The principle of compositionality

In the following often quoted passage Gottlob Frege (1923/1963) makes an important observation about the productivity of language:

It is astonishing what language can do.
With a few syllables it can express an incalculable number of thoughts, so that even a thought grasped by a human being for the very first time can be put into a form of words which will be understood by someone to whom the thought is entirely new.

Speakers of a language are able to produce sentences which they have never before produced, the utterances of which are understandable by speakers of a language who have never before encountered the sentence. For example, over the many years that humans have been speaking a language we can be fairly confident that no one has ever uttered the following sentence (or even a sentence synonymous with it):

(2) A surrealist painter and a young French mathematician landed on the icy surface of the largest moon of Saturn.

10 For example, Montague states: “I regard the construction of a theory of truth—or rather, of the more general notion of truth under an arbitrary interpretation—as the basic goal of serious syntax and semantics” (Montague and Thomason 1974: 188), while Davidson likewise insists that “the semantical concept of truth” provides a “sophisticated and powerful foundation of a competent theory of meaning” (Davidson 1967: 310). See also Lewis (1970).
11 This is not to suggest that theories can always be cleanly divided into Montagovian or Davidsonian. Heim and Kratzer’s very Montagovian textbook, at points, adopts some very Davidsonian positions:

Only if we provide a condition do we choose a mode of presentation that ‘shows’ the meaning of the predicates and the sentences they occur in. Different ways of defining the same extensions, then, can make a theoretical difference. Not all choices yield a theory that pairs sentences with their truth-conditions. Hence not all choices lead to a theory of meaning. (Heim and Kratzer 1998: 22)

See Yalcin’s Chapter 12 in this volume for discussion of this.
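Looking back briefly at section I.1.1: the recursive truth-definition for propositional logic can be implemented directly. The following minimal sketch is our own illustration; the tuple encoding of formulas and the function names are assumptions made for the example, not anything from the text.

```python
# Formulas as nested tuples: a sentence letter is a string,
# ("not", phi) is negation, ("and", phi, psi) is conjunction.

def denotation(phi, v):
    """Compute [[phi]]^v relative to an interpretation v,
    given as a dict mapping sentence letters to 0 or 1."""
    if isinstance(phi, str):          # [[alpha]]^v = v(alpha)
        return v[phi]
    if phi[0] == "not":               # [[¬phi]]^v = 1 iff [[phi]]^v = 0
        return 1 - denotation(phi[1], v)
    if phi[0] == "and":               # [[(phi ∧ psi)]]^v = 1 iff both conjuncts are 1
        return denotation(phi[1], v) * denotation(phi[2], v)
    raise ValueError(f"unknown connective in {phi!r}")

# Truth conditions of (p ∧ q): the set of interpretations making it true.
interpretations = [{"p": a, "q": b} for a in (0, 1) for b in (0, 1)]
true_at = [v for v in interpretations if denotation(("and", "p", "q"), v) == 1]
# true_at contains exactly the interpretation where v(p) = v(q) = 1
```

Restricted to the letters p and q there are only four interpretations to survey; with a genuinely infinite set S of sentence letters, the set-theoretic characterization in the text does the same work without any such enumeration.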
Yet, all competent English speakers immediately understand it, and know what would have to be the case for it to be true. Relatedly, our language seems systematic in the following sense: if a speaker understands “David loves Saul” then they also understand “Saul loves David”. Understanding a sentence seems to involve some competence with different ways of putting those parts together. Such linguistic phenomena call for explanation. The hypothesis that natural languages are “compositional” is standardly thought to be the best explanation.

Principle of compositionality: The meaning of an expression is determined by the meanings of its parts and the way they are syntactically combined.

If the language is compositional, then this is thought to explain how a competent speaker of a language can compute the meanings of novel sentences from the meanings of their parts plus their structure. But exactly what an explanation of these phenomena requires, and exactly what sorts of theory should count as compositional, remain controversial issues. (See Zimmermann (Chapter 10, this volume) for relevant discussion of the nature of, and motivation for, compositionality.)

In order to spell out the compositionality constraint in more detail let’s continue to adopt the methods of the model-theoretic tradition. The model-theoretic tradition, following Montague and Lewis, often makes use of type theory.12 The idea has two parts. First, atomic expressions are assigned as their semantic values entities of a particular type: for example, an element of some domain. Second, composition rules are specified that determine the semantic value of a complex expression on the basis of the semantic values of its components.
Typically, the most basic of these rules is function application, which can be stated informally as follows: the semantic value of a complex expression is the result of applying the semantic value of one of its immediate syntactic constituents to the semantic value(s) of the other(s). The idea that all semantic composition proceeds in this way has been called Frege’s conjecture, since he said “it is a natural conjecture that logical combination of parts into a whole is always a matter of saturating something unsaturated” (Frege 1923).

For a toy example, consider again the language of propositional logic. The basic type t is the type for propositions or sentences, the type of expressions whose semantic values are truth-values. We follow tradition in characterizing other available types recursively as follows (but see Pietroski’s Chapter 11 in this volume for criticism of the claim that typologies of this kind are suitable for natural language semantics):

• If a and b are semantic types, then ⟨a, b⟩ is a semantic type (the type of expressions whose semantic values are functions from the semantic values of expressions of type a to the semantic values of expressions of type b). And nothing else is a type.13

We will assume that the semantic value of any atomic sentence is whatever truth value is provided by the interpretation v.

12 See Lewis (1970) and Montague (1973). The essential ideas relating to categorial grammars stretch back to Ajdukiewicz (1935) and Bar-Hillel (1953).
⟦α⟧ = v(α), for all sentence letters α ∈ S

We will follow the tradition in natural language semantics of assuming that all syntactic branching is binary (so that any complex expression will have at most two immediate constituents) (Heim and Kratzer 1998).14 Given this assumption, Frege’s conjecture suggests the following composition rule:

Functional application: If α is a node with branches {β, γ}, and ⟦γ⟧ is in the domain of ⟦β⟧, then ⟦α⟧ = ⟦β⟧(⟦γ⟧).

It remains to state the semantic values of the logical constants. Consider a sentence of propositional logic such as ¬r. Sentences are type t, so ⟦r⟧ is a truth-value, either 0 or 1 (we suppress the interpretation parameter v, which maps atomic sentences to 0 or 1). The sentence ¬r is also type t. Thus, it is clear that the semantic value of “¬” must be type ⟨t, t⟩, a function which takes a truth value and maps it to a truth value—it is a truth-functional operator after all. In particular, it will be the function that takes a truth value and gives back the opposite truth value. In lambda notation:

⟦¬⟧ = λp_t. 1 − p

Conjunction will work in a similar way. Consider a conjunction such as (s ∧ r). It is type t, and so are both its conjuncts—both ⟦s⟧ and ⟦r⟧ are either 0 or 1. Thus, the semantic value of ‘∧’ must be a function that takes truth values as input and delivers a truth value as output. A natural first thought would be that the semantic value of a conjunction is a two-place function, which maps two truth values to a truth value.
But we are assuming that all syntactic branching is binary, so the syntactic structure of (s ∧ r) will look something like this (in labelled-bracket notation):

[s [∧ r]]

Since our composition rule assumes that the semantic value of a complex expression is determined by its immediate constituents, and since the immediate constituents of (s ∧ r) are s and ∧r, the semantic value of the sentence must be a function of ⟦s⟧ and ⟦∧ r⟧. So ⟦∧⟧ must be something that can combine with ⟦r⟧ (a truth value) to produce a function from truth values to truth values—something of type ⟨t, ⟨t, t⟩⟩. In lambda notation:

⟦∧⟧ = λq_t[λp_t. p × q]

13 Note that we have made the simplifying assumption that there are no intensional types, for example functions from interpretations to truth values or “possible worlds” to truth-values. Such types could easily be introduced, if one wanted to raise the sentence types to be functions from worlds to truth values.
14 Many earlier theories in syntax (and semantics) did not assume that all branching was binary, but many current theories take it as a theoretical constraint (cf. Kayne 1983); for example, the assumption of binary branching is a guiding principle of the Minimalist Program (see Chomsky 1995).

We now have the materials to compute the semantic values of complex sentences. For example, consider a complex sentence such as ¬(s ∧ r). Its semantic value is computed as follows:

⟦¬(s ∧ r)⟧ = 1 iff ⟦¬⟧(⟦(s ∧ r)⟧) = 1
  iff 1 − ⟦(s ∧ r)⟧ = 1
  iff 1 − ⟦∧⟧(⟦r⟧)(⟦s⟧) = 1
  iff 1 − [λp_t. p × ⟦r⟧](⟦s⟧) = 1
  iff 1 − ⟦s⟧ × ⟦r⟧ = 1
  iff v(s) × v(r) = 0

That’s how composition by functional application works on a simple formal language. Let’s turn our attention to something more reminiscent of a natural language. Assume the language at issue only has the type of sentences one finds in children’s “reader books” such as the following:

(3) Ansel runs.
(4) Hazel loves Ansel.
(5) Everyone loves Hazel.
(6) Hazel runs and Ansel runs.

To generate these sentences assume the lexicon is given by the set {everyone, not, and, Ansel, Hazel, runs, loves}, and let the well-formed sentences of the language be provided by the following grammar:

⟨sentence⟩ ::= ⟨name⟩ ⟨predicate⟩ | Everyone ⟨predicate⟩ | ⟨sentence⟩ and ⟨sentence⟩ | not ⟨sentence⟩
⟨name⟩ ::= Ansel | Hazel
⟨predicate⟩ ::= runs | loves ⟨name⟩

As desired, this grammar yields that sentences (3)–(6) are grammatical. To define the semantics we first recursively define the semantic types and provide their domains. The types are as follows:

• e and t are types
• If x and y are types, then ⟨x, y⟩ is a type
• If x is a type, then ⟨s, x⟩ is a type
• Nothing else is a type

Each type is associated with a particular kind of semantic value: a semantic domain. The semantic domain D_x for a type x is defined in terms of the set of individuals D, a set of possible situations or worlds W, and the set of truth values {0, 1}.

(i) D_e = D
(ii) D_t = {0, 1}
(iii) D_⟨x,y⟩ = the set of all functions from D_x to D_y, for any types x and y
(iv) D_⟨s,x⟩ = the set of all functions from W to D_x, for any type x

Clause (iv) lets us describe expressions which have as their semantic values functions from possible worlds to other entities; for example, on one prominent view, the semantic values of sentences are functions from worlds to truth values. This kind of view is often motivated by the role it can play in giving a semantics for expressions like “might” and “believes”, which seem to take sentential complements, but are not simply truth-functional operators. Our toy language lacks intensional constructions of this kind; it has no expressions that can be given a compositional semantic treatment only by (for example) quantifying over worlds. But this does not mean that relativizing to possible worlds is superfluous.
We are assuming that the semantics should determine the truth conditions for each sentence, and one good way of representing truth conditions is by a set of possible worlds (the worlds relative to which the sentence would be true). That is, if the truth conditions for a sentence are what would have to be the case for the sentence to be true, then truth conditions make divisions among the space of possibilities.15 Thus, one might insist that the semantic theory should yield intensions regardless of whether or not the language has intensional constructions (see Yalcin’s Chapter 12 in this volume for a nice discussion of this point).16

15 Dowty et al. (1981) state that in

giving the truth conditions for a sentence we make reference to ‘how the world would have to be’ in order for the sentence to be true. Thus the meaning of a sentence depends not just on the world as it in fact is, but on the world as it might be, or might have been, etc.—i.e., other possible worlds. (p. 12)

Of course, there are alternative understandings of “truth conditions”, in particular the Davidsonian conception where truth conditions are material biconditionals of a special sort (cf. Larson and Segal 1995). Even if such a framework can meet its local explanatory aims—providing an explanation of linguistic knowledge—there might be other reasons to insist that the semantic theory should yield intensional types, for example in order to plug into various “postsemantic” theories such as a Stalnakerian pragmatics (Stalnaker 2014). But see Glanzberg (2009) who argues against “the claim that the relativity of truth to a world plays any empirically significant role in semantics” (p. 300). Note also that some Davidsonians do end up appealing to possible worlds, but they only do so to accommodate attitude or modal constructions, for example Larson and Segal (1995: §11.2.2), who provide clauses such as the following: Val(x, jumps, w) iff x jumps in w.
16 This is not to say that there might be human languages that have no intensional constructions—presumably there aren’t any grammatically tense-, mood-, and aspect-less human languages. But there are other representational systems whose semantics involve intensional types even though they lack intensionality, for example the monkey calls, discussed in Schlenker et al. (2014), or the semantics of pictures discussed in Blumson (2010) and Greenberg (2013). There is a lengthy discussion of hypothetical languages that lack intensional constructions in Dever (1998: see §2.3.4.2).

With this clarification in place, we will proceed to describe semantic values relative to a world—⟦φ⟧^w (where ⟦φ⟧ is a function from worlds to some other entity, and ⟦φ⟧^w is ⟦φ⟧ applied to w)—and we will likewise state extensional composition rules that compose semantic values relative to a world. Consider a simple subject-predicate declarative sentence of our toy language such as:

(7) Hazel runs.

Since we are composing values relativized to a world, a sentence such as (7) is presumably type t, and we will assume that a simple proper name such as “Hazel” or “Ansel” is of type e.

⟦Ansel⟧^w = Ansel
⟦Hazel⟧^w = Hazel

What semantic value should we assign to “runs” in light of these hypotheses? It depends on how we want the semantic value of “runs” to combine with the semantic value of “Hazel”. As discussed, we take on the conjecture that the manner in which semantic values combine is by function application.
So, the semantic value of “runs” must be a function which takes an individual (like Hazel) and maps it to a truth value, so type ⟨e, t⟩; in particular, that function from individuals to truth values whose value is 1 just in case the individual runs and 0 otherwise, which we can specify as follows:

⟦runs⟧^w = the function h such that, given any x ∈ D_e, h(x) = 1 if x runs in w, and h(x) = 0 otherwise

Or, following our convention of specifying functions in lambda notation, we will simply write:17

⟦runs⟧^w = λx_e . x runs in w

Likewise, the semantic value of “loves” must be a function which takes an individual to a function from individuals to truth values, type ⟨e, ⟨e, t⟩⟩.

⟦loves⟧^w = λx_e λy_e . y loves x in w

But what about the quantifier phrase? Since it combines with a predicate to form a sentence, and predicates are type ⟨e, t⟩, it must be type ⟨⟨e, t⟩, t⟩—a predicate of predicates.

⟦everyone⟧^w = λf_⟨e,t⟩ . ∀x_e (f(x) = 1)

We complete the lexical entries by providing negation and conjunction with their standard boolean operations.

⟦not⟧^w = λp_t . 1 − p
⟦and⟧^w = λq_t λp_t . p × q

Consider a sentence of the language with the types and values just assigned.

[Tree diagram: “Everyone loves Hazel”, where the whole sentence is of type t, “Everyone” is of type ⟨⟨e, t⟩, t⟩, “loves” is of type ⟨e, ⟨e, t⟩⟩, and “Hazel” is of type e.]

Again we compose via a single function application rule:

Functional application: If α is a node with branches {β, γ}, and ⟦γ⟧^w is in the domain of ⟦β⟧^w, then ⟦α⟧^w = ⟦β⟧^w(⟦γ⟧^w).

We can compute the semantic value of the sentence relative to a world as follows:

⟦Everyone loves Hazel⟧^w = 1 iff ⟦Everyone⟧^w(⟦loves Hazel⟧^w) = 1
  iff ∀x_e (⟦loves Hazel⟧^w(x) = 1)
  iff ∀x_e ([λy_e . y loves Hazel in w](x) = 1)
  iff ∀x_e (x loves Hazel in w)

In this way the semantic values of the basic expressions of the language plus the composition rule determine for each sentence of the language its semantic value—where the semantic value of a sentence determines the truth value that the sentence would have relative to any possible world. In this case the compositionally derived truth conditions are the following:

⟦Everyone loves Hazel⟧ = {w | ∀x_e (x loves Hazel in w)}

That, at least, is the basic idea. This strategy can be extended in various ways to cover more sophisticated fragments of natural language. Although it is straightforward to add other quantifier phrases, such as “someone” and “no one”, and to work out the internal composition of quantifier phrases (“every boy”, “the boy”), immediate issues will arise with quantifier phrases in object position, for example “Hazel loves everyone”, relative clauses, and variable binding.

17 Note that here we use lambda notation in our (mathematically extended) English metalanguage as an informal notation for describing functions—in contrast to Montague (1973) or Dowty et al. (1981), where English is first translated into a formal language of the lambda calculus, for example Lλ, and then the formal language is given a semantic interpretation (see Dowty et al. 1981: 98–111). This follows standard practice; see for example Heim and Kratzer (1998: §2.1.3 and §2.5). The basic convention here is that the notation “λυ[χ]” (or, using dots to cut down on brackets, “λυ.χ”) abbreviates “the function that maps every υ to χ”. But since this clashes with the grammatical difference between a sentence and a name, when χ is sentential let “λυ[χ]” instead abbreviate “the function which maps every υ to 1 if χ, and to 0 otherwise”; for example, for f = λx[x + 2], f(2) = 4, but for g = λx[x = 2], g(2) = 1. We could avoid the ambiguity in the notation by forcing all the sentential constructions into the form of descriptions of a truth value, for example ⟦runs⟧^w = λx_e . [the number n such that (x runs in w → n = 1) ∧ (x doesn’t run in w → n = 0)], but the point of the convention is abbreviation.
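Before turning to those complications, the basic fragment itself can be prototyped directly. The following sketch is purely illustrative: the domain and the facts about who runs and who loves whom are invented, and truth values are modelled as 0/1, mirroring the lexical entries and the single functional application rule above.

```python
# A toy interpretation of the fragment, relative to one fixed world w.
# Individuals (type e) are strings; truth values (type t) are 0/1;
# functional types are Python callables. All facts are hypothetical.
D = {"Hazel", "Alex", "Jo"}                       # domain of individuals De

RUNS_W = {"Hazel", "Jo"}                          # who runs in w
LOVES_W = {("Alex", "Hazel"), ("Jo", "Hazel"),    # (y, x): y loves x in w
           ("Hazel", "Hazel")}

# Lexical entries at world w:
runs = lambda x: 1 if x in RUNS_W else 0                    # type <e,t>
loves = lambda x: lambda y: 1 if (y, x) in LOVES_W else 0   # type <e,<e,t>>
everyone = lambda f: 1 if all(f(x) == 1 for x in D) else 0  # type <<e,t>,t>
not_ = lambda p: 1 - p                                      # type <t,t>
and_ = lambda q: lambda p: p * q                            # type <t,<t,t>>

def fa(beta, gamma):
    """Functional application: apply one daughter's value to the other's."""
    return beta(gamma)

# [[Everyone loves Hazel]]^w, composed bottom-up:
loves_hazel = fa(loves, "Hazel")        # type <e,t>
sentence = fa(everyone, loves_hazel)    # type t
```

Since every individual in the toy domain loves Hazel in this invented world, the sentence comes out true (value 1) there; changing `LOVES_W` changes the verdict, just as varying the world parameter would.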
By now such issues are well known, with various competing solutions, and the strategy has been extended well beyond such relatively simple constructions (see collections such as Portner and Partee 2002; and Partee 2004). It remains controversial whether natural language has a compositional semantics, and what such a semantics would look like if it does. And there are many types of constructions that have been alleged to present a challenge to compositionality, for example anaphora, idioms, quotation, and attitude reports. Yet it has proved useful to treat compositionality (construed in something like the way we have just described) as a sort of methodological hypothesis—lots of interesting theories have been produced by treating compositionality as a desideratum, which suggests that there is probably something to it (see Partee 1984, 2004; and Dever 1999).18

I.1.3 Context and discourse

The approach to semantics outlined above has proven to be a fruitful and valuable line of linguistic research. There are now sophisticated theories of linguistic phenomena that were not even known to exist mere decades ago. But our discussion so far has ignored some fairly obvious facts about natural languages. We have focused on sentences, but there are certain parts of language for which a semantic approach that puts primary focus on truth and the truth conditions of sentences seems ill suited. For example, sentences such as “That is red” or “I’m spiteful” don’t have truth conditions—they are only true (or false) on particular uses. In this way, the semantic properties of an utterance depend on various complex features of the pragmatic context, for example what objects are demonstrated or who is speaking.
Relatedly, phenomena such as anaphora and presupposition projection call into question whether sentences even have truth conditions in abstraction from a particular conversational context, and many theorists have seen this as motivating theories that take the entire discourse or text, instead of an isolated sentence, as the basic object of semantic inquiry. Other sentences, while perhaps being truth-apt, seem to involve indeterminacy such that they are neither true nor false on particular uses, for example vague predicates (“Alex is bald”) and future contingents (“There will be a sea battle tomorrow”). Even worse, it seems, is the threat from “subjective” language: some sentences don’t seem to concern “matters of fact”. Consider language concerned with aesthetics (“The sculpture is beautiful”), taste (“Liquorice is tasty”), or morality (“Stealing is wrong”).

18 An interesting alternative way to construe the methodological role of compositionality is the following suggestion from Dowty:

To put the focus and scope of research in the right place, the first thing to do is to employ our terminology differently. I propose that we let the term natural language compositionality refer to whatever strategies and principles we discover that natural languages actually do employ to derive the meanings of sentences, on the basis of whatever aspects of syntax and whatever additional information (if any) research shows that they do in fact depend on. Since we don’t know what all those are, we don’t at this point know what “natural language compositionality” is really like; it’s our goal to figure that out by linguistic investigation. Under this revised terminology, there can be no such things as “counterexamples to compositionality”, but there will surely be counterexamples to many particular hypotheses we contemplate as to the form that it takes. (Dowty 2007: 27)
Does it make sense to ask under what conditions such sentences would be true? Non-declarative sentences pose another threat; it is unnatural at best to ascribe truth or falsity to a question such as “Who ate the tempeh?”, to a command such as “Eat your broccoli”, to an exclamation such as “Ouch!”, or to a greeting such as “Hello”. Can an approach such as the one outlined above hope to account for the semantic properties of the aspects of language that diverge from the paradigm of objective, context-insensitive, declarative sentences? And this list of challenges only scratches the surface: metaphor, irony, slurs, and epistemic language (modals, indicative conditionals, probability), among other phenomena, will raise further questions.

Historically, the semantic tradition stemming from Montague and Lewis gave theoretical importance to truth conditions, but this is arguably inessential to the basic strategy of this tradition. The formal methods developed by logicians and analytic philosophers that were applied to natural language were originally developed for certain specific projects (e.g. the reduction of mathematics to logic). Given these aims, the key focus was truth conditions and entailment; and it is certainly the context-insensitive, precise, objective, declarative fragment of a natural language that is most amenable to semantic treatment by such methods. But when this approach was extended to cover larger and more diverse fragments of natural language, new theoretical tools were developed and certain commitments were reconsidered, as one would expect with any maturing science. In many cases “truth conditions” per se were removed from their central place in the semantic account.
But, importantly, there remains a primary focus on something like “satisfaction” or “fulfilment” conditions, or at least some kind of mapping from language to model-theoretic interpretations, all of it generated by type-theoretic composition rules.19 We will outline some of these developments concerning context-dependence below, highlighting the way in which the accounts make use of the model-theoretic interpretive resources.20

19 For example, even interrogatives and imperatives get a compositional model-theoretic analysis of this sort. Montague (1973) acknowledged the necessary generalization:

when only declarative sentences come into consideration, it is the construction of [truth and entailment conditions] that should count as the central concern of syntax and semantics. In connection with imperatives and interrogatives truth and entailment conditions are of course inappropriate, and would be replaced by fulfilment conditions and a characterization of the semantic content of a correct answer. (Montague and Thomason 1974: 248)

Such an extension of Montague’s framework to interrogatives was carried out shortly thereafter; see, for example, Hamblin (1973) and Karttunen (1977). For work on imperatives, see, for example, Charlow (2014) and Portner (2016), and the references therein.

20 One could likewise point to analyses of the other challenging cases mentioned that make use of the model-theoretic interpretive resources: for indeterminacy see, for example, Kennedy (2007) and MacFarlane (2016); for subjective language see, for example, MacFarlane (2014) and Lasersohn (2016); for expressives see, for example, Kaplan (2004) and Potts (2007).

Context sensitivity

Strawson famously said “ordinary language has no exact logic” (1950: 344), and one of the reasons he thought this was the pervasive context-sensitivity of ordinary language—sentences themselves do not stand in entailment relations; instead one must look to uses of sentences in a context. This pessimistic attitude towards natural language was held not only by anti-formalist Oxford philosophers but also by the formally inclined forefathers of semantics, such as Tarski and Carnap.21 In a 2004 lecture, Kaplan summed up these shared attitudes as follows:

When I asked Strawson (I used to live across the street from Strawson) why there could be no logic for a language with indexicals, he said, it was because W. V. Quine had told him so (Quine, a famous logician). Thus there was formed a strange alliance between those who disdained the regimented language and those who preferred it. The point of agreement was the gulf between the logic’s domain and natural language. The alliance was sustained by the notion that in natural language, meaning is determined by use. Strawson asked, in effect, “How could the lumbering formalist capture the context-sensitive, intention-driven quicksilver of individual use,” and the logician replied, “Why would we want to?” (Kaplan 2004)

Eventually, however, the logicians found reason to formalize the contextual quicksilver of natural language. Some important first steps were taken in Bar-Hillel (1954), where we find the following plea: “the investigation of indexical languages and the erection of indexical language-systems are urgent tasks for contemporary logicians” (Bar-Hillel 1954: 369). Prior, who insisted that “tense-distinctions are a proper subject of logical reflection” (1957: 104), worked out an approach to temporal logic using intensional (or possible world) techniques.22 An interesting feature of these intensional approaches is that a sentence is not just true or false relative to a model (as in, e.g., propositional logic) but also relative to a point of reference (e.g. a world or a time) within a model.23

21 See Carnap (1937/1959: §46). Reichenbach (1947) should also be noted for the discussion of the tenses of verbs and token-reflexive expressions, and one should perhaps mention C. S. Peirce, whose tripartite division of signs into Icons, Indices, and Symbols is the source of our technical term “indexical” (he also said, “Time has usually been considered by logicians to be what is called ‘extra-logical’ matter. I have never shared this opinion” (1933: 532)).

22 But this is not the place to trace out the entire interesting history of temporal logic—such an exercise would take us back through medieval discussions and inevitably back further at least to Aristotle’s “sea-battle” passages. See Øhrstrøm and Hasle (1995) for discussion.

23 Scott (1970) suggests, “One could call [them] points of reference because to determine the truth of an expression the point of reference must be established… Maybe index is just as good a term, though it seems to me to make them sound rather insignificant” (p. 150).

Montague approached context-sensitivity in a similar way, by analogy with Tarski’s treatment of variables and open formulas. The Tarskian semantics for first-order quantification stems from the insight that variables and open formulas require something additional in order to be interpreted. An open sentence, such as “Fx”, may be true or false relative to different assignments of values to “x”. Thus the semantics is relativized to a sequence of individuals g, and the clauses encode and exploit this relativization. Some basic clauses of the Tarskian semantics are as follows:24

• ⟦x⟧^g = g(x)
• ⟦πx1 . . . xn⟧^g = 1 iff ⟨g(x1), . . . , g(xn)⟩ ∈ I(π)
• ⟦∀xφ⟧^g = 1 iff for all sequences g′ (that are x-variants of g), ⟦φ⟧^g′ = 1

24 Here we suppress the model A = ⟨D, I⟩, where D is a non-empty domain of individuals and I maps predicates to sets of appropriate tuples drawn from D.

In this way a sentence such as “Fx” will be true relative to (or satisfied by) some assignments, and false relative to others. Sentences of natural language with indexical pronouns, such as “She is wise”, are in need of supplementation by something external in much the same way that an open formula is. Thus it is not too far a leap to model the indexical-involving sentences of natural language as sentences with free variables—simply construe contextual parameters such as speaker and time as additional inputs to interpretation. Montague (1968, 1970a) called languages with context-sensitive vocabulary “pragmatic languages”, and he suggested that a systematic treatment could be achieved by extending the familiar tools.

It seemed to me desirable that pragmatics should at least initially follow the lead of semantics—or its modern version, model theory, which is primarily concerned with the notions of truth and satisfaction (in a model, or under an interpretation). Pragmatics, then, should employ similar notions, though here we should speak about truth and satisfaction with respect not only to an interpretation but also to a context of use. (Montague 1970a: 1)

With this approach in mind, early theorists, for example Montague (1968), Scott (1970), and Lewis (1970), proposed that we simply expand the points of reference (or “indices”) to include the relevant contextual coordinates.25 For example, Scott advised as follows:

For more general situations one must not think of the [point of reference] as anything as simple as instants of time or even possible worlds. In general we will have i = (w, t, p, a, . . . ), where
the index i has many coordinates: for example, w is a world, t is a time, p = (x, y, z) is a (3-dimensional) position in the world, a is an agent, etc. (Scott 1970: 151)

25 Davidson (1967) also suggested that for natural language semantics, truth should be relativized to times and persons in order to accommodate tense and demonstratives (see Davidson 1967: 319–20). Also notable in this regard is the “egocentric logic” developed in Prior (1968):

If I say, not “Brown is ill” but “I am ill”, the truth of this depends not only on when it is said but on who says it. It has been suggested, e.g. by Donald Davidson 1967, that just as the former dependence has not prevented the development of a systematic logic of tenses, so the latter should not prevent the development of a systematic logic of personal pronouns. (p. 193)

Consider adding the first-personal pronoun to our “reader book” language from above. Syntactically it will figure in the language exactly like a name. Thus, for example, the following sentence will be well-formed:

(8) I love Hazel.

In order to provide the semantics for “I”, and eventually for the complete sentence, we first need to follow Scott’s advice and construe the point of reference as a pair of an agent and a world, instead of simply a world. Then we provide the rule for “I”, which says that it refers to the agent of the point of reference:

⟦I⟧^{a,w} = a

All the expressions will carry the relativization to a point of reference just as before—the added relativization to an agent is idle except for expressions involving “I”. Thus, we calculate the truth conditions of (8) as follows:

⟦I love Hazel⟧^{a,w} = 1 iff ⟦love Hazel⟧^{a,w}(⟦I⟧^{a,w}) = 1
  iff [λy_e . y loves Hazel in w](⟦I⟧^{a,w}) = 1
  iff [λy_e . y loves Hazel in w](a) = 1
  iff a loves Hazel in w

This provides the following compositionally derived truth conditions:

⟦I love Hazel⟧ = {(a, w) | a loves Hazel in w}

By generalizing on this basic idea, all indexical pronouns can be given a natural analysis using the standard compositional model-theoretic resources. Kaplan’s celebrated and highly influential work “Demonstratives” (1989a) incorporates these basic ideas and develops them in many interesting directions. His formal language LD, which is laid out in section XVIII of Kaplan (1989a), is a starting point for most subsequent work on the formal semantics of deictic pronouns. Kaplan made a few novel choices in developing his framework which have influenced much of the subsequent literature. For example, Kaplan distinguished two kinds of meaning: the character and the content of an expression. In Kaplan’s semantic theory these two aspects of meaning play different roles: the content is the information asserted by means of a particular utterance, whereas the character of an expression encodes a rule by which the content of particular utterances of the expression is determined. This led Kaplan to take issue with the notion of a “point of reference” employed by early theorists, claiming that it blurred an important conceptual difference between the “context of utterance” and the “circumstance of evaluation”. In outline, Kaplan’s formal theory is this: the domain of the character function is a set C. Each c ∈ C is a tuple (or determines a tuple) of content-generating parameters—these tuples are called “contexts of utterance”. Character functions map contexts of utterance to contents. The content of an expression is itself a function from a set V to extensions.
Each v ∈ V is also a tuple of parameters, often assumed to be possible worlds (or worlds paired with times, locations, agents, etc.)—these are called “circumstances of evaluation”. The resulting Kaplanian picture is as follows:

[Diagram: the character ⟦.⟧ maps a context of utterance c to the content ⟦.⟧^c; the content maps a circumstance of evaluation υ to the extension ⟦.⟧^{c,υ}.]

Stalnaker (Chapter 3, this volume) emphasizes that there are two independent, but often conflated, reasons for Kaplan’s insistence on this two-step procedure:26

• a linguistic motivation stemming from the compositional interaction of intensional operators and indexicals27
• a pragmatic motivation stemming from the notion of assertoric content (“what is said”) and its broader role in communication

The framework developed in Lewis (1980) shares many structural features with Kaplan’s picture, but Lewis insists that the two-step procedure isn’t theoretically motivated—he contends that an equally good option is just to evaluate at both a context and an index in one step. Lewis emphasizes that a theory of the first sort can be easily converted into one of the second, and vice versa, simply by currying or un-currying the functions. The disagreement on this point between Kaplan and Lewis stems from their differing views on the role of assertoric content in the semantic theory (more on this below). But in spite of this internal disagreement it is very common for theorists to adopt a framework that relativizes extension to two main parameters—a context and an index—where the context includes the parameterization required to handle context sensitivity in the language and the index includes the parameterization required for intensional displacement. This general approach can be extended to context-sensitivity in language more generally.
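Lewis’s point about interconvertibility can be made concrete in a few lines. The sketch below is purely illustrative: the representations of contexts and circumstances as small dictionaries, and the facts in them, are invented; what it shows is just that a one-step value taking a context-index pair and a Kaplanian character (context to content, content from circumstances to extensions) are related by currying.

```python
# One-step, Lewis-style semantic value for "I love Hazel": true at the
# pair (c, v) iff the agent of context c loves Hazel at circumstance v.
# (Toy representation: contexts and circumstances are dicts.)
def i_love_hazel(c, v):
    return 1 if (c["agent"], "Hazel") in v["loves"] else 0

def curry(one_step):
    """One-step value -> Kaplan-style character (context -> content)."""
    return lambda c: lambda v: one_step(c, v)

def uncurry(character):
    """Kaplan-style character -> one-step, Lewis-style value."""
    return lambda c, v: character(c)(v)

character = curry(i_love_hazel)

c = {"agent": "Alex"}                      # hypothetical context of utterance
v = {"loves": {("Alex", "Hazel")}}         # hypothetical circumstance

content = character(c)    # Kaplan content: a function from circumstances
```

Evaluating `content` at `v` and evaluating `i_love_hazel` at `(c, v)` give the same extension, and round-tripping through `curry`/`uncurry` changes nothing: this is the formal sense in which the two-step and one-step formats carry the same information.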
It’s fairly easy to see how the treatment of “I” from above could be extended to cover “she”, “he”, “now”, “here”, and “that”.28 And it has further been extended to the context-sensitivity involved with gradable adjectives, quantifier domains, modals and conditionals, and perspective-dependent expressions (e.g. “local”, “tasty”).

26 Cf. Rabern (2012–2013).
27 This is what motivates double- or multiple-indexing; see Kamp (1967, 1971) and Cresswell (1990).
28 Although it isn’t clear how the strategy above handles multiple occurrences of a demonstrative in a sentence such as “that is bigger than that”. See Pickel, Rabern, and Dever’s Chapter 5 in this volume for discussion.

Discourse context and dynamic semantics

The parameterization strategy can model the ways in which the truth conditions of an utterance depend on various features of the extra-linguistic context, but further complications arise due to the ways in which the semantic features of an utterance depend on the conversational context. Consider an utterance of the following pair of sentences:

(9) A woman walked into the room. She was wearing a black velvet hat.

It is plausible that this utterance has a reading on which it is true just in case:

(10) ∃x(x is a woman and x walked into the room and x was wearing a black velvet hat).

It is challenging to formulate a semantics that predicts these truth conditions in a systematic way, on the assumption that each sentence expresses a proposition, or has truth-conditional content. What are the truth conditions of (11) (as it occurs in discourse (9))?

(11) She was wearing a black velvet hat.

“She” doesn’t look like a bound pronoun: since there is no quantifier in the sentence, it is unclear how it could be, and unclear what proposition would be expressed if it were.
But nor does it seem that “She” is functioning as a deictic pronoun, picking out a particular woman (Jane, say), so that (11) is true just in case Jane was wearing a black velvet hat; that would result in truth conditions for (9) on which the utterance of (9) could be true even if Jane is not the woman who walked in:

(12) ∃x(x is a woman and x walked into the room and Jane was wearing a black velvet hat).

Moreover, the idea that “She” is a deictic pronoun—and more generally, the idea that each sentence of (9) expresses a proposition—suggests that there should be no difference between (9) and (13):

(13) She was wearing a black velvet hat. A woman walked into the room.

But there is a clear difference: (13) has no reading on which its truth conditions are (10), and instead seems to suggest that the woman wearing the hat is not the woman who walked in. There have been several attempts to account for these data while maintaining the idea that semantics is fundamentally in the business of assigning truth conditions to sentences: perhaps “She” as it occurs in (9) is semantically equivalent to some description like “the woman who walked into the room” (Evans 1977; Neale 1990; Heim and Kratzer 1998; Elbourne 2005), or perhaps we can give a pragmatic explanation of why (9) (but not (13)) can appear to have truth conditions like (10) (Lewis 2014). But many have seen examples like (9) as motivating a radical shift away from a focus on the truth conditions of sentences, and towards a family of views known as dynamic semantics.
There is a wide variety of dynamic views, and exactly what distinguishes dynamic semantic theories from so-called static alternatives is a matter of some dispute.29 There may be more than one interesting distinction in the area, and in any case defending a precise characterization of the static/dynamic distinction would take us beyond the scope of this Introduction.30 We will focus on several features that dynamic views tend to share, and it will be useful to begin by introducing a clear example of a static system.

Consider a Stalnakerian (e.g. Stalnaker 1978) picture of discourse: participants in a conversation are jointly inquiring into how the world is. They are taking some things for granted in their inquiry, but these presuppositions leave open a number of possibilities as to how things are. We represent these open possibilities by a set of possible worlds: the context set. Propositions are the semantic values of sentences, and these too are represented by sets of worlds. The essential effect of an assertion is to remove worlds incompatible with what is asserted from the context set, so that the context set after an assertion is made is the intersection of the prior context set with the proposition asserted. Notice the following features of this Stalnakerian picture:

(a) Uses or occurrences of individual sentences are the primary bearers of truth and falsity
(b) Sentences are associated with propositions (i.e. truth conditions)
(c) There is a clear distinction between semantics (the theory of semantic values of sentences) and pragmatics (the theory of what the semantic values of sentences do)

Dynamic views typically do not have these features. To see why we might want to give them up, consider again (11) as it occurs in (9). What are the truth conditions of (11)?
Once we have rejected the idea that “She” as it occurs in (11) is functioning as a deictic pronoun—and in particular, once we have accepted the idea that “She” is in some sense bound by “A woman” in the previous sentence—it is very unclear what the truth conditions of (11) could be, and moreover whether we should say that the sentence in isolation has truth conditions at all. If we admit cross-sentential binding relationships, it seems that we are forced to say:

(a′) Discourses, not sentences, are the primary bearers of truth and falsity

Of course, it is hard to make sense of this claim if one is wedded to the idea that sentences express propositions. But what can semantics associate sentences with if not propositions? The Stalnakerian picture is a model of how assertions change the context. But the dynamic semanticist observes that if we are primarily interested in the effect an assertion has on context, we can model this in a simpler way, by giving up the idea that assertions express propositions, and letting the semantic value of a sentence be its potential to change the context, which we represent as a function from contexts to contexts:

(b′) Sentences are associated with context-change potentials rather than truth conditions

It is easy to see how to transform our simple Stalnakerian system into a system that assigns sentences context-change potentials. For any sentence φ, our Stalnakerian system will take some possible-worlds proposition ⟦φ⟧_S to be the semantic value of φ; and the result of asserting ⟦φ⟧_S in a context c will be c ∩ ⟦φ⟧_S. Thus the context-change potential of φ is that function that maps a context c to the intersection of c and ⟦φ⟧_S: in other words, λc . c ∩ ⟦φ⟧_S. Of course, thus far there is very little interest in this; the shift to context-change potentials will be interesting only to the extent that it can do something useful that a static system cannot—or at least, only to the extent that it can do something useful simply and elegantly, that a static system can only do by getting complicated and ugly. It remains controversial whether we need dynamic semantics, and if so, what for; but to get an idea of the kind of thing that dynamic semanticists have thought that they have a special ability to do, let’s return to our example (9).

29 The loci classici of dynamic semantics are Kamp (1981) and Heim (1982). Kamp’s system—Discourse Representation Theory, or DRT—departs in certain important respects from the kind of Montagovian/model-theoretic semantics on which we are focusing; in particular, DRT provides an algorithm for constructing representations—discourse representation structures, or DRSs—which is intended as “an (idealised) analysis of the process whereby the recipient of an utterance comes to grasp the thoughts that the utterance contains” (Kamp and Reyle 1993: 8). DRT thus has important points of contact with the Chomskian views discussed in section I.2, especially those (like Jackendoff 1990) that see semantics as providing a theory of how language relates to mental representations. Unfortunately, we cannot do full justice to DRT here; instead we (with much regret) follow the tradition of several recent philosophical discussions of dynamic semantics in relegating it to a footnote. Kamp and Reyle (1993) is a standard textbook introduction to DRT.

30 For general discussion, see Yalcin (2012) and Lewis (forthcoming). For attempts to give a formal account of the distinction, see van Benthem (1986) and Rothschild and Yalcin (2017). See Rothschild and Yalcin (2017) for a discussion of various senses in which a semantics might be thought of as dynamic.
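The static-to-dynamic conversion described above, λc . c ∩ ⟦φ⟧_S, can be made concrete in a few lines. In this illustrative sketch the world labels and the proposition are invented; it shows only the structural point that an intersective update is purely eliminative.

```python
# Static semantic value: a proposition, modelled as a set of worlds.
# The worlds and the proposition are hypothetical toy values.
IT_RAINS = frozenset({"w1", "w2"})

def ccp(proposition):
    """ccp(S) = the function mapping a context c to c ∩ S."""
    return lambda c: c & proposition

context = frozenset({"w1", "w2", "w3"})   # prior context set
updated = ccp(IT_RAINS)(context)          # effect of asserting the proposition
# Intersective update is eliminative: the output is always a subset of
# the input context. This is exactly the feature that the assignment-
# shifting existential clause (15) in the text gives up.
```

Here `updated` discards the worlds incompatible with the assertion, and nothing is ever added to the context set.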
In order to handle cross-sentential anaphors of the sort that (9) appears to exhibit, we will need to supplement our notion of context.31 Let a context be a set of ordered pairs ⟨g, w⟩, where g is an assignment function and w is a possible world. To a first approximation, we can think of each pair as representing a way the world and our conversation might be. (The assignment function represents a way our conversation might be insofar as it represents who we are talking about.) Then we can let the semantic value of a sentence like (11) be:32

(14) ⟦She1 is wearing a hat⟧ = λc.{⟨g, w⟩ : ⟨g, w⟩ ∈ c and g(1) is wearing a hat in w}

So far, this differs little from the Stalnakerian system: context is still represented as a set of elements, and asserting (11) will (at most) remove some elements from the set. The crucial difference comes in the existential quantifier. Existentially quantified sentences manipulate the assignment, so that the assignment parameters of the members of the output context can be different from the assignment parameters of the members of the input context. Writing “g[1]h” for “g differs from h at most in what it assigns to 1”:

(15) ⟦∃x1 φ⟧ = λc.{⟨g, w⟩ : ∃h.⟨h, w⟩ ∈ c and g[1]h and ⟨g, w⟩ ∈ φ}

31 It is a feature of most interesting dynamic systems that context becomes something more sophisticated than a set of worlds. But of course this is not distinctive to dynamic semantics; semanticists who eschew context-change potentials and insist that individual sentences be truth-evaluable may nonetheless opt for a sophisticated notion of context. And it is a good question whether by so doing they can emulate the dynamic semanticist’s results. See Dever (2006) and Lewis (2014) for discussion, and see Crespo, Karawani, and Veltman (Chapter 9, this volume) for an example of the work dynamic semantics can do with a supplemented notion of context.
In particular, assuming that the indefinite article functions as an existential quantifier (and fudging the compositional details), we will have something like: (16) !A woman1 walked in" = λc.{⟨g, w⟩ : ∃h.⟨h, w⟩ ∈ c and g[1]h and in w, g(1) is a woman that walked in}. And this can explain why the “A woman” in the first sentence of (9) can bind “She” in the second sentence: “A woman” shifts the assignment elements of the context, and this persists beyond the boundaries of a single sentence. Importantly, given semantic values like (15) and (16), utterances of existentially quantified sentences do not simply remove elements from the context set: our output context can contain elements that our input context did not. Update is no longer simply intersection. This lets us predict a difference between (9) and (13): in (9) (but not (13)), (11) is uttered in a context in which all relevant assignments assign a woman who walked in to 1, and this ensures that an utterance of (9) requires the same woman who walked in to be wearing a hat. So it looks like (at least) this simple dynamic semantics does some useful work that a static semantics cannot (straightforwardly) do.33 In our Stalnakerian system, semantics assigned propositions to sentences, and it is a further question—a matter of pragmatics rather than semantics—what those propositions do, and in particular how they affect the context. Dynamic semantics carves the pie rather differently: on the dynamic view, semantics gives an account of how sentences effect context. So one way of seeing the difference between dynamic 32 The following is very loosely adapted from Groenendijk and Stokhof (1991). The story here is incomplete and oversimple in certain respects, for example we’d need to say something about negation (i.e. “A woman didn’t walk in”). Of course these issues have all be worked out already in Heim (1982) and Groenendijk and Stokhof (1991). See Yalcin (2012) for a helpful summary. 
and static views is that dynamic views push some of the work that static views see as pragmatic onto semantics:

(c′) Context change is a matter of semantics rather than pragmatics

What could decide between static and dynamic views on this way of seeing the dispute? That will depend on how we understand the distinction between semantics and pragmatics. Lewis suggests that the crucial distinction is that semantics studies matters of linguistic convention, while (roughly following Grice) pragmatics studies matters that follow from the seeming fact that linguistic exchange is a cooperative activity between rational agents. On this way of seeing things, the static semanticist must show that the phenomena that dynamic semanticists purport to explain can be explained in a broadly Gricean way, while the dynamic semanticist is claiming that these phenomena are a matter of specific conventions rather than consequences of general facts about rationality and cooperation. We began this Introduction with the idea that semantics is in the business of saying something about truth conditions. There is a sense in which dynamic semantics—in its endorsement of (b′)—gives this up. But two points should be made. First, we have already noted that dynamic semanticists may well be interested in the truth of discourses, and in fact there are a variety of possible notions that one might see as playing the role of truth and truth conditions in dynamic semantics. (On one standard view, a sentence is true relative to a context just in case the result of updating the context with the sentence is non-empty (Heim 1982).) So it is hardly as though truth has been abandoned entirely.
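The dynamic machinery just described can be modeled in a few lines of code. The following is a toy sketch only: the worlds w1 and w2, the individuals "ann" and "bob", and the two predicate extensions are all invented for illustration, and the update functions are loose analogues of (16) and (14) rather than a general compositional implementation. It also implements the Heim-style truth test just mentioned (a discourse is true in a context iff updating yields a non-empty context).

```python
# A context is a set of (assignment, world) pairs; assignments are modeled as
# frozensets of (index, individual) items so they can be members of a set.

def assign(d):
    """Freeze a dict-based assignment so it can be a member of a set."""
    return frozenset(d.items())

DOMAIN = ["ann", "bob"]
WOMAN_WALKED_IN = {("ann", "w1"), ("ann", "w2")}  # Ann walked in at both worlds
WEARING_HAT = {("ann", "w1")}                     # but wears a hat only at w1

def update_a_woman_walked_in(c, n=1):
    """Update for 'A woman_n walked in', cf. (16): keep <g, w> such that g
    differs from some input assignment h at most at n, and g(n) is a woman who
    walked in at w. Note the output can contain pairs the input lacked."""
    return {(assign({**dict(h), n: d}), w)
            for (h, w) in c for d in DOMAIN
            if (d, w) in WOMAN_WALKED_IN}

def update_she_wearing_hat(c, n=1):
    """Update for 'She_n is wearing a hat', cf. (14): a purely intersective test."""
    return {(g, w) for (g, w) in c if (dict(g)[n], w) in WEARING_HAT}

def true_in(c, *updates):
    """Heim-style truth: a discourse is true in c iff updating c with its
    sentences in order yields a non-empty context."""
    for u in updates:
        c = u(c)
    return len(c) > 0

c0 = {(assign({}), "w1"), (assign({}), "w2")}  # initial context: no referents yet
c1 = update_a_woman_walked_in(c0)              # introduces discourse referent 1
c2 = update_she_wearing_hat(c1)                # the pronoun picks referent 1 up
```

Here c1 pairs both worlds with an assignment sending 1 to Ann, so the pronoun in the second sentence is interpretable even though it occurs in a new sentence; c2 then retains only the w1 pair, and since c2 is non-empty the two-sentence discourse counts as true in c0. The growth of assignment information from c0 to c1 is exactly the sense in which update is no longer simply intersection.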
Second, there is a clear continuity between the kind of dynamic views that we are focusing on and standard Montagovian, model-theoretic approaches to semantics: both are in the business of assigning entities as the semantic values of expressions (though our presentation has largely ignored this), of building up the semantic values of complex expressions from the semantic values of their components in a broadly compositional way, and of explaining various features of language on the basis of these semantic values. As Seth Yalcin puts the point:

Lewis has famously written that "Semantics with no treatment of truth-conditions is not semantics" ([Lewis, 1970, 18]). What he meant to underscore in saying this was that a semantics should associate expressions with interpretations as opposed to mere translations. It is worth emphasizing that in this respect, dynamic semantics and truth-conditional semantics are on the same side of the fence: They are each card-carrying members of the interpretive (representational, model-theoretic) tradition in semantics. (Yalcin 2012: 257)

I.1.4 Reference and propositions

The mainstream of semantics in linguistics is in the model-theoretic and interpretive tradition of Lewis and Montague that we have outlined. But a rift has emerged between theorists in this tradition and a certain philosophical approach to semantic theorizing. A dominant philosophical approach to semantic theorizing takes inspiration from themes stemming from David Kaplan, Keith Donnellan, and Saul Kripke, among others, where reference, singular propositions, and "what is said" take centre stage. On this approach there is a preoccupation with names and definite descriptions, modality, that-clauses, the nature of propositions, and attitude reports.
Of course, the division between these branches—both of which in many ways emanated from UCLA in the 1970s—is not as clear as this suggests, since there has been cross-fertilization and collaboration between them from the start. Whether these strands are in direct conflict or whether they are simply different projects with different theoretical aims is often indeterminate. But there is at least a noteworthy divide in terms of the explanatory roles of "reference" and "propositions" in semantic theory. First consider the role of reference in semantic theory. One might think that the notion of reference ought to be central to semantic theory. Reference is, after all, the relation between word and world, and many insist that semantics is first and foremost concerned with the relation between expressions of the language and the things in the world they refer to or are about. Now, it has been questioned whether reference is even a genuine, substantive relation worthy of theoretical investigation. For example, Quine argued that there is no fact of the matter as to what our expressions refer to (Quine 1960), and Chomsky doubts that there is a systematic and scientifically respectable relation between expressions and "things in the world" (Chomsky 2000c: 37).34 Others might take a deflationary attitude and insist that all there is to say about reference is that instances of the schema ["α" refers to α] are true. Yet, even if one takes a more positive stance on reference, one might nevertheless still question its centrality to semantic theory. That is, one might agree that there is a pre-theoretic notion of reference that is a subject worthy of serious philosophical study, but question whether or not there is a theoretical role for the common-sense notion of "reference" in natural language semantics.35 The centrality of reference to semantic theory can be motivated via the centrality of truth.
Since truth is central and the truth value of a declarative sentence seems to depend on the referents of its parts, it seems that reference must also be central. But while it is right that many semantic theories, for various fragments of natural language, appeal to reference (e.g. Davidsonian T-theories), it's not correct that, in general, the truth value of a sentence depends on the referents of its parts. Instead, the compositional semantics of natural languages appeals to the intensions of singular terms, or to some more complicated function, rather than simply to their referents. Of course, a common reaction to such alleged counterexamples is to appeal to a mechanism of referential shift. Any apparent counterexample to the compositionality of reference can be construed as a case where the relevant expressions "refer" to some complicated function required for the compositional semantics. But even granting that this sort of occurrence-based semantics is workable, the key notion of "reference" employed looks suspiciously like the standard notion of semantic value. If "reference" is just another name for the technical notion of semantic value, then of course it plays a key role in semantics. But this is a trivial position, and one that promotes bad terminology.

34 For more on Chomsky's view of the matter, and his argument in the famous "London" passage, see discussion in Yalcin (2014: §7) and Stoljar (2015).
35 Of course, one might think that reference just is a basic semantic datum, obviously the sort of thing that a theory should explain if it is to be properly described as a semantic theory. There is no point in entering into a verbal dispute over whether a theory of reference is really semantic. Our primary concern here is whether a notion of reference has a theoretical role to play in the type of formal semantic project for natural language we outlined above.
When philosophers insist that reference is central to semantics they have in mind a substantial notion of reference, where the referent of a proper name such as "Hazel" is a particular individual (not a set of parameter-individual pairs, or some set of sets of individuals, or whatever). For example, some insist that sentences containing "directly referential" terms express propositions that somehow essentially involve the referents of those terms. A number of philosophers in this tradition have maintained that Kripkean arguments establish a direct reference view of the semantics of proper names (and of other terms, such as indexicals). Direct reference is often construed as a thesis concerning the semantic value of names: the semantic value of a proper name is just its referent. But it is not clear that the view should be taken in this way. Theorists who advocate direct reference or Millianism are not primarily motivated by issues stemming from compositional semantics per se; instead, they are concerned with the "semantic content" of an expression in the sense of its "contribution to a proposition". To illustrate this, consider the following sentences.

(17) Hazel ran.
(18) Hazel and a dog ran.

It seems clear that the name "Hazel" has the same reference when it occurs in the simple subject-predicate sentence (17) and when it occurs in (18) conjoined with a quantifier. Yet a plausible view about noun phrase coordination has it that to conjoin the semantic value of "Hazel" with the value of "a dog" they have to be of the same semantic type. Thus, it seems that we ought to take the semantic value of "Hazel" to be of the same type as a generalized quantifier. On this proposal the semantic value of a name such as "Hazel" is not the referent of "Hazel" as we said above; instead it is that function that takes a predicate meaning f to the true if and only if f maps Hazel to the true (see Lewis 1970; Montague 1973).

⟦Hazel⟧ʷ = λf⟨e,t⟩. f(Hazel)

It may seem that this proposal is in tension with direct reference, since the semantic value is not simply the referent (see Armstrong and Stanley 2011 and King 2015). But the proposal to treat names as generalized quantifiers needn't be construed as being in tension with the thesis of direct reference (or Millianism). Direct reference is a thesis about the corresponding constituent of a name in the structured proposition asserted by uses of sentences which contain the name. This issue is separable from the question of the compositional semantic value of a singular term.36 Thus one can agree that the semantic value of "Hazel" is not its referent—while still insisting that occurrences of "Hazel" refer to Hazel, and do so "directly" in the sense that they contribute that referent to the singular propositions expressed by utterances of both (17) and (18).37 While the thesis of direct reference makes sense in certain theoretical contexts, it's not clear whether it makes sense in the context of the model-theoretic compositional semantics outlined above. This is not to say that the thesis of direct reference is false—perhaps it is best understood as a thesis embedded in a different project with different theoretical aims.

There is a similar divide concerning the role of "what is said" in semantic theory. A traditional picture is that the semantic values of sentences are propositions, where propositions are understood to be the objects of our cognitive attitudes. Propositions are things we believe, know, and assert. Propositional attitude reports such as "Ansel believes that Hazel runs" are true if and only if the agent of the report stands in a certain relation to the proposition expressed by the complement clause of the report. In general, it is said that sentential operators "operate" on the proposition expressed by their embedded sentence.
For example, a sentence such as "Hazel might run" is true if and only if the proposition expressed by "Hazel runs" is true in an accessible world. Lewis (1980) takes issue with this traditional picture; his disagreement with Kaplan concerns the role of assertoric content in semantic theory. For Kaplan the content of an expression is compositional: "the Content of the whole is a function of the Content of the parts" (Kaplan 1989a: 507). And he understands the content of a sentence to be the object of various sentential operators. Thus, contents are constrained, depending on the operators of the language, to be of the right semantic type to enter into compositional relations with those operators. It is for these reasons that Kaplan is led to endorse temporalism about propositions—the view that propositions can vary in truth value across times. Kaplan insists that contents cannot be specific with respect to time, since if they were, this would give the wrong result for the compositional semantics of temporal operators (see Kaplan 1989a: 503–4). For example, consider a sentence in the present tense and one in the past tense:

(19) Hazel runs.
(20) Hazel ran.

36 See Rabern (2012: 89). See Pickel (forthcoming) for a view on how structured propositions might fit within a compositional semantics—the key is that the compositional semantic value of an expression may differ from its propositional constituent.
37 In a similar vein, Lewis states: "There is, of course, no reason not to say both that my name has me as its referent and also that it has a certain property bundle as its semantic value" (Lewis 1986: 41–2, footnote 31).

A common strategy, historically, for analysing the past tense sentence (20) is to construe it as built up from the past tense operator "PAST" applied to the present tense sentence (19).
Since "PAST" is an operator that maps sets of times to sets of times, its argument, which is the semantic value of (19), must be a set of times. And it can't always be the set of all times or the empty set, or else "PAST" would map the values of any co-extensional sentences at a time to the same values, which would render "PAST" a truth-functional operator. Kaplan concludes from this that "what is said", in other words the content of an assertion, is the type of thing that can vary in truth value across times. Lewis didn't build into his semantic framework an identification between assertoric content and sets of indices (i.e. semantic values in a context), so while Lewis does take such considerations from compositionality to yield a conclusion about the semantic values of sentences, he explicitly doesn't take them to yield a conclusion about propositional content. For Lewis, assertoric content is a post-semantic notion. He concedes that "we can assign propositional content to sentences in context" and that "propositions have an independent interest as suitable objects for attitudes such as belief, and [illocutionary acts]" (p. 37), but he doesn't identify content with semantic values (in context). Lewis doesn't equate sets of indices with propositional content because he doubts that one type of semantic entity can play both the role set out for content and the role set out for semantic value. In particular, he worries that the parameters that will be required in the indices to provide an adequate compositional semantics might result in sets of indices that are unfit to play the content role.38 Lewis concludes:

It would be a convenience, nothing more, if we could take the propositional content of a sentence in a context as its semantic value. But we cannot. The propositional contents of sentences do not obey the composition principle, therefore they are not semantic values.
(Lewis 1980: 39)

One can separate out the purely truth-conditional and compositional project from a more robust project of assigning propositional contents to utterances. This is especially salient if the contents of utterances are construed, as Kaplan often does, in terms of Russellian structured propositions, where the assertoric content of "Hazel runs" is a structure consisting of Hazel and the property of running: ⟨Hazel, running⟩. Many theorists follow Kaplan (and Russell) in this regard, motivated by their views on the nature of content, for example Salmon (1986), Soames (1987), and King (2007). These views, which we might call propositional semantics, often proceed in two steps (see, e.g., Appendix C of Salmon 1986 or Soames 1987): (i) a recursive assignment of structured contents to every expression of the language (at a context); and (ii) a recursive definition of the truth-value of a structured content with respect to a circumstance (i.e. a world and a time). These together provide a recursive definition of sentential truth at a point of reference.39 It's not clear that the structured entities used by such theories can play the role of semantic values in a truth-conditional compositional semantics; among other things, such views struggle with quantification and variable binding.

38 Thus Lewis endorses what Dummett called the "ingredient sense"/"assertoric content" distinction (see Dummett 1973: 447; see also Evans 1979: 177, Davies and Humberstone 1980: 17–26, and Stanley 1997, as well as more recent discussion in Ninan 2010, Rabern 2012, Yalcin 2014, and Rabern 2017, and in Stalnaker's and Recanati's Chapters 3 and 4, respectively, in this volume). A strong case for pulling apart propositional content and semantic value is the argument from variable binding (Rabern 2012, 2013).
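The two-step procedure of propositional semantics can be given a minimal sketch. Everything below is invented for illustration: a two-word mini-grammar, a toy model on which Hazel runs only at one world-time pair, and a treatment of "PAST" as a content operator in the temporalist spirit discussed above. Step (i) recursively assigns structured contents; step (ii) recursively defines truth for a content at a circumstance ⟨w, t⟩.

```python
# Step (i): a toy recursive assignment of structured (Russellian) contents,
# e.g. "hazel runs" gets the content ("runs", "hazel"), and "PAST" wraps the
# content of its embedded sentence.
def content(sentence):
    if sentence.startswith("PAST "):
        return ("PAST", content(sentence[len("PAST "):]))
    subj, pred = sentence.split()
    return (pred, subj)

TIMES = ["t0", "t1", "t2"]
RUNS_AT = {("hazel", "w1", "t1")}  # toy model: Hazel runs only at <w1, t1>

# Step (ii): a recursive truth definition for structured contents at a
# circumstance (a world-time pair). "PAST" shifts the time parameter back,
# which is why its argument must be able to vary in truth value across times.
def true_at(p, w, t):
    if p[0] == "PAST":
        return any(true_at(p[1], w, earlier) for earlier in TIMES[:TIMES.index(t)])
    pred, subj = p
    return (subj, w, t) in RUNS_AT
```

In this toy model the content of "Hazel runs" is true at ⟨w1, t1⟩ but false at ⟨w1, t2⟩; precisely because the content is time-neutral in this way, applying "PAST" to it yields a content true at ⟨w1, t2⟩. If contents were eternal (true at every time or at none), "PAST" would collapse into a trivial operator, which is the point Kaplan presses against time-specific contents.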
But even if that is so, we needn't conclude that these views are misguided, since, again, these views are embedded within a different theoretical project with different constraints and different theoretical aims. A debate between propositional semantics and views that don't make use of structured propositions must turn to the background metatheoretical issues to make progress.

I.2 Metatheoretical Perspectives

In a well-known paper, Fodor (1985) contrasts what he (with a characteristic joke) calls "the Right View" of linguistics—a broadly Chomskian view that includes the claims that grammars are internally represented by speakers, that "linguistics is embedded in psychology (it offers a partial theory of the capacities and behaviours of speaker/hearers)" (1985: 150), and that there are no a priori constraints on the evidence that may be brought to bear on linguistic theorizing—with what he calls "the Wrong View", which maintains that linguistic theories must explain data about corpora, or about linguistic intuitions (so that it is ruled out a priori that other kinds of evidence are relevant), and (at least in the version discussed most explicitly) that "linguistics is a part of mathematics" (1985: 158). Fodor suggests that these two perspectives are mutually exclusive and exhaustive (1985: 147). But it seems clear both that there are several in principle separable strands in each of the positions Fodor discusses—why couldn't someone maintain that linguistics is a part of psychology, but deny that grammars are internally represented? Or that it is an a posteriori truth that linguistics is in the business of explaining data about corpora?—and that there are also views in the literature that his proposal neglects. For example, there is no room in Fodor's scheme for the view that linguistic theorizing must describe the social conventions that underlie linguistic communication.
Metatheoretical perspectives on semantics often combine views on a number of issues in just this way, so that it can be unclear exactly where two theorists disagree. In this section, we lay out five major issues about semantics:

Explananda What is semantic theorizing meant to explain?
Realism Does semantic theorizing aim to state facts, or is it (for example) merely an instrument?
Metaphysics What is the nature of languages and other entities discussed by semantic theories?
The actual language relation What is the relationship between languages and speakers?
Epistemology and methodology How can we know what is the right semantic theory for a language?

We then briefly discuss how views on these issues can be combined.

39 There are other views on meaning that don't proceed in this manner, for example the structured meanings of Lewis (1970). For a discussion of various strategies for fine-grained propositions—the structured propositions strategy compared to the strategy of employing impossible worlds—see Ripley (2012) and Bjerring and Schwarz (2017).

I.2.1 Explananda

What makes physics different from biology? There are, of course, numerous differences between the two disciplines—methodological, institutional, historical—but it seems clear that an especially central difference is in the sorts of facts that they aim to describe and explain. If no theory in biology can explain (for example) why the weak nuclear force has a shorter range than the electromagnetic force, that is not a problem for biology: questions about these forces are not part of biology's subject matter. But if (as creationists sometimes claim) biological theory cannot explain how the eye evolved, biologists would be seriously concerned: such questions are core parts of biology's subject matter.
Of course, disciplinary boundaries are to some extent artificial, and in any case are rarely sharp; theories in physics may bear on theories in biology (and vice versa) in any number of ways. Moreover (as Fodor emphasizes in his defence of the so-called "Right View"), it may not be obvious prior to substantive theorizing whether certain questions are (say) biological or not. (It could have turned out (e.g.) that the weak nuclear force is a result of a biological process.) It can be reasonable to wonder whether a given question or phenomenon is biological, and it makes sense to ask, "What facts do biological theories need to explain?" Of course, there is a trivial way to answer the question: biology explains biological facts, just as physics explains physical facts, and so on. But one can also wonder whether biological theory should explain certain facts, where these facts are not explicitly described as biological. And this question is often substantive—answering it may require coming to grips with empirical facts (should biology explain facts about the distribution of vital forces? Should it describe evolutionary facts? That depends in part on whether there are any vital forces or evolutionary facts), as well as with current theoretical practice, and with normative evaluations about how that practice may best be developed. What, then, is the subject matter of semantics? What sorts of facts is semantic theory primarily designed to explain? There are at least three kinds of answer to these questions in the literature. (These kinds of answer are not mutually exclusive; perhaps most theorists will endorse some element of each kind.)

I.2.1.1 Semantic facts

On one view, a semantic theory must explain various distinctively semantic facts. Perhaps the most obvious candidates are facts about meanings. For example, it is
For example, it is OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi introduction to the science of meaning  plausible that the English sentence “Snow is white” means that snow is white, and an attractive first thought is that semantic theory must explain facts of this sort. Larson and Segal write, “Clearly, facts of this kind represent primary data that we would want any semantic theory to account for” (Larson and Segal 1995: 2). Others are less sanguine about founding formal semantics on a pre-theoretic notion of meaning; our pre-theoretic thought about meaning involves various elements that most theorists would regard as pragmatic or psychological, and may also involve dubious metaphysical commitments (e.g. Davidson 1967). Another strategy is to focus on particular strands in our pre-theoretical thinking about meaning: perhaps most commonly, on relations between expressions and the extra-linguistic world. It is plausible that (at least some) sentences represent the world as being a certain way, and are true or false depending on whether the world is that way; emphasis on this fact suggests that semantics must explain the truth conditions of sentences (Davidson (1967); (Lewis (1970: 18), Heim and Kratzer (1998: 1–2), among many others). Others claim that semantics must give more information than truth conditions, for example, by associating sentences with structured propositions (Soames 2010: ch. 5). Still others may wish to emphasize the representational features of non-sentential expressions: for example, as we mentioned, it might be held that a semantic theory must give an account of the reference of referring terms such as names, and the relation that holds between predicates and the properties they express (Larson and Segal 1995: 5). A number of semantic facts beyond truth and reference have been cited as among the core subject matter of semantics. (Some may see these as grounded in more fundamental facts (e.g. 
facts about truth conditions), but this is a substantial theoretical commitment; even those who (following Chomsky, e.g. 2000a) eschew theorizing about reference and truth (e.g. Pietroski 2003) may maintain that some or all of the following sorts of facts are part of the subject matter of semantics.) Katz (1990) mentions the relations of synonymy and antonymy, similarity in meaning, redundancy, ambiguity, meaningfulness and meaninglessness, and analyticity; Larson and Segal (1995: 3–4) add anomaly, logico-semantic relations (contradiction and implication), and thematic relations, but reject the idea that ambiguity is a semantic feature (1995: 45). Holliday and Icard's Chapter 2 in this volume emphasizes broadly logical facts, for example regarding consistency and entailment, and suggests that axiomatization is a neglected tool for characterizing how semantic theories bear on facts of this kind.

I.2.1.2 Productivity

The second sort of fact that has been proposed as among the core explananda of semantics, already discussed in section I.1.2, is the fact that natural languages are productive: speakers—despite the fact that they are finite creatures, limited with respect to their experiences, memories, computational power, and so forth—can produce and understand infinitely many new sentences. Chomsky, in a similar vein to Frege (1923), writes, "The central fact to which any significant linguistic theory must address itself is this: a mature speaker can produce a new sentence of his language on the appropriate occasion, and other speakers can understand it immediately, though it is equally new to them" (Chomsky 1964: 7).
Likewise, Zimmermann and Sternefeld spell out the "Fundamental research question in semantics" as follows: "how come that we can understand arbitrarily long sentences we have never encountered before, and, in particular, how come that we can tell whether or not they make sense (i.e. are semantically well-formed)?" (Zimmermann and Sternefeld 2013: 4). Facts about productivity are typically explained by appeal to compositionality (see section I.1.2).

I.2.1.3 Use

The final sort of fact that many theorists have seen as among the core explananda of semantics is facts about use. We can see this as encompassing a broad range of views. On a relatively conservative interpretation, sentences are typically used to make assertions, and semantic theory (perhaps in concert with, for example, pragmatic (e.g. Stalnaker 1970, 1978) and postsemantic (e.g. MacFarlane 2014: ch. 3) theories) is responsible for explaining this use. On this view, a truth-conditional theory of meaning may be perfectly well-suited to giving an account of use (see e.g. Lewis 1983). But the idea that meaning is use is more typically associated with Wittgenstein's rejection of the centrality of assertion, and with it the utility of truth conditions, propositions, etc. Theories in this school may see as among the primary explananda of semantics facts about justification and assertability (Dummett 1993; Wright 1993: part III), inference and reason-giving (Brandom 1994), the acceptance and rejection of sentences (Horwich 2005), or speech acts (Alston 2000). See Recanati's Chapter 4 in this volume for discussion of the relation between semantic theory and theories of thought and communication.

I.2.2 Realism and anti-realism in semantics

Philosophers of science distinguish two broad families of attitudes one may take towards theories in a given discipline.
The realist about a discipline takes a generally "epistemically positive attitude" (Chakravartty 2014: §1.1) towards the theories of that discipline, typically including at least some of the following doctrines (see Leplin 1984: 1–2 and Psillos 1999: xix for similar lists):

• Theories of the discipline are truth-evaluable
• The theories are at least approximately true
• The theories are made true by a mind-independent, theory-independent domain of facts
• The theoretical terms of the theory refer
• The theories of the discipline aim at truth (so that they are successful only if they are true)

Anti-realists reject some or all of these claims; for example, they may hold that theories are mere instruments (hence not truth-evaluable) (e.g. Duhem 1981), or that the statements of the theory are true, but only insofar as they express non-cognitive states of mind (e.g. Gibbard 2003), or that the discipline aims at theories that are observationally adequate (so that a theory can be entirely successful even if it makes false claims about the unobservable; e.g. van Fraassen 1980). Despite a recent history of robust debate (e.g. Dummett 1993), probably the majority of contemporary philosophical work on formal semantics is broadly realist. Many maintain that semantic facts are psychological, and hence reject the idea that facts about meaning are mind-independent, but typical developments of this style of view still have a realist flavour, maintaining (for example) that semantic theories are true in virtue of correspondence to a theory-independent domain of psychological fact. But views with anti-realist elements of various sorts have also been defended; what follows is a far from exhaustive list of some representative examples.
i... ludwig’s interpretativism Davidson famously suggests that a semantic theory should be a theory that “explicitly states something knowledge of which would suffice for interpreting utterances of speakers of the language to which it applies” (Davidson 1984c: 171; see also Foster (1976)) and Davidson (1984b). And Davidson’s proposal was that a Tarski-style theory of truth for a language would do an important part of this job. Thus many semantic theorists have seen their primary job as providing such a truth theory. But there is some controversy about exactly what role a theory of truth should play. Ludwig requires that a semantic theory be able to prove theorems of the form: “f means in L that p where what replaces “p” translate in the language of the theory (the metalanguage) the sentence in L (the object language) denoted by what replaces f.” He sees a truth theory as playing a crucial instrumental role: if we have a truth theory for L (that meets certain further constraints), then we can know that certain specifiable theorems of the form f is true in L iff p (T-theorems) correspond to truths of the desired form f means in L that p (so that if we can prove a T-theorem, we can use it to derive a corresponding truth about meaning). Crucially, Ludwig does not assume that the T-theorems must be true. On the contrary, he sees the idea that T-theorems must be true as leading to paradox. Consider, for example, a liar sentence: L (L) is not true. Our theory will let us prove: L-T “(L) is not true” is true iff (L) is not true. OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi  derek ball and brian rabern But we had better not take (L-T) to be true: if we do, then we are mired in paradox. Ludwig’s idea is that we merely use (L-T) to infer a corresponding claim about meaning: L-M “(L) is not true” means in English that (L) is not true. 
Though Ludwig takes claims like (L-M) to be true, his view has a strong instrumentalist element: we do not maintain that the truth theory, which is the compositional element of his theory, is true; it is a mere instrument for deriving claims about meaning.

I.2.2.2 Stich’s mentalism
Stich (1985) maintains that linguistic theorizing should produce a grammar that “correctly captures the intuitions of the speakers of the language” (1985: 134). But, Stich claims, there are many such grammars. Stich adopts a Chomskian emphasis on language acquisition, and shares the hypothesis that a model of language acquisition must be driven by considerations of linguistic universals: features common to all humanly possible languages. But he rejects the Chomskian aim of finding the grammar that is internally represented by speakers. Instead, he describes a procedure by which a theorist can begin with one of the many descriptively adequate grammars for a given language, hypothesize that all of its features are universal, and gradually refine that view. Stich admits that the results of this procedure will depend substantially on the initial grammar chosen, and that this choice is more or less arbitrary. So on Stich’s view, two different linguists might arrive at different theories depending on the grammar with which they began, and their two theories might be equally adequate. This suggests either that we should not regard the adequacy of these theories as a matter of truth and falsity, or that their truth or falsity is not just a matter of correspondence to theory-independent facts.

I.2.2.3 Gibbard’s noncognitivist normativism
Arguments attributed to Wittgenstein (1953), made prominent in the contemporary literature by Kripke (1982), purport to show that meaning is normative.
One way of understanding this idea is that claims about meaning entail claims about what one ought to do: for example, it is prima facie plausible that if one uses “plus” to mean plus, then (assuming that one also uses “two”, “four”, and “equals” in standard ways, and uses standard English syntax) one ought to accept sentences like “Two plus two equals four”. Gibbard (2012) agrees that someone who makes a claim about meaning is thereby committed to certain “ought” claims. But he develops this view in the context of an expressivist view of normative language. According to Gibbard, normative language, including assertions about “meaning”, expresses a state of mind. There are no facts, states of affairs in the world, that correspond to true “ought” claims. The same goes, on Gibbard’s view, for “means” claims.

I.2.3 Metaphysics
Set aside, for now, anti-realist views that deny that there are semantic properties and relations of the sort discussed above (such as synonymy, anomaly, entailment, etc.). Let’s use the term language stipulatively (though consistently with much of the literature) to pick out the thing (or collection of things) that instantiates semantic properties and relations. What is a language? What sort of thing is (or has parts that are) meaningful, has truth conditions, is synonymous, is ambiguous, and so forth? Theorists have advanced a wide range of answers to this question. (See Ball’s Chapter 14 in this volume for discussion of the significance of this fact.) On a popular view associated with Chomsky, languages (in the sense relevant to scientific linguistics) are states of a particular mental “faculty” or “organ” (e.g. Chomsky 2000a: 168), which is a distinctive part of the innate endowment of the human mind.
(Relatedly, it is sometimes claimed that semantics studies a mental module (Borg 2004, in the sense of Fodor 1983), though Chomsky denies that the language faculty is a module in this sense (Chomsky 2000b: 117–18), and would be particularly sceptical of the idea of a semantic module.) Chomsky contrasts this notion of language, which he calls I-language, with the commonsensical view that languages are social entities, for example “the English language” shared by speakers in a community, a notion which Chomsky calls E-language and regards as too vague and politicized to be an object of serious scientific inquiry. But other theorists have insisted that studying something closer to E-language is possible; see, for example, the discussion of Lewis on convention below. Still others have agreed that linguistics is psychology, while disagreeing with other aspects of Chomsky’s view, for example his claim that the study of language must be “narrow” or “internalistic”, and must therefore eschew notions such as reference (Burge 2007; Ludlow 2011). Glanzberg’s Chapter 7 in this volume defends an internalistic view, while Partee’s Chapter 6, this volume, defends the idea that semantics can study the mind while remaining compatible with externalism. Schwarz’s Chapter 13 in this volume defends an alternative way of thinking of the relation between semantics and psychology. According to the platonist, languages are abstract objects. This leaves room for substantial disagreement about the nature of the objects studied. On one view, associated with Montague and the tradition of model-theoretic semantics, a language is an interpreted formal system: a syntactic theory that gives a recursive specification of the well-formed expressions of the system, together with a model-theoretic interpretation. This view makes languages mathematical entities. An alternative view is that languages are sui generis abstracta (Katz 1981; Soames 1985).
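The Montagovian idea of a language as an interpreted formal system can be made concrete with a minimal sketch (ours, purely for illustration; the toy language, its atoms, and the model are invented): a recursive definition of the well-formed expressions of a small propositional language, paired with a compositional model-theoretic interpretation.

```python
# A toy "interpreted formal system": a recursive syntax plus a
# model-theoretic interpretation. The language and the model are
# invented for illustration only.

# Syntax: expressions are atomic sentence letters, or are built up
# from well-formed parts by "not" and "and".
ATOMS = {"p", "q"}

def well_formed(expr):
    """Recursively specify the well-formed expressions of the system."""
    if isinstance(expr, str):
        return expr in ATOMS
    if isinstance(expr, tuple) and len(expr) == 2 and expr[0] == "not":
        return well_formed(expr[1])
    if isinstance(expr, tuple) and len(expr) == 3 and expr[0] == "and":
        return well_formed(expr[1]) and well_formed(expr[2])
    return False

# Interpretation: a model assigns truth values to atoms; the semantic
# value of a complex expression is computed from the values of its parts.
def value(expr, model):
    if isinstance(expr, str):
        return model[expr]
    if expr[0] == "not":
        return not value(expr[1], model)
    if expr[0] == "and":
        return value(expr[1], model) and value(expr[2], model)
    raise ValueError("ill-formed expression")

model = {"p": True, "q": False}
assert well_formed(("and", "p", ("not", "q")))
assert value(("and", "p", ("not", "q")), model) is True
```

On this picture the language itself is just the pair of the syntax and the interpretation: a mathematical object, independent of any speaker.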
A stark alternative to the idea that semantics studies abstracta is the nominalist idea that semantics studies physical tokens (sounds, marks) produced by speakers. This view was popular in the early twentieth century, but largely fell out of favour as Chomsky’s mentalistic alternative came to the fore (see Katz 1981: ch. 1 for discussion). But in the recent literature, Devitt (2006) defends a related view: according to Devitt, linguistics (including semantics) studies the concrete tokens which are the outputs of linguistic competence. There are a number of further debates about the metaphysics of semantic theorizing. Perhaps the most important of these are about the significance and nature of semantic values. What kind of things are semantic values: ordinary objects and properties, psychological entities such as concepts, mathematical entities such as functions, or something else? What are the relations between the semantic values of complex expressions and the semantic values of their parts? Must every meaningful expression have a semantic value?

I.2.4 The actual language relation
Each of us speaks and understands some languages and not others. Lewis (1969: 177) calls the relation that obtains between a language L and a person or group of people when L is their own language the actual language relation. Suppose we adopt a particular view of the metaphysics of language. What relation obtains between a speaker and a language she speaks? What is the actual language relation? One’s view of the nature of the actual language relation will be constrained to some extent by one’s view of the nature of language. But even once we have settled on a particular view of language, we may have a choice of views of the actual language relation.

I.2.4.1 Representation
Recall Chomsky’s view that a(n I-)language is a state of a mental faculty.
We can give a theory of an I-language by stating a system of rules; Chomsky calls such a system a grammar. There is a simple and straightforward account of the actual language relation on this view: an I-language is one’s own language just in case one’s language faculty is in that state. But this answer is relatively superficial. We might want to know more about the nature of the relevant states. In particular, we might want to ask a question about the relation between I-languages and the grammars that are supposed to characterize them. Call the relation that obtains between an I-language and the grammar that characterizes it the actual grammar relation. What is it for one’s I-language to be in a state that is correctly characterized by a certain grammar? What is the actual grammar relation? Chomsky claims that grammars are internally represented; they are known or cognized by speakers. (“Cognize” is a technical term, designed to avoid objections to the idea that syntactic theory is known in some robust sense by competent speakers who are not syntacticians; cognized information is explicitly represented in some sense, but need not be consciously available (much less justified, etc.).)

I.2.4.2 Convention
Conventionalism is the view that the actual language relation is a matter of communicative conventions that obtain in the speaker’s community. The best-known version of this view is Lewis’s (1983, 1992). On Lewis’s view, language crucially involves conventions of truthfulness (according to which speakers try not to assert falsehoods) and trust (according to which speakers believe what other speakers assert).
The semanticist, then, must describe a mapping between sentences and propositions which is such that a speaker s generally makes an assertion using a sentence just in case that sentence is mapped to a proposition s believes, and when s hears an assertion made using a sentence, she generally comes to believe the proposition which that sentence is mapped to; and such that these facts are conventional (according to Lewis’s (1969) analysis of convention). Grice develops a related view. For Grice, the fundamental notion of meaning relevant to language is a matter of a language user making an utterance with a certain complex intention: roughly, the intention that the audience comes to have a certain attitude as a result of recognizing that the utterer so intends. But this is an account of speaker- or occasion-meaning: it might be idiosyncratic to a particular speaker on a particular occasion, and so might come apart from notions of standing meaning that (plausibly) semantics typically aims to characterize. Grice’s account of standing meaning depends on the notion of having a procedure in one’s repertoire: to a (very rough) first approximation, this is a matter of being disposed to perform the procedure under certain circumstances. The idea is that an utterance type u has a certain meaning p for a group just in case many members of the group have in their repertoires a procedure of using u to speaker mean p, and that this procedure is conventional in the sense that “retention of this procedure [is] for them conditional on the assumption that at least some (other) members of F have, or have had, this procedure in their repertoires” (1957: 127). (Further refinements, which Grice only sketches, are necessary in order to explain the relationship between the meanings of complex expressions and the meanings of their parts.)
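The Lewisian object the semanticist is to describe, a mapping from sentences to propositions, can be sketched in a toy model (ours, not Lewis’s; the sentences, worlds, and propositions are invented for illustration), with propositions modelled coarsely as sets of possible worlds.

```python
# A toy Lewis-style "language": a mapping from sentences to propositions,
# with propositions modelled as sets of possible worlds. The sentences
# and worlds are invented purely for illustration.

WORLDS = {"w1", "w2", "w3"}  # the space of possible worlds in the model

L = {
    "It is raining": {"w1", "w2"},
    "It is cold": {"w2", "w3"},
    "It is raining and it is cold": {"w2"},
}

def true_at(language, sentence, world):
    """A sentence is true at a world iff that world is in the proposition
    the language assigns to the sentence."""
    return world in language[sentence]

def entails(language, s1, s2):
    """s1 entails s2 iff every world where s1 is true is one where s2 is true."""
    return language[s1] <= language[s2]

def synonymous(language, s1, s2):
    """In this coarse-grained model, sentences are synonymous iff they are
    mapped to the same proposition."""
    return language[s1] == language[s2]

print(true_at(L, "It is raining", "w1"))                         # True
print(entails(L, "It is raining and it is cold", "It is cold"))  # True
print(synonymous(L, "It is raining", "It is cold"))              # False
```

Note that on this coarse-grained picture any two sentences true in exactly the same worlds count as synonymous; much of the philosophical debate about propositions concerns whether meanings should be individuated more finely than this.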
i... interpretation We have already noted Davidson’s view that a semantic theory must state something that would enable someone who knew it to interpret speakers of a language. For Lewis, a language is a function from sentences to propositions; Davidson of course rejects the idea that a semantic theory should describe such a function, claiming instead that a truth theory could do the work he sets for semantics. But what is stated by a Davidsonian truth theory is the closest analogue in Davidson’s view to a language in Lewis’s view; so the analogue of the actual language relation is the relation that a speaker s stands in to a truth theory t exactly when someone who knows t would be in a position to interpret s’s utterances. Exactly what such a theory would require depends on one’s view of interpretation; Davidson himself suggests both that interpretation is a matter of being able to say what a speaker’s utterances mean (1984b: 125), and that interpretation requires an interpreter to “be able to understand any of the infinity of sentences the speaker might utter” (1984b: 126). OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi  derek ball and brian rabern Note how this interpretativist view differs from mentalism. Davidson does not claim that the theory is in fact one that actual speakers know or use in interpretation; they claim only that it could in principle be so used. There is a large literature discussing the adequacy of Davidson’s own truth-theoretic proposal (see e.g. Foster 1976; Soames 1992). A more general worry about interpretativist views would be that understanding is not a matter of theoretical knowledge, so that there is no theory knowledge of which would suffice for interpreting utterances. One version of this objection can be found in Dummett: understanding is a practical ability, which can be thought of as a matter of knowing certain propositions or a certain theory (e.g. 
a T-theory); but a complete account would require saying “what it is for [a language user] to have that knowledge, that is, what we are taking as constituting a manifestation of a knowledge of those propositions” (Dummett 1993: 21). Glüer’s Chapter 8 in this volume discusses the role of the interpreter in Davidson’s semantics, and its consequences for the metaphysics of meaning.

I.2.5 Epistemology and methodology
How do we choose between different semantic theories? What evidence bears on semantics? A thorough treatment of these questions could fill a volume (or many); exactly how the question should even be posed will depend on (among other things) our stance on the issue of realism. We will content ourselves with distinguishing several types of question in the area and sketching some of the main issues.

I.2.5.1 Metaepistemology
Recall Fodor’s complaint against the so-called “Wrong View”: it places a priori constraints on what evidence is relevant to linguistic theorizing, when good scientific method would leave this question open, to be answered empirically as investigation proceeds. One main point of disagreement between Fodor’s “Right” and “Wrong” views is, then: how do we know what evidence bears on linguistic theorizing? Is this a matter for stipulation, so that, for example, semantics is just by definition in the business of systematizing intuitions about meanings? Or is it an empirical matter, to be decided by “what works” as the science progresses?

I.2.5.2 Evidence
What evidence bears on semantic claims? For example, should we focus on elicited native speaker judgements (or “intuitions”)?40 If so, which judgements matter: judgements of truth values, felicity, truth conditions, appropriate translations or paraphrases, ambiguity, available readings, entailment relations? (See, e.g., Tonhauser and Matthewson 2015.) Or should we eschew elicited judgements in favour of corpus data?
How do data about non-linguistic behaviours (such as those behaviours begun in response to commands), language acquisition, or the neurobiology of language bear on semantics? How are psycholinguistic data, such as studies of reaction times, relevant?

40 See Dowty et al. (1981: 2–3) and Heim (2004): “The basic data of semantics are speakers’ judgements about the truth and falsity of actual or hypothetical utterances”.

I.2.5.3 Methodology
How should we go about studying semantics? To what extent are “armchair” methods (of the sort typically employed by mathematicians and philosophers) appropriate? To what extent are empirical, especially experimental, methods required? (See Jacobson’s Chapter 1 in this volume for discussion of this issue.)

I.2.5.4 Epistemology
What is the nature of our knowledge of, or justification for, semantic claims? In particular, to what extent does this justification rely on experience? On one view (perhaps most naturally associated with views on which semantics studies an abstract object), our justification for semantic claims (like our justification for mathematical claims) is a priori. But others, perhaps especially those who emphasize the need for empirical evidence (experiment, corpus data, etc.), will regard this justification as a posteriori.

I.2.6 Combining views
We have discussed five points of disagreement among theories of semantics: explananda, realism, metaphysics, the actual language relation, and epistemology. Theorists who take a stand on one are likely also to take a stand on others, and there are a number of well-known “packages” associated with famous names in the literature. (Announcing that one is a Chomskian is a way to give one’s interlocutors a good idea about where one stands on all of these issues.)
And some theorists have seen very tight connections between their views on the actual language relation and their views on the nature of language: for example, Larson and Segal’s (1995) view that the semanticist is making explicit a theory that is tacitly known leads them to maintain that a semantic theory should be a Davidsonian T-theory, while Jackendoff (1990) argues from a similar mentalistic starting point to the claim that a semantic theory should be a description of conceptual structures. In other cases, views are lumped together for reasons that are less clear. We began with Fodor’s “Right View” and “Wrong View”, each of which clearly lumps together views on several issues. And Fodor is not alone: platonist views are often presented, both by their defenders and by their critics (e.g. Antony 2003; Laurence 2003; García Carpintero 2012), as contrasting or conflicting with the Chomsky-inspired view that semantics is devoted to the study of an aspect of the mind. But it is clear, at least in principle, that there are versions of these views that need not conflict: for example, we can combine a platonist view of the nature of language with a view of the actual language relation on which a person speaks a language iff she cognizes it, or with a view on which languages are used to model or measure certain aspects of the minds of speakers (see Glanzberg’s, Yalcin’s, and Ball’s Chapters 7, 12, and 14, respectively, in this volume for discussion of related issues). We should remember, therefore, that (in many or most cases) views on the various issues can be separated. We hope that one of the effects of the essays in this volume is to raise awareness of the sheer variety of possible views of the phenomena. Debate should not be restricted to the merits of well-known packages of views; many positions remain un- or under-explored.
Postscript
It is common to distinguish normative or prescriptive philosophers of science, who try to set down methodological rules that scientists must or should follow if they are to succeed, from descriptive philosophers of science, who are convinced that science is on the whole successful and seek to give an account of what scientists have in fact done to achieve this success. At least since the 1960s, philosophers of science have given the descriptive approach pride of place. Of course, no one thinks that it is impossible that scientists make mistakes; but the obvious success of science has made Baconian calls for wholesale methodological reform untenable. Still, there are holdouts, particularly in those areas of inquiry which were traditionally parts of philosophy and have only recently come under the umbrella of some science. Linguistics is a case in point. Though perhaps no philosopher has the chutzpah to prescribe a methodology for phonology or even for syntax, a great deal of philosophical work on meaning (the philosophy of semantics) still takes a prescriptive form. Thus certain Davidsonians argue that semantic theory must take the form of a T-theory, and must not assign entities as the meanings of predicates; Russellians argue that semantic theory must assign structured propositions to sentences; semantic minimalists argue that semantics must not attempt to account for certain contextual phenomena; and so forth. Some Wittgensteinians even argue that formal semantics is impossible. To some extent, this attitude is justifiable: semantics is, if no longer in its infancy, still at least in its childhood, and there remains a considerable degree of foundational disagreement among its practitioners. Moreover, philosophers like Frege, Tarski, Carnap, Montague, Lewis, Katz, and Davidson were the parents of present-day formal semantics, and philosophers still make significant contributions to first-order semantic theory.
But the sophistication and success of contemporary semantics have grown with startling rapidity in the past half-century and are now too great to be denied. We now have detailed and sophisticated theories of phenomena that were not even known to exist in the 1970s. This success has so impressed some philosophers that they have attempted to apply the conclusions of formal semantics to philosophical debates. Thus Stanley (2005) argues that no analogue of the putative context-sensitivity of “knows” is known to semantic theory; Stanley and Williamson (2001) argue that knowledge how is a propositional attitude on the basis of formal semantic theories of embedded question constructions; and King (2003) argues that propositions are not true and false only relative to times on the basis of contemporary formal semantic work on tense. Such arguments seem like a step in the right direction, but many have been criticized for taking on board the undefended philosophical presuppositions of the semanticists they cite, presuppositions which are inessential to the adequacy of the semantics. Even in the physical realm, we cannot simply take the pronouncements of physicists at face value; physicists are as fallible as anyone, and some of what passes for physics is little more than very questionable metaphysics. Still, it must be admitted that physicists are doing something right! How can we tell which of the conclusions of physicists are really motivated by their work as physicists? There is no recipe but careful examination of physical theory, that is to say, descriptive philosophy of science. We say the same is true of natural language semantics: the time has come for some descriptive philosophy of science. What are semanticists doing that has gone so right? Where are they needlessly importing questionable metaphysical (or epistemological, etc.) assumptions?
And how does this relate to the semantic ideas of philosophers: for example, the sorts of debates about reference and description, and about the nature of propositions, that have been the focus of so much philosophical literature in the past forty years? We present this volume as a step on the road to answering these questions.

References
Ajdukiewicz, K. (1935). Die syntaktische Konnexität. Studia Philosophica 1, 1–27.
Alston, W. P. (2000). Illocutionary Acts and Sentence Meaning. Ithaca, NY: Cornell University Press.
Antony, L. (2003). Rabbit-pots and supernovas: On the relevance of psychological data to linguistic theory. In A. Barber (ed.), Epistemology of Language. Oxford: Oxford University Press, pp. 47–68.
Armstrong, J. and Stanley, J. (2011). Singular thoughts and singular propositions. Philosophical Studies 154(2), 205–22.
Backus, J. W., Bauer, F. L., Green, J. et al. (1963). Revised report on the algorithmic language Algol 60. The Computer Journal 5(4), 349–67.
Bar-Hillel, Y. (1953). A quasi-arithmetical notation for syntactic description. Language 29(1), 47–58.
Bar-Hillel, Y. (1954). Indexical expressions. Mind 63(251), 359–79.
Barber, A. (ed.) (2003). Epistemology of Language. Oxford: Oxford University Press.
Bjerring, J. C. and Schwarz, W. (2017). Granularity problems. The Philosophical Quarterly 67(266), 22–37.
Blumson, B. (2010). Pictures, perspective and possibility. Philosophical Studies 149(2), 135–51.
Borg, E. (2004). Minimal Semantics. Oxford: Oxford University Press.
Brandom, R. B. (1994). Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge, MA: Harvard University Press.
Burge, T. (2007). Wherein is language social? In T. Burge (ed.), Foundations of Mind. Oxford: Oxford University Press, pp. 275–90.
Carnap, R. (1937/1959). Logical Syntax of Language. Paterson, NJ: Littlefield, Adams, and Company.
Carnap, R. (1947).
Meaning and Necessity: A Study in Semantics and Modal Logic. Chicago: University of Chicago Press.
Chakravartty, A. (2014). Scientific realism. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, spring edn. Stanford: Stanford University.
Charlow, N. (2014). Logic and semantics for imperatives. Journal of Philosophical Logic 43(4), 617–64.
Chierchia, G. and McConnell-Ginet, S. (2000). Meaning and Grammar: An Introduction to Semantics. Cambridge, MA: MIT Press.
Chomsky, N. (1964). Current Issues in Linguistic Theory. The Hague: Mouton and Co.
Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, N. (2000a). Internalist explorations. In New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press, pp. 164–94.
Chomsky, N. (2000b). Language as a natural object. In New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press, pp. 106–33.
Chomsky, N. (2000). New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press.
Copeland, B. (2002). The genesis of possible worlds semantics. Journal of Philosophical Logic 31(2), 99–137.
Cresswell, M. (1982). The autonomy of semantics. In S. Peters and E. Saarinen (eds), Processes, Beliefs, and Questions, Vol. 16. Amsterdam: Springer Netherlands, pp. 69–86.
Cresswell, M. (1990). Entities and Indices. Dordrecht: Kluwer Academic.
Davidson, D. (1967). Truth and meaning. Synthese 17(1), 304–23.
Davidson, D. (1984a). Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
Davidson, D. (1984b). Radical interpretation. In Inquiries into Truth and Interpretation. Oxford: Oxford University Press, pp. 125–40.
Davidson, D. (1984c). Reply to Foster. In Inquiries into Truth and Interpretation. Oxford: Oxford University Press, pp. 171–9.
Davies, M. and Humberstone, L. (1980). Two notions of necessity. Philosophical Studies 38(1), 1–31.
Dever, J. (1998). Variables. PhD thesis, University of California, Berkeley.
Dever, J. (1999).
Compositionality as methodology. Linguistics and Philosophy 22(3), 311–26.
Dever, J. (2006). Living the life aquatic: Does presupposition accommodation mandate dynamic semantics? Unpublished manuscript. https://www.dropbox.com/s/2fgkop5u8pll5b2/OSU%20Accommodation3.pdf
Devitt, M. (2006). Ignorance of Language. Oxford: Oxford University Press.
Dowty, D. (2007). Compositionality as an empirical problem. In C. Barker and P. Jacobson (eds), Direct Compositionality. Oxford: Oxford University Press, pp. 14–23.
Dowty, D. R., Wall, R., and Peters, S. (1981). Introduction to Montague Semantics. Synthese Language Library 11. Dordrecht: Reidel.
Duhem, P. (1981). The Aim and Structure of Physical Theory. New York: Athenaeum.
Dummett, M. (1973). Frege: Philosophy of Language. London: Gerald Duckworth.
Dummett, M. (1993). What is a theory of meaning? (I). In M. Dummett (ed.), The Seas of Language. Oxford: Oxford University Press, pp. 1–33.
Elbourne, P. (2005). Situations and Individuals. Cambridge, MA: The MIT Press.
Evans, G. (1977). Pronouns, quantifiers, and relative clauses (I). Canadian Journal of Philosophy 7(3), 467–536.
Evans, G. (1979). Reference and contingency. The Monist 62, 161–89.
Fodor, J. A. (1983). The Modularity of Mind. Cambridge, MA: The MIT Press.
Fodor, J. (1985). Some notes on what linguistics is about. In J. J. Katz (ed.), The Philosophy of Linguistics. Oxford: Oxford University Press, pp. 146–60.
Foster, J. (1976). Meaning and truth theory. In G. Evans and J. McDowell (eds), Truth and Meaning: Essays in Semantics. Oxford: Oxford University Press.
Frege, G. (1923/1963). Logische Untersuchungen. Dritter Teil: Gedankengefüge. Beiträge zur Philosophie des deutschen Idealismus III, 36–51. (Translation by R. Stoothoff, Compound thoughts. Mind 72 (1963): 1–17.)
Frege, G. (1893/2013). Grundgesetze der Arithmetik. Jena: Hermann Pohle, 1893/1903. (Translation by Philip A.
Ebert and Marcus Rossberg, Frege: Basic Laws of Arithmetic. Oxford: Oxford University Press, 2013.)
García Carpintero, M. (2012). Foundational semantics I: Descriptive accounts. Philosophy Compass 7(6), 397–409.
Gibbard, A. (2003). Thinking How to Live. Cambridge, MA: Harvard University Press.
Gibbard, A. (2012). Meaning and Normativity. Oxford: Oxford University Press.
Glanzberg, M. (2009). Semantics and truth relative to a world. Synthese 166(2), 281–307.
Greenberg, G. (2013). Beyond resemblance. Philosophical Review 122(2), 215–87.
Grice, P. (1957). Meaning. The Philosophical Review 66(3), 377–88.
Groenendijk, J. and Stokhof, M. (1991). Dynamic predicate logic. Linguistics and Philosophy 14(1), 39–100.
Hamblin, C. L. (1973). Questions in Montague English. Foundations of Language 10(1), 41–53.
Harris, D. (2017). The history and prehistory of natural language semantics. In S. Lapointe and C. Pincock (eds), Innovations in the History of Analytical Philosophy. London: Palgrave Macmillan.
Hartshorne, C. and Weiss, P. (eds) (1933). Collected Papers of Charles Sanders Peirce, Volume IV: The Simplest Mathematics. Cambridge, MA: Harvard University Press.
Heim, I. (1982). The semantics of definite and indefinite noun phrases. PhD thesis, University of Massachusetts Amherst.
Heim, I. (2004). Lecture notes on indexicality. Notes for class taught at MIT. Unpublished, available online at http://web.mit.edu/24.954/www/files/ind_notes.html.
Heim, I. and Kratzer, A. (1998). Semantics in Generative Grammar. Oxford: Blackwell Publishers.
Hintikka, J. (1957). Modality as referential multiplicity. Ajatus 20, 49–64.
Horwich, P. (2005). Reflections on Meaning. Oxford: Oxford University Press.
Jackendoff, R. (1990). Semantic Structures. Cambridge, MA: The MIT Press.
Jackson, F. (2001). Locke-ing onto content. Royal Institute of Philosophy Supplement 49, 127–43.
Jacobson, P. (2014). Compositional Semantics: An Introduction to the Syntax/Semantics Interface.
Oxford: Oxford University Press.
Kamp, H. (1967). The treatment of “now” as a 1-place sentential operator. Multilith. Los Angeles: University of California, Los Angeles.
Kamp, H. (1971). Formal properties of “now”. Theoria 37, 227–74.
Kamp, H. (1981). A theory of truth and semantic representation. In Formal Semantics. Blackwell, pp. 189–222.
Kamp, H. and Reyle, U. (1993). From Discourse to Logic. Dordrecht: Kluwer.
Kaplan, D. (1989). Demonstratives. In J. Almog, J. Perry, and H. Wettstein (eds), Themes from Kaplan. Oxford: Oxford University Press, pp. 481–563.
Kaplan, D. (2004). The meaning of “ouch” and “oops”. Transcribed by Elizabeth Coppock, Howison Lecture in Philosophy. https://youtu.be/iaGRLlgPl6w
Karttunen, L. (1977). Syntax and semantics of questions. Linguistics and Philosophy 1(1), 3–44.
Katz, J. J. (1981). Language and Other Abstract Objects. Totowa, NJ: Rowman and Littlefield.
Katz, J. J. (ed.) (1985). The Philosophy of Linguistics. Oxford: Oxford University Press.
Katz, J. J. (1990). The Metaphysics of Meaning. Cambridge, MA: The MIT Press.
Kayne, R. S. (1983). Connectedness and Binary Branching. Dordrecht: Foris Publications.
Kennedy, C. (2007). Vagueness and grammar: The semantics of relative and absolute gradable adjectives. Linguistics and Philosophy 30(1), 1–45.
King, J. C. (2003). Tense, modality, and semantic values. Philosophical Perspectives 17, 195–246.
King, J. (2007). The Nature and Structure of Content. Oxford: Oxford University Press.
King, J. C. (2015). Acquaintance, singular thought, and propositional constituency. Philosophical Studies 172(2), 543–60.
Kripke, S. (1959). A completeness theorem in modal logic. The Journal of Symbolic Logic 24(1), 1–14.
Kripke, S. (1963). Semantical considerations on modal logic. Acta Philosophica Fennica 16, 83–94.
Kripke, S. A. (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.
Kripke, S. A.
(1982). Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard University Press. Larson, R. and Segal, G. (1995). Knowledge of Meaning: An Introduction to Semantic Theory. Cambridge, MA: MIT Press. Lasersohn, P. (2016). Subjectivity and Perspective in Truth-Theoretic Semantics. Oxford: Oxford University Press. Laurence, S. (2003). Is linguistics a branch of psychology? In A. Barber (ed.) (2003). Epistemology of Language. Oxford: Oxford University Press, pp. 69–106. Leplin, J. (1984). Introduction, In J. Leplin (ed.), Scientific Realism. Berkeley: University of California Press, pp. 1–7. Lewis, D. (1969). Convention. Cambridge, MA: Harvard University Press. Lewis, D. (1970). General semantics. Synthese 22(1), 18–67. Lewis, D. (1980). Index, context and content. In S. Kanger and S. Ohman (eds), Philosophy and Grammar. Amsterdam: Reidel, pp. 79–100. Lewis, D. (1983). Languages and language. Philosophical Papers Vol. I. New York: Oxford University Press, pp. 163–88. Lewis, D. K. (1986). On the Plurality of Worlds. Cambridge: Cambridge University Press. Lewis, D. (1992). Meaning without use: Reply to Hawthorne. Australasian Journal of Philosophy 70(1), 106–10. OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi introduction to the science of meaning  Lewis, K. S. (2014). Do we need dynamic semantics? In A. Burgess and B. Sherman (eds), Metasemantics. Oxford: Oxford University Press, pp. 231–58. Lewis, K. S. (forthcoming). Dynamic semantics. Oxford Handbooks Online. www. oxfordhandbooks.com. Ludlow, P. (2011). The Philosophy of Generative Linguistics. Oxford: Oxford University Press. MacFarlane, J. (2014). Assessment Sensitivity: Relative Truth and its Applications. Oxford: Oxford University Press. MacFarlane, J. (2016). Vagueness as indecision. Aristotelian Society Supplementary Volume 90(1), 255–83. Menzel, C. (2016). Possible worlds. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, winter edn. 
Stanford: Metaphysics Research Lab, Stanford University. Montague, R. (1960). Logical necessity, physical necessity, ethics, and quantifiers, Inquiry 3(1), 259–69. (Reprinted in R. Montague and R. Thomason (1974). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press.) Montague, R. (1968). Pragmatics. In R. Klibansky (ed.), Contemporary Philosophy: A Survey, Vol. 1, La Nuova Italia Editrice, pp. 102–22. (Reprinted in R. Montague and R. Thomason (1974). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press 1974.) Montague, R. (1970a). Pragmatics and intensional logic. Synthese 22(1), 68–94. (Reprinted in R. Montague and R. Thomason (1974). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press.) Montague, R. (1970b). Universal grammar. Theoria 36(3), 373–98. (Reprinted in R. Montague and R. Thomason (1974). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press.) Montague, R. (1973). The proper treatment of quantification in ordinary English. Approaches to Natural Language 49, 221–42. (Reprinted in R. Montague and R. Thomason (1974). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press.) Montague, R. and Thomason, R. (1974). Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press. Neale, S. (1990). Descriptions. Cambridge, MA: The MIT Press. Ninan, D. (2010). Semantics and the objects of assertion. Linguistics and Philosophy 33(5), 335–80. Øhrstrøm, P. and Hasle, P. (1995). Temporal Logic: From Ancient Ideas to Artificial Intelligence. Dordrecht: Kluwer Academic Publishers. Partee, B. (1984). Compositionality. In F. Landman and F. Veltman (eds), Varieties of Formal Semantics: Proceedings of the 4th Amsterdam Colloquium. Foris Publishers, pp. 281–311. Partee, B. (2004). Compositionality in Formal Semantics: Selected Papers. Oxford: WileyBlackwell. Partee, B. H. 
(2011). Formal semantics: Origins, issues, early impact. Baltic International Yearbook of Cognition, Logic and Communication 6(1), 13. Pickel, B. (forthcoming). Structured propositions in a generative grammar. Mind. Pietroski, P. M. (2003). The character of natural language semantics. In A. Barber (ed.) (2003). Epistemology of Language. Oxford: Oxford University Press, pp. 217–56. Portner, P. (2005). What is Meaning? Fundamentals of Formal Semantics. Oxford: Blackwell. OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi  derek ball and brian rabern Portner, P. (2016). Imperatives. In M. Aloni and P. Dekker (eds), The Cambridge Handbook of Formal Semantics. Cambridge: Cambridge University Press, pp. 593–626. Portner, P. and Partee, B. (2002). Formal Semantics: The Essential Readings. Oxford: WileyBlackwell. Potts, C. (2007). The expressive dimension. Theoretical linguistics 33(2), 165–98. Prior, A. (1956). Modality and quantification in S5. The Journal of Symbolic Logic 21(1), 60–2. Prior, A. (1957). Time and Modality. Oxford: Oxford University Press. Prior, A. (1968). Egocentric logic. Nous 2(3), 191–207. Psillos, S. (1999). Scientific Realism: How Science Tracks Truth. London: Routledge. Quine, W. (1960). Word and Object. Cambridge, MA: The MIT Press. Rabern, B. (2012). Against the identification of assertoric content with compositional value. Synthese 189(1), 75–96. Rabern, B. (2013). Monsters in Kaplan’s logic of demonstratives. Philosophical Studies 164(2), 393–404. Rabern, B. (2016). The history of the use of !."-notation in natural language semantics. Semantics & Pragmatics 9, 12. Rabern, B. (2017). A bridge from semantic value to content. Philosophical Topics 45(2), 181–207. Reichenbach, H. (1947). Elements of Symbolic Logic. New York: Macmillan. Ripley, D. (2012). Structures and circumstances: Two ways to fine-grain propositions. Synthese 189(1), 97–118. Rothschild, D. and Yalcin, S. (2016). Three notions of dynamicness in language. 
Linguistics and Philosophy 39(4), 333–55. Rothschild, D. and Yalcin, S. (2017). On the dynamics of conversation. Nous 51(1), 24–48. Salmon, N. (1986). Frege’s Puzzle. Cambridge, MA: MIT Press. Schlenker, P., Chemla, E., Arnold, K. et al. (2014). Monkey semantics: Two “dialects” of campbell’s monkey alarm calls. Linguistics and Philosophy 37(6), 439–501. Scott, D. (1970). Advice on modal logic. In K. Lambert (ed.), Philosophical Problems in Logic: Some Recent Developments. Amsterdam: D. Reidel, pp. 143–73. Seyfarth, R. M., Cheney, D. L., and Marler, P. (1980). Monkey responses to three different alarm calls: Evidence of predator classification and semantic communication. Science 210(4471), 801–3. Skyrms, B. (2010). Signals: Evolution, Learning, and Information. Oxford: Oxford University Press. Soames, S. (1985). Semantics and psychology. In J. J. Katz (ed.), The Philosophy of Linguistics. Oxford: Oxford University Press, pp. 204–26. Soames, S. (1987). Direct reference, propositional attitudes, and semantic content. Philosophical Topics 15(1), 47–87. Soames, S. (1992). Truth, meaning, and understanding. Philosophical Studies 65, 17–35. Soames, S. (2010). Philosophy of Language. Princeton: Princeton University Press. Stalnaker, R. C. (1970). Pragmatics. Context and Content: Essays on Intentionality in Speech and Thought, Oxford University Press, pp. 31–46. Stalnaker, R. (1978). Assertion. Syntax and Semantics 9, 315–32. Stalnaker, R. (1984). Inquiry. Cambridge: Cambridge University Press. Stalnaker, R. (2014). Context. Oxford: Oxford University Press. OUP UNCORRECTED PROOF – FIRST PROOF, 8/3/2018, SPi introduction to the science of meaning  Stanley, J. (1997). Rigidity and content. In R. Heck (ed.), Language, Thought, and Logic: Essays in Honor of Michael Dummett. Oxford: Oxford University Press, pp. 131–56. Stanley, J. (2005). Knowledge and Practical Interests. Oxford: Oxford University Press. Stanley, J. and Williamson, T. (2001). Knowing how. 
Journal of Philosophy 98(8), 411–44. Stich, S. (1985). Grammar, psychology, and indeterminacy. In J. J. Katz (ed.), The Philosophy of Linguistics. Oxford: Oxford University Press, pp. 126–45. Stoljar, D. (2015). Chomsky, London and Lewis. Analysis 75(1), 16. Strawson, P. (1950). On referring. Mind 59(235), 320–44. Tarski, A. (1936). The concept of truth in formalized languages. Logic, Semantics, Metamathematics, Oxford University Press, pp. 152–278. Tonhauser, J. and Matthewson, L. (2015). Empirical evidence in research on meaning. http://ling.auf.net/lingbuzz/002595. van Benthem, J. (1986). Essays in Logical Semantics. Amsterdam: Springer. van Fraassen, B. C. (1980). The Scientific Image. Oxford: Oxford University Press. Wittgenstein, L. (1922). Tractatus Logico-Philosophicus. trans. C. K. Ogden. London: Routledge & Kegan Paul. Wittgenstein, L. (1953). Philosophical Investigations, trans. G. E. M. Anscombe. Oxford: Basil Blackwell. Wright, C. (1993). Realism, Meaning and Truth, 2nd edn. Oxford: Blackwell. Yalcin, S. (2012). Introductory notes on dynamic semantics. In D. G. Fara and G. Russell (eds), The Routledge Companion to the Philosophy of Language. London: Routledge. Yalcin, S. (2014). Semantics and metasemantics in the context of generative grammar. In A. Burgess and B. Sherman (eds), Metasemantics: New Essays on the Foundations of Meaning. Oxford: Oxford University Press. Zimmermann, T. E. and Sternefeld, W. (2013). Introduction to Semantics. Berlin: De Gruyter Mouton.