Salvatore Florio, Stewart Shapiro, and Eric Snyder
It is widely (but not universally) held that logical consequence is determined (at least in part) by the meanings of the logical terminology. One might think that this is an empirical claim that can be tested by the usual methods of linguistic semantics. Yet most philosophers who hold such views about logic do not engage in empirical research to test the main thesis. Sometimes the thesis is just stated, without argument, and sometimes it is argued for on a priori grounds. Moreover, many linguistic studies of words like “or”, the conditional, and the quantifiers run directly contrary to the thesis in question.
From the other direction, much of the work in linguistic semantics uses logical symbols. For example, it is typical for a semanticist to write a biconditional, in a formal language, whose left-hand side has a symbol for the meaning of an expression in natural language and whose right-hand side is a formula consisting of lambda-terms and other symbols from standard logic: the quantifiers ∀ and ∃, and the connectives ¬, →, ∧, ∨, ↔. This enterprise thus seems to presuppose that readers already understand the formal logical symbols, and the semanticist uses this understanding to shed light on the meanings of expressions in natural language. This occurs even when the natural language expressions under analysis are the very terms corresponding to the logical ones: “or”, “not”, “for all”, and the like.
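As a schematic illustration of the practice just described (textbook-style lexical entries, not attributed to any particular semanticist), one might write:

```latex
% Requires the stmaryrd package for \llbracket / \rrbracket.
% A textbook-style entry for "or" (type <t,<t,t>>):
\[
  \llbracket \text{or} \rrbracket \;=\; \lambda p .\, \lambda q .\, p \lor q
\]
% And for the determiner "every" (type <<e,t>,<<e,t>,t>>):
\[
  \llbracket \text{every} \rrbracket \;=\;
    \lambda P .\, \lambda Q .\, \forall x \, \bigl( P(x) \rightarrow Q(x) \bigr)
\]
```

Here the right-hand sides are formulas of a standard typed lambda calculus with the familiar quantifiers and connectives, so understanding the entries presupposes understanding those logical symbols.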
The purpose of this talk is to explore the relation between logic and the practice of empirical semantics, hoping to shed light, in some way, on both enterprises.
In the mid-1970s, Gregory Chaitin proved a novel incompleteness theorem, formulated in terms of Kolmogorov complexity, a measure of complexity that features prominently in algorithmic information theory. Chaitin further claimed that his theorem provides insight into both the source and scope of the incompleteness phenomenon, a claim that has been subject to much criticism. In this talk, I consider a new strategy for vindicating Chaitin’s claims, one informed by recent work of Bienvenu, Romashchenko, Shen, Taveneaux, and Vermeeren that extends and refines Chaitin’s incompleteness theorem. As I argue, this strategy, though more promising than previous attempts, fails to vindicate Chaitin’s claims. Lastly, I will suggest an alternative interpretation of Chaitin’s theorem, according to which the theorem indicates a trade-off that comes from working with a sufficiently strong definition of randomness—namely, that we lose the ability to certify randomness.
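For orientation, here is one standard way of stating the central notions (a textbook formulation, not taken from the talk itself):

```latex
% Kolmogorov complexity of a string x, relative to a fixed universal machine U:
\[
  C(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
\]
% One standard formulation of Chaitin's incompleteness theorem: for any
% consistent, computably axiomatized theory T interpreting enough arithmetic,
% there is a constant c_T (depending on T) such that T proves no statement
% of the form
\[
  C(x) > c_T ,
\]
% even though this inequality is true for all but finitely many strings x.
```

The trade-off mentioned above can then be put as follows: although almost every string is random in the sense of having high complexity, no sufficiently strong theory can certify any particular string as exceeding its constant.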
Wh-questions with the modal verb ‘can’ admit both mention-some (MS) and mention-all (MA) answers. This paper argues that we should treat MS as a grammatical phenomenon, primarily determined by the grammar of the wh-interrogative. I assume that MS and MA answers can be modeled using the same definition of answerhood (Fox 2013) and attribute the MS/MA ambiguity to structural variations within the question nucleus. The variations are: (i) the scope ambiguity of the higher-order wh-trace, and (ii) the absence/presence of an anti-exhaustification operator. However, treating MS answers as complete answers in this way contradicts the widely adopted analysis of uniqueness effects in questions of Dayal 1996, according to which the uniqueness effects of singular which-phrases arise from an exhaustivity presupposition, namely that a question must have a unique exhaustive true answer. To solve this dilemma, I propose that question interpretations presuppose ‘Relativized Exhaustivity’: roughly, the exhaustivity in questions is evaluated relative to the accessible worlds as opposed to the anchor/utterance world. Relativized Exhaustivity preserves the merits of Dayal’s exhaustivity presupposition while permitting MS; moreover, it explains the local-uniqueness effects in modalized singular wh-questions.
The speaker also has a relevant manuscript on Lingbuzz: https://ling.auf.net/lingbuzz/005322
Cooperation and Determining When Merely Verbal Disputes are Worthwhile
Teresa Kouri Kissel
Merely verbal disputes are often thought to be disputes about language that are not worthwhile. Recent work suggests that some merely verbal disputes may not be problematic in this way (see, for example, Balcerak Jackson (2014), Belleri (2020), and Mankowitz (2021)). In this paper, I propose that this recent work misses one crucial point: interlocutors have to cooperate in the right kinds of ways in order for a dispute to be worthwhile. Using the question-under-discussion framework developed in Roberts (2012), I provide a form of cooperation which, I show, can distinguish between merely verbal disputes that are and are not worthwhile.
If this paper is correct that what sometimes makes disputes about language worthwhile is that the interlocutors are willing to cooperate in the right kinds of ways, then there is a critical upshot: interlocutors can control whether their dispute is worth their time. That is, if interlocutors decide to treat what the other is saying as true for the purposes of the conversation, or if they manage to reach a compromise about some things they are both willing to accept as true, then they can go from having a worthless dispute to having a worthwhile one.
Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. One can implement the deflationist insight in the pragmatist’s theory of content by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. There are two upshots. First, a new diagnosis of the Liar, revenge paradoxes, and attendant paradoxes: the paradoxes require that truth rules preserve evidence, but they only preserve commitment. Second, one straightforwardly obtains axiomatisations of several supervaluational hierarchies, answering the question of how such theories are to be naturally axiomatised. This is joint work with Luca Incurvati (Amsterdam).
Salvatore Florio, Stewart Shapiro, and Eric Snyder
Atomistic classical mereology and plural logic provide two alternative frameworks for the analysis of plurals in natural language. It is a matter of dispute which framework is preferable. From the formal point of view, however, the two frameworks can be shown to be definitionally equivalent. So they have the same coverage: there is a range of data that both capture correctly and a range of data that both fail to capture or get wrong. We argue that the tie is broken when we consider a wider range of linguistic phenomena, such as mass nouns and group nouns. Mereology is more flexible than plural logic and is thus more easily adapted to account for these richer fragments of natural language.
The aim of this talk is to examine the class of Gentzen-style sequent calculi in which Cut is admissible but not derivable and which prove all the (finite) inferences usually taken to characterize Classical Logic, conceived with conjunctively read multiple premises and disjunctively read multiple conclusions. We will do this starting from two different calculi, both equipped with Identity and the Weakening rules in unrestricted form. First, we will start with the usual introduction rules and consider what expansions thereof are appropriate. Second, we will start with the usual elimination or inverted rules and consider what expansions thereof are appropriate. Expansions, in each case, may or may not consist of additional standard or non-standard introduction or elimination rules, as well as of restricted forms of Cut.
Murzi & Rossi (2020) put forward a recipe for generating revenge arguments against any non-classical theory of semantic notions that can recapture classical logic for a set of sentences X provided X is closed under certain classical-recapturing principles. More precisely, Murzi & Rossi show that no such theory can be non-trivially closed under natural principles for paradoxicality and unparadoxicality.
In a recent paper, Lucas Rosenblatt objects that Murzi & Rossi’s principles are not so natural, and that non-classical theories can express perfectly adequate, and yet unparadoxical, notions of paradoxicality.
I argue that Rosenblatt’s strategy effectively amounts to fragmenting the notion of paradoxicality, much in the way Tarski’s treatment of the paradoxes fragments the notion of truth. Along the way, I discuss a different way of resisting Murzi & Rossi’s revenge argument, due to Luca Incurvati and Julian Schlöder, that doesn’t fragment the notion of paradoxicality, but that effectively bans paradoxical instances of semantic rules within subproofs, on the assumption that they are not evidence-preserving.