Distinguishing between merely verbal disputes and metalinguistic negotiations
Teresa Kouri Kissel
Merely verbal disputes are, roughly, disputes about language that are not worthwhile. Metalinguistic negotiations, on the other hand, are, roughly, disputes about language that are worthwhile. Recent work suggests that some disputes which appear merely verbal may not be problematic in this way (see, for example, Balcerak Jackson (2014), Belleri (2020) and Mankowitz (2021)). In this paper, I propose that this recent work misses one crucial point: interlocutors have to cooperate in the right kinds of ways in order for a dispute to be worthwhile. Using the question-under-discussion framework developed in Roberts (2012), I provide a form of cooperation which I show can distinguish between merely verbal disputes and metalinguistic negotiations.
If this paper is correct that what sometimes makes disputes about language worthwhile is that the interlocutors are willing to cooperate in the right kinds of ways, then there is a critical upshot: interlocutors can control whether their dispute is worth their time. That is, if interlocutors decide to treat what the other is saying as true for the purposes of the conversation, or if they manage to reach a compromise about some claims they are both willing to accept as true, then they can go from having a worthless dispute to having a worthwhile one.
Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. One can implement the deflationist insight in the pragmatist’s theory of content by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. There are two upshots. First, a new diagnosis of the Liar, Revenges and attendant paradoxes: the paradoxes require that truth rules preserve evidence, but they only preserve commitment. Second, one straightforwardly obtains axiomatisations of several supervaluational hierarchies, answering the question of how such theories are to be naturally axiomatised. This is joint work with Luca Incurvati (Amsterdam).
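One schematic way to picture the proposal (my own notation, not necessarily the authors'): write $+\varphi$ for the speech act of asserting $\varphi$ and $-\varphi$ for rejecting it, and let the truth rules link the truth predicate directly to these acts:

```latex
\[
\frac{+\,\varphi}{+\,T\langle\varphi\rangle}
\qquad
\frac{+\,T\langle\varphi\rangle}{+\,\varphi}
\qquad
\frac{-\,\varphi}{-\,T\langle\varphi\rangle}
\qquad
\frac{-\,T\langle\varphi\rangle}{-\,\varphi}
\]
```

On the diagnosis sketched above, rules of this shape preserve the commitments a speaker has undertaken, but not the evidence backing those commitments, which is what the paradoxical derivations would require.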
Salvatore Florio, Stewart Shapiro, and Eric Snyder
Atomistic classical mereology and plural logic provide two alternative frameworks for the analysis of plurals in natural language. It is a matter of dispute which framework is preferable. From the formal point of view, however, the two frameworks can be shown to be definitionally equivalent. So they have the same coverage: there is a range of data that both capture correctly and a range of data that both fail to capture or get wrong. We argue that the tie is broken when we consider a wider range of linguistic phenomena, such as mass nouns and group nouns. Mereology is more flexible than plural logic and is thus more easily adapted to account for these richer fragments of natural language.
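One way to gloss the definitional equivalence (a standard translation sketch, not necessarily the authors' exact formulation): a plurality corresponds to the fusion of its members, and "being one of" a plurality corresponds to being an atomic part of that fusion:

```latex
\[
x \prec yy \;\rightsquigarrow\; \mathrm{Atom}(x) \,\wedge\, x \leq y
\]
```

where $y$ is the fusion corresponding to the plurality $yy$, $\leq$ is parthood, and atomicity ensures the translation can be inverted.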
The aim of this talk is to examine the class of Gentzen-style sequent calculi in which Cut is admissible but not derivable and which prove all the (finite) inferences that are usually taken to characterize Classical Logic—conceived with conjunctively-read multiple premises and disjunctively-read multiple conclusions. We’ll do this starting from two different calculi, both equipped with Identity and the Weakening rules in unrestricted form. First, we’ll start with the usual introduction rules and consider what expansions thereof are appropriate. Second, we’ll start with the usual elimination or inverted rules and consider what expansions thereof are appropriate. Expansions, in each case, may or may not consist of additional standard or non-standard introduction or elimination rules, as well as of restricted forms of Cut.
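For concreteness, the "usual introduction rules" can be taken to be multiple-conclusion rules of the familiar textbook form; the two rules for conjunction, for instance, are (standard rules, supplied here purely for illustration):

```latex
\[
\frac{\Gamma \Rightarrow \Delta, A \qquad \Gamma \Rightarrow \Delta, B}
     {\Gamma \Rightarrow \Delta, A \wedge B}\;(\wedge\mathrm{R})
\qquad\qquad
\frac{\Gamma, A, B \Rightarrow \Delta}
     {\Gamma, A \wedge B \Rightarrow \Delta}\;(\wedge\mathrm{L})
\]
```

The question is then which additions to rules of this kind, short of unrestricted Cut, suffice to derive every finite classically valid sequent.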
Murzi & Rossi (2020) put forward a recipe for generating revenge arguments against any non-classical theory of semantic notions that can recapture classical logic for a set of sentences X provided X is closed under certain classical-recapturing principles. More precisely, Murzi & Rossi show that no such theory can be non-trivially closed under natural principles for paradoxicality and unparadoxicality.
In a recent paper, Lucas Rosenblatt objects that Murzi & Rossi’s principles are not so natural, and that non-classical theories can express perfectly adequate, and yet unparadoxical, notions of paradoxicality.
I argue that Rosenblatt’s strategy effectively amounts to fragmenting the notion of paradoxicality, much in the way Tarski’s treatment of the paradoxes fragments the notion of truth. Along the way, I discuss a different way of resisting Murzi & Rossi’s revenge argument, due to Luca Incurvati and Julian Schlöder, that doesn’t fragment the notion of paradoxicality, but that effectively bans paradoxical instances of semantic rules within subproofs, on the assumption that they are not evidence-preserving.
The model and proof theory of classical first-order logic are a staple of introductory logic courses: we have nice proof systems, well-understood notions of models, validity, and consequence, and a proof of completeness. The story of how these were developed in the 1920s, 30s, and even 40s usually amounts to little more than a list of results, together with who obtained them and when. What happened behind the scenes is much less well known. The talk will fill in some of that back story and show how philosophical, methodological, and practical considerations shaped the development of the conceptual framework and the direction of research in these formative decades. Specifically, I’ll discuss how the work of Hilbert and his students (Behmann, Schönfinkel, Bernays, and Ackermann) on the decision problem in the 1920s led from an almost entirely syntactic approach to logic to the development of first-order semantics that made the completeness theorem possible.
Simple taste predications typically come with an ‘acquaintance requirement’: they normally require the speaker to have had a certain kind of first-hand experience with the object of predication. For example, if I told you that the crème caramel is delicious, you would ordinarily assume that I have actually tasted the crème caramel and am not simply relying on the testimony of others. The present essay argues in favor of a ‘lightweight’ expressivist account of the acquaintance requirement. This account consists of a recursive semantics and a ‘supervaluational’ account of assertion; it is compatible with a number of different accounts of truth and content, including contextualism, relativism, and purer forms of expressivism. The principal argument in favor of this account is that it correctly predicts a wide range of data concerning how the acquaintance requirement interacts with Boolean connectives, generalized quantifiers, epistemic modals, and attitude verbs.
In this paper, we provide a recipe that not only captures the common structure of the semantic paradoxes but also captures our intuitions regarding the relations between these paradoxes. Before we unveil our recipe, we first discuss a popular schema introduced by Graham Priest, namely, the inclosure schema. Without rehashing previous arguments against the inclosure schema, we contribute new arguments for the same concern: that the inclosure schema bundles the wrong paradoxes together. That is, we provide alternative arguments for why the inclosure schema is both too broad, in that it includes the Sorites paradox, and too narrow, in that it excludes Curry’s paradox.
We then spell out our recipe. Our recipe consists of three ingredients: (1) a predicate that has two specific rules, (2) a simple method to find a partial negative modality, and (3) a diagonal lemma that would allow us to let sentences be their partial negative modalities. The recipe shows that all of the following paradoxes share the same structure: The Liar, Curry’s paradox, Validity Curry, Provability Liar, a paradox leading to Löb’s theorem, Knower’s paradox, Knower’s Curry, Grelling-Nelson’s paradox, Russell’s paradox in terms of extensions, alternative liar and alternative Curry, and other new paradoxes.
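As a schematic illustration of the shared structure (my own reconstruction, not the authors' formalism; $\langle\cdot\rangle$ is a name-forming device, $T$ a truth-like predicate, and $\bot$ absurdity), the Liar and Curry instances might look like:

```latex
\[
\begin{array}{ll}
\text{Two rules for } T: &
  \varphi \vdash T\langle\varphi\rangle
  \qquad T\langle\varphi\rangle \vdash \varphi \\[4pt]
\text{Diagonal lemma:} &
  \text{for each formula } \psi(x) \text{ there is a sentence } \delta
  \text{ with } \delta \leftrightarrow \psi(\langle\delta\rangle) \\[4pt]
\text{Liar } (\psi(x) := \neg T(x)): &
  \lambda \leftrightarrow \neg T\langle\lambda\rangle \\[4pt]
\text{Curry } (\psi(x) := T(x) \rightarrow \bot): &
  \gamma \leftrightarrow (T\langle\gamma\rangle \rightarrow \bot)
\end{array}
\]
```

In each case a sentence is equivalent to a partial negative modality applied to its own name; this is the pattern that the recipe generalizes to the other paradoxes listed.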
We conclude the paper by stating the lessons that we can learn from the recipe, and what kinds of solutions the recipe suggests if we want to adhere to the Principle of Uniform Solution.
Stalnaker’s minimal change semantics for conditionals fails to support the import-export law, according to which (a) and (b) are logically equivalent:
(a) if A, then if B, then C
(b) if A and B, then C
However, natural language conditionals seem to abide by the law. McGee (1985) outlines a minimal change semantics for conditionals that supports it. I argue that, in fact, the equivalence between (a) and (b) does not hold unrestrictedly, and I suggest that the facts follow from the interaction between the semantics of conditionals and the ways suppositions may affect the context. I conclude by describing the consequences of my account for the issue of the validity of modus ponens.
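For contrast, import-export is trivially valid for the material conditional of classical propositional logic; the failures at issue arise only for the richer conditionals of minimal change semantics. The classical equivalence can be checked in one line of Lean 4 (illustrative only, not part of the paper's argument):

```latex
-- Import-export for the material conditional:
-- (A → (B → C)) is equivalent to ((A ∧ B) → C).
example (A B C : Prop) : (A → (B → C)) ↔ ((A ∧ B) → C) :=
  ⟨fun h ⟨a, b⟩ => h a b, fun h a b => h ⟨a, b⟩⟩
```

The two directions are just currying and uncurrying, which is why the law can feel inescapable even where minimal change semantics invalidates it.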
Circumstantialists already have a logical semantics for impossibilities. They expand their logical space of possible worlds by adding impossible worlds. These are impossible circumstances serving as indices of evaluation, at which impossibilities are true. A variant of circumstantialism, namely modal Meinongianism, adds impossible objects as well. The opposite of circumstantialism, namely structuralism, has some catching-up to do. What might a structuralist logical semantics without impossible worlds or impossible objects look like? This paper makes a structuralist counterproposal. I present a semantics based on a procedural interpretation of the typed λ-calculus. The fundamental idea is that talk about impossibilities should be construed in terms of procedures yielding as their product a condition that could not possibly have a satisfier, or else failing to yield a product at all. Dispensing with a ‘bottom’ of impossibilia requires instead a ‘top’ consisting of structured hyperintensions, intensions, intensions defining other intensions, a typed universe, and dual predication. I explain how the theory works by going through a couple of cases.
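A toy case of the fundamental idea (my own example, not drawn from the paper): with $\iota$ the type of individuals and $o$ the type of truth values, the term

```latex
\[
\lambda x_{\iota}.\; x \neq x \;:\; \iota \to o
\]
```

is a perfectly well-formed procedure, yet the condition it yields as its product could not possibly have a satisfier. Talk of an "impossible object" is thereby traded for talk of this structured procedure, so no "bottom" of impossibilia is needed.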