Abstracts

On sequent calculi for Classical Logic where Cut is admissible

Damian Szmuc

The aim of this talk is to examine the class of Gentzen-style sequent calculi in which Cut is admissible but not derivable and which prove all the (finite) inferences usually taken to characterize Classical Logic, conceived with conjunctively-read multiple premises and disjunctively-read multiple conclusions. We’ll do this starting from two different calculi, both of which include the Identity and Weakening rules in unrestricted form. First, we’ll start with the usual introduction rules and consider what expansions thereof are appropriate. Second, we’ll start with the usual elimination or inverted rules and consider what expansions thereof are appropriate. Expansions, in each case, may or may not consist of additional standard or non-standard introduction or elimination rules, as well as of restricted forms of Cut.
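
For orientation, the structural rules the abstract mentions can be written in standard sequent notation as follows (one common formulation; the calculi discussed in the talk may set them up differently):

    \[
    \frac{}{A \Rightarrow A}\;(\mathrm{Id})
    \qquad
    \frac{\Gamma \Rightarrow \Delta}{\Gamma, A \Rightarrow \Delta}\;(\mathrm{LW})
    \qquad
    \frac{\Gamma \Rightarrow \Delta}{\Gamma \Rightarrow \Delta, A}\;(\mathrm{RW})
    \qquad
    \frac{\Gamma \Rightarrow \Delta, A \quad\; A, \Sigma \Rightarrow \Pi}{\Gamma, \Sigma \Rightarrow \Delta, \Pi}\;(\mathrm{Cut})
    \]

Cut is derivable in a calculus when its conclusion can be obtained from its premises using the other rules; it is merely admissible when adding it proves no sequents that were not provable already, which is the situation the talk focuses on.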

Revenge

Julien Murzi

Murzi & Rossi (2020) put forward a recipe for generating revenge arguments against any non-classical theory of semantic notions that can recapture classical logic for a set of sentences X, provided X is closed under certain classical-recapturing principles. More precisely, Murzi & Rossi show that no such theory can be non-trivially closed under natural principles for paradoxicality and unparadoxicality.

In a recent paper, Lucas Rosenblatt objects that Murzi & Rossi’s principles are not so natural, and that non-classical theories can express perfectly adequate, and yet unparadoxical, notions of paradoxicality.

I argue that Rosenblatt’s strategy effectively amounts to fragmenting the notion of paradoxicality, much in the way Tarski’s treatment of the paradoxes fragments the notion of truth. Along the way, I discuss a different way of resisting Murzi & Rossi’s revenge argument, due to Luca Incurvati and Julian Schlöder, that doesn’t fragment the notion of paradoxicality, but that effectively bans paradoxical instances of semantic rules within subproofs, on the assumption that they are not evidence-preserving.

Semantics of First-order Logic: The Early Years

Richard Zach

The model and proof theory of classical first-order logic are a staple of introductory logic courses: we have nice proof systems, well-understood notions of models, validity, and consequence, and a proof of completeness. The story of how these were developed in the 1920s, 30s, and even 40s usually consists of little more than a list of results and of who obtained them when. What happened behind the scenes is much less well known. The talk will fill in some of that backstory and show how philosophical, methodological, and practical considerations shaped the development of the conceptual framework and the direction of research in these formative decades. Specifically, I’ll discuss how the work of Hilbert and his students (Behmann, Schönfinkel, Bernays, and Ackermann) on the decision problem in the 1920s led from an almost entirely syntactic approach to logic to the development of first-order semantics that made the completeness theorem possible.

The Logic of Speech Acts: Sentential Force vs Utterance Force

Sarah Murray

Across languages, sentences are marked for sentence type, or sentential mood, e.g., declarative and interrogative. These sentence types are associated with speech acts: assertions and questions, respectively. However, sentential mood does not determine the force of an utterance of a sentence. We argue that the semantic contribution of sentential mood is a relation that constrains utterance force. This relation takes a proposition as an argument and uses it to affect a component of the context. The semantic constraint, together with additional pragmatic factors, produces utterance force.

This logic for speech acts involves a semantics for the three main sentence types found cross-linguistically (declarative, interrogative, imperative) as well as a distinction between speaker commitment and discourse reference. In addition to a semantics for sentential mood, this approach provides a framework for a range of phenomena, including evidentials, parentheticals, hedges, and “speech act modifiers”. We conclude by discussing the Linguistic Modification Thesis, the idea that linguistic material can only influence utterance force by influencing sentential force.

This talk is based on joint work with William Starr.

An Expressivist Theory of Taste Predicates

Dilip Ninan

Simple taste predications typically come with an ‘acquaintance requirement’: they normally require the speaker to have had a certain kind of first-hand experience with the object of predication. For example, if I told you that the crème caramel is delicious, you would ordinarily assume that I have actually tasted the crème caramel and am not simply relying on the testimony of others. The present essay argues in favor of a ‘lightweight’ expressivist account of the acquaintance requirement. This account consists of a recursive semantics and a ‘supervaluational’ account of assertion; it is compatible with a number of different accounts of truth and content, including contextualism, relativism, and purer forms of expressivism. The principal argument in favor of this account is that it correctly predicts a wide range of data concerning how the acquaintance requirement interacts with Boolean connectives, generalized quantifiers, epistemic modals, and attitude verbs.

A Recipe for Paradox

Rashed Ahmad

In this paper, we provide a recipe that not only captures the common structure of the semantic paradoxes but also captures our intuitions regarding the relations between these paradoxes. Before we unveil our recipe, we first discuss a popular schema introduced by Graham Priest, namely, the inclosure schema. Without rehashing previous arguments against the inclosure schema, we contribute different arguments for the same concern: that the inclosure schema bundles the wrong paradoxes together. That is, we provide alternative arguments for why the inclosure schema is both too broad, in that it includes the Sorites paradox, and too narrow, in that it excludes Curry’s paradox.

We then spell out our recipe. Our recipe consists of three ingredients: (1) a predicate governed by two specific rules, (2) a simple method for finding a partial negative modality, and (3) a diagonal lemma that allows us to construct sentences equivalent to their own partial negative modalities. The recipe shows that all of the following paradoxes share the same structure: the Liar, Curry’s paradox, Validity Curry, the Provability Liar, a paradox leading to Löb’s theorem, the Knower’s paradox, the Knower’s Curry, the Grelling–Nelson paradox, Russell’s paradox in terms of extensions, the alternative Liar and alternative Curry, and other new paradoxes.
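
To illustrate with the most familiar instance (a toy illustration of the ingredients just listed, not necessarily the paper’s own formulation): for the Liar, the predicate is truth with its two rules, the partial negative modality is ¬T, and the diagonal lemma supplies a sentence equivalent to its own partial negative modality:

    \[
    \frac{T\langle A \rangle}{A}\;(\text{release})
    \qquad
    \frac{A}{T\langle A \rangle}\;(\text{capture})
    \qquad
    \lambda \leftrightarrow \neg T\langle \lambda \rangle\;(\text{diagonalization})
    \]

From these, ordinary reasoning yields both T⟨λ⟩ and ¬T⟨λ⟩; on the recipe’s telling, the other paradoxes in the list arise by varying the predicate and the associated partial negative modality.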

We conclude the paper by stating the lessons that we can learn from the recipe, and what kinds of solutions the recipe suggests if we want to adhere to the Principle of Uniform Solution.

Minimal change theories of conditionals, the import-export law, and modus ponens

Alessandro Zucchi

Stalnaker’s minimal change semantics for conditionals fails to support the import-export law, according to which (a) and (b) are logically equivalent:

    (a) if A, then if B, then C
    (b) if A and B, then C

However, natural language conditionals seem to abide by the law. McGee (1985) outlines a minimal change semantics for conditionals that supports it. I argue that, in fact, the equivalence between (a) and (b) does not hold unrestrictedly, and I suggest that the facts follow from the interaction between the semantics of conditionals and the ways suppositions may affect the context. I conclude by describing the consequences of my account for the issue of the validity of modus ponens.
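
Schematically, writing ‘>’ for the natural language conditional, the import-export law identifies (a) with (b):

    \[
    A > (B > C) \;\dashv\vdash\; (A \wedge B) > C
    \]

Roughly, the equivalence can fail on Stalnaker’s semantics because the closest B-worlds among the closest A-worlds need not coincide with the closest A-and-B-worlds.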

Impossibility without impossibilia

Bjørn Jespersen

Circumstantialists already have a logical semantics for impossibilities. They expand their logical space of possible worlds by adding impossible worlds. These are impossible circumstances serving as indices of evaluation, at which impossibilities are true. A variant of circumstantialism, namely modal Meinongianism, adds impossible objects as well. The opposite of circumstantialism, namely structuralism, has some catching-up to do. What might a structuralist logical semantics without impossible worlds or impossible objects look like? This paper makes a structuralist counterproposal. I present a semantics based on a procedural interpretation of the typed λ-calculus. The fundamental idea is that talk about impossibilities should be construed in terms of procedures yielding as their product a condition that could not possibly have a satisfier, or else failing to yield a product at all. Dispensing with a ‘bottom’ of impossibilia requires instead a ‘top’ consisting of structured hyperintensions, intensions, intensions defining other intensions, a typed universe, and dual predication. I explain how the theory works by going through a couple of cases.
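
To give a flavour of the two cases just mentioned (toy examples for illustration, not drawn from the paper): a procedure may yield as its product a condition that nothing could possibly satisfy, as with the property of being a round square, or it may fail to yield any product at all, as with division by zero:

    \[
    \lambda x\,(\mathrm{Round}\,x \wedge \mathrm{Square}\,x)
    \qquad\qquad
    (1 \div 0)
    \]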

Bilateralist Truth-Maker Semantics for ST, TS, LP, K3, …

Ulf Hlobil

The talk advocates a marriage of inferentialist bilateralism and truth-maker bilateralism. Inferentialist bilateralists like Restall and Ripley say that a collection of sentences, Y, follows from a collection of sentences, X, iff it is incoherent (or out-of-bounds) to assert all the sentences in X and, at the same time, deny all the sentences in Y. In Fine’s truth-maker theory, we have a partially ordered set of states that exactly verify and falsify sentences, and some of these states are impossible. We can think of making-true as the worldly analogue of asserting, of making-false as the worldly analogue of denying, and of impossibility as the worldly analogue of incoherence. This suggests that we may say that, in truth-maker theory, a collection of sentences, Y, follows (logically) from a collection of sentences, X, iff (in all models) any fusion of exact verifiers of the members of X and exact falsifiers of the members of Y is impossible. Under routine assumptions about truth-making, this yields classical logic. Relaxing one such assumption yields the non-transitive logic ST. Relaxing another assumption yields the non-reflexive logic TS. We can use known facts about the relation between ST, LP, and K3 to provide an interpretation of LP as the logic of falsifiers and K3 as the logic of verifiers. The resulting semantics for ST is more flexible than its usual three-valued semantics because it allows us, e.g., to reject monotonicity. We can also recover fine-grained logics, like Correia’s logic of factual equivalence.
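
In symbols (one way of writing the definition just given, with ⊔ for fusion of states, s ⊩ A for ‘s exactly verifies A’, and s ⊣ A for ‘s exactly falsifies A’):

    \[
    X \vDash Y \quad\text{iff}\quad \text{in every model, } \bigsqcup_{A \in X} s_A \;\sqcup\; \bigsqcup_{B \in Y} t_B \text{ is impossible}
    \text{ whenever } s_A \Vdash A \text{ for each } A \in X \text{ and } t_B \dashv B \text{ for each } B \in Y.
    \]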

Logic Done as if Inference in Language Mattered

Larry Moss

Our topic is logical inference in natural language, as it is done by people and computers. The first main topic will be monotonicity inference, arguably the best of the simple ideas in the area. Monotonicity can be incorporated in running systems whereby one can take parsed real-life sentences and see simple inferences in action. I will present some of the theory, related to higher-order monotonicity and the syntax-semantics interface offered by categorial grammar.
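
A standard illustration of monotonicity inference (an example of the general idea, not drawn from the talk): ‘every’ licenses downward replacement in its first argument and upward replacement in its second, so both of the following inferences go through:

    \[
    \frac{\text{Every dog barks} \qquad \textit{poodle} \sqsubseteq \textit{dog}}{\text{Every poodle barks}}
    \qquad
    \frac{\text{Every dog barks} \qquad \textit{barks} \sqsubseteq \textit{moves}}{\text{Every dog moves}}
    \]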

In a different direction, these days monotonicity inference can be done by machines as well as humans. The talk also discusses this development along with some ongoing work on the borderline of natural logic and machine learning.

The second direction in the talk will be an overview of the large number of logical systems for various linguistic phenomena. This work begins as an updating of traditional syllogistic logic, but with much greater expressive power.

Overall, the goal of the talk is to persuade you that the research program of “natural logic” leads to a lively research area with connections to many areas both inside and outside of mainstream logic.