How the Standard View of Rigor and the Standard Practice of Mathematics Clash

Zoe Ashton

Mathematical proofs are rigorous – it’s part of what distinguishes proofs from other argument types. But the quality of rigor, relatively simple for the trained mathematician to spot, is difficult to explicate. The most common view, often referred to as the standard view of rigor, is that “a mathematical proof is rigorous iff it can be converted into a formal derivation” (Burgess & De Toffoli 2022). Each proponent of the standard view interprets “conversion” differently. For some, like Hamami (2022), conversion means algorithmic translation, while others, like Burgess (2015), interpret it as just revealing enough steps of the formal derivation.

In this talk, I aim to present an overarching concern for the standard view. I’ll argue that no extant version of the standard view makes sense of how mathematicians make rigor judgments. Hamami (2022) and Tatton-Brown (2021) have both attempted to account for mathematicians’ rigor judgments using the standard view. I’ll argue that both still fail to adequately account for mathematical practice, because each posits that mathematicians engage in algorithmic proof search, extensive training in formal rigor, or both.

We seem to be left with two options: continue trying to amend the standard view or introduce a new, practice-focused account of rigor. I’ll argue that the issues facing these two accounts are general ones that will likely recur in future formulations of the standard view. Thus, we should aim to introduce a new account of informal mathematical rigor. I’ll close by sketching such an account, related to the audience-based view of proof introduced in Ashton (2021).

Semantics and logic: the meaning of logical terms

Salvatore Florio, Stewart Shapiro, and Eric Snyder

It is widely (but not universally) held that logical consequence is determined (at least in part) by the meanings of the logical terminology. One might think that this is an empirical claim that can be tested by the usual methods of linguistic semantics. Yet most philosophers who hold such views about logic do not engage in empirical research to test the main thesis. Sometimes the thesis is just stated, without argument, and sometimes it is argued for on a priori grounds. Moreover, many linguistic studies of words like “or”, the conditional, and the quantifiers run directly contrary to the thesis in question.

From the other direction, much of the work in linguistic semantics uses logical symbols. For example, it is typical for a semanticist to write a biconditional, in a formal language, whose left-hand side has a symbol for the meaning of an expression in natural language and whose right-hand side is a formula consisting of lambda-terms and other symbols from standard logic: quantifiers ∀, ∃, and connectives ¬, →, ∧, ∨, ↔. This enterprise thus seems to presuppose that readers already understand the formal logical symbols, and the semanticist uses this understanding to shed light on the meanings of expressions in natural language. This occurs even when the natural language expressions under study are the very terms corresponding to the logical ones: “or”, “not”, “for all”, and the like.
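
To illustrate the kind of clause at issue (the particular entries here are our illustration, not drawn from the talk), a semanticist might write:

  ⟦or⟧ = λp. λq. (p ∨ q)
  ⟦some student smokes⟧ = 1 ↔ ∃x (student(x) ∧ smokes(x))

The left-hand sides stand for the meanings of natural language expressions; the right-hand sides are built from lambda-terms and the familiar logical symbols.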

The purpose of this talk is to explore the relation between logic and the practice of empirical semantics, hoping to shed light, in some way, on both enterprises.

Revisiting Chaitin’s Incompleteness Theorem

Christopher Porter

In the mid-1970s, Gregory Chaitin proved a novel incompleteness theorem, formulated in terms of Kolmogorov complexity, a measure of complexity that features prominently in algorithmic information theory. Chaitin further claimed that his theorem provides insight into both the source and scope of the incompleteness phenomenon, a claim that has been subject to much criticism. In this talk, I consider a new strategy for vindicating Chaitin’s claims, one informed by recent work of Bienvenu, Romashchenko, Shen, Taveneaux, and Vermeeren that extends and refines Chaitin’s incompleteness theorem. As I argue, this strategy, though more promising than previous attempts, fails to vindicate Chaitin’s claims. Lastly, I will suggest an alternative interpretation of Chaitin’s theorem, according to which the theorem indicates a trade-off that comes from working with a sufficiently strong definition of randomness—namely, that we lose the ability to certify randomness.
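
For orientation, the theorem can be stated roughly as follows (our paraphrase, writing K(s) for the Kolmogorov complexity of a string s): if T is a consistent, computably axiomatized theory that interprets enough arithmetic and proves only true claims of this form, then there is a constant c, depending on T, such that

  T ⊬ K(s) > c   for every string s,

even though all but finitely many strings in fact have complexity exceeding c.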

Cooperation and Determining When Merely Verbal Disputes are Worthwhile

Teresa Kouri Kissel

Merely verbal disputes are often thought to be disputes about language that are not worthwhile. Recent work suggests that some merely verbal disputes may not be problematic in this way (see, for example, Balcerak Jackson (2014), Belleri (2020), and Mankowitz (2021)). In this paper, I propose that this recent work misses one crucial point: interlocutors have to cooperate in the right kinds of ways in order for a dispute to be worthwhile. Using the question-under-discussion framework developed in Roberts (2012), I identify a form of cooperation that, I show, can distinguish between merely verbal disputes that are worthwhile and those that are not.

If this paper is correct that what sometimes makes disputes about language worthwhile is the interlocutors’ willingness to cooperate in the right kinds of ways, then there is a critical upshot: interlocutors can control whether their dispute is worth their time. That is, if interlocutors decide to treat what the other is saying as true for the purposes of the conversation, or if they manage to come to some compromise about things they are both willing to accept as true, then they can go from having a worthless dispute to having a worthwhile one.

Neo-Pragmatist Truth and Supervaluationism

Julian Schlöder

Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. One can implement the deflationist insight in the pragmatist’s theory of content by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. There are two upshots. First, a new diagnosis of the Liar, Revenges and attendant paradoxes: the paradoxes require that truth rules preserve evidence, but they only preserve commitment. Second, one straightforwardly obtains axiomatisations of several supervaluational hierarchies, answering the question of how such theories are to be naturally axiomatised. This is joint work with Luca Incurvati (Amsterdam).
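
As an illustrative sketch (ours, not necessarily the formulation used in the talk), the inferential relation between the truth predicate and assertion can be rendered bilaterally, with + marking assertion and − marking rejection:

  +φ ⊢ +T⟨φ⟩    and    +T⟨φ⟩ ⊢ +φ

On the diagnosis just mentioned, rules of this shape preserve commitment but not evidence, whereas the paradoxical reasoning would require them to preserve evidence.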

Definitional equivalence and plural logic

Salvatore Florio, Stewart Shapiro, and Eric Snyder

Atomistic classical mereology and plural logic provide two alternative frameworks for the analysis of plurals in natural language. It is a matter of dispute which framework is preferable. From the formal point of view, however, the two frameworks can be shown to be definitionally equivalent. So they have the same coverage as each other: there is a range of data that they both capture correctly and a range of data that they both fail to capture or get wrong. We argue that the tie is broken when we consider a wider range of linguistic phenomena, such as mass nouns and group nouns. Mereology is more flexible than plural logic and is thus more easily adapted to account for these richer fragments of natural language.
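
The rough idea behind the equivalence (our gloss, not the authors’ formulation) is that a plurality corresponds to the mereological fusion of its members, and an object to the plurality of its atoms. Writing x ≺ yy for “x is one of the yy”, ≤ for parthood, and Atom for atomhood, the two directions of the translation run roughly:

  x ≺ yy  ⇝  Atom(x) ∧ x ≤ y          (where y is the fusion corresponding to yy)
  x ≤ y   ⇝  ∀z (z ≺ xx → z ≺ yy)     (where xx and yy are the pluralities of atoms of x and y)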

On sequent calculi for Classical Logic where Cut is admissible

Damian Szmuc

The aim of this talk is to examine the class of Gentzen-style sequent calculi in which Cut is admissible but not derivable and which prove all the (finite) inferences usually taken to characterize Classical Logic—conceived with conjunctively read multiple premises and disjunctively read multiple conclusions. We’ll do this starting from two different calculi, both equipped with the Identity and Weakening rules in unrestricted form. First, we’ll start with the usual introduction rules and consider what expansions thereof are appropriate. Second, we’ll start with the usual elimination or inverted rules and consider what expansions thereof are appropriate. Expansions, in each case, may or may not consist of additional standard or non-standard introduction or elimination rules, as well as restricted forms of Cut.
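
For orientation, here is our rendering of the standard rules the abstract refers to, in a multiple-premise, multiple-conclusion setting (Γ and Δ are finite sets of formulas):

  Identity:    A ⇒ A
  Weakening:   from Γ ⇒ Δ infer Γ, A ⇒ Δ, and from Γ ⇒ Δ infer Γ ⇒ Δ, A
  (∧R):        from Γ ⇒ Δ, A and Γ ⇒ Δ, B infer Γ ⇒ Δ, A ∧ B
  (∧L):        from Γ, A, B ⇒ Δ infer Γ, A ∧ B ⇒ Δ
  Cut:         from Γ ⇒ Δ, A and Γ, A ⇒ Δ infer Γ ⇒ Δ

The question, then, is which expansions of a base of such rules (possibly including restricted forms of Cut) suffice to prove all classically valid finite inferences while keeping full Cut admissible but not derivable.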

Revenge

Julien Murzi

Murzi & Rossi (2020) put forward a recipe for generating revenge arguments against any non-classical theory of semantic notions that can recapture classical logic for a set of sentences X provided X is closed under certain classical-recapturing principles. More precisely, Murzi & Rossi show that no such theory can be non-trivially closed under natural principles for paradoxicality and unparadoxicality.

In a recent paper, Lucas Rosenblatt objects that Murzi & Rossi’s principles are not so natural, and that non-classical theories can express perfectly adequate, and yet unparadoxical, notions of paradoxicality.

I argue that Rosenblatt’s strategy effectively amounts to fragmenting the notion of paradoxicality, much in the way Tarski’s treatment of the paradoxes fragments the notion of truth. Along the way, I discuss a different way of resisting Murzi & Rossi’s revenge argument, due to Luca Incurvati and Julian Schlöder, that doesn’t fragment the notion of paradoxicality, but that effectively bans paradoxical instances of semantic rules within subproofs, on the assumption that they are not evidence-preserving.

Semantics of First-order Logic: The Early Years

Richard Zach

The model and proof theory of classical first-order logic are a staple of introductory logic courses: we have nice proof systems, well-understood notions of models, validity, and consequence, and a proof of completeness. The story of how these were developed in the 1920s, 30s, and even 40s usually consists of little more than a list of results and who obtained them when. What happened behind the scenes is much less well known. The talk will fill in some of that backstory and show how philosophical, methodological, and practical considerations shaped the development of the conceptual framework and the direction of research in these formative decades. Specifically, I’ll discuss how the work of Hilbert and his students (Behmann, Schönfinkel, Bernays, and Ackermann) on the decision problem in the 1920s led from an almost entirely syntactic approach to logic to the development of first-order semantics that made the completeness theorem possible.