The goal of this talk is to use the sorites paradox to illustrate the methodology of “reverse philosophy”—i.e. the application of methods from reverse mathematics to study the mathematical involvement of recognized arguments in analytic philosophy. After briefly motivating such a program, I will focus on the following: 1) the role of measurement and representation theorems in the linguistic formulation of various forms of the sorites; 2) the role of a weak form of Hölder’s Theorem in the formulation of the conditional sorites for predicates such as “tall”; 3) the role of a stronger form of Hölder’s Theorem in the formulation of the so-called continuous sorites for predicates such as “red” of Weber & Colyvan 2010/Weber 2021. Contrasts will be drawn between the constructivity of the weaker form (as observed by Krantz 1968 and formalized in RCA_0 by Solomon 1998) and the non-constructivity of the latter form (due to its apparent dependence on Arithmetical Comprehension).
I’ll propose an application to logic of the “neopragmatist” program. Neopragmatists argue that inquiry into the nature of what we think and talk about can be fruitfully replaced by inquiry into the functions of concepts and expressions. Logical vocabulary can serve as a particular target for neopragmatist theorizing, but it has also been taken to pose obstacles to neopragmatist accounts more generally. I’ll argue that a neopragmatist approach to logical relations (such as logical consequence), as well as to ascriptions of content, undermines two constraints on neopragmatist accounts of logical connectives (such as “and”, “or”, and “not”). Freed from these constraints, I’ll sketch a simple version of such an account, on which logical connectives express dialectical attitudes. The resulting approach is deflationary in two ways: it’s based on deflationism about logical relations and it aims to deflate some of neopragmatists’ usual theoretical ambitions.
Among the numerous logics that are studied and applied nowadays, many can be described via a proof system, which captures the valid reasoning that can be carried out in the logic. There are many well-known concrete proof systems that occur in various areas of logic and beyond.
As is well known, a single logic has many different proof systems, and not every system is as useful as another. Given the goal of the proof-theoretic formalization, be it philosophical, mathematical, or of a different nature, one can often distinguish among the proof systems of the logic a class of good proof systems: proof systems with properties that make them useful in the investigation of the logic. A well-known example is the class of analytic sequent calculi and their use in proving properties of the logic such as decidability or interpolation. But there are many other examples, some of which will be discussed during the talk.
Given a class of such useful proof systems, the questions naturally arise of whether a certain logic has such a good proof system and of what it means for a logic to have one. These are not easy questions to answer, and stated in such generality they may not always have a reasonable answer. But for concrete classes of logics and proof systems something insightful can be said. In this talk the aim is to do so for intermediate and (intuitionistic) modal logics, where the proof systems considered are sequent calculi. And although no full answer is obtained, some small steps in that direction are taken.
Stanisław Jaśkowski is known as one of the modern founders of paraconsistent logic, together with Newton C. A. da Costa. The most important contribution of Jaśkowski is that he clearly distinguished two notions for a theory, namely a theory being contradictory (or inconsistent) and a theory being trivial (or overfilled). In addition to this distinction, he also presented a system of paraconsistent logic known as D2, which is often referred to as discursive logic or discussive logic. Very briefly put, D2 was introduced via the modal logic S5, building on a certain idea related to discussion, and is seen as a typical non-adjunctive system of paraconsistent logic.
The aim of this talk is to address the following three questions:
(i) Are there modal logics other than S5 that suffice to define D2?
(ii) Are there other ways to capture Jaśkowski’s idea than the well-known translation?
(iii) Is there more to discussive logic than being non-adjunctive?
These questions are, of course, not entirely new. In particular, the first question has led to a number of interesting and non-trivial results. However, there seem to be answers other than those already discussed in the literature, and I will present some new answers to the above questions.
(The results related to the first and the second questions build on joint work with Fabio De Martin Polo and Igor Sedlár, respectively.)
How the Standard View of Rigor and the Standard Practice of Mathematics Clash
Mathematical proofs are rigorous – it’s part of what distinguishes proofs from other argument types. But the quality of rigor, relatively simple for the trained mathematician to spot, is difficult to explicate. The most common view, often referred to as the standard view of rigor, is that “a mathematical proof is rigorous iff it can be converted into a formal derivation” (Burgess & De Toffoli 2022). Each proponent of the standard view interprets “conversion” differently. For some, like Hamami (2022), conversion means algorithmic translation while others, like Burgess (2015), interpret it as just revealing enough steps of the formal derivation.
In this talk, I aim to present an overarching concern for the standard view. I’ll argue that no extant version of the standard view makes sense of how mathematicians make rigor judgments. Hamami (2022) and Tatton-Brown (2021) have both attempted to account for mathematicians’ rigor judgments using the standard view. I’ll argue that both still fail to adequately account for mathematical practice, since each posits that mathematicians engage in either algorithmic proof search or extensive training in formal rigor.
We seem to be left with two options: continue trying to amend the standard view or introduce a new account of rigor which is practice-focused. I’ll argue that the issues with these two accounts are general issues that will likely recur for future formulations of the standard view. Thus, we should aim to introduce a new account of informal mathematical rigor. I’ll close by sketching a new account of rigor related to the audience-based view of proof introduced in Ashton (2021).
Salvatore Florio, Stewart Shapiro, and Eric Snyder
It is widely (but not universally) held that logical consequence is determined (at least in part) by the meanings of the logical terminology. One might think that this is an empirical claim that can be tested by the usual methods of linguistic semantics. Yet most philosophers who hold views about logic like this do not engage in empirical research to test the main thesis. Sometimes the thesis is just stated, without argument, and sometimes it is argued for on a priori grounds. Moreover, many linguistic studies of words like “or”, the conditional, and the quantifiers run directly contrary to the thesis in question.
From the other direction, much of the work in linguistic semantics uses logical symbols. For example, it is typical for a semanticist to write a biconditional, in a formal language, whose left hand side has a symbol for the meaning of an expression in natural language and whose right hand side is a formula consisting of lambda-terms and other symbols from standard logic: the quantifiers ∀ and ∃, and the connectives ¬, →, ∧, ∨, ↔. This enterprise thus seems to presuppose that readers already understand the formal logical symbols, and the semanticist uses this understanding to shed light on the meanings of expressions in natural language. This occurs even if the natural language expressions in question are the very terms corresponding to the logical ones: “or”, “not”, “for all”, and the like.
The purpose of this talk is to explore the relation between logic and the practice of empirical semantics, hoping to shed light, in some way, on both enterprises.
In the mid-1970s, Gregory Chaitin proved a novel incompleteness theorem, formulated in terms of Kolmogorov complexity, a measure of complexity that features prominently in algorithmic information theory. Chaitin further claimed that his theorem provides insight into both the source and scope of the incompleteness phenomenon, a claim that has been subject to much criticism. In this talk, I consider a new strategy for vindicating Chaitin’s claims, one informed by recent work of Bienvenu, Romashchenko, Shen, Taveneaux, and Vermeeren that extends and refines Chaitin’s incompleteness theorem. As I argue, this strategy, though more promising than previous attempts, fails to vindicate Chaitin’s claims. Lastly, I will suggest an alternative interpretation of Chaitin’s theorem, according to which the theorem indicates a trade-off that comes from working with a sufficiently strong definition of randomness—namely, that we lose the ability to certify randomness.
Cooperation and Determining When Merely Verbal Disputes are Worthwhile
Teresa Kouri Kissel
Merely verbal disputes are often thought to be disputes about language that are not worthwhile. Recent work suggests that some merely verbal disputes may not be problematic in this way (see, for example, Balcerak Jackson (2014), Belleri (2020) and Mankowitz (2021)). In this paper, I propose that this recent work misses one crucial point: interlocutors have to cooperate in the right kinds of ways in order for a dispute to be worthwhile. Using the question-under-discussion framework developed in Roberts (2012), I provide a form of cooperation which I show can distinguish between merely verbal disputes which are and are not worthwhile.
If this paper is correct that what sometimes makes disputes about language worthwhile is that the interlocutors are willing to cooperate in the right kinds of ways, then there is a critical upshot: interlocutors can control whether their dispute is worth their time. That is, if interlocutors decide to treat what the other says as true for the purposes of the conversation, or if they manage to come to some compromise about some things they are both willing to accept as true, then they can go from having a worthless dispute to having a worthwhile one.
Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. One can implement the deflationist insight in the pragmatist’s theory of content by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. There are two upshots. First, a new diagnosis of the Liar, its revenges, and attendant paradoxes: the paradoxes require that the truth rules preserve evidence, but they only preserve commitment. Second, one straightforwardly obtains axiomatisations of several supervaluational hierarchies, answering the question of how such theories are to be naturally axiomatised. This is joint work with Luca Incurvati (Amsterdam).