
Meaning in Mathematics: a folkloric account

Ainsley May

Current accounts of meaning in mathematics face a dilemma between triviality and over-specificity. On the one hand, intensional accounts of meaning, such as possible-world semantics, give the trivial result that every mathematical theorem has the same meaning, since all theorems are necessarily true. This triviality is unsatisfactory because we clearly hold that some mathematical theorems differ in meaning from others. On the other hand, hyperintensional accounts, such as impossible worlds and structured propositions, do allow us to distinguish between necessary truths. However, they are so fine-grained that it becomes difficult to identify the salient semantic features uniformly.
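To make the triviality horn concrete, here is the standard possible-worlds picture, sketched in LaTeX (the notation is illustrative):

\[ \llbracket \varphi \rrbracket \;=\; \{\, w \in W : w \Vdash \varphi \,\} \]

If \(\varphi\) and \(\psi\) are both theorems, they hold at every possible world, so \(\llbracket \varphi \rrbracket = W = \llbracket \psi \rrbracket\); the contents of 1+1=2 and of Fermat's Last Theorem coincide.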

In response to this dilemma, I propose an account of mathematical meaning called the folkloric account. On the folkloric account, the content of a mathematical theorem is the collection of models, within some reference class of models, that make the theorem true. The appeal of this account is partly that it retains central aspects of world-based accounts, such as evaluation within a model. Yet it overcomes their limitations by admitting enough models to represent different mathematical theories and structures, without admitting absolutely every such structure. Here, I introduce the folkloric account and use examples to highlight some of its strengths and to identify weaknesses to address in future research.
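A minimal formal rendering of the proposal (my gloss, writing K for the reference class of models):

\[ \llbracket \varphi \rrbracket_K \;=\; \{\, M \in K : M \models \varphi \,\} \]

Two theorems now differ in content whenever some model in K separates them, while evaluation still proceeds pointwise at a model, just as world-based accounts evaluate at a world.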

Is the consistency operator canonical?

James Walsh

It is a well-known empirical phenomenon that natural axiomatic theories are well-ordered by consistency strength. The restriction to natural theories is necessary: using ad hoc techniques (such as self-reference and Rosser orderings), one can exhibit non-linearity and ill-foundedness in the consistency-strength hierarchy. What explains the contrast between natural theories and axiomatic theories in general?
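For concreteness, one standard way to compare consistency strength, over a weak base theory such as elementary arithmetic EA:

\[ S \preceq_{\mathrm{Con}} T \quad\Longleftrightarrow\quad \mathsf{EA} \vdash \mathrm{Con}(T) \rightarrow \mathrm{Con}(S) \]

Rosser-style constructions yield theories S and T with \(S \not\preceq_{\mathrm{Con}} T\) and \(T \not\preceq_{\mathrm{Con}} S\), so without the restriction to natural theories the ordering is not even linear.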

Our approach to this problem is inspired by work on an analogous problem in recursion theory. The natural Turing degrees (0, 0′, …, Kleene’s O, …, 0#, …) are well-ordered by Turing reducibility, yet the Turing degrees in general are neither linearly ordered nor well-founded, as ad hoc techniques (such as the priority method) bear out. Martin’s Conjecture, which is still unresolved, is a proposed explanation of this phenomenon. In particular, Martin’s Conjecture specifies a way in which the Turing jump is canonical.
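For reference, here is the usual informal statement, hedged since there are several precise formulations:

\[ \text{(AD)}\quad \text{every Turing-invariant } f\colon 2^{\omega}\to 2^{\omega} \text{ is, on a cone, constant, the identity, or an iterate } x \mapsto x^{(\alpha)} \text{ of the Turing jump.} \]

On this picture the jump and its transfinite iterates are the only non-trivial "natural" degree-increasing functions, which is the sense of canonicity at issue.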

After discussing Martin’s Conjecture, we will formulate analogous proof-theoretic hypotheses according to which the consistency operator is canonical. We will then discuss results—both positive and negative—within this framework. Some of these results were obtained jointly with Antonio Montalbán.
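For reference, the operator in question sends a (recursively axiomatized) theory to the result of adjoining its own consistency statement:

\[ T \;\longmapsto\; T + \mathrm{Con}(T) \]

so it occupies the position on theories that the jump \(x \mapsto x'\) occupies on the Turing degrees. Roughly, and as my paraphrase of the analogy rather than the talk's official formulation, the hypotheses assert that any operator from a suitably definable class must agree with an iterate of this one on all sufficiently strong natural theories.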

Reductions between problems in reverse math and computability

Denis Hirschfeldt

Many mathematical principles can be stated in the form “for all X such that C(X) holds, there is a Y such that D(X,Y) holds”, where X and Y range over second-order objects, and C and D are arithmetic conditions. We can think of such a principle as a problem, where an instance of the problem is an X such that C(X) holds, and a solution to this instance is a Y such that D(X,Y) holds. I will discuss notions of reducibility between such problems coming from the closely related perspectives of reverse mathematics and computability theory.
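Two of the central notions, in the notation above (standard definitions, stated for a single application of the reduced-to problem):

\[ \mathsf{Q} \leq_c \mathsf{P} \quad\Longleftrightarrow\quad \text{every } \mathsf{Q}\text{-instance } X \text{ computes a } \mathsf{P}\text{-instance } \widehat{X} \text{ such that } X \oplus \widehat{Y} \text{ computes a solution to } X \text{ whenever } \widehat{Y} \text{ solves } \widehat{X}. \]

Weihrauch reducibility (\(\leq_W\)) is the uniform refinement: two fixed Turing functionals must witness the two computations for all instances at once. Implications over RCA_0 that use only one application of the reduced-to problem line up closely with such reductions, which is the bridge between the two perspectives.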

Towards a diversified understanding of computability

Liesbeth De Mol

In this talk I will argue that we should care more for, and be more careful with, the history of computability, making a plea for a more diverse and informed understanding. The starting point will be the much-celebrated Turing machine model. Why is it that, within the computability community, this model is often considered the model? In the first part of this talk I review some of the reasons, showing how and why they are in need of revision, based mostly on historical arguments. On that basis I argue that, while the Turing machine model is surely a basic one, part of its supposed superiority over other models rests on socio-historical forces. In the second part, I consider a number of historical, philosophical, and technical arguments to support and elaborate the idea of a more diversified understanding of the history of computability. Central to those arguments is the differentiation between, on the one hand, the logical equivalence of the different models with respect to the computable functions and, on the other hand, some basic intensional differences between those very same models. To keep the argument clear, the main focus will be on the different models provided by Emil Leon Post, but I will also include references to the work of Alonzo Church, Stephen C. Kleene, and Haskell B. Curry.

Supported by the PROGRAMme project, ANR-17-CE38-0003-01.

Polynomial-time axioms of choice and polynomial-time cardinality

Josh Grochow

Many versions of the Axiom of Choice (AC), though equivalent in ZF set theory, are inequivalent from the computational point of view. When we consider polynomial-time analogues of AC, many of these different versions can be shown to be equivalent to other, more standard questions about the relationship between complexity classes. We will use some of these formulations of AC to motivate several complexity questions that might otherwise seem a bit bespoke and unrelated to one another.
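One illustrative shape such a principle can take (my example, for orientation; the talk's formulations may differ): a polynomial-time choice principle for total relations. Suppose \(R \subseteq \Sigma^* \times \Sigma^*\) is polynomial-time decidable, polynomially length-bounded, and total, i.e. \(\forall x\,\exists y\, R(x,y)\); the choice question asks for a polynomial-time f with

\[ \forall x\; R(x, f(x)). \]

If P = NP such an f always exists, by prefix search: "is there a y extending the prefix z with R(x,y)?" is an NP question, so the bits of a witness can be found one at a time. Questions of this kind are how choice principles turn into statements about complexity classes.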

Next, as many versions of AC are about cardinals, in the second half of the talk we introduce a polynomial-time version of cardinality, in the spirit of polynomial-time model theory. As this is a new theory, we will discuss some of the foundational properties of polynomial-time cardinality, some of which may be surprising when contrasted with their set-theoretic counterparts. The talk will contain many open questions, and the paper contains even more! Based on arXiv:2301.07123 [cs.CC].

Interacting alternatives: referential indeterminacy and questions

Floris Roelofsen

One of the major challenges involved in developing semantic theories is that many constructions in natural language give rise to alternatives. Different sources of alternatives have been identified (e.g., questions, indeterminacy, focus, scalarity) and have been investigated in quite some depth. Less attention, however, has been given so far to the question of how these different kinds of alternatives interact. In this talk I will focus on one such interaction, namely that between referential indeterminacy and questions. Several formal semantic frameworks have been developed to capture referential indeterminacy (dynamic semantics, alternative semantics) and the content of questions (e.g., alternative semantics, structured meanings, partition semantics, inquisitive semantics). I will report on ongoing work with Jakub Dotlacil, which aims to merge dynamic and inquisitive semantics in a principled way. I will present a basic system and suggest some potential applications and extensions.
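As a reference point for the last framework mentioned (standard inquisitive semantics, stated roughly): a meaning is a non-empty, downward-closed set of information states (sets of worlds), e.g. for a polar question

\[ [\![\, ?p \,]\!] \;=\; \{\, s : s \subseteq |p| \,\} \;\cup\; \{\, s : s \subseteq W \setminus |p| \,\}, \]

which offers two alternatives where a plain assertion offers one, while dynamic semantics instead threads variable assignments through the discourse to track referents. Merging the two means having both kinds of structure in a single meaning object.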

Logic Done as if Inference in Language Mattered

Larry Moss

Our topic is logical inference in natural language, as it is done by people and computers. The first main topic will be monotonicity inference, arguably the best of the simple ideas in the area. Monotonicity can be incorporated into running systems, whereby one can take parsed real-life sentences and see simple inferences in action. I will present some of the theory, related to higher-order monotonicity and the syntax-semantics interface offered by categorial grammar.
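To illustrate the basic mechanism, here is a toy sketch in Python (the mini-lexicon and polarity table are hypothetical stand-ins; real systems work from parsed sentences and categorial-grammar polarity marking):

    # Toy monotonicity inference over sentences (Quantifier, Noun, Verb).
    # ORDER is a hypothetical lexical entailment relation: dog <= animal, etc.
    ORDER = {("dog", "animal"), ("run", "move")}

    def leq(a, b):
        return a == b or (a, b) in ORDER

    # Argument polarities: +1 = upward monotone, -1 = downward monotone.
    POLARITY = {"every": (-1, +1), "some": (+1, +1), "no": (-1, -1)}

    def entails(premise, conclusion):
        """One-step monotonicity entailment between two sentences."""
        q1, n1, v1 = premise
        q2, n2, v2 = conclusion
        if q1 != q2:
            return False
        pol_noun, pol_verb = POLARITY[q1]
        ok_noun = leq(n1, n2) if pol_noun > 0 else leq(n2, n1)
        ok_verb = leq(v1, v2) if pol_verb > 0 else leq(v2, v1)
        return ok_noun and ok_verb

    print(entails(("every", "animal", "run"), ("every", "dog", "move")))  # True
    print(entails(("some", "dog", "run"), ("some", "animal", "run")))     # True
    print(entails(("no", "animal", "run"), ("no", "dog", "run")))         # True

Here "every" is downward monotone in its noun argument and upward monotone in its verb argument, which is why "every animal runs" yields "every dog moves"; higher-order monotonicity generalizes such polarity tables to arbitrary function categories.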

In a different direction, these days monotonicity inference can be done by machines as well as by humans. The talk also discusses this development, along with some ongoing work on the borderline of natural logic and machine learning.

The second main topic will be an overview of the large number of logical systems for various linguistic phenomena. This work begins as an updating of traditional syllogistic logic, but with much greater expressive power.
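For orientation, the classical base being extended is a proof system over surface forms, e.g. the standard rules

\[ \frac{\text{All } X \text{ are } Y \qquad \text{All } Y \text{ are } Z}{\text{All } X \text{ are } Z}\ (\textsc{Barbara}) \qquad\qquad \frac{\text{All } Y \text{ are } Z \qquad \text{Some } X \text{ are } Y}{\text{Some } X \text{ are } Z}\ (\textsc{Darii}) \]

with the extended systems adding, for example, verbs, relative clauses, and cardinality comparisons, while keeping completeness and decidability results in view.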

Overall, the goal of the talk is to persuade you that the research program of “natural logic” leads to a lively research area with connections to many fields, both inside and outside of mainstream logic.