Author: Damir Dzhafarov

Towards a diversified understanding of computability

Liesbeth De Mol

In this talk I will argue that we should care more for, and be more careful with, the history of computability, making a plea for a more diverse and informed understanding. The starting point will be the much-celebrated Turing machine model. Why is it that, within the computability community, this model is often considered the model? In the first part of this talk I review some of these reasons, showing how and why they are in need of revision based, mostly, on historical arguments. On that basis I argue that, while the Turing machine model is surely a basic one, part of its supposed superiority over other models rests on socio-historical forces. In the second part, I consider a number of historical, philosophical, and technical arguments to support and elaborate the idea of a more diversified understanding of the history of computability. Central to those arguments will be the distinction between, on the one hand, the logical equivalence of the different models with respect to the computable functions and, on the other hand, some basic intensional differences between those very same models. To keep the argument clear, the main focus will be on the different models provided by Emil Leon Post, but I will also include references to the work of Alonzo Church, Stephen C. Kleene, and Haskell B. Curry.

Supported by the PROGRAMme project, ANR-17-CE38-0003-01.

Polynomial-time axioms of choice and polynomial-time cardinality

Josh Grochow

Many versions of the Axiom of Choice (AC), though equivalent in ZF set theory, are inequivalent from the computational point of view. When we consider polynomial-time analogues of AC, many of these different versions can be shown to be equivalent to other, more standard questions about the relationship between complexity classes. We will use some of these formulations of AC to motivate several complexity questions that might otherwise seem a bit bespoke and unrelated to one another.

Next, as many versions of AC are about cardinals, in the second half of the talk we introduce a polynomial-time version of cardinality, in the spirit of polynomial-time model theory. As this is a new theory, we will discuss some of the foundational properties of polynomial-time cardinality, some of which may be surprising when contrasted with their set-theoretic counterparts. The talk will contain many open questions, and the paper contains even more! Based on arXiv:2301.07123 [cs.CC].

Interacting alternatives: referential indeterminacy and questions

Floris Roelofsen

One of the major challenges involved in developing semantic theories is that many constructions in natural language give rise to alternatives. Different sources of alternatives have been identified (e.g., questions, indeterminacy, focus, scalarity) and have been investigated in quite some depth. Less attention, however, has been given so far to the question of how these different kinds of alternatives interact. In this talk I will focus on one such interaction, namely that between referential indeterminacy and questions. Several formal semantic frameworks have been developed to capture referential indeterminacy (dynamic semantics, alternative semantics) and the content of questions (e.g., alternative semantics, structured meanings, partition semantics, inquisitive semantics). I will report on ongoing work with Jakub Dotlacil, which aims to merge dynamic and inquisitive semantics in a principled way. I will present a basic system and suggest some potential applications and extensions.

Logic Done as if Inference in Language Mattered

Larry Moss

Our topic is logical inference in natural language, as it is done by people and computers.
The first main topic will be monotonicity inference, arguably the best of the simple ideas
in the area. Monotonicity can be incorporated in running systems whereby one can take
parsed real-life sentences and see simple inferences in action. I will present some of the
theory, related to higher-order monotonicity and the syntax-semantics interface offered by
categorial grammar.
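As a toy illustration of the idea (not from the talk itself; the words and the miniature taxonomy below are invented for the example), monotonicity inference licenses substitutions along an entailment order: "every" is downward monotone in its first argument and upward monotone in its second, so from "every dog barks" one may infer "every poodle barks" (poodle entails dog) and "every dog makes noise" (barks entails makes noise). A minimal sketch:

```python
# Toy entailment order: each word entails its parent (poodle ⊑ dog ⊑ animal).
ENTAILS = {
    "poodle": "dog",
    "dog": "animal",
    "barks": "makes noise",
}

def entails(a, b):
    """True if a ⊑ b in the toy taxonomy (reflexive, transitive)."""
    while a != b:
        if a not in ENTAILS:
            return False
        a = ENTAILS[a]
    return True

def every_inference(noun, verb, noun2, verb2):
    """Does 'every noun verb' entail 'every noun2 verb2'?

    'every' is downward monotone in its first argument (the noun may
    be strengthened: noun2 ⊑ noun) and upward monotone in its second
    (the verb may be weakened: verb ⊑ verb2).
    """
    return entails(noun2, noun) and entails(verb, verb2)

# "every dog barks" entails "every poodle makes noise":
print(every_inference("dog", "barks", "poodle", "makes noise"))  # True
# ...but not "every animal barks" (an upward move in a downward slot):
print(every_inference("dog", "barks", "animal", "barks"))        # False
```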

In a different direction, these days monotonicity inference can be done by machines as well
as humans. The talk also discusses this development along with some ongoing work on the
borderline of natural logic and machine learning.

The second direction in the talk will be an overview of the large number of logical systems for
various linguistic phenomena. This work begins as an updating of traditional syllogistic logic,
but with much greater expressive power.

Overall, the goal of the talk is to persuade you that the research program of “natural logic”
leads to a lively research area with connections to many topics both inside and outside of
mainstream logic.

Contextual analysis, epistemic probabilities, and paradoxes

Ehtibar Dzhafarov

Contextual analysis deals with systems of random variables. Each random variable within a system is labeled in two ways: by its content (that which the variable measures or responds to) and by its context (conditions under which it is recorded). Dependence of random variables on contexts is classified into (1) direct (causal) cross-influences and (2) purely contextual (non-causal) influences. The two can be conceptually separated from each other and measured in a principled way. The theory has numerous applications in quantum mechanics, and also in such areas as decision making and computer databases. A system of deterministic variables (as a special case of random variables) is always void of purely contextual influences. There are, however, situations when we know that a system is one of a set of deterministic systems, but we cannot know which one. In such situations we can assign epistemic (Bayesian) probabilities to possible deterministic systems, create thereby a system of epistemic random variables, and subject it to contextual analysis. In this way one can treat, in particular, such logical antinomies as the Liar paradox. The simplest systems of epistemic random variables describing the latter have no direct cross-influences and the maximal possible degree of purely contextual influences.
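The double labeling of the variables can be pictured in a minimal sketch (the contents, contexts, and distributions below are invented for illustration and are not from the talk). A system is a family of random variables indexed by (content, context) pairs; one symptom of a direct cross-influence is a variable whose distribution changes when only its context changes:

```python
# A system of random variables, double-indexed by (content, context).
# Each variable is given here by its distribution over {-1, +1}.
system = {
    ("q1", "c1"): {+1: 0.5, -1: 0.5},
    ("q2", "c1"): {+1: 0.5, -1: 0.5},
    ("q2", "c2"): {+1: 0.7, -1: 0.3},  # q2 is answered differently in c2
    ("q3", "c2"): {+1: 0.5, -1: 0.5},
}

def direct_influences(system):
    """Contents whose distribution differs across contexts.

    A change in a same-content variable's distribution when only the
    context changes signals a direct (causal) cross-influence; purely
    contextual influences are what remains once these are accounted
    for (they are measured on joint distributions, which this sketch
    omits).
    """
    by_content = {}
    for (content, _ctx), dist in system.items():
        by_content.setdefault(content, []).append(dist)
    return {c for c, dists in by_content.items()
            if any(d != dists[0] for d in dists)}

print(direct_influences(system))  # {'q2'}
```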

Brouwer, Plato, and classification

Sam Sanders

Classification is an essential part of all the exact sciences, including mathematical logic.
The program Reverse Mathematics classifies theorems of ordinary mathematics according
to the minimal axioms needed for a proof. We show that the current scale, based on
comprehension and discontinuous functions, is not satisfactory as it classifies many
intuitively weak statements, like the uncountability of $\mathbb{R}$ or properties of
the Riemann integral, in the same rather strong class. We introduce an alternative/
complementary scale with better properties based on (classically valid) continuity
axioms from Brouwer’s intuitionistic mathematics. We discuss how these new
results provide empirical support for Platonism.