Abstracts

A More Unified Approach to Free Logics

Edi Pavlović

(joint work with Norbert Gratzl, MCMP, Munich)

Free logics are a family of first-order logics that came about as a result of examining the existence assumptions of classical logic. Exactly what those assumptions are varies, but the central ones are that (i) the domain of interpretation is not empty, (ii) every name denotes exactly one object in the domain, and (iii) the quantifiers have existential import. Free logics usually reject assumption (ii), dropping the requirement that every name denotes. Of the systems considered in this paper, positive free logic concedes that some atomic formulas containing non-denoting names (including self-identity statements) are true, while negative free logic rejects even this, counting all such formulas as false.

These logics have complex and varied axiomatizations and semantics, and the goal of the present work is to offer an orderly examination of the various systems and their mutual relations. We do this by first offering a formalization using sequent calculi that possess all the desirable structural properties of a good proof system, including admissibility of contraction and cut, while streamlining free logics to a degree no other approach has. We then present a simple and unified system of generalized semantics, which allows for a straightforward demonstration of the meta-theoretical properties while also offering insights into the relationship between the different logics (free and classical). Finally, we extend the system with modalities by using a labeled sequent calculus, and here we are again able to map out the different approaches and their mutual relations within the same framework.
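
As a schematic illustration of the kind of rules at issue, the quantifier rules of free-logic sequent calculi are standardly relativized to an existence predicate $\mathsf{E}!$ (the notation here is ours and the calculi of the paper may differ in detail):

\[
\frac{\Gamma \Rightarrow \mathsf{E}!t, \Delta \qquad \Gamma, A[t/x] \Rightarrow \Delta}{\Gamma, \forall x\, A \Rightarrow \Delta}\ (L\forall)
\qquad
\frac{\Gamma, \mathsf{E}!a \Rightarrow A[a/x], \Delta}{\Gamma \Rightarrow \forall x\, A, \Delta}\ (R\forall),\ a\ \text{fresh}
\]

Classical logic is recovered when $\mathsf{E}!t$ is derivable for every term $t$, which gives one simple way of locating the classical system on the same map as the free ones.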

Contextual analysis, epistemic probabilities, and paradoxes

Ehtibar Dzhafarov

Contextual analysis deals with systems of random variables. Each random variable within a system is labeled in two ways: by its content (that which the variable measures or responds to) and by its context (conditions under which it is recorded). Dependence of random variables on contexts is classified into (1) direct (causal) cross-influences and (2) purely contextual (non-causal) influences. The two can be conceptually separated from each other and measured in a principled way. The theory has numerous applications in quantum mechanics, and also in such areas as decision making and computer databases. A system of deterministic variables (as a special case of random variables) is always void of purely contextual influences. There are, however, situations when we know that a system is one of a set of deterministic systems, but we cannot know which one. In such situations we can assign epistemic (Bayesian) probabilities to possible deterministic systems, create thereby a system of epistemic random variables, and subject it to contextual analysis. In this way one can treat, in particular, such logical antinomies as the Liar paradox. The simplest systems of epistemic random variables describing the latter have no direct cross-influences and the maximal possible degree of purely contextual influences.
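
To make the machinery concrete, here is a minimal Python sketch of a rank-2 cyclic system of $\pm 1$ random variables, checked against the Kujala–Dzhafarov criterion for cyclic systems from Contextuality-by-Default (an assumption of this sketch; the distributions are illustrative stand-ins for a Liar-type epistemic system with no direct cross-influences and maximal contextuality, not the talk's exact construction):

    # Two contents recorded in two contexts; each context is a joint pmf
    # over (+/-1, +/-1) outcomes of its two random variables.
    ctx1 = {(+1, +1): 0.5, (-1, -1): 0.5}   # perfectly correlated
    ctx2 = {(+1, -1): 0.5, (-1, +1): 0.5}   # perfectly anti-correlated

    def E(pmf, f):
        """Expectation of f under a finite pmf."""
        return sum(p * f(o) for o, p in pmf.items())

    # Product expectations within each context.
    p1 = E(ctx1, lambda o: o[0] * o[1])     # +1.0
    p2 = E(ctx2, lambda o: o[0] * o[1])     # -1.0

    # Direct (causal) cross-influence: change of marginals across contexts.
    m1 = [E(ctx1, lambda o: o[0]), E(ctx1, lambda o: o[1])]
    m2 = [E(ctx2, lambda o: o[0]), E(ctx2, lambda o: o[1])]
    delta = sum(abs(a - b) for a, b in zip(m1, m2))  # 0.0: none here

    # Cyclic rank-2 criterion: contextual iff |p1 - p2| - delta > n - 2 = 0.
    degree = abs(p1 - p2) - delta
    print(f"contextual: {degree > 0}, degree: {degree}")  # degree 2.0, the maximum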

An Epistemic Bridge for Presupposition Projection in Questions

Nadine Theiler

Semantic presuppositions are certain inferences associated with words or linguistic constructions. For example, if someone tells you that they “recently started doing yoga”, then this presupposes that they didn’t do yoga before.

A problem that has occupied semanticists for decades is how the presuppositions of a complex sentence can be computed from the presuppositions of its parts. Put another way: how do presuppositions project in various environments?

In this talk, I will discuss presupposition projection in one particular linguistic environment, namely in questions, arguing that it should be treated pragmatically. I will motivate a generalized version of Stalnaker’s bridge principle and show that it makes correct predictions for a range of different interrogative forms and different question uses.

Intermediate Grammaticality

Sandra Villata

Formal theories of grammar and traditional sentence processing models start from the assumption that the grammar is a system of rules. In such a system, only binary outcomes are generated: a sentence is well-formed if it follows the rules of the grammar and ill-formed otherwise. This dichotomous grammatical system faces a critical challenge, namely accounting for the intermediate/gradient modulations observable in experimental measures (e.g., sentences receive gradient acceptability judgments, speakers report a gradient ability to comprehend sentences that deviate from idealized grammatical forms, and various online sentence processing measures yield gradient effects). This challenge is traditionally met by attributing gradient effects to extra-grammatical factors (e.g., working memory limitations, reanalysis, semantics), which intervene after the syntactic module generates its output. As a test case, in this talk I will focus on a specific kind of violation that is at the core of linguistic investigation: islands, a family of encapsulated syntactic domains that seem to prohibit the establishment of syntactic dependencies within them (Ross 1967), as in “What do you wonder whether Mary bought?”. Islands are interesting because, although most linguistic theories treat them as fully ungrammatical and uninterpretable, I will present experimental evidence revealing gradient patterns of acceptability, as well as evidence that some island violations are interpretable. To account for these gradient data, I explore the consequences of assuming a more flexible rule-based system, in which sentential elements can be coerced, under specific circumstances, into playing a role that does not fully fit them. In this system, unlike traditional ones, structure formation is forced even under sub-optimal circumstances, which generates semi-grammatical structures in a continuous grammar.

Divergent potentialism: A modal analysis with an application to choice sequences

Ethan Brauer, Øystein Linnebo, and Stewart Shapiro

Modal logic has recently been used to analyze potential infinity and potentialism more generally. However, this analysis breaks down in cases of divergent possibilities, where the modal logic is weaker than S4.2. This talk has three aims. First, we use the intuitionistic theory of free choice sequences to motivate the need for a modal analysis of divergent potentialism and explain the challenge of connecting the ordinary theory of choice sequences with our modal explication. Then, we use the so-called Beth-Kripke semantics for intuitionistic logic to overcome those challenges. Finally, we apply the resulting modal analysis of divergent potentialism to make choice sequences comprehensible in classical terms.
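
For orientation, the standard modal analysis of potentialism works in S4.2, i.e. S4 plus the convergence axiom (G), and interprets the quantifiers via the potentialist translation; the quantifier clauses are sketched here in our notation, and divergent potentialism is precisely the setting in which (G) fails:

\[
(G)\ \ \Diamond\Box\varphi \rightarrow \Box\Diamond\varphi,
\qquad
(\forall x\,\varphi)^{\Diamond} := \Box\forall x\,\varphi^{\Diamond},
\qquad
(\exists x\,\varphi)^{\Diamond} := \Diamond\exists x\,\varphi^{\Diamond}.
\]

Free choice sequences, whose future extensions can be mutually incompatible, generate branching possibilities of exactly this divergent kind.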

Computing Perfect Matchings in Graphs

Tyler Markkanen

A matching of a graph is any set of edges in which no two edges share a vertex. Steffens gave a necessary and sufficient condition for countable graphs to have a perfect matching (i.e., a matching that covers all vertices). We analyze the strength of Steffens’ theorem from the viewpoint of computability theory and reverse mathematics. By first restricting to certain kinds of graphs (e.g., graphs with bounded degree and locally finite graphs), we classify some weaker versions of Steffens’ theorem. We then analyze Steffens’ corollary on the existence of maximal matchings, which is critical to his proof of the main theorem. Finally, using methods of Aharoni, Magidor, and Shore, we give a partial result that helps home in on the computational strength of Steffens’ theorem. This is joint work with Stephen Flood, Matthew Jura, and Oscar Levin.
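
For contrast with the countable case, a maximal matching (one that cannot be extended, not necessarily one of maximum size) of a finite graph can be computed by a single greedy pass; the computability-theoretic subtleties analyzed in the talk arise because no such effective enumeration is guaranteed to succeed on infinite graphs. A minimal Python sketch of the finite case:

    def maximal_matching(edges):
        """Greedily build a maximal matching: scan the edges once, adding an
        edge whenever neither endpoint is already matched."""
        matched = set()
        matching = []
        for u, v in edges:
            if u != v and u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    # A 4-cycle: here the greedy matching is also perfect (covers all vertices).
    print(maximal_matching([(1, 2), (2, 3), (3, 4), (4, 1)]))  # [(1, 2), (3, 4)]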

Brouwer, Plato, and classification

Sam Sanders

Classification is an essential part of all the exact sciences, including mathematical logic. The program of Reverse Mathematics classifies theorems of ordinary mathematics according to the minimal axioms needed for a proof. We show that the current scale, based on comprehension and discontinuous functions, is not satisfactory, as it classifies many intuitively weak statements, like the uncountability of $\mathbb{R}$ or properties of the Riemann integral, in the same rather strong class. We introduce an alternative/complementary scale with better properties, based on (classically valid) continuity axioms from Brouwer’s intuitionistic mathematics. We discuss how these new results provide empirical support for Platonism.
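
For reference, the comprehension-based scale in question is usually identified with the familiar linear hierarchy of systems at the heart of Reverse Mathematics (the talk's point being that this scale places many intuitively weak statements in its upper reaches):

\[
\mathsf{RCA}_0 \subsetneq \mathsf{WKL}_0 \subsetneq \mathsf{ACA}_0 \subsetneq \mathsf{ATR}_0 \subsetneq \Pi^1_1\text{-}\mathsf{CA}_0.
\]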

What Can Theoretical Computer Science Contribute to the Discussion of Consciousness?

Lenore Blum

We propose a mathematical model, which we call the Conscious Turing Machine (CTM), as a formalization of neuroscientist Bernard Baars’ Theater of Consciousness. The CTM is proposed for the express purpose of understanding consciousness. In settling on this model, we look not for complexity but simplicity, not for a complex model of the brain or cognition but a simple mathematical model sufficient to explain consciousness. Our approach, in the spirit of mathematics and theoretical computer science, proposes formal definitions to fix informal notions and deduce consequences. We are inspired by Alan Turing’s extremely simple formal model of computation, which was a fundamental first step in the mathematical understanding of computation. Our formalization includes a precise definition of chunk, a precise description of the competition that Long Term Memory (LTM) processors enter to gain access to Short Term Memory (STM), and a precise definition of conscious awareness in the model. Feedback enables LTM processors to learn from their mistakes and successes, and emerging links enable conscious processing to become unconscious. The reasonableness of the formalization lies in the breadth of concepts that the model explains easily and naturally. The model provides some understanding of the Hard Problem of consciousness, which we explore in the particular case of pain and pleasure. The understanding depends on the dynamics of the CTM, not on chemicals like serotonin, dopamine, and so on. We set ourselves the problem of explaining the feeling of consciousness in ways that apply as well to machines made of silicon and gold as to animals made of flesh and blood. With regard to suggestions for AI, the CTM is well suited to giving succinct explanations for whatever high-level decisions it makes. This is because the chunk in STM either articulates an explanation or points to chunks that do.
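
As a toy illustration of the competition just described, here is a hypothetical Python sketch in which LTM processors submit weighted chunks and a winner-take-all up-tree selects the single chunk that reaches STM and is broadcast (the names and the deterministic tie-free competition are our simplifications; the CTM itself specifies the competition probabilistically):

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        source: str     # originating LTM processor
        gist: str       # compressed content of the submission
        weight: float   # signed measure of importance/valence

    def compete(chunks):
        """Winner-take-all up-tree: repeatedly pair off chunks, keeping the one
        with the larger |weight|, until a single chunk reaches STM."""
        layer = list(chunks)
        while len(layer) > 1:
            nxt = [max(layer[i], layer[i + 1], key=lambda c: abs(c.weight))
                   for i in range(0, len(layer) - 1, 2)]
            if len(layer) % 2:      # odd chunk out gets a bye to the next round
                nxt.append(layer[-1])
            layer = nxt
        return layer[0]

    submissions = [Chunk("vision", "red light ahead", 0.9),
                   Chunk("memory", "appointment at noon", 0.4),
                   Chunk("pain", "ankle hurts", -1.2)]
    conscious = compete(submissions)         # the chunk broadcast from STM
    print(conscious.source, conscious.gist)  # pain ankle hurts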