Abstracts

Divergent potentialism: A modal analysis with an application to choice sequences

Ethan Brauer, Øystein Linnebo, and Stewart Shapiro

Modal logic has recently been used to analyze potential infinity and potentialism more generally. However, this analysis breaks down in cases of divergent possibilities, where the modal logic is weaker than S4.2. This talk has three aims. First, we use the intuitionistic theory of free choice sequences to motivate the need for a modal analysis of divergent potentialism and explain the challenges of connecting the ordinary theory of choice sequences with our modal explication. Then, we use the so-called Beth-Kripke semantics for intuitionistic logic to overcome those challenges. Finally, we apply the resulting modal analysis of divergent potentialism to make choice sequences comprehensible in classical terms.
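
For reference, and not part of the abstract: the modal benchmark mentioned above is standard. S4.2 is S4 plus the convergence axiom, which is valid on reflexive, transitive, and directed frames (any two accessible possibilities have a common extension); directedness is precisely the condition that fails when possibilities diverge.

\[
  \mathsf{S4.2} \;=\; \mathsf{S4} \;+\; \big(\Diamond\Box\varphi \rightarrow \Box\Diamond\varphi\big)
\]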

Computing Perfect Matchings in Graphs

Tyler Markkanen

A matching of a graph is any set of edges in which no two edges share a vertex. Steffens gave a necessary and sufficient condition for countable graphs to have a perfect matching (i.e., a matching that covers all vertices). We analyze the strength of Steffens’ theorem from the viewpoint of computability theory and reverse mathematics. By first restricting to certain kinds of graphs (e.g., graphs with bounded degree and locally finite graphs), we classify some weaker versions of Steffens’ theorem. We then analyze Steffens’ corollary on the existence of maximal matchings, which is critical to his proof of the main theorem. Finally, using methods of Aharoni, Magidor, and Shore, we give a partial result that helps home in on the computational strength of Steffens’ theorem. Joint work with Stephen Flood, Matthew Jura, and Oscar Levin.
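
As an editorial illustration of the definitions above (not from the abstract, and only for finite graphs, whereas Steffens’ theorem concerns countable ones), here is a minimal Python sketch; the function names are ours.

# Finite illustration of "matching" and "perfect matching"; names are illustrative only.
def is_matching(edges):
    """True if no two edges share a vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def is_perfect_matching(vertices, edges):
    """True if edges form a matching that covers every vertex."""
    covered = {x for e in edges for x in e}
    return is_matching(edges) and covered == set(vertices)

# A 4-cycle on vertices 0-3 has the perfect matching {(0, 1), (2, 3)}.
print(is_perfect_matching(range(4), [(0, 1), (2, 3)]))  # True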

Brouwer, Plato, and classification

Sam Sanders

Classification is an essential part of all the exact sciences, including mathematical logic. The program Reverse Mathematics classifies theorems of ordinary mathematics according to the minimal axioms needed for a proof. We show that the current scale, based on comprehension and discontinuous functions, is not satisfactory, as it classifies many intuitively weak statements, like the uncountability of $\mathbb{R}$ or properties of the Riemann integral, in the same rather strong class. We introduce an alternative/complementary scale with better properties, based on (classically valid) continuity axioms from Brouwer’s intuitionistic mathematics. We discuss how these new results provide empirical support for Platonism.

What Can Theoretical Computer Science Contribute to the Discussion of Consciousness?

Lenore Blum

We propose a mathematical model, which we call the Conscious Turing Machine (CTM), as a formalization of neuroscientist Bernard Baars’ Theater of Consciousness. The CTM is proposed for the express purpose of understanding consciousness. In settling on this model, we look not for complexity but simplicity, not for a complex model of the brain or cognition but a simple mathematical model sufficient to explain consciousness. Our approach, in the spirit of mathematics and theoretical computer science, proposes formal definitions to fix informal notions and deduce consequences. We are inspired by Alan Turing’s extremely simple formal model of computation, a fundamental first step in the mathematical understanding of computation. This mathematical formalization includes a precise definition of chunk, a precise description of the competition that Long Term Memory (LTM) processors enter to gain access to Short Term Memory (STM), and a precise definition of conscious awareness in the model. Feedback enables LTM processors to learn from their mistakes and successes, and emerging links enable conscious processing to become unconscious. The reasonableness of the formalization lies in the breadth of concepts that the model explains easily and naturally. The model provides some understanding of the Hard Problem of consciousness, which we explore in the particular case of pain and pleasure. The understanding depends on the dynamics of the CTM, not on chemicals like serotonin, dopamine, and so on. We set ourselves the problem of explaining the feeling of consciousness in ways that apply as well to machines made of silicon and gold as to animals made of flesh and blood. With regard to suggestions for AI, the CTM is well suited to giving succinct explanations for whatever high-level decisions it makes. This is because the chunk in STM either articulates an explanation or points to chunks that do.
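
As a reading aid only: the following toy sketch shows the generic shape of "many processors submit weighted chunks and one wins access to a broadcast slot". It is not the CTM's competition, whose precise definition is exactly what the talk supplies; the names, the data, and the max-by-weight rule are placeholders.

# Toy winner-take-all competition for a broadcast slot.
# NOT the CTM's definition; all names and the selection rule are placeholders.
def compete(submissions):
    """submissions: list of (processor, chunk, weight); return the winner."""
    return max(submissions, key=lambda s: s[2])

submissions = [("vision", "red light ahead", 0.9),
               ("memory", "late for the meeting", 0.7),
               ("planning", "take the next exit", 0.4)]

processor, chunk, weight = compete(submissions)
print(f"broadcast: {chunk!r} (from {processor}, weight {weight})")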

What Problem Did Ladd-Franklin (Think She) Solve(d)?

Sara Uckelman

Christine Ladd-Franklin is often hailed as a guiding star in the history of women in logic: not only did she study under C.S. Peirce and become one of the first women to receive a PhD from Johns Hopkins, she also, according to many modern commentators, solved a logical problem which had plagued the field of syllogisms since Aristotle. In this paper, we revisit this claim, posing and answering two distinct questions: Which logical problem did Ladd-Franklin solve in her thesis, and which problem did she think she solved? We show that in neither case is the answer “a long-standing problem due to Aristotle”. Instead, what Ladd-Franklin solved was a problem due to Jevons that was first articulated in the 19th century.

Logical Nihilism

Éno Agolli

Logical nihilism is the view that there is no logic, or more precisely that no single, universal consequence relation governs natural language reasoning. Here, I present three different arguments for logical nihilism from philosophically palatable premises. The first argument comes from a combination of pluralism with the desideratum that logical consequence should be universal, properly understood. The second argument is a slippery-slope argument against monists who support weak logical systems on account of their power to characterize a vast range of true theories. The third argument is a general strategy for generating counterexamples to any inference rule, including purportedly fundamental ones such as disjunction introduction. I close by discussing why a truth-conditional approach to the meaning of the logical connectives not only does not force us to reject such counterexamples but also reveals that the right truth-conditions are far more general than the classical ones, at the price of nihilism.
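
For reference (standard material, not from the abstract), the rule mentioned last is usually stated as: from a formula, infer its disjunction with anything,

\[
  \frac{\varphi}{\varphi \lor \psi}\ (\lor\text{-introduction})
\]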

Ordering Anything: Rejiggering Linnebo’s Ordinal Abstraction

Eileen Nutting

Øystein Linnebo develops an abstractionist account of the natural numbers as ordinals. On this account, the natural numbers are abstracted from orderings of concrete numerals. But Linnebo also gestures towards an alternative version of his account, on which the restriction to concrete numerals is lifted. I develop something like this alternative account, show how it avoids the Burali-Forti paradox, and show how it guarantees that every number has a successor. Given these and other good features, I claim that Linnebo should prefer this alternative account to the one he develops.

Contextual analysis, epistemic probabilities, and paradoxes

Ehtibar Dzhafarov

Contextual analysis deals with systems of random variables. Each random variable within a system is labeled in two ways: by its content (that which the variable measures or responds to) and by its context (conditions under which it is recorded). Dependence of random variables on contexts is classified into (1) direct (causal) cross-influences and (2) purely contextual (non-causal) influences. The two can be conceptually separated from each other and measured in a principled way. The theory has numerous applications in quantum mechanics, and also in such areas as decision making and computer databases. A system of deterministic variables (as a special case of random variables) is always void of purely contextual influences. There are, however, situations when we know that a system is one of a set of deterministic systems, but we cannot know which one. In such situations we can assign epistemic (Bayesian) probabilities to possible deterministic systems, create thereby a system of epistemic random variables, and subject it to contextual analysis. In this way one can treat, in particular, such logical antinomies as the Liar paradox. The simplest systems of epistemic random variables describing the latter have no direct cross-influences and the maximal possible degree of purely contextual influences.
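
As a notational aside (the $R^c_q$ notation is standard in this literature; the particular $2\times 2$ layout is only an illustration): a system can be displayed as a matrix of random variables, with one row per context and one column per content, leaving a cell empty when a content is not recorded in a context.

\[
\begin{array}{c|cc}
      & q_1           & q_2           \\ \hline
  c_1 & R^{c_1}_{q_1} & R^{c_1}_{q_2} \\
  c_2 & R^{c_2}_{q_1} & R^{c_2}_{q_2}
\end{array}
\]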

References:

Kujala, J.V., Dzhafarov, E.N., & Larsson, J.-A. (2015). Necessary and sufficient conditions for extended noncontextuality in a broad class of quantum mechanical systems. Physical Review Letters 115, 150401 (available as arXiv:1407.2886).

Dzhafarov, E.N., Cervantes, V.H., & Kujala, J.V. (2017). Contextuality in canonical systems of random variables. Philosophical Transactions of the Royal Society A 375: 20160389 (available as arXiv:1703.01252).

Cervantes, V.H., & Dzhafarov, E.N. (2018). Snow Queen is evil and beautiful: Experimental evidence for probabilistic contextuality in human choices. Decision 5, 193-204 (available as arXiv:1711.00418).

Core type theory

David Ripley

The Curry-Howard correspondence between intuitionistic logic and the simply-typed lambda calculus forms an important bridge between logical and computational research. This talk develops a variant typed lambda calculus, called “core type theory”, that stands in a similar correspondence to Neil Tennant’s “core logic” (formerly known as “intuitionistic relevant logic”), and shows some basic (and some surprising!) results about this calculus.
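
As a reminder of the correspondence in its simplest form (standard background, not specific to core type theory): a closed term of a type doubles as a proof of the corresponding intuitionistic formula, e.g.

\[
  \lambda x.\,\lambda y.\,x \;:\; A \rightarrow (B \rightarrow A),
\]

so the K combinator witnesses the intuitionistic theorem $A \rightarrow (B \rightarrow A)$.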

Variable Free Semantics: Putting competition effects where they belong

Pauline Jacobson

This talk will have two parts. First I will discuss the approach to semantics making no use of variable names, indices, or assignment functions that I have advocated in a series of papers (see especially Jacobson 1999, Linguistics and Philosophy, and 2000, Natural Language Semantics; the approach is also exposited in Jacobson’s 2014 textbook Compositional Semantics, OUP). There are a number of theoretical and empirical advantages to this approach, which will be only briefly reviewed here. To mention the most obvious theoretical advantage: the standard use of variable names and indices in semantics requires meanings to be relativized to assignment functions (assignments of values to the variable names), adding a layer to the semantic machinery. This program eliminates that layer and treats all meanings as ‘healthy’ model-theoretic objects (the meaning of a pronoun, for example, is simply the identity function on individuals, not a function from assignments to individuals). I will then show a new empirical payoff, which concerns competition effects found in ellipsis constructions. These competition effects have gone under the rubric of MaxElide in the linguistics literature. One example centers on the contrast in (1) (on the reading where each candidate’s hope is about their own success):

  1. a. Harris is hoping that South Carolina will seal the nomination for her, and Warren is too. (= ‘hoping that it will seal nomination for her (Warren)’)
    b. ?*Harris is hoping that South Carolina will seal the nomination for her, and Warren is also hoping that it will. (= ‘seal the nomination for her (Warren)’)

The ‘standard’ wisdom is that the grammar contains a constraint to the effect that when material is ‘missing’ (or ‘elided’), the bigger ellipsis is required whenever a bigger constituent could have been elided. Why the grammar should contain such a constraint is a total mystery; moreover, I and others have argued elsewhere that grammatical competition constraints represent a real complication in the grammar. When there are competition effects, they should be located in speakers and hearers (we know that speakers and hearers do compute alternatives – Gricean reasoning, for example, is based on that assumption). Under the variable-free account, the missing material in (1a) is of a different type than that in (1b). In (1a), the listener need only supply the property ‘be an x such that x hopes that SC will seal nomination for x’, which is the meaning of the VP in the first clause. In (1b), what must be supplied is the 2-place relation ‘seal the nomination for’ (note that this is in part because the pronoun her in the first clause is not a variable, and so [[seal the nomination for her]] is the function from an individual x to the property of sealing the nomination for x, which in turn is the two-place relation named above). The competition effect is thus about types, not size, and can be given a plausible explanation in terms of communicative pressures. Assuming that meanings of more complex types are more difficult to access than those of simpler types, there is pressure for speakers to choose the simpler-type ellipsis. The type-competition story crucially relies on the claim that expressions containing pronouns unbound within them denote functions from individuals to something, rather than functions from assignment functions.
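
To make the type contrast concrete, here is an editorial toy encoding in Python (individuals as strings, propositions as tuples; the names and the encoding are ours, not Jacobson's formalism). The point is only that the antecedent in (1a) is a one-place property while the one in (1b) is a curried two-place relation.

# Toy encoding of the type contrast in (1); names and representations are illustrative only.
pronoun = lambda x: x  # variable-free claim: a pronoun denotes the identity function on individuals

# 'subj seals the nomination for benef' (a curried two-place relation)
seal_for = lambda benef: lambda subj: ("seal_the_nomination_for", subj, benef)
# 'x hopes that p'
hope = lambda p: lambda x: ("hope", x, p)

# (1a): the listener supplies a one-place property,
#       'be an x such that x hopes that SC will seal the nomination for x'
elided_1a = lambda x: hope(seal_for(x)("SC"))(x)

# (1b): the listener supplies the two-place relation 'seal the nomination for'
elided_1b = seal_for

print(elided_1a("Warren"))          # a complete proposition about Warren
print(elided_1b("Warren")("SC"))    # the relation needs both of its arguments to yield a proposition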