Abstracts

Is Functionalism Inconsistent?

Owain Griffin (OSU)

Starting with Bealer (1978), some authors have claimed that Beth-style definability results show functionalism about the mind to be inconsistent. If these arguments go through, then the Beth result provides a way of collapsing functionalism into reductionism – exactly what functionalists purport to deny. While this has received discussion in the literature (see Hellman & Thompson (1975); Block (1980); Tennant (1985)), it has recently been resuscitated and refined by Halvorson (2019). In this paper, we question the argument’s soundness and propose a new objection to it. We claim that, in order to derive its conclusion, the argument relies fundamentally on equivocations concerning the notion of definability.

The logic of sequences

Cian Dorr and Matthew Mandelkern (NYU)

In the course of proving a tenability result about the probabilities of conditionals, van Fraassen (1976) introduced a semantics for conditionals based on sequences of worlds, representing a particularly simple special case of ordering semantics for conditionals. According to sequence semantics, ‘If p, then q’ is true at a sequence just in case either q is true at the first truncation of the sequence where p is true, or there is no truncation where p is true. This approach has become increasingly popular in recent years. However, its logic has never been explored. We axiomatize the logic of sequence semantics, showing that it is the result of adding two new axioms to Stalnaker’s logic C2: one which is prima facie attractive, and one which is complex and difficult to assess. We also show that when sequence models are generalized to allow transfinite sequences, the result is the logic that adds only the first (more attractive) axiom to C2.
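The truth clause stated above can be sketched directly. The following minimal Python sketch makes two assumptions the abstract does not fix: "truncations" of a sequence are read as its tails (the full sequence counting as its own first truncation), and non-conditional formulas are evaluated at the first world of a sequence.

```python
def holds(formula, seq):
    """Evaluate a formula at a (finite, non-empty) sequence of worlds.

    A formula is an atom (a string), ('not', f), ('and', f, g), or
    ('if', p, q). Worlds are dicts assigning truth values to atoms.
    """
    kind = formula[0] if isinstance(formula, tuple) else 'atom'
    if kind == 'atom':
        return seq[0][formula]          # atoms: look at the head world
    if kind == 'not':
        return not holds(formula[1], seq)
    if kind == 'and':
        return holds(formula[1], seq) and holds(formula[2], seq)
    if kind == 'if':
        _, p, q = formula
        # scan the truncations in order for the first one where p is true
        for i in range(len(seq)):
            tail = seq[i:]
            if holds(p, tail):
                return holds(q, tail)   # q at the first p-verifying truncation
        return True                     # vacuous case: no truncation verifies p

# Example: at <w0, w1>, 'if p then q' looks past w0 (where p fails) to w1.
w0 = {'p': False, 'q': False}
w1 = {'p': True,  'q': True}
print(holds(('if', 'p', 'q'), [w0, w1]))  # True: first p-truncation starts at w1
```

The sketch covers only finite sequences; the transfinite generalization mentioned in the abstract would require evaluating tails indexed by ordinals.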

Logic, Natural Language and Semantic Paradoxes

Amit Pinsker

How should we respond to semantic paradoxes? I argue that the answer to this question depends on what we take the relation between logic and natural language to be. Focusing on the Liar paradox as a case study, I distinguish two approaches to solving it: one approach (henceforth ‘NL’) takes logic to be a model of Natural Language, while the other (henceforth ‘CC’) suggests that a solution should be guided by Conceptual Considerations pertaining to truth. As it turns out, different solutions can be understood as taking one approach or the other. Furthermore, even solutions within the same ‘family’ take different approaches, and are motivated by NL and CC to different extents, which suggests that the distinction is not a binary one – NL and CC are two extremes of a spectrum.

Acknowledging this has two significant upshots. First, some allegedly competing solutions are in fact not competing, since they apply logic for different purposes. Thus, various objections and evaluations of solutions in the literature are in fact misplaced: they object to NL-solutions based on considerations that are relevant only to CC, or vice versa. Second, a plausible explanation of this discrepancy is that NL and CC are two ways of cashing out what Priest calls “the canonical application of logic”: ordinary deductive reasoning. These two ways are based on two different assumptions about the fundamental relation between logic and natural language. The overall conclusion is thus that a better understanding of the relation between logic and natural language could give us a better understanding of what a good solution to semantic paradoxes should look like.

Detachability and LP: An Alternative Perspective

Thomas M Ferguson

In this talk, I aim to connect two strands of work related to strict-tolerant consequence and the logic of paradox (LP). First is work (“Monstrous Content and the Bounds of Discourse”, JPL) that argues that considerations of topic-theoretic conversational boundaries are captured by the strict-tolerant interpretation of weak Kleene matrices. Second is work (“Deep ST”, JPL) arguing that all the metainferential properties of inference rules in the strict-tolerant hierarchy are already encoded in standard LP. Synthesizing these two strands is particularly useful when asking about settings accommodating both topic-theoretic and veridical semantic defects. Importantly, this synthesis yields a novel defense of LP as a particularly compelling logic and allows a reevaluation of the failure of detachability for the LP conditional.

A Tale of Two Logics: Did Priest Get Lost in India?

Chris Rahlwes

With his extensive work on the Buddhist tetralemma (catuṣkoṭi) and the Jaina sevenfold sentences (saptabhaṅgī), Graham Priest has presented Indian logic as dialetheic, in which there are true contradictions. While Priest is not the only logician to present Indian logic as non-classical or paraconsistent, his dialetheic reading has gained widespread traction among contemporary logicians, leading many to posit that Priest is (historically) correct in his reading. However, those who study Buddhism and Jainism often do not share these convictions. The backlash from such specialists often simplifies Priest’s account and ignores the challenge that the dialetheic reading brings regarding the nature of negation. Following Priest’s claim that Aristotelian logic is incompatible with classical logic, I argue that Priest uses the wrong logical framework – the non-classical heir of classical logic – to understand Indian logic. In so doing, I present a neo-Pāṇinian or neo-Aristotelian account of Buddhist and Jaina logic emphasizing negation, denial, and (to a lesser degree) contradiction.

On VDV (Variable Designated Value) Logics

Graham Priest

In this talk I will isolate a class of logics which I shall call Variable Designated Values (VDV) Logics, and consider some of their properties. VDV logics are many-valued logics in which different sets of designated values are used for the premises and conclusions. The idea goes back, as far as I know, to Malinowski (1990) and (1994), though much use of the idea has been made by logicians recently in the form of the logics ST and TS (S = Strict; T = Tolerant).
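The VDV idea can be illustrated concretely on the strong Kleene matrices. In the sketch below, the values 0, 1/2, 1, the strict set S = {1}, the tolerant set T = {1/2, 1}, and the two sample arguments are standard choices in the ST/TS literature, assumed here for illustration rather than taken from the abstract:

```python
from itertools import product

S = {1.0}          # strictly designated values
T = {0.5, 1.0}     # tolerantly designated values

def impl(a, b):
    return max(1.0 - a, b)   # strong Kleene conditional

def valid(premises, conclusion, prem_designated, concl_designated, atoms):
    """XY-validity: whenever every premise takes a value in the premise set,
    the conclusion takes a value in the conclusion set, on every valuation."""
    for vals in product([0.0, 0.5, 1.0], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(prem(v) in prem_designated for prem in premises):
            if conclusion(v) not in concl_designated:
                return False
    return True

p = lambda v: v['p']
q = lambda v: v['q']
p_impl_q = lambda v: impl(v['p'], v['q'])

# ST (strict premises, tolerant conclusions) validates modus ponens ...
print(valid([p, p_impl_q], q, S, T, ['p', 'q']))  # True
# ... while TS invalidates even p |= p (take v(p) = 1/2).
print(valid([p], p, T, S, ['p']))                 # False
```

Varying the two designated sets independently is exactly what distinguishes VDV consequence from the usual single-set definition.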

Negated Implications in Connexive Relevant Logics

Andrew Tedder

This talk investigates the odd fact that one may add connexive theses to relevant logics, giving rise to contraclassical systems, and obtain logics which are not trivial, still obey many of the desired relevance properties, and yet allow one to prove every negated implication. I’ll show why this is the case, and investigate alternative connexive relevant logics in the area that don’t have this undesirable property.

Towards a diversified understanding of computability

Liesbeth De Mol

In this talk I will argue that we should care more for, and be more careful with, the history of computability, making a plea for a more diverse and informed understanding. The starting point will be the much-celebrated Turing machine model. Why is it that, within the computability community, this model is often considered the model? In the first part of this talk I review some of those reasons, showing how and why they are in need of revision based, mostly, on historical arguments. On that basis I argue that, while the Turing machine model is surely a basic one, part of its supposed superiority over other models rests on socio-historical forces. In the second part, I then consider a number of historical, philosophical and technical arguments to support and elaborate the idea of a more diversified understanding of the history of computability. Central to those arguments will be the differentiation between, on the one hand, the logical equivalence of the different models with respect to the computable functions and, on the other hand, some basic intensional differences between those very same models. To keep the argument clear, the main focus will be on the different models provided by Emil Leon Post, but I will also include references to the work of Alonzo Church, Stephen C. Kleene and Haskell B. Curry.

Supported by the PROGRAMme project, ANR-17-CE38-0003-01.

Nothing is Logical

Maria Aloni

People often reason contrary to the prescriptions of classical logic. In the talk I will discuss some cases of divergence between everyday and logical-mathematical reasoning and propose that they are a consequence of a tendency in human cognition to neglect models which verify sentences by virtue of an empty configuration [neglect-zero tendency, Aloni 2022]. I will then introduce a bilateral state-based modal logic (BSML) which formally represents the neglect-zero tendency and can be used to rigorously study its impact on reasoning and interpretation. After discussing some of the applications, I will compare BSML with related systems (truthmaker semantics, possibility semantics, and inquisitive semantics) via translations into Modal Information Logic [van Benthem 2019].

Maria Aloni. Logic and conversation: The case of free choice, Semantics and Pragmatics, vol 15 (2022)
Johan van Benthem. Implicit and Explicit Stances in Logic, Journal of Philosophical Logic, vol 48, pages 571–601 (2019)

Modeling Linguistic Causation

Elitzur Bar-Asher Siegal

This talk introduces a systematic way of analyzing the semantics of causative linguistic expressions, and of how natural languages express causal relationships. For this purpose, I will employ the Structural Equation Modeling (SEM) framework and demonstrate how this method offers a rigorous model-theoretic approach to examining the distinct semantics of causal expressions. I introduce formal logical definitions of different types of conditions using SEM networks, and illustrate how this proposal, along with its formal tools, can help to clarify the asymmetric entailment relationships among different causative constructions.
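The SEM framework mentioned above can be sketched computationally. The tiny network below (a short circuit and oxygen jointly causing a fire) and the "but-for" counterfactual test are illustrative textbook assumptions, not the author's own definitions:

```python
def solve(equations, intervention=None):
    """Compute all variable values in a structural equation model.

    Each equation maps the current value assignment to a variable's value.
    An intervention replaces a variable's equation with a fixed value.
    Iterating to a fixpoint suffices for small acyclic networks like this one.
    """
    values = dict(intervention or {})
    for _ in range(len(equations)):
        for var, eq in equations.items():
            if intervention and var in intervention:
                continue
            values[var] = eq(values)
    return values

model = {
    'short':  lambda v: True,   # exogenous: the short circuit occurred
    'oxygen': lambda v: True,   # exogenous: oxygen was present
    'fire':   lambda v: v.get('short', False) and v.get('oxygen', False),
}

# Actual outcome, and a counterfactual ("but-for") test of the short circuit:
actual = solve(model)
counterfactual = solve(model, intervention={'short': False})
print(actual['fire'], counterfactual['fire'])  # True False
```

Conditions on variables (necessary, sufficient, enabling, and so on) can then be stated as properties of such networks under interventions, which is the kind of formal definition the talk develops.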