Owain Griffin (OSU)
Starting with Bealer (1978), some authors have claimed that Beth-style definability results show functionalism about the mind to be inconsistent. If these arguments go through, then the Beth result provides a way of collapsing functionalism into reductionism – exactly what functionalists purport to deny. While this has received discussion in the literature (see Hellman & Thompson (1975), Block (1980), Tennant (1985)), it has recently been resuscitated and refined by Halvorson (2019). In this paper, we question the argument's soundness and propose a new objection to it. We claim that in order to derive its conclusion, the argument relies fundamentally on equivocations concerning the notion of definability.
Cian Dorr and Matthew Mandelkern (NYU)
In the course of proving a tenability result about the probabilities of conditionals, van Fraassen (1976) introduced a semantics for conditionals based on sequences of worlds, representing a particularly simple special case of ordering semantics for conditionals. According to sequence semantics, ‘If p, then q’ is true at a sequence just in case either q is true at the first truncation of the sequence where p is true, or there is no truncation where p is true. This approach has become increasingly popular in recent years. However, its logic has never been explored. We axiomatize the logic of sequence semantics, showing that it is the result of adding two new axioms to Stalnaker’s logic C2: one which is prima facie attractive, and one which is complex and difficult to assess. We also show that when sequence models are generalized to allow transfinite sequences, the result is the logic that adds only the first (more attractive) axiom to C2.
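The truth clause for conditionals in sequence semantics lends itself to a direct computational reading. The following is a minimal sketch (my own illustration, not from the abstract): worlds are modeled as sets of true atoms, a sequence is a finite list of worlds, a truncation is a tail of the list, and truth of an atom at a sequence is truth at its first world.

```python
# Sketch of sequence semantics for conditionals. All modeling choices
# (worlds as sets of atoms, formulas as tuples) are illustrative assumptions.

def true_at(formula, seq):
    """Evaluate a formula at a non-empty sequence (list) of worlds."""
    op = formula[0]
    if op == "atom":                      # atoms are evaluated at the first world
        return formula[1] in seq[0]
    if op == "not":
        return not true_at(formula[1], seq)
    if op == "and":
        return true_at(formula[1], seq) and true_at(formula[2], seq)
    if op == "if":                        # 'If p, then q'
        p, q = formula[1], formula[2]
        for i in range(len(seq)):
            tail = seq[i:]                # the i-th truncation of the sequence
            if true_at(p, tail):
                return true_at(q, tail)   # first p-verifying truncation decides
        return True                       # vacuously true: no truncation verifies p

# Example: worlds w1 = {q}, w2 = {p}, w3 = {p, q}
seq = [{"q"}, {"p"}, {"p", "q"}]
print(true_at(("if", ("atom", "p"), ("atom", "q")), seq))  # False: the first
# p-verifying truncation starts at w2, where q is false
```

Transfinite sequences, as in the generalization the abstract mentions, would replace the finite loop with a search over ordinal-indexed tails; the finite case above suffices to convey the clause.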
How should we respond to semantic paradoxes? I argue that the answer to this question depends on what we take the relation between logic and natural language to be. Focusing on the Liar paradox as a case study, I distinguish two approaches to solving it: one approach (henceforth 'NL') takes logic to be a model of Natural Language, while the other (henceforth 'CC') suggests that a solution should be guided by Conceptual Considerations pertaining to truth. As it turns out, different solutions can be understood as taking one approach or the other. Furthermore, even solutions within the same 'family' take different approaches, and are motivated by NL and CC to different extents, which suggests that the distinction is not a binary one – NL and CC are two extremes of a spectrum.
Acknowledging this has two significant upshots. First, some allegedly competing solutions are in fact not competing, since they apply logic for different purposes. Thus, various objections and evaluations of solutions in the literature are in fact misplaced: they object to NL-solutions based on considerations that are relevant only to CC or vice versa. Second, the plausible explanation of this discrepancy is that NL and CC are two ways of cashing out what Priest calls “the canonical application of logic”: deductive ordinary reasoning. These two ways are based on two different assumptions about the fundamental relation between logic and natural language. The overall conclusion is thus that a better understanding of the relation between logic and natural language could give us a better understanding of what a good solution to semantic paradoxes should look like.
Thomas M Ferguson
In this talk, I aim to connect two strands of work related to strict-tolerant consequence and the logic of paradox (LP). First is work (“Monstrous Content and the Bounds of Discourse”, JPL) that argues that considerations of topic-theoretic conversational boundaries are captured by the strict-tolerant interpretation of weak Kleene matrices. Second is work (“Deep ST”, JPL) arguing that all the metainferential properties of inference rules in the strict-tolerant hierarchy are already encoded in standard LP. Synthesizing these two strands is particularly useful when asking about settings accommodating both topic-theoretic and veridical semantic defects. Importantly, this synthesis yields a novel defense of LP as a particularly compelling logic and allows a reevaluation of the failure of detachability for the LP conditional.
With his extensive work on the Buddhist tetralemma (catuṣkoṭi) and the Jaina sevenfold sentences (saptabhaṅgī), Graham Priest has presented Indian logic as dialetheic, admitting true contradictions. While Priest is not the only logician to present Indian logic as non-classical or paraconsistent, his dialetheic reading has gained widespread traction among contemporary logicians, leading many to posit that Priest is (historically) correct in his reading. However, those who study Buddhism and Jainism often do not share these convictions. The backlash from such specialists often oversimplifies Priest's account and ignores the challenge that the dialetheic reading brings regarding the nature of negation. Following Priest's claim that Aristotelian logic is incompatible with classical logic, I argue that Priest uses the wrong logical framework – the non-classical heir of classical logic – to understand Indian logic. In so doing, I present a neo-Pāṇinian or neo-Aristotelian account of Buddhist and Jaina logic emphasizing negation, denial, and (to a lesser degree) contradiction.
In this talk I will isolate a class of logics which I shall call Variable Designated Values (VDV) logics, and consider some of their properties. VDV logics are many-valued logics in which different sets of designated values are used for the premises and for the conclusions. The idea goes back, as far as I know, to Malinowski (1990) and (1994), though much use has been made of it by logicians recently in the form of the logics ST and TS (S = Strict; T = Tolerant).
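The VDV idea can be made concrete with a toy example. The sketch below (my own illustration, with assumed conventions) uses the strong Kleene truth tables over the values 0, 0.5, 1; ST designates {1} for premises and {1, 0.5} for conclusions, while TS swaps the two sets. An argument is valid just in case every valuation that designates all premises (by the premise set) designates the conclusion (by the conclusion set).

```python
# Toy VDV consequence over strong Kleene tables. The value 0.5 stands for
# the "neither" value; the choice of connectives and examples is illustrative.
from itertools import product

def neg(x):  return 1 - x
def conj(x, y):  return min(x, y)

def valid(premises, conclusion, atoms, prem_desig, concl_desig):
    """VDV validity: on every valuation, if each premise takes a value in
    prem_desig, then the conclusion takes a value in concl_desig."""
    for vals in product([0, 0.5, 1], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in prem_desig for p in premises) \
                and conclusion(v) not in concl_desig:
            return False
    return True

p = lambda v: v["p"]
p_and_not_p = lambda v: conj(v["p"], neg(v["p"]))

ST = ({1}, {1, 0.5})   # strict premises, tolerant conclusions
TS = ({1, 0.5}, {1})   # tolerant premises, strict conclusions

# p ∧ ¬p never takes value 1, so it ST-entails anything vacuously,
# but at p = 0.5 it is tolerantly designated while p is not strictly so:
print(valid([p_and_not_p], p, ["p"], *ST))  # True
print(valid([p_and_not_p], p, ["p"], *TS))  # False
```

The same machinery shows the well-known asymmetry of the two logics: even reflexivity (p entails p) fails for TS, since at p = 0.5 the premise is tolerantly designated but the conclusion is not strictly designated.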