Abstracts

Reductions between problems in reverse math and computability

Denis Hirschfeldt

Many mathematical principles can be stated in the form “for all X such that C(X) holds, there is a Y such that D(X,Y) holds”, where X and Y range over second-order objects, and C and D are arithmetic conditions. We can think of such a principle as a problem, where an instance of the problem is an X such that C(X) holds, and a solution to this instance is a Y such that D(X,Y) holds. I will discuss notions of reducibility between such problems coming from the closely related perspectives of reverse mathematics and computability theory.
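
As a rough formal gloss (supplied here, not taken from the abstract), such a principle \(\mathsf{P}\) has the \(\Pi^1_2\) form
\[
\forall X\,\bigl(C(X) \rightarrow \exists Y\, D(X,Y)\bigr),
\]
and one standard way of comparing two such problems in this setting is Weihrauch reducibility (named here only for illustration; the talk may also treat other notions). Assuming \(\Phi\) and \(\Psi\) range over Turing functionals,
\[
\mathsf{P} \le_W \mathsf{Q} \;\iff\; \exists\,\Phi,\Psi\;\forall X\,\Bigl[\,C_{\mathsf{P}}(X) \rightarrow C_{\mathsf{Q}}(\Phi(X)) \,\wedge\, \forall \hat{Y}\,\bigl(D_{\mathsf{Q}}(\Phi(X),\hat{Y}) \rightarrow D_{\mathsf{P}}(X,\Psi(X\oplus\hat{Y}))\bigr)\Bigr],
\]
i.e., every instance of \(\mathsf{P}\) is uniformly transformed into an instance of \(\mathsf{Q}\), and any solution to the transformed instance, together with the original instance, is uniformly transformed back into a solution to the original instance. Non-uniform variants such as computable reducibility are defined analogously, instance by instance.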

The interaction between demonstratives and relative clauses – a view from Mandarin

I-Ta Chris Hsieh, National Tsing Hua University, Taiwan

A demonstrative such as that or those may have various uses, be it deictic, anaphoric, or purely descriptive. One approach to demonstratives, namely the Hidden Argument Theory (henceforth, HAT) approach (e.g., King 2001, Blumberg 2020, Nowak 2021, Ahn 2022, a.o.), is intended to provide a unified account of these uses. In the HAT approach, a demonstrative, unlike a definite article (e.g., the), carries two restrictions; in one recent variant of this approach, namely Ahn (2022, Ling & Phil), the various uses of a demonstrative description are due to the different options for contributing the second restriction, including a deictic demonstration, an index, and a relative clause that adjoins to the demonstrative phrase. In this talk, I examine the interaction between demonstratives and relative clauses and show that current analyses within the HAT approach can yield some undesirable predictions. Some amendments will be suggested that accurately capture the interpretation of a demonstrative description with a relative clause while avoiding these predictions.

Causal dependence in actuality inferences

Prerna Nadathur

A range of complement-taking predicates give rise to surprising actuality inferences, in which a modally embedded complement event is understood non-modally, as taking place in the evaluation world. I argue that actuality inferences can be explained—and unified across predicate classes—on an approach in which the modality of participating predicates is analyzed in causal terms. This talk focuses on an illuminating case study: enough and too predicates.

Hacquard (2005) observes that, like the ability modals at the heart of the puzzle (Bhatt 1999), enough/too predicates have aspect-sensitive actuality inferences.  Under imperfective marking, French enough predicates like (1a) are compatible with the non-actualization of their complements; their perfective counterparts (as in 1b) show the complement entailment pattern of implicative verbs like French réussir (‘succeed’, ‘manage’; 2).

(1) a. Juno était assez rapide pour gagner la course, mais elle n’a pas gagné.
Juno was-IMPF fast enough to win the race, but she did not win.
b. Juno a été assez rapide pour gagner la course, #mais elle n’a pas gagné.
Juno was-PFV fast enough to win the race, #but she did not win.
(2) Juno { réussissait / a réussi } à gagner la course, #mais elle n’a pas gagné.
Juno { managed-IMPF / managed-PFV } to win the race, #but she did not win.

Despite the contrast in (1)-(2), I argue that enough/too inferences are—semantically speaking—instances of implicativity.  I build on a causal account of implicative lexical semantics (Nadathur 2016, 2019) to show that enough/too actuality inferences arise just in case the compositional interaction between grammatical aspect, modal flavour, and the enough/too matrix adjective reproduces the semantic structure of an implicative: that is, where the matrix adjective denotes an actionable property which is causally involved in realizing the enough/too complement, and the perfective aspect induces an eventive interpretation of the matrix assertion.   Insofar as the implicative analysis explains the aspect-sensitivity of enough/too inferences, I suggest that it naturally extends to ability modals’ actuality inferences, when coupled with a causal approach to ability.

Towards a Structuralist Metasemantics for Number Words

Eric Snyder

According to non-eliminative structuralism, the referents of numerical singular terms, such as the numeral ‘two’ or the description ‘the number two’, are numbers, construed as positions within the natural number structure. However, a potential problem comes in the form of sentences like ‘{∅, {∅}} is the number two among the von Neumann ordinals’. If this is an identity statement, then its truth would seemingly require identifying the second position of the natural number structure with a particular set, thus giving rise to a version of Benacerraf’s famous Identification Problem. In response, Stewart Shapiro (1997) draws an analogy to expressions like ‘the Vice President’, which are ambiguous between denoting an office-holder (e.g., Kamala Harris) and an office (the office of the Vice Presidency). Similarly, Shapiro suggests that in ordinary arithmetic contexts, such as ‘Two is less than three’, we view positions as analogous to office-holders, while in other contexts we view them instead as analogous to offices occupied by entities playing the role of numbers, e.g., {∅, {∅}}. However, this suggestion faces two serious challenges. First, what exactly is the nature of this purported ambiguity, and what empirical evidence, if any, is there for it? Second, even if we grant the ambiguity, we appear to get a revenge version of the Identification Problem anyway: just permute the positions within the natural number structure. The purpose of this talk is to defend Shapiro’s ambiguity thesis by supplying the required empirical support and explaining how, when appropriately understood, the assumed semantics does not give rise to a revenge form of the Identification Problem.

Is Functionalism Inconsistent?

Owain Griffin (OSU)

Starting with Bealer (1978), some authors have claimed that Beth-style definability results show functionalism about the mind to be inconsistent. If these arguments go through, then the Beth result provides a way of collapsing functionalism into reductionism – exactly what functionalists purport to deny. While this argument has received discussion in the literature (see Hellman & Thompson 1975, Block 1980, Tennant 1985), it has recently been resuscitated and refined by Halvorson (2019). In this paper, we question the argument’s accuracy and propose a new objection to it. We claim that, in order to derive its conclusion, the argument relies fundamentally on equivocations concerning the notion of definability.
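
For orientation, the Beth result at issue admits a standard first-order statement (supplied here as background; the paper’s own framing may differ). Let \(T\) be a theory in a language \(L \cup \{P\}\). If \(P\) is implicitly defined by \(T\), i.e.,
\[
\mathcal{M}\restriction L = \mathcal{N}\restriction L \;\text{ implies }\; P^{\mathcal{M}} = P^{\mathcal{N}} \quad\text{for all } \mathcal{M},\mathcal{N} \models T,
\]
then \(P\) is explicitly definable: there is an \(L\)-formula \(\varphi\) such that
\[
T \vdash \forall \bar{x}\,\bigl(P(\bar{x}) \leftrightarrow \varphi(\bar{x})\bigr).
\]
Read with \(L\) as the physical vocabulary and \(P\) as a mental predicate fixed by its functional role, this is the step that is claimed to collapse functionalism into explicit, reductive definability.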

The logic of sequences

Cian Dorr and Matthew Mandelkern (NYU)

In the course of proving a tenability result about the probabilities of conditionals, van Fraassen (1976) introduced a semantics for conditionals based on sequences of worlds, representing a particularly simple special case of ordering semantics for conditionals. According to sequence semantics, ‘If p, then q’ is true at a sequence just in case either q is true at the first truncation of the sequence where p is true, or there is no truncation where p is true. This approach has become increasingly popular in recent years. However, its logic has never been explored. We axiomatize the logic of sequence semantics, showing that it is the result of adding two new axioms to Stalnaker’s logic C2: one which is prima facie attractive, and one which is complex and difficult to assess. We also show that when sequence models are generalized to allow transfinite sequences, the result is the logic that adds only the first (more attractive) axiom to C2.
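
As a concrete rendering of the clause just described, here is a minimal sketch, assuming (as in van Fraassen-style models, though the paper’s official definitions may differ in detail) that the truncations of a sequence \(s = \langle w_0, w_1, w_2, \ldots\rangle\) are its tails \(s_k = \langle w_k, w_{k+1}, \ldots\rangle\) and that atomic sentences are evaluated at a sequence’s first world. Then
\[
s \models p > q \quad\text{iff}\quad \text{either there is no } k \text{ with } s_k \models p, \ \text{ or } \ s_k \models q \text{ for the least } k \text{ such that } s_k \models p.
\]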

Logic, Natural Language and Semantic Paradoxes

Amit Pinsker

How should we respond to semantic paradoxes? I argue that the answer to this question depends on what we take the relation between logic and natural language to be. Focusing on the Liar paradox as a case study, I distinguish two approaches to solving it: one approach (henceforth ‘NL’) takes logic to be a model of Natural Language, while the other (henceforth ‘CC’) suggests that a solution should be guided by Conceptual Considerations pertaining to truth. As it turns out, different solutions can be understood as taking one approach or the other. Furthermore, even solutions within the same ‘family’ take different approaches, and are motivated by NL and CC to different extents, which suggests that the distinction is not a binary one: NL and CC are two extremes of a spectrum.

Acknowledging this has two significant upshots. First, some allegedly competing solutions are in fact not competing, since they apply logic for different purposes. Thus, various objections to and evaluations of solutions in the literature are in fact misplaced: they object to NL-solutions based on considerations that are relevant only to CC, or vice versa. Second, a plausible explanation of this discrepancy is that NL and CC are two ways of cashing out what Priest calls “the canonical application of logic”: ordinary deductive reasoning. These two ways are based on two different assumptions about the fundamental relation between logic and natural language. The overall conclusion is thus that a better understanding of the relation between logic and natural language could give us a better understanding of what a good solution to semantic paradoxes should look like.

Detachability and LP: An Alternative Perspective

Thomas M Ferguson

In this talk, I aim to connect two strands of work related to strict-tolerant consequence and the logic of paradox (LP). The first is work (“Monstrous Content and the Bounds of Discourse”, JPL) that argues that considerations of topic-theoretic conversational boundaries are captured by the strict-tolerant interpretation of weak Kleene matrices. The second is work (“Deep ST”, JPL) arguing that all the metainferential properties of inference rules in the strict-tolerant hierarchy are already encoded in standard LP. Synthesizing these two strands is particularly useful when asking about settings that accommodate both topic-theoretic and veridical semantic defects. Importantly, this synthesis yields a novel defense of LP as a particularly compelling logic and allows a reevaluation of the failure of detachability for the LP conditional.
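
To make the last point concrete, a minimal illustration (standard LP facts, not specific to the talk): in LP the values are \(\{\mathbf{t}, \mathbf{b}, \mathbf{f}\}\), with \(\mathbf{t}\) and \(\mathbf{b}\) designated, and the conditional is material, \(A \rightarrow B := \neg A \vee B\). A valuation with \(v(p) = \mathbf{b}\) and \(v(q) = \mathbf{f}\) gives
\[
v(p \rightarrow q) = v(\neg p \vee q) = \mathbf{b},
\]
so \(p\) and \(p \rightarrow q\) are both designated while \(q\) is not, whence
\[
p,\; p \rightarrow q \;\not\models_{LP}\; q,
\]
which is the failure of detachability to be reevaluated.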

A Tale of Two Logics: Did Priest Get Lost in India?

Chris Rahlwes

With his extensive work on the Buddhist tetralemma (catuṣkoṭi) and the Jaina sevenfold sentences (saptabhaṅgī), Graham Priest has presented Indian logic as dialetheic, i.e., as admitting true contradictions. While Priest is not the only logician to present Indian logic as non-classical or paraconsistent, his dialetheic reading has gained widespread traction among contemporary logicians, and has led many of them to posit that Priest is (historically) correct in his reading. However, those who study Buddhism and Jainism often do not share these convictions. The backlash from such specialists often oversimplifies Priest’s account and ignores the challenge that the dialetheic reading raises regarding the nature of negation. Following Priest’s claim that Aristotelian logic is incompatible with classical logic, I argue that Priest uses the wrong logical framework – the non-classical heir of classical logic – to understand Indian logic. In so doing, I present a neo-Pāṇinian or neo-Aristotelian account of Buddhist and Jaina logic, emphasizing negation, denial, and (to a lesser degree) contradiction.
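
As background, the four ‘corners’ of the tetralemma are commonly schematized (one standard rendering, supplied here; its formalization is itself contested, which is part of what is at issue) as
\[
P, \qquad \neg P, \qquad P \wedge \neg P, \qquad \neg(P \vee \neg P),
\]
i.e., a claim may hold, fail, both hold and fail, or neither; the dialetheic reading at issue takes the third corner to be genuinely satisfiable.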