Stalnaker’s minimal change semantics for conditionals fails to support the import-export law, according to which (a) and (b) are logically equivalent:
(a) if A, then if B, then C
(b) if A and B, then C
However, natural language conditionals seem to abide by the law. McGee (1985) outlines a minimal change semantics for conditionals that supports it. I argue that, in fact, the equivalence between (a) and (b) does not hold unrestrictedly, and I suggest that the facts follow from the interaction between the semantics of conditionals and the ways suppositions may affect the context. I conclude by describing the consequences of my account for the issue of the validity of modus ponens.
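Schematically, writing $\to$ for the conditional, the import-export law asserts the equivalence:

```latex
A \to (B \to C) \;\equiv\; (A \land B) \to C
```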
Circumstantialists already have a logical semantics for impossibilities. They expand their logical space of possible worlds by adding impossible worlds. These are impossible circumstances serving as indices of evaluation, at which impossibilities are true. A variant of circumstantialism, namely modal Meinongianism, adds impossible objects as well. The opposite of circumstantialism, namely structuralism, has some catching-up to do. What might a structuralist logical semantics without impossible worlds or impossible objects look like? This paper makes a structuralist counterproposal. I present a semantics based on a procedural interpretation of the typed λ-calculus. The fundamental idea is that talk about impossibilities should be construed in terms of procedures yielding as their product a condition that could not possibly have a satisfier, or else failing to yield a product at all. Dispensing with a ‘bottom’ of impossibilia requires instead a ‘top’ consisting of structured hyperintensions, intensions, intensions defining other intensions, a typed universe, and dual predication. I explain how the theory works by going through a couple of cases.
The talk advocates a marriage of inferentialist bilateralism and truth-maker bilateralism. Inferentialist bilateralists like Restall and Ripley say that a collection of sentences, Y, follows from a collection of sentences, X, iff it is incoherent (or out-of-bounds) to assert all the sentences in X and, at the same time, deny all the sentences in Y. In Fine’s truth-maker theory, we have a partially ordered set of states that exactly verify and falsify sentences, and some of these states are impossible. We can think of making-true as the worldly analogue of asserting, of making-false as the worldly analogue of denying, and of impossibility as the worldly analogue of incoherence. This suggests that we may say that, in truth-maker theory, a collection of sentences, Y, follows (logically) from a collection of sentences, X, iff (in all models) any fusion of exact verifiers of the members of X and exact falsifiers of the members of Y is impossible. Under routine assumptions about truth-making, this yields classical logic. Relaxing one such assumption yields the non-transitive logic ST. Relaxing another assumption yields the non-reflexive logic TS. We can use known facts about the relation between ST, LP, and K3 to provide an interpretation of LP as the logic of falsifiers and K3 as the logic of verifiers. The resulting semantics for ST is more flexible than its usual three-valued semantics because it allows us, e.g., to reject monotonicity. We can also recover fine-grained logics, like Correia’s logic of factual equivalence.
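The proposed consequence relation can be stated schematically (a sketch, writing $\sqcup$ for state fusion):

```latex
X \vDash Y \;\iff\; \text{in all models, for all exact verifiers } v_A \text{ of each } A \in X
\text{ and exact falsifiers } f_B \text{ of each } B \in Y, \text{ the fusion }
\bigsqcup_{A \in X} v_A \;\sqcup\; \bigsqcup_{B \in Y} f_B \text{ is impossible.}
```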
(joint work with Norbert Gratzl, MCMP, Munich)
Free logics are a family of first-order logics that came about as a result of examining the existence assumptions of classical logic. What those assumptions are varies, but the central ones are that (i) the domain of interpretation is not empty, (ii) every name denotes exactly one object in the domain, and (iii) the quantifiers have existential import. Free logics usually reject the claim in (ii) that names need to denote. Of the systems considered in this paper, positive free logic concedes that some atomic formulas containing non-denoting names (including self-identity statements) are true, while negative free logic denies even this, counting all such atomic formulas as false.
These logics have complex and varied axiomatizations and semantics, and the goal of the present work is to offer an orderly examination of the various systems and their mutual relations. This is done by first offering a formalization, using sequent calculi which possess all the desired structural properties of a good proof system, including admissibility of contraction and cut, while streamlining free logics in a way no other approach has. We then present a simple and unified system of generalized semantics, which allows for a straightforward demonstration of the meta-theoretical properties, while also offering insights into the relationship between different logics (free and classical). Finally, we extend the system with modalities by using a labeled sequent calculus, and here we are again able to map out the different approaches and their mutual relations using the same framework.
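For illustration (a standard feature of free logics, not a rule specific to the paper's calculi): universal instantiation is restricted by an existence predicate $E!$, so the classical schema $\forall x\,\varphi(x) \to \varphi(t)$ becomes:

```latex
(\forall x\,\varphi(x) \land E!\,t) \to \varphi(t)
```

The instantiating term $t$ must be shown to denote an existing object before the quantifier can be discharged.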
In this talk I will provide an overview of my recent investigations, some published, some unpublished, on neologicism, and in particular on topics related to the good company and bad company objections.
Semantic presuppositions are certain inferences associated with words or linguistic constructions. For example, if someone tells you that they “recently started doing yoga”, then this presupposes that they didn’t do yoga before.
A problem that has occupied semanticists for decades is how the presuppositions of a complex sentence can be computed from the presuppositions of its parts. Another way of putting the problem: how do presuppositions project in various environments?
In this talk, I will discuss presupposition projection in one particular linguistic environment, namely in questions, arguing that it should be treated pragmatically. I will motivate a generalized version of Stalnaker’s bridge principle and show that it makes correct predictions for a range of different interrogative forms and different question uses.
Formal theories of grammar and traditional sentence processing models start from the assumption that the grammar is a system of rules. In such a system, only binary outcomes are generated: a sentence is well-formed if it follows the rules of the grammar and ill-formed otherwise. This dichotomous grammatical system faces a critical challenge, namely accounting for the intermediate/gradient modulations observable in experimental measures (e.g., sentences receive gradient acceptability judgments, speakers report a gradient ability to comprehend sentences that deviate from idealized grammatical forms, and various online sentence processing measures yield gradient effects). This challenge is traditionally met by accounting for gradient effects in terms of extra-grammatical factors (e.g., working memory limitations, reanalysis, semantics), which intervene after the syntactic module generates its output. As a test case, in this talk I will focus on a specific kind of violation that is at the core of linguistic investigation: islands, a family of encapsulated syntactic domains that seem to prohibit the establishment of syntactic dependencies inside them (Ross 1967). Islands are interesting because, although most linguistic theories treat them as fully ungrammatical and uninterpretable, I will present experimental evidence revealing gradient patterns of acceptability, as well as evidence that some island violations are interpretable. To account for these gradient data, in this talk I explore the consequences of assuming a more flexible rule-based system, in which sentential elements can be coerced, under specific circumstances, to play a role that does not fully fit them. In this system, unlike traditional ones, structure formation is forced even under sub-optimal circumstances, which generates semi-grammatical structures in a continuous grammar.
Ethan Brauer, Øystein Linnebo, and Stewart Shapiro
Modal logic has recently been used to analyze potential infinity and potentialism more generally. However, this analysis breaks down in cases of divergent possibilities, where the modal logic is weaker than S4.2. This talk has three aims. First, we use the intuitionistic theory of free choice sequences to motivate the need for a modal analysis of divergent potentialism and explain the challenge of connecting the ordinary theory of choice sequences with our modal explication. Then, we use the so-called Beth-Kripke semantics for intuitionistic logic to overcome those challenges. Finally, we apply the resulting modal analysis of divergent potentialism to make choice sequences comprehensible in classical terms.
A matching of a graph is any set of edges in which no two edges share a vertex. Steffens gave a necessary and sufficient condition for countable graphs to have a perfect matching (i.e., a matching that covers all vertices). We analyze the strength of Steffens’ theorem from the viewpoint of computability theory and reverse mathematics. By first restricting to certain kinds of graphs (e.g., graphs with bounded degree and locally finite graphs), we classify some weaker versions of Steffens’ theorem. We then analyze Steffens’ corollary on the existence of maximal matchings, which is critical to his proof of the main theorem. Finally, using methods of Aharoni, Magidor, and Shore, we give a partial result that helps home in on the computational strength of Steffens’ theorem. Joint work with Stephen Flood, Matthew Jura, and Oscar Levin.
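The two definitions above can be made concrete in a short sketch (hypothetical helper names; edges represented as vertex pairs):

```python
def is_matching(edges):
    """A matching: no two edges share a vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def is_perfect_matching(edges, vertices):
    """A perfect matching is a matching that covers every vertex."""
    covered = {x for edge in edges for x in edge}
    return is_matching(edges) and covered == set(vertices)
```

Steffens’ theorem concerns when such a perfect matching exists in the countably infinite case, where no finite check like this suffices.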
Christine Ladd-Franklin is often hailed as a guiding star in the history of women in logic: not only did she study under C.S. Peirce and become one of the first women to receive a PhD from Johns Hopkins, she also, according to many modern commentators, solved a logical problem that had plagued the field of syllogisms since Aristotle. In this paper, we revisit this claim, posing and answering two distinct questions: Which logical problem did Ladd-Franklin solve in her thesis, and which problem did she think she solved? We show that in neither case is the answer “a long-standing problem due to Aristotle”. Instead, what Ladd-Franklin solved was a problem due to Jevons, first articulated in the 19th century.