The following is an example of a counterfactual conditional in English:
(1) If I had thrown a six, I would have won the game.
One normally infers from (1) that the antecedent is counterfactual (i.e. I did not throw a six; I write this as CF-p), and that the consequent is counterfactual (i.e. I did not win the game; written CF-q). Whereas most previous literature focuses on the counterfactuality of the antecedent exclusively (perhaps assuming that an analysis of CF-p extends to CF-q), this work provides an analysis for how the counterfactual inference of the consequent (CF-q) is generated, and explains its empirical distribution.
I identify several contexts in which the CF-q inference gets cancelled. In some of these, cancellation is the result of the presence of a specific lexical item (such as “also”). In other cases, it is the intonation contour of the conditional that leads to cancellation. By analyzing the topic-focus structure of conditionals, I argue that the various contexts in which CF-q gets cancelled have a pragmatic property in common: they are multiple cause contexts. This means that they make more than one cause for the same consequent salient.
The next step in my analysis is adopting an idea going back to Karttunen (1971), who suggests that conditional perfection (the pragmatic strengthening of conditionals to biconditionals) is a necessary ingredient for the CF-q inference to arise. The key prediction, which has not been explored before, is that if for some reason conditional perfection is not triggered, the CF-q inference is not drawn. I derive the independent result that multiple cause contexts do not trigger conditional perfection. This provides the desired explanation of why in multiple cause contexts the CF-q inference is not drawn. This analysis opens a new way to investigate counterfactuality, namely by using tools from the study of discourse (questions and answers, topic and focus, exhaustivity). Finally, I sketch some directions for future work on how causal networks representing multiple causation can be applied to the pragmatics of counterfactual conditionals.
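The role conditional perfection plays in generating CF-q can be sketched schematically (this is a simplified propositional rendering, not the formal system of the talk):

```latex
\begin{align*}
&\text{(i)}   & p \rightarrow q           && \text{the asserted conditional}\\
&\text{(ii)}  & p \leftrightarrow q       && \text{conditional perfection: pragmatic strengthening of (i)}\\
&\text{(iii)} & \neg p \rightarrow \neg q && \text{from (ii), by contraposition}\\
&\text{(iv)}  & \neg p                    && \text{CF-}p\text{: counterfactuality of the antecedent}\\
&\text{(v)}   & \neg q                    && \text{CF-}q\text{, from (iii) and (iv)}
\end{align*}
```

On this picture, if step (ii) is blocked (as in multiple cause contexts), the chain to (v) breaks and CF-q is not derived, while CF-p at step (iv) survives.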
One traditional role of a logic of entailment is as a set of closure principles for theories. Looking at logics in this way, and as theories themselves, can be very interesting. A logic determines a closure operator (in Tarski’s sense) on sets of formulas. The theories generated by a closure operator themselves (sometimes) determine closure operators. Looking at the space of theories generated by a “master theory”, and the interaction of the closure operators that they determine, I motivate a variety of different logical systems.
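To make the idea of a logic as a closure operator concrete, here is a toy sketch: a set of formulas closed under a single rule (modus ponens), with implications encoded as tuples. This is purely illustrative of the Tarski-style picture, not any system discussed in the talk.

```python
# Toy Tarski-style closure: close a set of formulas under modus ponens.
# Atomic formulas are strings; an implication "p -> q" is the tuple
# ('->', 'p', 'q'). Illustrative encoding only.

def close(formulas):
    """Return the smallest superset of `formulas` closed under
    modus ponens: if A and ('->', A, B) are in the theory, so is B."""
    theory = set(formulas)
    changed = True
    while changed:
        changed = False
        for f in list(theory):
            if isinstance(f, tuple) and f[0] == '->' and f[1] in theory:
                if f[2] not in theory:
                    theory.add(f[2])
                    changed = True
    return theory

base = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
theory = close(base)
print(sorted(f for f in theory if isinstance(f, str)))  # → ['p', 'q', 'r']

# The operator is extensive and idempotent, as a Tarski closure must be:
assert base <= theory
assert close(theory) == theory
```

Iterating to a fixed point is exactly what makes `close` a closure operator: extensive (the base is contained in its closure), monotone, and idempotent.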
26 April 2017: Noah Schweber
The notion of computable function lies comfortably at the intersection of philosophy and mathematics – it describes something intuitively meaningful, and has a satisfying formalization which matches soundly with that intuition (this is Church’s thesis). However, this is true only in a restricted context, namely when we look at functions *of natural numbers*. When we try to generalize computability substantially, we run into both philosophical and mathematical problems. After saying a bit about why generalizing computability theory is something we should be interested in doing, I’ll present some approaches to generalized computability and where they run into problems (and hopefully some ideas for how to get around those problems).
Logic has always played a central role in the study of natural language meaning. But logic can also be used to describe the structure of words and sentences. Recent research has revealed that these structures are so simple that they can be modeled with very weak fragments of first-order logic. Unfortunately, many of these fragments are still not particularly well-understood on a formal level, which has become a serious impediment to ongoing research. This talk is thus equally about the known and the unknown: I will survey the empirically relevant fragments of first-order logic and explain how they allow for completely new generalizations about linguistic structures at the word and sentence level. But I will also highlight the limits of our current understanding and which mathematical challenges need to be overcome if the logical approach to natural language is to realize its full potential. Hopefully, an alliance of linguists, logicians, and computer scientists will be able to solve these problems in the near future.
In 1985, Flagg produced a model of first-order Peano arithmetic and a modal principle known as Epistemic Church’s Thesis, which roughly expresses that any number-theoretic function known to be total is recursive. In some recent work (Rin and Walsh 2016), this construction was generalized to allow a construction of models of quantified modal logic on top of just about any of the traditional realizability models of various intuitionistic systems, such as fragments of second-order arithmetic and set theory. In this talk, we survey this construction and indicate what is known about the reduct of these structures to the non-modal language.
References:  B. G. Rin and S. Walsh. Realizability semantics for quantified modal logic: Generalizing Flagg’s 1985 construction. The Review of Symbolic Logic, 9(4):752–809, 2016.
Linda Brown Westrick
Various notions of computable reducibility, such as Turing reduction, truth table reduction, and many-one reduction, provide coarse- and fine-grained ways of saying that one infinite sequence can compute another one. Infinite sequences have discrete bits with discrete addresses, so what it means to “compute” one is clear: given an address as input, an algorithm should return the appropriate bit. What it means to “compute” a continuous function is less obvious, but also well established. However, for a larger class of functions (in particular those of the Baire hierarchy, which I will define) it is not at all clear what it means to compute one. I will present one possibility and describe its degree structure and how this structure relates to features of the Baire hierarchy. This is joint work with Adam Day and Rod Downey.
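The bit-at-an-address picture of computing an infinite sequence, and the idea of a many-one reduction between sequences, can be sketched directly (all names here are illustrative, not from the talk):

```python
# An infinite binary sequence is modeled as a function from addresses
# (natural numbers) to bits: "computing" the sequence means answering
# such address queries. Illustrative sketch.

def A(n):
    """A sample sequence: bit 1 exactly at even addresses."""
    return 1 if n % 2 == 0 else 0

def many_one_reduce(translate, oracle):
    """B <=_m A: each question about B is answered by a single,
    computably translated question to A, i.e. B(n) = A(translate(n))."""
    def B(n):
        return oracle(translate(n))
    return B

# B(n) = A(3n + 1): one query to A per query to B, as many-one
# reducibility demands.
B = many_one_reduce(lambda n: 3 * n + 1, A)
print([B(n) for n in range(6)])  # → [0, 1, 0, 1, 0, 1]
```

Coarser notions relax this pattern: a truth-table reduction may ask several queries fixed in advance, and a Turing reduction may ask queries adaptively. The open question the talk addresses is what the analogous query-answering picture should be for functions higher in the Baire hierarchy.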
My favored joint solution to the Puzzle of Free Choice Permission (Kamp 1973) and Ross’s Paradox (Ross 1941) involves (i) giving up the duality of natural language deontic modals, and (ii) moving to a two-dimensional propositional logic which has a classical Boolean character only as a special case. In this talk, I’d like to highlight two features of this radical view: first, the extent to which Boolean disjunction is imperiled by other natural language phenomena not involving disjunction, and second, the strength of the general position that natural language semantics must treat deontic, epistemic, and circumstantial modals alike.
Michael Rescorla has argued that it makes sense to compute directly with numbers, and he faulted Turing for not giving an analysis of number-theoretic computability. However, I argue, in line with a later paper of his, that it only makes sense to compute directly with syntactic entities, such as strings on a given alphabet. Computing with numbers goes via notation. This raises broader issues involving de re propositional attitudes towards numbers and other non-syntactic abstract entities.
The consensus for the last century or so has been that diagrammatic proofs are not genuine proofs. Recent philosophical work, however, has shown that (at least in some circumstances) diagrams can be perfectly rigorous. The implication of this work is that, if diagrammatic reasoning in a particular field is illegitimate, it must be so for local reasons, not because of some in-principle illegitimacy of diagrammatic reasoning in general. In this talk, I try to identify some of the reasons why geometers in particular began to reject diagrammatic proofs. I argue that the reasons often cited nowadays — that diagrams illicitly infer from a particular to all cases, or can’t handle analytic notions like continuity — played little role in this development. I highlight one very significant (but rarely discussed) flaw in diagrammatic reasoning: diagrammatic methods don’t allow for fully general proofs of theorems. I explain this objection (which goes back to Descartes), and how Poncelet and his school developed around 1820 new diagrammatic methods to meet this objection. As I explain, these new methods required a kind of diagrammatic reasoning that is fundamentally different from the now well-known diagrammatic method from Euclid’s Elements. And, as I show (using the case of synthetic treatments of the duals of curves of degrees higher than 2), it eventually became clear that this method does not work. Truly general results in “modern” geometry could not be proven diagrammatically.
Many proofs use a bootstrapping process. Initially, a simple version is proved, and then extensions are proved by repeated applications of the simple result. Proofs of many results about graph colorings use this strategy. We will concentrate on a specific example. Suppose we write RT(2,n) for Ramsey’s theorem for pairs and n colors, a graph coloring theorem. (Pictures will be provided.) The usual proof of RT(2,4) uses two applications of RT(2,2). In this talk, we address the question: Can RT(2,4) be proved with one use of RT(2,2)? Of course, the answer depends on the base system chosen and the formalization of what constitutes a use. In some settings, a formalization of Weihrauch reductions can lend insight, and the law of the excluded middle plays a surprising role.
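A finite analogue of RT(2,2) can be checked exhaustively: every 2-coloring of the edges of the complete graph K_6 contains a monochromatic triangle (since the Ramsey number R(3,3) = 6). This brute-force check, offered only as a concrete warm-up to the infinitary theorem, enumerates all 2^15 colorings:

```python
# Finite warm-up for RT(2,2): every 2-coloring of the 15 edges of K_6
# yields a monochromatic triangle. Brute force over all 2^15 colorings.
from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))      # the 15 edges of K_6
triangles = list(combinations(vertices, 3))  # the 20 triangles

def has_mono_triangle(coloring):
    """Does this edge-coloring (one bit per edge, in `edges` order)
    contain a triangle whose three edges share a color?"""
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in triangles)

assert all(has_mono_triangle(c) for c in product([0, 1], repeat=15))
print("every 2-coloring of K_6 has a monochromatic triangle")
```

The infinitary RT(2,2) is of course not decided by finite search; the talk's question of how many "uses" of RT(2,2) a proof of RT(2,4) needs lives at the level of base systems and Weihrauch reductions, not enumeration.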