I argue for a distinction between eventive and evidential speech reports. In eventive speech reports the at-issue contribution is the introduction of a speech event with certain properties. Typical examples include direct and free indirect speech. In evidential speech reports, by contrast, the fact that something was said is not at issue, but serves to provide evidence for the reported content. Typical examples include Quechua reportative evidential morphology, Dutch reportative modals, or German reportive subjunctive. Following up on an observation by von Stechow & Zimmermann (2005: fn. 16), I argue that English indirect discourse is ambiguous. In the current framework this means it allows both an eventive reading, where a reported speech act is at issue, and an evidential reading, where it is backgrounded.
The semantics of degree constructions has motivated the implementation of a MAX operator, a function from a set of degrees to its maximal member (von Stechow 1985, Rullmann 1995, a.o.). This operator is theoretically unsatisfying: its stipulation is arbitrary (an operator MIN is equally definable), and it is therefore not explanatory. There have thus been several calls to reduce MAX to a more pragmatic principle of maximal informativity (Dayal 1996, Beck & Rullmann 1999, Fox & Hackl 2007, von Fintel et al. 2014).
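For concreteness, the operator at issue is standardly defined along the following lines (a common formulation; notation varies across the works cited):

```latex
% MAX maps a set of degrees D to its greatest member (when one exists):
\mathrm{MAX}(D) = \iota d\, [\, d \in D \wedge \forall d' \in D : d' \leq d \,]
% The worry: \mathrm{MIN}(D) = \iota d\, [\, d \in D \wedge \forall d' \in D : d' \geq d \,]
% is equally definable, so nothing explains why grammar recruits MAX rather than MIN.
```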
Intriguing differences between `before' and `after' have led some to posit an EARLIEST operator in the temporal domain (Beaver & Condoravdi 2003, Condoravdi 2010). This operator is unsatisfying for similar reasons (cf. LATEST), and some have suggested that it, too, can be redefined in terms of informativity (Rett 2015). However, recent cross-linguistic evidence (reported here) complicates the reduction of EARLIEST to `maximize informativity': while counterparts of `before' and `after' across languages share many foundational semantic properties, they appear to differ in a principled way in how certain `before' constructions are interpreted. I discuss this and other related observations with respect to the future of a domain-general `maximize informativity' program.
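A rough sketch of how the operator figures in a Beaver & Condoravdi-style entry for `before' (simplified here; the published versions differ in detail):

```latex
[\![\, A \text{ before } B \,]\!] = \exists t\, [\, A(t) \wedge t \prec \mathrm{earliest}(\lambda t'.\, B(t')) \,]
% where \mathrm{earliest}(P) returns the earliest time at which P holds.
% As with MAX vs. MIN, \mathrm{latest}(P) is equally definable, hence the
% pressure to derive the choice from independent principles.
```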
The following is an example of a counterfactual conditional in English:
(1) If I had thrown a six, I would have won the game.
One normally infers from (1) that the antecedent is counterfactual (i.e. I did not throw a six; I write this as CF-p), and that the consequent is counterfactual (i.e. I did not win the game; written CF-q). Whereas most previous literature focuses on the counterfactuality of the antecedent exclusively (perhaps assuming that an analysis of CF-p extends to CF-q), this work provides an analysis for how the counterfactual inference of the consequent (CF-q) is generated, and explains its empirical distribution.
I identify several contexts in which the CF-q inference gets cancelled. In some of these, cancellation is the result of the presence of a specific lexical item (such as “also”). In other cases, it is the intonation contour of the conditional that leads to cancellation. By analyzing the topic-focus structure of conditionals, I argue that the various contexts in which CF-q gets cancelled have a pragmatic property in common: they are multiple cause contexts. This means that they make more than one cause for the same consequent salient.
The next step in my analysis is adopting an idea going back to Karttunen (1971), who suggests that conditional perfection (the pragmatic strengthening of conditionals to biconditionals) is a necessary ingredient for the CF-q inference to arise. The key prediction, which has not been explored before, is that if for some reason conditional perfection is not triggered, the CF-q inference is not drawn. I derive the independent result that multiple cause contexts do not trigger conditional perfection. This provides the desired explanation of why in multiple cause contexts the CF-q inference is not drawn. This analysis opens a new way to investigate counterfactuality, namely by using tools from the study of discourse (questions and answers, topic and focus, exhaustivity). Finally, I sketch some directions of future work on how using causal networks to represent multiple causation can be applied to the pragmatics of counterfactual conditionals.
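Schematically, the derivation of CF-q just described can be laid out as follows (my reconstruction of the reasoning in the abstract, using (1) as the running example):

```latex
% 1. Assertion:               p \rightarrow q          (if six, then win)
% 2. Conditional perfection:  p \leftrightarrow q, hence  \neg p \rightarrow \neg q
% 3. CF-p:                    \neg p                   (I did not throw a six)
% 4. From 2 and 3:            \neg q                   (CF-q: I did not win)
% In multiple cause contexts step 2 is not triggered, so step 4 is blocked.
```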
One traditional role of a logic of entailment is as a set of closure principles for theories. Viewing logics in this way, and as theories themselves, proves fruitful. A logic determines a closure operator (in Tarski’s sense) on sets of formulas. The theories generated by such a closure operator themselves (sometimes) determine closure operators. Looking at the space of theories generated by a “master theory”, and the interaction of the closure operators that they determine, I motivate a variety of different logical systems.
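For reference, a closure operator in Tarski's sense is a map Cn on sets of formulas satisfying the usual three conditions:

```latex
X \subseteq \mathrm{Cn}(X)                                            % inclusion
X \subseteq Y \;\Rightarrow\; \mathrm{Cn}(X) \subseteq \mathrm{Cn}(Y) % monotonicity
\mathrm{Cn}(\mathrm{Cn}(X)) = \mathrm{Cn}(X)                          % idempotence
% A theory is then a fixed point of the operator: T = \mathrm{Cn}(T).
```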
26 April 2017: Noah Schweber
The notion of computable function lies comfortably at the intersection of philosophy and mathematics – it describes something intuitively meaningful, and has a satisfying formalization which accords well with that intuition (this is Church’s thesis). However, this is true only in a restricted context, namely when we look at functions *of natural numbers*. When we try to generalize computability substantially, we run into both philosophical and mathematical problems. After saying a bit about why generalizing computability theory is something we should be interested in doing, I’ll present some approaches to generalized computability and where they run into problems (and hopefully some ideas for how to get around those problems).
Logic has always played a central role in the study of natural language meaning. But logic can also be used to describe the structure of words and sentences. Recent research has revealed that these structures are so simple that they can be modeled with very weak fragments of first-order logic. Unfortunately, many of these fragments are still not particularly well-understood on a formal level, which has become a serious impediment to ongoing research. This talk is thus equally about the known and the unknown: I will survey the empirically relevant fragments of first-order logic and explain how they allow for completely new generalizations about linguistic structures at the word and sentence level. But I will also highlight the limits of our current understanding and which mathematical challenges need to be overcome if the logical approach to natural language is to realize its full potential. Hopefully, an alliance of linguists, logicians, and computer scientists will be able to solve these problems in the near future.
In 1985, Flagg produced a model of first-order Peano arithmetic and a modal principle known as Epistemic Church’s Thesis, which roughly expresses that any number-theoretic function known to be total is recursive. In recent work (Rin & Walsh 2016), this construction was generalized to allow a construction of models of quantified modal logic on top of just about any of the traditional realizability models of various intuitionistic systems, such as fragments of second-order arithmetic and set theory. In this talk, we survey this construction and indicate what is known about the reducts of these structures to the non-modal language.
References:  B. G. Rin and S. Walsh. Realizability semantics for quantified modal logic: Generalizing Flagg’s 1985 construction. The Review of Symbolic Logic, 9(4):752–809, 2016.
Linda Brown Westrick
Various notions of computable reducibility, such as Turing reduction, truth table reduction, and many-one reduction, provide coarse- and fine-grained ways of saying that one infinite sequence can compute another one. Infinite sequences have discrete bits with discrete addresses, so what it means to “compute” one is clear: given an address as input, an algorithm should return the appropriate bit. What it means to “compute” a continuous function is less obvious, but also well established. However, for a larger class of functions (in particular those of the Baire hierarchy, which I will define) it is not at all clear what it means to compute one. I will present one possibility and describe its degree structure and how this structure relates to features of the Baire hierarchy. This is joint work with Adam Day and Rod Downey.
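To make the discrete case vivid, here is a minimal sketch (a hypothetical toy example, not from the talk) of computing one infinite sequence from another, where the addresses queried in the oracle are fixed in advance by the input address, as in a truth-table reduction:

```python
# Infinite sequences are modeled as functions from addresses (naturals) to bits.

def a(n):
    """An 'oracle' sequence A: bit n is 1 iff n is even."""
    return 1 if n % 2 == 0 else 0

def b_reduced_to_a(n, oracle):
    """Compute bit n of a sequence B from finitely many queried bits of A:
    B[n] = A[2n] XOR A[2n+1].  The queried addresses depend only on n,
    so the reduction is truth-table (the oracle use is bounded in advance)."""
    return oracle(2 * n) ^ oracle(2 * n + 1)

# The first five bits of B, computed via the oracle A:
bits = [b_reduced_to_a(n, a) for n in range(5)]
```

Since A(2n) = 1 and A(2n+1) = 0 for every n, each computed bit of B is 1. The question the talk takes up is what the analogue of such a procedure should be when the object computed is not a sequence of bits but a function higher in the Baire hierarchy.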
My favored joint solution to the Puzzle of Free Choice Permission (Kamp 1973) and Ross’s Paradox (Ross 1941) involves (i) giving up the duality of natural language deontic modals, and (ii) moving to a two-dimensional propositional logic which has a classical Boolean character only as a special case. In this talk, I’d like to highlight two features of this radical view: first, the extent to which Boolean disjunction is imperiled by other natural language phenomena not involving disjunction, and second, the strength of the general position that natural language semantics must treat deontic, epistemic, and circumstantial modals alike.
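For readers who don't have the two puzzles at their fingertips, the standard schematic statements are:

```latex
% Free Choice Permission (Kamp 1973): permission sentences license
\Diamond(p \vee q) \;\rightsquigarrow\; \Diamond p \wedge \Diamond q
% an inference that is invalid in standard modal logic.
% Ross's Paradox (Ross 1941): the classically valid
O\,p \models O(p \vee q)
% ("Post the letter, therefore post it or burn it!") sounds wrong for obligation.
```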
Michael Rescorla has argued that it makes sense to compute directly with numbers, and he has faulted Turing for not giving an analysis of number-theoretic computability. However, I argue, in line with a later paper of Rescorla’s, that it only makes sense to compute directly with syntactic entities, such as strings over a given alphabet. Computing with numbers goes via notation. This raises broader issues involving de re propositional attitudes towards numbers and other non-syntactic abstract entities.