A number of inference patterns involving epistemic modalities, including Modus Ponens, display a split behavior. They seem valid by the lights of all-or-nothing judgments: whenever the premises are accepted as true, the conclusion must be accepted as true as well. But they seem invalid by the lights of probabilistic judgments: we can specify intuitive credal assignments where the drop in probability between premises and conclusions is incompatible with validity. I suggest that we explain this asymmetry by endorsing two bridge principles between logical notions and rational constraints on attitudes. The first links a notion of classical consequence to rational constraints on probabilistic states. The second links a broadly dynamic notion of consequence (so-called informational consequence) to rational constraints on acceptance. In existing literature, classical and informational consequence are seen as alternatives; I argue instead that we can and should have both.
Dan Lassiter (Stanford)
Recent years have seen successful new applications of probabilistic semantics to epistemic modality, and there are signs of a revival of a classic probabilistic approach to conditionals. These models have a simple implementation using tools from intensional and degree semantics, and they have considerable appeal in capturing inferences that are problematic for classical models. However, they generally fare worse than classical models in two key respects: they have trouble explaining the update/learning potential of epistemics and conditionals as well as their interpretations when embedded. Focusing on epistemic language, I’ll describe some of the reasons for the probabilistic turn, explain why embeddings and learning present a problem, and suggest a way to deal with the problems by adopting an enriched probabilistic semantics based on Bayesian networks. Time permitting, I’ll venture some speculations about attractions of, and challenges to, an extension to conditionals in the style of Kratzer 1986.
Jon Gajewski (UConn)
22 Apr, 2pm, LH 302
In this talk, I will discuss whether and how degree is represented in American Sign Language. I will begin by attempting to place American Sign Language into typologies of comparative constructions that have been based primarily on evidence from spoken languages. Placing ASL in such typologies leads us to the question of the role of iconicity in the representation of degree in ASL and how this relates to constructions in spoken language that may make use of pragmatic resources such as co-speech gesture. The results of the investigation of the comparative are then tested against the existence/structure of related degree constructions including measure phrases, differentials and degree questions.
Lenore Blum (Carnegie Mellon University)
Most logicians and theoretical computer scientists are familiar with Alan Turing’s seminal 1936 paper setting the stage for the foundational (discrete) theory of computation. Most, however, remain unaware of Turing’s seminal 1948 paper, which introduces the notion of condition and sets the stage for a natural theory of complexity for the “other theory of computation” emanating from the classical tradition of numerical analysis, equation solving, and the continuous mathematics of calculus.
This talk will recognize Alan Turing’s work in the foundations of numerical computation (in particular, his 1948 paper “Rounding-Off Errors in Matrix Processes”), its influence in complexity theory today, and how it provides a unifying concept for the two major traditions of the Theory of Computation.
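As a rough illustration of the kind of phenomenon Turing's notion of condition measures (the matrix and figures below are my own, not from the talk): the condition number of a matrix bounds how much small rounding or input errors can be amplified when solving a linear system.

```python
import numpy as np

# Hypothetical example: a nearly singular, hence ill-conditioned, matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
kappa = np.linalg.cond(A)          # condition number in the 2-norm

b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)          # exact solution is [1, 1]

# Perturb the right-hand side slightly, as a rounding error might.
b_pert = b + np.array([0.0, 1e-5])
x_pert = np.linalg.solve(A, b_pert)

rel_in = np.linalg.norm(b_pert - b) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
# The relative error in x can be amplified by a factor up to kappa.
print(kappa, rel_out / rel_in)
```

Here a relative input perturbation of a few parts per million produces a solution error several orders of magnitude larger, with the amplification bounded by the condition number.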
Whit Tabor (UConn)
15 Apr, 2pm, LH 302
Consider an iterated map f on a connected space X. A proper subset A of X is said to be asymptotically stable if, when the system is perturbed slightly away from A, it converges back onto A under continued iteration of f. When one is working in the realm of logic, or classical computation more broadly, stability is not an issue and does not really make sense. There is no possibility of being slightly off: formulas are either well-formed and have precise meaning, or they are ill-formed and have no meaning at all. Interestingly, in recent decades, a number of people have discovered ways of doing complex computation on connected spaces. This raises the question of whether there could be “stable computation” in such systems. In this talk, I define asymptotic stability for a particular type of connected space computer and show that at least one interesting class of languages (the mirror recursion languages) has a realization which exhibits asymptotic stability of certain fractal sets. One interesting outcome is an answer to the question: Why is there grammaticality?
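A minimal sketch of the stability notion in the abstract (this toy map is my own choice, not the talk's construction): the logistic map on [0, 1] with parameter 2.5 has a fixed point x* = 0.6 at which |f'(x*)| = 0.5 < 1, so a small perturbation away from x* decays under continued iteration of f.

```python
# Logistic map f(x) = lam * x * (1 - x); for lam = 2.5 the fixed point
# x* = 1 - 1/lam = 0.6 is asymptotically stable, since |f'(x*)| = 0.5 < 1.
def f(x, lam=2.5):
    return lam * x * (1.0 - x)

x_star = 1.0 - 1.0 / 2.5   # fixed point: f(x_star) == x_star
x = x_star + 0.05          # perturb the system slightly away from x*
for _ in range(50):
    x = f(x)               # continued iteration of f

print(abs(x - x_star))     # the deviation shrinks: the orbit returns to x*
```

The singleton {x*} here plays the role of the asymptotically stable set A; the talk's fractal attractors are more intricate, but the perturb-and-recover behavior is the same in kind.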
Timothy Williamson (University of Oxford)
23 Mar, 4:30pm, Laurel 201.
The choice between alternative logics can be understood as a special case of theory choice in science, governed by broadly abductive criteria, without appeal to a special relation of logical consequence. This view will be applied to semantic and sorites paradoxes as putative motivations for departures from classical logic.
Timothy Williamson (University of Oxford)
22 Mar, 4:30pm, Oak 105.
The categories of metaphysical possibility and metaphysical necessity have been criticized as unscientific and unsuitable for serious theorizing. I will develop an alternative view on which they are limiting cases of a family of objective modalities many members of which are studied in natural science, and discuss the modal aspect of applied mathematics. Some consequences for the logic of metaphysical modality will be discussed.
Timothy Williamson (University of Oxford)
24 Mar, 4pm, Class of 1947 Room.
“If 5+7 had turned out to be 13, everyone would have danced in the streets with joy.” That conditional is true according to standard theories of conditionals, simply because it is impossible for 5+7 to turn out to be 13. Many philosophers regard such consequences as obviously wrong. I will explain why significant issues are at stake in this dispute, how our judgments about such matters may be misled by fallible heuristics, and why the standard view may be right after all.
Parasara Duggirala (University of Connecticut)
4 Mar, 2pm, LH 302.
This talk will introduce Hoare logic, one of the most (if not the most) influential works in program verification. I will draw parallels between propositional and predicate logic and present a rudimentary way (the way it was done in the 1960s) of verifying programs.
Jacob Archambault (Fordham University)
19 Feb, 2pm, LH 302.
The idea of a conclusion following formally from a set of premises is central to our conception of logic today: logical consequence is often taken for the subject matter of logic, and ‘formal consequence’ and ‘logical consequence’ are routinely taken as synonyms. It would come as a surprise, then, to know that consequences generally and formal consequence in particular did not always hold this place of prominence: the first treatises specifically devoted to the study of consequences did not appear until the beginning of the fourteenth century – over 1600 years after the death of Aristotle; and the notion of formal consequence didn’t begin to take a shape resembling its modern successor until nearly a half century after these treatises appeared.
Prior to the later nineteenth century, the main developments of the concept of formal consequence as we know it occur in the following stages:
- The application of the form/matter distinction to logic;
- The earliest implicit appeals to a distinction between formal and material consequence;
- The appearance of the earliest treatises on consequences;
- The appearance of the first systematic attempts to parse a distinction between formal and material consequence;
- The identification of formal consequences with those holding for all permutations of categorematic terms;
- The identification of formal consequence with logical consequence;
- The identification of logical consequence as the subject matter of logic.
All but the first and last of these developments occur in a period of intense logical activity stretching from about 1285 to 1341, just over half a century. This talk provides an overview of the developments of this period. I begin with an outline of the topical tradition on which earlier treatises on consequences depended. Next, I detail the account of John Buridan, whose distinction between formal and material consequences is more familiar than others, both because it has been better researched and because of its basic similarity to modern model-theoretic accounts. From here, I move backward to the previous developments Buridan’s account relies on and engages with, particularly those of William of Ockham and Walter Burley.