18 Nov 2016: Why Did Geometers Stop Using Diagrams?

Jeremy Heis

The consensus for the last century or so has been that diagrammatic proofs are not genuine proofs. Recent philosophical work, however, has shown that (at least in some circumstances) diagrams can be perfectly rigorous. The implication of this work is that, if diagrammatic reasoning in a particular field is illegitimate, it must be so for local reasons, not because of some in-principle illegitimacy of diagrammatic reasoning in general. In this talk, I try to identify some of the reasons why geometers in particular began to reject diagrammatic proofs. I argue that the reasons often cited nowadays — that diagrams illicitly generalize from a particular case to all cases, or can’t handle analytic notions like continuity — played little role in this development. I highlight one very significant (but rarely discussed) flaw in diagrammatic reasoning: diagrammatic methods don’t allow for fully general proofs of theorems. I explain this objection (which goes back to Descartes), and how Poncelet and his school developed new diagrammatic methods around 1820 to meet this objection. As I explain, these new methods required a kind of diagrammatic reasoning that is fundamentally different from the now well-known diagrammatic method of Euclid’s Elements. And, as I show (using the case of synthetic treatments of the duals of curves of degree higher than 2), it eventually became clear that this method does not work. Truly general results in “modern” geometry could not be proven diagrammatically.

21 Oct 2016: Counting uses of a theorem

Jeff Hirst

Many proofs use a bootstrapping process. Initially, a simple version is proved, and then extensions are proved by repeated applications of the simple result. Proofs of many results about graph colorings use this strategy. We will concentrate on a specific example. Suppose we write RT(2,n) for Ramsey’s theorem for pairs and n colors, a graph coloring theorem. (Pictures will be provided.) The usual proof of RT(2,4) uses two applications of RT(2,2). In this talk, we address the question: Can RT(2,4) be proved with one use of RT(2,2)? Of course, the answer depends on the base system chosen and the formalization of what constitutes a use. In some settings, a formalization of Weihrauch reductions can lend insight, and the law of the excluded middle plays a surprising role.
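Hirst’s counting question concerns the infinite theorem, which cannot be run directly, but the finite analogue of RT(2,2) can be checked by brute force. The sketch below (the function name and setup are my own illustration, not code from the talk) verifies the classical fact that every 2-coloring of the edges of K_6 contains a monochromatic triangle, while K_5 admits a coloring that avoids one:

```python
from itertools import combinations, product

def every_2_coloring_has_mono_triangle(n):
    """Finite analogue of RT(2,2): does every 2-coloring of the
    edges of the complete graph K_n contain a monochromatic triangle?"""
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    for colors in product((0, 1), repeat=len(edges)):
        coloring = dict(zip(edges, colors))
        if not any(coloring[a, b] == coloring[a, c] == coloring[b, c]
                   for a, b, c in triangles):
            return False  # found a coloring with no monochromatic triangle
    return True

print(every_2_coloring_has_mono_triangle(5))  # False: K_5 has an escape coloring
print(every_2_coloring_has_mono_triangle(6))  # True: the Ramsey number R(3,3) is 6
```

The exhaustive search over the 2^15 colorings of K_6 runs in well under a second.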

9 Sep 2016: Reasoning and Consequence for Indicative Conditionals

Paolo Santorio

A number of inference patterns involving epistemic modalities, including Modus Ponens, display a split behavior. They seem valid by the lights of all-or-nothing judgments: whenever the premises are accepted as true, the conclusion must be accepted as true as well. But they seem invalid by the lights of probabilistic judgments: we can specify intuitive credal assignments where the drop in probability between premises and conclusions is incompatible with validity. I suggest that we explain this asymmetry by endorsing two bridge principles between logical notions and rational constraints on attitudes. The first links a notion of classical consequence to rational constraints on probabilistic states. The second links a broadly dynamic notion of consequence (so-called informational consequence) to rational constraints on acceptance. In existing literature, classical and informational consequence are seen as alternatives; I argue instead that we can and should have both.

2 Sep 2016: Learning from and embedding probabilistic language

Dan Lassiter (Stanford)

Recent years have seen successful new applications of probabilistic semantics to epistemic modality, and there are signs of a revival of a classic probabilistic approach to conditionals. These models have a simple implementation using tools from intensional and degree semantics, and they have considerable appeal in terms of capturing inferences that are problematic for classical models in some cases. However, they generally fare worse than classical models in two key respects – they have trouble explaining the update/learning potential of epistemics and conditionals as well as their interpretations when embedded. Focusing on epistemic language, I’ll describe some of the reasons for the probabilistic turn, explain why embeddings and learning present a problem, and suggest a way to deal with the problems by adopting an enriched probabilistic semantics based on Bayesian networks. Time permitting, I’ll venture some speculations about attractions of, and challenges to, an extension to conditionals in the style of Kratzer 1986.

The representation of degree in American Sign Language

Jon Gajewski (UConn)

22 Apr, 2pm, LH 302

In this talk, I will discuss whether and how degree is represented in American Sign Language. I will begin by attempting to place American Sign Language into typologies of comparative constructions that have been based primarily on evidence from spoken languages. Placing ASL in such typologies leads us to the question of the role of iconicity in the representation of degree in ASL and how this relates to constructions in spoken language that may make use of pragmatic resources such as co-speech gesture. The results of the investigation of the comparative are then tested against the existence/structure of related degree constructions including measure phrases, differentials and degree questions.

Alan Turing and the Other Theory of Computation

Lenore Blum (Carnegie Mellon University)

Most logicians and theoretical computer scientists are familiar with Alan Turing’s 1936 seminal paper setting the stage for the foundational (discrete) theory of computation. Most, however, remain unaware of Turing’s 1948 seminal paper introducing the notion of condition, which sets the stage for a natural theory of complexity for the “other theory of computation” emanating from the classical tradition of numerical analysis, equation solving, and the continuous mathematics of calculus.
This talk will recognize Alan Turing’s work in the foundations of numerical computation (in particular, his 1948 paper “Rounding-Off Errors in Matrix Processes”), its influence in complexity theory today, and how it provides a unifying concept for the two major traditions of the Theory of Computation.
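The “condition” Turing introduced in that 1948 paper measures how sensitive the solution of a linear system is to small perturbations of the input. As a minimal illustration (the helper below is my own sketch, not code from the talk), the 2-norm condition number of a 2×2 matrix can be computed from the eigenvalues of AᵀA, whose square roots are the singular values:

```python
import math

def cond_2x2(a, b, c, d):
    """2-norm condition number of [[a, b], [c, d]], computed from the
    eigenvalues of A^T A (whose square roots are the singular values)."""
    p = a * a + c * c          # entries of the symmetric matrix A^T A
    q = a * b + c * d
    r = b * b + d * d
    mean = (p + r) / 2
    disc = math.sqrt(((p - r) / 2) ** 2 + q * q)
    lam_max, lam_min = mean + disc, mean - disc
    return math.sqrt(lam_max / lam_min)

print(cond_2x2(1, 0, 0, 1))       # 1.0 -- the identity is perfectly conditioned
print(cond_2x2(1, 1, 1, 1.0001))  # large -- nearly singular, ill-conditioned
```

A large condition number signals that rounding errors in matrix processes can be amplified enormously, which is exactly the phenomenon Turing was quantifying.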

Stable, fractal-based processing of complex languages

Whit Tabor (UConn)

Apr 15, 2pm, LH 302

Consider an iterated map f on a connected space X. A proper subset A of X is said to be asymptotically stable if, when the system is perturbed slightly away from A, it converges back onto A under continued iteration of f. When one is working in the realm of logic, or classical computation more broadly, stability is not an issue and does not really make sense. There is no possibility of being slightly off—formulas are either well-formed and have precise meaning, or they are ill-formed and have no meaning at all. Interestingly, in recent decades, a number of people have discovered ways of doing complex computation on connected spaces. This raises the question of whether there could be “stable computation” in such systems. In this talk, I define asymptotic stability for a particular type of connected-space computer and show that at least one interesting class of languages (the mirror recursion languages) has a realization which exhibits asymptotic stability of certain fractal sets. One interesting outcome is an answer to the question: Why is there grammaticality?
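As a minimal illustration of the stability notion in its simplest case (a one-point set A; the fractal attractors in the talk generalize this to whole sets), the sketch below perturbs an orbit away from the attracting fixed point of a contraction and watches it converge back. All names here are my own illustrative assumptions, not code from the talk:

```python
def f(x):
    # a contraction on the reals: |f'(x)| = 0.5 < 1 everywhere,
    # so the fixed point x* = 0.8 (where f(x*) = x*) is asymptotically stable
    return 0.5 * x + 0.4

x = 0.8 + 0.1          # perturb the system slightly away from A = {0.8}
for _ in range(50):
    x = f(x)           # continued iteration of f

print(abs(x - 0.8) < 1e-9)  # True: the orbit has converged back onto A
```

Each iteration halves the deviation from A, so the initial perturbation of 0.1 shrinks below any tolerance after finitely many steps; this is the behavior that has no analogue in discrete symbolic computation.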

Alternative logics and abductive methodology

Timothy Williamson (University of Oxford)

23 Mar, 4:30pm, Laurel 201.

The choice between alternative logics can be understood as a special case of theory choice in science, governed by broadly abductive criteria, without appeal to a special relation of logical consequence. This view will be applied to semantic and sorites paradoxes as putative motivations for departures from classical logic.

Objective possibilities

Timothy Williamson (University of Oxford)

22 Mar, 4:30pm, Oak 105.

The categories of metaphysical possibility and metaphysical necessity have been criticized as unscientific and unsuitable for serious theorizing. I will develop an alternative view on which they are limiting cases of a family of objective modalities, many members of which are studied in natural science, and discuss the modal aspect of applied mathematics. Some consequences for the logic of metaphysical modality will be discussed.

What if the impossible happened?

Timothy Williamson (University of Oxford)

24 Mar, 4pm, Class of 1947 Room.

“If 5 + 7 had turned out to be 13, everyone would have danced in the streets with joy.” That conditional is true according to standard theories of conditionals, simply because it is impossible for 5 + 7 to turn out to be 13. Many philosophers regard such consequences as obviously wrong. I will explain why significant issues are at stake in this dispute, how our judgments about such matters may be misled by fallible heuristics, and why the standard view may be right after all.