Author: damir

Computable reducibility and the Baire hierarchy of functions

Linda Brown Westrick

Various notions of computable reducibility, such as Turing reduction, truth table reduction, and many-one reduction, provide coarse- and fine-grained ways of saying that one infinite sequence can compute another one. Infinite sequences have discrete bits with discrete addresses, so what it means to “compute” one is clear: given an address as input, an algorithm should return the appropriate bit. What it means to “compute” a continuous function is less obvious, but also well established. However, for a larger class of functions (in particular those of the Baire hierarchy, which I will define) it is not at all clear what it means to compute one. I will present one possibility and describe its degree structure and how this structure relates to features of the Baire hierarchy. This is joint work with Adam Day and Rod Downey.
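For reference, the standard definition of the Baire hierarchy (the abstract defers the definition to the talk itself; this sketch is the textbook version, not necessarily the speaker's preferred formulation):

```latex
% Baire class 0: the continuous functions.
% Baire class alpha (for 0 < alpha < omega_1): pointwise limits of
% sequences of functions drawn from earlier classes.
f \in \mathcal{B}_{\alpha}
  \iff
  \exists\, (f_n)_{n \in \mathbb{N}} \text{ with }
  f_n \in \bigcup_{\beta < \alpha} \mathcal{B}_{\beta}
  \text{ and }
  f(x) = \lim_{n \to \infty} f_n(x) \text{ for all } x.
```

On this definition a Baire class 1 function, for example, is a pointwise limit of continuous functions; such functions need not be continuous, which is why "computing" them is not straightforward.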

Modality and Classical Logic

Melissa Fusco

My favored joint solution to the Puzzle of Free Choice Permission (Kamp 1973) and Ross’s Paradox (Ross 1941) involves (i) giving up the duality of natural language deontic modals, and (ii) moving to a two-dimensional propositional logic which has a classical Boolean character only as a special case. In this talk, I’d like to highlight two features of this radical view: first, the extent to which Boolean disjunction is imperiled by other natural language phenomena not involving disjunction, and second, the strength of the general position that natural language semantics must treat deontic, epistemic, and circumstantial modals alike.

10 Feb 2017: Noes and Nots

Shay Logan

A growing number of philosophers of logic have suggested that logic is best analyzed in terms of acceptance and rejection. In several influential papers, it has been proposed that we understand acceptance in terms of answering a yes-or-no question with ‘yes’ and understand rejection in terms of answering such a question with ‘no’. In this talk I will examine question-answer setups of this sort very generally, and use them to examine what sorts of negations can be endorsed by a deflationist about negation.

27 Jan 2017: Computing with numbers and other non-syntactic things: de re knowledge of abstract objects

Stewart Shapiro

Michael Rescorla has argued that it makes sense to compute directly with numbers, and he faulted Turing for not giving an analysis of number-theoretic computability. However, in line with a later paper of Rescorla’s, it only makes sense to compute directly with syntactic entities, such as strings over a given alphabet. Computing with numbers goes via notation. This raises broader issues involving de re propositional attitudes towards numbers and other non-syntactic abstract entities.

18 Nov 2016: Why Did Geometers Stop Using Diagrams?

Jeremy Heis

The consensus for the last century or so has been that diagrammatic proofs are not genuine proofs. Recent philosophical work, however, has shown that (at least in some circumstances) diagrams can be perfectly rigorous. The implication of this work is that, if diagrammatic reasoning in a particular field is illegitimate, it must be so for local reasons, not because of some in-principle illegitimacy of diagrammatic reasoning in general. In this talk, I try to identify some of the reasons why geometers in particular began to reject diagrammatic proofs. I argue that the reasons often cited nowadays — that diagrams illicitly infer from a particular to all cases, or can’t handle analytic notions like continuity — played little role in this development. I highlight one very significant (but rarely discussed) flaw in diagrammatic reasoning: diagrammatic methods don’t allow for fully general proofs of theorems. I explain this objection (which goes back to Descartes), and how Poncelet and his school developed around 1820 new diagrammatic methods to meet this objection. As I explain, these new methods required a kind of diagrammatic reasoning that is fundamentally different from the now well-known diagrammatic method from Euclid’s Elements. And, as I show (using the case of synthetic treatments of the duals of curves of degrees higher than 2), it eventually became clear that this method does not work. Truly general results in “modern” geometry could not be proven diagrammatically.

21 Oct 2016: Counting uses of a theorem

Jeff Hirst

Many proofs use a bootstrapping process. Initially, a simple version is proved, and then extensions are proved by repeated applications of the simple result. Proofs of many results about graph colorings use this strategy. We will concentrate on a specific example. Suppose we write RT(2,n) for Ramsey’s theorem for pairs and n colors, a graph coloring theorem. (Pictures will be provided.) The usual proof of RT(2,4) uses two applications of RT(2,2). In this talk, we address the question: Can RT(2,4) be proved with one use of RT(2,2)? Of course, the answer depends on the base system chosen and the formalization of what constitutes a use. In some settings, a formalization of Weihrauch reductions can lend insight, and the law of the excluded middle plays a surprising role.
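For reference, the standard statement of RT(2,n) in textbook notation (not taken from the talk): writing [X]^2 for the set of 2-element subsets of X,

```latex
% RT(2,n): every n-coloring of pairs of naturals has an
% infinite homogeneous set.
\mathrm{RT}(2,n):\quad
\forall\, c : [\mathbb{N}]^{2} \to \{1,\dots,n\}\;\;
\exists\, H \subseteq \mathbb{N} \text{ infinite such that }
c \restriction [H]^{2} \text{ is constant.}
```

The two-application proof of RT(2,4) mentioned above works by first coarsening the four colors into two pairs, applying RT(2,2) to obtain an infinite set on which the coarsened coloring is constant, and then applying RT(2,2) a second time to the two remaining colors on that set.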

9 Sep 2016: Reasoning and Consequence for Indicative Conditionals

Paolo Santorio

A number of inference patterns involving epistemic modalities, including Modus Ponens, display a split behavior. They seem valid by the lights of all-or-nothing judgments: whenever the premises are accepted as true, the conclusion must be accepted as true as well. But they seem invalid by the lights of probabilistic judgments: we can specify intuitive credal assignments where the drop in probability between premises and conclusions is incompatible with validity. I suggest that we explain this asymmetry by endorsing two bridge principles between logical notions and rational constraints on attitudes. The first links a notion of classical consequence to rational constraints on probabilistic states. The second links a broadly dynamic notion of consequence (so-called informational consequence) to rational constraints on acceptance. In existing literature, classical and informational consequence are seen as alternatives; I argue instead that we can and should have both.

2 Sep 2016: Learning from and embedding probabilistic language

Dan Lassiter (Stanford)

Recent years have seen successful new applications of probabilistic semantics to epistemic modality, and there are signs of a revival of a classic probabilistic approach to conditionals. These models have a simple implementation using tools from intensional and degree semantics, and they have considerable appeal in terms of capturing inferences that are problematic for classical models in some cases. However, they generally fare worse than classical models in two key respects – they have trouble explaining the update/learning potential of epistemics and conditionals as well as their interpretations when embedded. Focusing on epistemic language, I’ll describe some of the reasons for the probabilistic turn, explain why embeddings and learning present a problem, and suggest a way to deal with the problems by adopting an enriched probabilistic semantics based on Bayesian networks. Time permitting, I’ll venture some speculations about attractions of, and challenges to, an extension to conditionals in the style of Kratzer 1986.

The representation of degree in American Sign Language

Jon Gajewski (UConn)

22 Apr, 2pm, LH 302

In this talk, I will discuss whether and how degree is represented in American Sign Language. I will begin by attempting to place American Sign Language into typologies of comparative constructions that have been based primarily on evidence from spoken languages. Placing ASL in such typologies leads us to the question of the role of iconicity in the representation of degree in ASL and how this relates to constructions in spoken language that may make use of pragmatic resources such as co-speech gesture. The results of the investigation of the comparative are then tested against the existence/structure of related degree constructions including measure phrases, differentials and degree questions.

Alan Turing and the Other Theory of Computation

Lenore Blum (Carnegie Mellon University)

Most logicians and theoretical computer scientists are familiar with Alan Turing’s 1936 seminal paper setting the stage for the foundational (discrete) theory of computation. Most, however, remain unaware of Turing’s 1948 seminal paper which, by introducing the notion of condition, sets the stage for a natural theory of complexity for the “other theory of computation” emanating from the classical tradition of numerical analysis, equation solving, and the continuous mathematics of calculus.
This talk will recognize Alan Turing’s work in the foundations of numerical computation (in particular, his 1948 paper “Rounding-Off Errors in Matrix Processes”), its influence in complexity theory today, and how it provides a unifying concept for the two major traditions of the Theory of Computation.
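In modern notation, the notion of condition from the 1948 paper survives as the condition number of a matrix; this gloss is mine (Turing’s own 1948 formulations differ slightly in detail), but the standard definition is:

```latex
% Condition number of an invertible matrix A: bounds how much a
% relative perturbation of the data can be amplified in the
% solution of Ax = b.
\kappa(A) = \|A\|\,\|A^{-1}\|
```

A large value of \kappa(A) means the linear system is ill-conditioned: small rounding errors in the input can produce large errors in the computed solution, which is precisely the phenomenon Turing’s rounding-error analysis addresses.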