Category theory has proven to be applicable across all of mathematics. In some sense this is not surprising because category theory was created for the purpose of application (specifically, application to algebraic topology). But I will argue that the significance of category theory extends past its applicability — in particular, there is a significant explanatory benefit. The question of what constitutes a mathematical explanation is of perennial interest to philosophers. Reflection on category theory’s unique role in mathematics unearths some features of mathematical explanation that are not often made explicit and that philosophers have tended not to notice.
There are many ways in which category theoretic methods provide explanations. For instance, important results in different areas of mathematics are unified by the fact that they are all corollaries of the same category theoretic theorem, such as the theorem that right adjoints preserve limits. Or consider the definition of structures by universal properties: this perspective sheds light on how constructions from different domains are related to one another. The categorical product, for instance, unites many seemingly unrelated mathematical constructions, such as the Cartesian product, intersection, and conjunction. Such examples introduce both generalization and unification within mathematics. Moreover, this unification allows meaningful and surprising mathematical analogies to arise. These generalizations and analogies are explanatory, and they result from the structural features of category theory.
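To make the unification concrete, here is a standard statement of the universal property defining the product (a sketch in conventional notation, not drawn from the abstract itself):

```latex
% Universal property of the product A \times B, with projections
% \pi_1 : A \times B \to A and \pi_2 : A \times B \to B:
% every pair of maps into A and B factors uniquely through the product.
\forall X \;\; \forall f : X \to A \;\; \forall g : X \to B \;\;
\exists!\, \langle f, g \rangle : X \to A \times B
\quad \text{such that} \quad
\pi_1 \circ \langle f, g \rangle = f
\quad \text{and} \quad
\pi_2 \circ \langle f, g \rangle = g.
```

Instantiating this single definition in different categories yields the seemingly unrelated constructions: in the category of sets it gives the Cartesian product, in a powerset ordered by inclusion it gives intersection, and in a preorder of propositions it gives conjunction.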
In order to highlight the explanatory value of category theory, I will first provide a characterization of the structure unique to category theory. It is this structure that makes category theory apt for producing explanations. With a clear picture of category theoretic structure, I will present a few examples that illustrate how category theory proves to be explanatory — in particular, how the structural features of category theory are explanatory.
Ole T. Hjortland & Ben Martin
According to logical anti-exceptionalism, we come to be justified in believing logical theories by means similar to those used for scientific theories. This is often explained by saying that theory choice in logic proceeds via abductive arguments (Priest, Russell, Williamson, Hjortland). Thus, the relative success of classical and non-classical theories of validity is compared by their ability to explain the relevant data. However, as of yet there is no agreed-upon account of which data logical theories must explain, and hence of how they prove their mettle. In this paper, we provide a non-causal account of logical explanation, and show how it can accommodate important disputes about logic.
Andrew Tedder (joint work with Stewart Shapiro)
We consider a handful of solutions to the liar paradox which admit a naive truth predicate and employ a non-classical logic, and which include a proposal for classical recapture. Classical recapture is essentially the property that the paradox solvent (in this case, the non-classical interpretation of the connectives) only affects the portion of the language including the truth predicate – so that the connectives can be interpreted classically in sentences in which the truth predicate does not occur.
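As a schematic rendering of the recapture property just described (the notation here is my own, not the authors'): writing ⊢_K for the non-classical theory of truth and Tr for its truth predicate,

```latex
% Classical recapture (schematic): on the Tr-free fragment of the
% language, the non-classical theory K and classical logic CL agree.
\text{For every sentence } \varphi \text{ in which } Tr \text{ does not occur:}
\qquad
\vdash_{K} \varphi \;\Longleftrightarrow\; \vdash_{\mathsf{CL}} \varphi.
```

The intuitionist variant considered below replaces classical provability ⊢_CL on the right-hand side with intuitionistic provability ⊢_IL.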
We consider a variation on this theme in which the logic to be recaptured is not classical but intuitionist logic, and we examine the extent to which this handful of solutions to the liar admits of intuitionist recapture, sketching ways of altering their various methods for classical recapture to suit an intuitionist framework.
I introduce a typical experimental task in psycholinguistics, self-paced reading, and show how to build end-to-end simulations of a human participant in such an experiment. Here "end-to-end" means that we model visual and motor processes together with specifically linguistic processes (syntactic and semantic parsing) in a complete model of the experimental task. The model embeds theoretical hypotheses about linguistic representations and parsing processes in an independently motivated cognitive architecture (ACT-R). In turn, the resulting cognitive models can be embedded in Bayesian models to fit them to experimental data, estimate their parameters, and perform quantitative model comparison for qualitative theories.
Unveiling the constructive core of classical theories: A contribution to 90 years of Glivenko’s theorem
Glivenko’s well-known result of 1929 established that a negated propositional formula provable in classical logic is already provable intuitionistically. Similar later transfers from classical to intuitionistic provability fall under the nomenclature of Glivenko-style results: results about classes of formulas for which classical provability yields intuitionistic provability. The interest in isolating such classes lies in the fact that it may be easier to prove theorems using classical rather than intuitionistic logic. Further, since a proof in intuitionistic logic can be associated with a lambda term, and thus acquires a computational meaning, such results have more recently been gathered under the conceptual umbrella of the “computational content of classical theories.” They also belong to a more general shift of perspective in foundations: rather than developing constructive mathematics separately, as in Brouwer’s program, one studies which parts of classical mathematics can be directly translated into constructive terms.
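For reference, the 1929 theorem for propositional logic can be stated as follows:

```latex
% Glivenko (1929), propositional logic: classical provability of a
% formula corresponds to intuitionistic provability of its double
% negation; in particular, the two logics prove the same negated formulas.
\vdash_{\mathsf{CL}} \varphi
\;\Longleftrightarrow\;
\vdash_{\mathsf{IL}} \neg\neg\varphi,
\qquad \text{and hence} \qquad
\vdash_{\mathsf{CL}} \neg\varphi
\;\Longleftrightarrow\;
\vdash_{\mathsf{IL}} \neg\varphi.
```

Glivenko-style results generalize this by identifying other classes of formulas (or fragments of first-order and higher theories) for which the left-to-right transfer holds.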
We shall survey how Glivenko-style results can be easily obtained by choosing suitable sequent calculi for classical and intuitionistic logic, by converting axioms into inference rules, and by the procedure of geometrization of first-order logic.
Recursive counterexamples are classical mathematical theorems that are made false by restricting their quantifiers to computable objects. Historically, they have been important for analysing the mathematical limitations of foundational programs such as constructivism or predicativism. For example, the least upper bound principle is recursively false, and thus not provable by constructivists. In this talk I will argue that while recursive counterexamples are valuable for analysing foundational positions from an external, set-theoretic point of view, the approach is limited in its applicability because the results themselves are not accessible to the foundational standpoints under consideration. This limitation can be overcome, to some extent, by internalising the recursive counterexamples within a theory acceptable to a proponent of a given foundation; this is, essentially, the method of reverse mathematics. I will examine to what extent the full import of reverse mathematical results can be appreciated from a given foundational standpoint, and propose an analysis based on an analogy with Brouwer’s weak and strong counterexamples. Finally, I will argue that, at least where the reverse mathematical analysis of foundations is concerned, the epistemic considerations above show that reverse mathematics should be carried out within a weak base theory such as RCA₀, rather than by studying ω-models from a set-theoretic perspective.
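For concreteness, the internalised counterpart of the recursive counterexample mentioned above is a standard fact of reverse mathematics (a textbook result, not a claim of the talk):

```latex
% Over the weak base theory RCA_0, the sequential least upper bound
% principle is equivalent to arithmetical comprehension (ACA_0):
\mathsf{RCA}_0 \vdash \;
\mathsf{ACA}_0 \;\leftrightarrow\;
\text{``every bounded sequence of real numbers has a least upper bound.''}
```

The set-theoretic observation that the principle fails in the recursive reals thus becomes a provability fact available, at least in principle, from within a weak foundational standpoint.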
In this talk, I will apply the de re/de dicto distinction to the analysis of mathematical statements and knowledge claims in mathematics. A proof will be said to provide de dicto knowledge of a mathematical statement if it provides knowledge of a purely existential statement, and to provide de re knowledge when it carries additional information concerning the identity criteria for the objects that are proven to exist. I will examine two case studies, one from abstract algebra and one from discrete mathematics, and I will suggest that reverse mathematics can help measure the ‘de re content’ of two different proofs of the same theorem, and that the de re/de dicto distinction introduced here lines up with certain model theoretic properties of subsystems of second order arithmetic, such as the existence of certain kinds of minimal model. Furthermore, I will argue that there are good reasons not to identify the de re content of a proof either with its constructive content or with its predicative content.
David Lewis (and others) have famously argued against Adams’s Thesis (that the probability of a conditional is the conditional probability of its consequent, given its antecedent) by proving various “triviality results.” In this paper, I argue for two theses — one negative and one positive. The negative thesis is that the “triviality results” do not support the rejection of Adams’s Thesis, because Lewisian “triviality based” arguments against Adams’s Thesis rest on an implausibly strong understanding of what it takes for some credal constraint to be a rational requirement (an understanding which Lewis himself later abandoned in other contexts). The positive thesis is that there is a simple (and plausible) way of modeling the probabilities of conditionals, which (a) obeys Adams’s Thesis, and (b) avoids all of the existing triviality results.
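For reference, the thesis at issue can be stated as follows:

```latex
% Adams's Thesis: for a rational credence function P and an indicative
% conditional A \to C, whenever P(A) > 0,
P(A \to C) \;=\; P(C \mid A) \;=\; \frac{P(A \wedge C)}{P(A)}.
```

Lewis-style triviality results show that, under sufficiently strong closure assumptions on the class of rational credence functions, no conditional connective can satisfy this identity non-trivially; the negative thesis above targets precisely those closure assumptions.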
In addition to verba dicendi, languages have a variety of other grammatical devices for encoding reported speech. While not common in Indo-European languages, two of the most common such elements cross-linguistically are reportative evidentials and quotatives. Quotatives have been much less discussed than either verba dicendi or reportatives, both in the descriptive/typological literature and especially in formal semantic work. While, to my knowledge, quotatives have not previously been formally analyzed in detail, several recent works on reported speech constructions in general have suggested in passing that they pattern either with verba dicendi or with reportatives. Drawing on data from Yucatec Maya, I argue that they differ from both: they present direct quotation (like verba dicendi) but make a conventional at-issueness distinction (like reportatives). To account for these facts, I develop an account of quotatives by combining an extended Farkas & Bruce 2010-style discourse scoreboard with bicontextualism (building on Eckardt 2014’s work on Free Indirect Discourse).
Logic is Contractionless and Relevant, but Logic is (Accidentally) Contractionless and Relevant: An Introduction to Deep Fried Logic
Logic, according to Beall, is the universal entailment relation. I claim that this forces us to accept that logic is contractionless and relevant. But neither relevance nor contraction-freedom, important as these features have been in the literature on logic and its philosophy, play a role in my argument. Instead, they are emergent features — logical accidents, if you will. Along the way I will familiarize us with a novel (and delicious) semantic theory that I call deep fried semantics.