2 Sep 2016: Learning from and embedding probabilistic language

Dan Lassiter (Stanford)

Recent years have seen successful new applications of probabilistic semantics to epistemic modality, and there are signs of a revival of a classic probabilistic approach to conditionals. These models have a simple implementation using tools from intensional and degree semantics, and they have considerable appeal in capturing certain inferences that are problematic for classical models. However, they generally fare worse than classical models in two key respects: they have trouble explaining the update/learning potential of epistemics and conditionals, and they have trouble explaining their interpretations when embedded. Focusing on epistemic language, I’ll describe some of the reasons for the probabilistic turn, explain why embeddings and learning present a problem, and suggest a way to deal with these problems by adopting an enriched probabilistic semantics based on Bayesian networks. Time permitting, I’ll venture some speculations about the attractions of, and challenges to, an extension to conditionals in the style of Kratzer 1986.
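
For concreteness, here is a minimal illustrative sketch (not from the talk itself) of the kind of probabilistic semantics at issue: "likely p" is treated as true just in case the probability of p exceeds a contextual threshold, and a conditional "if p, q" is assigned the conditional probability P(q | p), in the spirit of the classic probabilistic approach. The toy distribution over worlds, the propositions, and the function names below are all hypothetical.

    # Possible worlds and their probabilities (a toy distribution).
    worlds = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

    # Propositions are sets of worlds.
    rain = {"w1", "w2"}   # worlds where it rains
    wind = {"w2", "w3"}   # worlds where it is windy

    def prob(p, space=worlds):
        """Probability of a proposition: total mass of its worlds."""
        return sum(pr for w, pr in space.items() if w in p)

    def likely(p, theta=0.5):
        """'likely p' is true iff P(p) exceeds the threshold theta."""
        return prob(p) > theta

    def if_then(p, q):
        """Value of 'if p, q' as the conditional probability P(q | p)."""
        return prob(p & q) / prob(p)

    print(prob(rain))           # 0.8
    print(likely(rain))         # True: 0.8 > 0.5
    print(if_then(rain, wind))  # P(wind | rain) = 0.3 / 0.8 = 0.375

The embedding and learning problems the abstract mentions show up immediately in a sketch like this: the value of if_then is a number rather than a proposition, so it cannot straightforwardly be fed back into likely or into another conditional.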