Logic Done as if Inference in Language Mattered

Larry Moss

Our topic is logical inference in natural language, as it is done by people and computers.
The first main topic will be monotonicity inference, arguably the best of the simple ideas
in the area. Monotonicity can be incorporated into running systems in which one can take
parsed real-life sentences and see simple inferences in action. I will present some of the
theory, including higher-order monotonicity and the syntax-semantics interface offered by
categorial grammar.
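
To give a flavor of the idea, here is a minimal sketch, not taken from the talk or from any particular system: the determiner "every" is downward monotone in its restrictor and upward monotone in its scope, so from "every dog is a mammal" one may infer "every poodle is a mammal" (strengthening the restrictor) and "every dog is an animal" (weakening the scope). The hyponymy facts in ISA and the helper functions below are made-up names for illustration only.

```python
# Toy one-step hyponymy facts: (more specific term, more general term).
ISA = {("poodle", "dog"), ("dog", "mammal"), ("mammal", "animal")}

def below(noun):
    """Nouns one hyponymy step more specific than the given noun."""
    return {a for (a, b) in ISA if b == noun}

def above(noun):
    """Nouns one hyponymy step more general than the given noun."""
    return {b for (a, b) in ISA if a == noun}

def monotonicity_inferences(n1, n2):
    """Sentences entailed by 'every n1 is a n2' in one monotonicity step."""
    downward = {(a, n2) for a in below(n1)}  # restrictor: downward monotone
    upward = {(n1, b) for b in above(n2)}    # scope: upward monotone
    return downward | upward

if __name__ == "__main__":
    for a, b in sorted(monotonicity_inferences("dog", "mammal")):
        print(f"every {a} is a(n) {b}")
    # -> every dog is a(n) animal
    #    every poodle is a(n) mammal
```

Real systems do the marking on parse trees, so that monotonicity information is computed compositionally rather than read off a fixed sentence pattern as in this toy example.
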

In a different direction, monotonicity inference can these days be done by machines as well
as by humans. The talk will also discuss this development, along with some ongoing work at
the border of natural logic and machine learning.

The second main topic of the talk will be an overview of the large number of logical systems
designed for various linguistic phenomena. This work begins as an updating of traditional
syllogistic logic, but it achieves much greater expressive power.
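
As a point of reference, here is a toy sketch of the classical starting point, again not taken from the talk: the fragment with sentences "All X are Y" and "Some X are Y", closed under the rules Barbara and Darii by forward chaining. The encoding and the function name close are my own illustrative choices; the extended systems in question add verbs, adjectives, and other constructions on top of fragments like this one.

```python
def close(assumptions):
    """Return all sentences derivable from the assumptions.

    Sentences are tuples ("all", x, y) or ("some", x, y)."""
    derived = set(assumptions)
    changed = True
    while changed:
        changed = False
        new = set()
        for s in derived:
            for t in derived:
                # Barbara: All X Y, All Y Z  =>  All X Z
                if s[0] == "all" and t[0] == "all" and s[2] == t[1]:
                    new.add(("all", s[1], t[2]))
                # Darii: Some X Y, All Y Z  =>  Some X Z
                if s[0] == "some" and t[0] == "all" and s[2] == t[1]:
                    new.add(("some", s[1], t[2]))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

if __name__ == "__main__":
    gamma = {("all", "poodle", "dog"), ("all", "dog", "mammal"),
             ("some", "pet", "poodle")}
    assert ("all", "poodle", "mammal") in close(gamma)   # via Barbara
    assert ("some", "pet", "mammal") in close(gamma)     # via Darii, twice
```
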

Overall, the goal of the talk is to persuade you that the research program of “natural logic”
leads to a lively research area with connections to many fields, both inside and outside of
mainstream logic.