What natural language meanings can tell us about natural logic’s contents
Tyler Knowlton
In contrast to invented logics, which often call for theoretical vocabulary that seems cognitively difficult to master, natural logic is concerned with the foundational building blocks of logical thought that humans naturally/automatically/innately have access to. In this talk, I’ll suggest a new way of getting at the question of what’s in natural logic: investigating the fine detail of the semantic instructions specified by linguistic expressions. Two psycholinguistic case studies in particular suggest potentially counter-intuitive conclusions about the contents of natural logic. First, I’ll use experimental evidence to argue that sentences like “most frogs are green” are understood by speakers in terms of cardinality and subtraction (“the number of green frogs is greater than the total number of frogs minus the number of green ones”) and not in terms of predicate negation (“the green frogs outnumber the non-green frogs”). Combined with typological evidence that natural language seems to eschew predicate negation elsewhere, this finding supports a recent proposal that natural logic lacks a notion of predicate negation/set complementation. Second, I’ll use experimental evidence to argue that sentences like “every frog is green” are understood by speakers in terms of applying a predicate to a restricted domain (“the frogs are such that ‘green’ applies universally”) and not in terms of relating two independent sets (“the frogs are a subset of the green things”). Moreover, hypothetical quantifiers whose meanings would require specification in terms of genuine set-theoretic relations seem to be unlearnable by adults and children. These findings suggest that natural logic has the resources to support second-order quantification, but not second-order relations.
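For concreteness, the competing semantic specifications at issue can be sketched in standard set-theoretic notation (the notation here is a gloss of the paraphrases above, with F the set of frogs and G the set of green things; the abstract itself states the contrasts only in prose):

```latex
% "most frogs are green"
% (a) cardinality-and-subtraction specification (attested):
|F \cap G| \;>\; |F| - |F \cap G|
% (b) predicate-negation specification (rejected):
|F \cap G| \;>\; |F \cap \overline{G}|

% "every frog is green"
% (a) restricted-domain specification (attested):
[\forall x : Fx]\; Gx
% (b) set-relational specification (rejected):
F \subseteq G
```

Although each (a)/(b) pair is truth-conditionally equivalent, the case studies argue that speakers represent only the (a) forms, which avoid set complementation and genuine second-order relations.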