Disentangling conditional dependencies

Nicole Cruz (with Michael Lee)

People draw on event co-occurrences as a foundation for causal and scientific inference, but in which ways can events co-occur? Statistically, one can express a dependency between events A and C as P(C|A) != P(C), but this relation can be further specified in a variety of ways, particularly when A and/or C are themselves conditional events. In the psychology of reasoning, the conditional P(C|A) is often thought to become biconditional when people add the converse, P(A|C), or the inverse, P(not-C|not-A), or both, with the effects of these additions largely treated as equivalent. In contrast, from a coherence-based logical perspective, it makes a difference whether the converse or the inverse is added, and in what way. In particular, the addition can occur by forming the conjunction of two conditionals, or by merely constraining their probabilities to be equal. Here we outline four distinct ways of defining biconditional relationships, illustrating their differences by how they constrain the conclusion probabilities of a set of inference forms. We present a Bayesian latent-mixture model with which the biconditionals can be dissociated from one another, and discuss implications for the interpretation of empirical findings in the field.
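The distinctions above can be made concrete with a small numerical sketch. The code below (illustrative only, not the authors' model; the joint-probability values are arbitrary assumptions) defines a joint distribution over two binary events A and C and shows that the conditional P(C|A), its converse P(A|C), and its inverse P(not-C|not-A) generally take different values, so adding one or the other to P(C|A) imposes different constraints.

```python
# Illustrative sketch: a joint distribution over binary events A and C.
# The cell probabilities below are arbitrary, chosen only to make A and C
# statistically dependent, i.e. P(C|A) != P(C).
joint = {
    (True, True): 0.30,    # A and C
    (True, False): 0.10,   # A and not-C
    (False, True): 0.20,   # not-A and C
    (False, False): 0.40,  # not-A and not-C
}

def p(event):
    """Probability of the cells (a, c) where the predicate `event` holds."""
    return sum(pr for (a, c), pr in joint.items() if event(a, c))

def cond(consequent, antecedent):
    """Conditional probability P(consequent | antecedent)."""
    return p(lambda a, c: consequent(a, c) and antecedent(a, c)) / p(antecedent)

p_c = p(lambda a, c: c)                                           # P(C)
p_c_given_a = cond(lambda a, c: c, lambda a, c: a)                # P(C|A)
p_a_given_c = cond(lambda a, c: a, lambda a, c: c)                # converse P(A|C)
p_notc_given_nota = cond(lambda a, c: not c, lambda a, c: not a)  # inverse P(not-C|not-A)

# Dependency: P(C|A) differs from P(C); converse and inverse also
# differ from P(C|A) and from each other in this distribution.
print(p_c, p_c_given_a, p_a_given_c, p_notc_given_nota)
```

In this distribution, constraining P(C|A) = P(A|C) (a probability-equality reading of the biconditional) would force a change in the joint distribution different from the one required by equating P(C|A) with P(not-C|not-A), which is one way to see why the four biconditional readings can come apart.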