Posted on 1995-03-01. Authored by Marek J. Druzdzel and Clark Glymour.
A general principle of good pedagogic strategy is this: other things being equal, make the essential principles of the subject explicit rather than tacit. We think that this principle is routinely violated in conventional instruction in statistics. Even though most of the early history of probability theory was driven by causal considerations, the terms "cause" and "causation" have practically disappeared from statistics textbooks. Statistics curricula steer students away from the concept of causality; students may remember the clichéd disclaimer "correlation does not mean causation," but they rarely think about what correlation does mean. This neglect of causality is a serious handicap in later studies of topics such as experimental design, where the main goal is often to establish (or disprove) causation. Much of the technical vocabulary of research design textbooks consists of euphemisms for specific causal relations, e.g., "latent variable," "intervening variable," "confounding factor." This multiplicity of terms for causal notions breeds confusion and, in effect, may hinder understanding of the basic principles of research design. Defining causality is at least as hard as defining probability, but neither idea requires a philosophically strict definition in order for students to learn how to use it well. As with probability, an understanding of causality is best promoted by a combination of examples and formal principles.
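As a brief illustration (a sketch of ours, not part of the original paper), each of these terms names a specific pattern of directed edges in a causal graph. The Python snippet below uses the networkx library, and the variable names X, Y, Z, M, and U are hypothetical placeholders for treatment, outcome, confounder, intervening variable, and latent common cause.

    import networkx as nx

    # "Confounding factor": Z causes both the treatment X and the outcome Y.
    confounding = nx.DiGraph([("Z", "X"), ("Z", "Y"), ("X", "Y")])

    # "Intervening variable": M lies on the causal path from X to Y.
    intervening = nx.DiGraph([("X", "M"), ("M", "Y")])

    # "Latent variable": U is a common cause of X and Y that is never observed.
    latent = nx.DiGraph([("U", "X"), ("U", "Y")])
    latent.nodes["U"]["observed"] = False

    for name, graph in [("confounding", confounding),
                        ("intervening", intervening),
                        ("latent", latent)]:
        print(name, list(graph.edges))

Drawing the graphs makes the distinctions explicit at a glance, whereas the verbal labels alone leave the underlying causal structure tacit.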
In this paper, we describe our efforts to reintroduce causation into the teaching of research design. We show how causal graphs can be used to explain various elements of empirical research, and we report our classroom experiences with this approach.