Discovering cyclic causal structure
1996
Abstract: This paper is concerned with the problem of making causal inferences from observational data when the underlying causal structure may involve feedback loops. In particular, it addresses making causal inferences under the assumptions that the causal system which generated the data is linear and that there are no unmeasured common causes (latent variables). Linear causal structures of this type can be represented by non-recursive linear structural equation models. I present a correct discovery algorithm for linear cyclic models that contain no latent variables; the algorithm runs in polynomial time on sparse graphs. Given observational data as input, it outputs a representation of a class of non-recursive linear structural equation models. Under the assumption that all conditional independencies found in the observational data hold for structural reasons rather than because of particular parameter values, the algorithm discovers causal features of the structure which generated the data. A simple modification of the algorithm can be used as a decision procedure, with runtime polynomial in the number of vertices, for determining when two directed graphs (cyclic or acyclic) entail the same set of conditional independence relations. After proving that the algorithm is correct, I show that it is also complete, in the sense that if two linear structural equation models are used as conditional independence "oracles" for the discovery algorithm, then the algorithm gives the same output only if every conditional independence entailed by one model is entailed by the other, and vice versa. Put another way, the algorithm can be used as a decision procedure for determining Markov equivalence of directed cyclic graphs: if the conditional independencies associated with two cyclic graphs, when used as input, yield the same output from the algorithm, then the two graphs are equivalent.
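As a small illustrative sketch (not the paper's discovery algorithm), a non-recursive linear structural equation model with a feedback loop can be written as x = Bx + e, where B is the matrix of direct effects and e is a vector of independent errors (no latent confounding). When I − B is invertible, the equilibrium solution is x = (I − B)⁻¹e, which is how data from such a cyclic model can be simulated. The specific 3-variable model and its coefficients below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical 3-variable cyclic model: x0 -> x1, with a feedback loop
# between x1 and x2.  B[i, j] is the direct effect of x_j on x_i.
B = np.array([
    [0.0, 0.0, 0.0],   # x0 is exogenous
    [0.5, 0.0, 0.3],   # x1 depends on x0 and x2
    [0.0, 0.4, 0.0],   # x2 depends on x1, closing the cycle x1 <-> x2
])

rng = np.random.default_rng(0)
n = 10_000
e = rng.normal(size=(n, 3))      # independent errors: no unmeasured common causes

# Equilibrium of x = B x + e is x = (I - B)^{-1} e (I - B must be invertible).
A = np.linalg.inv(np.eye(3) - B)
X = e @ A.T                      # each row is one observation

# The model-implied covariance is (I - B)^{-1} (I - B)^{-T};
# the sample covariance should approximate it for large n.
implied = A @ A.T
sample = np.cov(X, rowvar=False)
print(np.round(sample, 2))
print(np.round(implied, 2))
```

Conditional independencies in such data (here, Gaussian, so vanishing partial correlations) are what a discovery algorithm of the kind described above would take as its oracle input.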