Towards Inference and Learning in Dynamic Bayesian Networks using Generalized Evidence
This report introduces a novel approach to inference and learning in Dynamic Bayesian Networks (DBNs). The traditional approach to inference and learning in DBNs involves conditioning on one or more finite-length observation sequences. In this report, we instead condition on what we call generalized evidence: a possibly infinite set of behaviors compactly encoded as a formula, Φ, in temporal logic. We then introduce exact algorithms for solving inference problems (i.e., computing P(X | Φ)) and learning problems (i.e., computing P(Θ | Φ)) using techniques from the field of model checking. The advantage of our approach is that it enables scientists to pose and solve inference and learning problems that cannot be expressed using traditional approaches. The contributions of this report are: (1) the introduction of inference and learning problems over generalized evidence, (2) exact algorithms for solving these problems for a restricted class of DBNs, and (3) a series of case studies demonstrating the scalability of our approach. We conclude by discussing directions for future research.
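To make conditioning on generalized evidence concrete, the following minimal sketch (all model parameters, the horizon, and the formula are hypothetical illustrations, not taken from the report) computes P(X | Φ) for a toy two-state DBN by brute-force path enumeration, with Φ a bounded "eventually" property of the kind expressible in temporal logic:

```python
import itertools

# Toy 2-state Markov chain, the simplest DBN; all numbers are illustrative.
init = [0.8, 0.2]        # P(X_0)
T = [[0.9, 0.1],         # P(X_{t+1} | X_t): row = current state
     [0.3, 0.7]]

def path_prob(path):
    """Probability of a single state trajectory under the chain."""
    p = init[path[0]]
    for a, b in zip(path, path[1:]):
        p *= T[a][b]
    return p

def phi(path):
    # Generalized evidence Φ: "state 1 is reached within the first 3 steps"
    # (a bounded 'eventually'); this encodes a set of trajectories rather
    # than a single observation sequence.
    return 1 in path

horizon = 3
paths = list(itertools.product([0, 1], repeat=horizon + 1))

z = sum(path_prob(p) for p in paths if phi(p))   # P(Φ)
post = [sum(path_prob(p) for p in paths if phi(p) and p[-1] == s) / z
        for s in (0, 1)]                          # P(X_3 = s | Φ)
print(post)
```

This enumeration is exponential in the horizon; the report's contribution is exact algorithms that avoid such enumeration via model-checking techniques, but the quantity computed is the same posterior.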