Information Flow in Neural Circuits
Thesis posted on 18.02.2022, 22:15 by Praveen Venkatesh
Neuroscience has witnessed rapid advances in the last half century, with modern neurotechnologies now allowing us to record simultaneously from hundreds, if not thousands, of
neurons. It is conceivable that in the near future, we might be able to record from every single neuron in the brain, or in a subsystem—indeed, this is already the case for several
small animals. Would this alone suffice to help us obtain a complete understanding of the brain? Recent experiences with artificial neural networks suggests that this is not the case: even when we can “record” from every single node of a neural circuit, make interventions and understand learning rules, we may not truly understand a system. This calls for new theoretical and computational frameworks that are capable of providing objective explanations about how these neural circuits function. We need new tools for providing explanations at each of Marr’s levels, ultimately leading to an understanding of how neural mechanisms give rise to behavior. Despite the tremendous advances in
experimental neuroscience and neurotechnology, there is a considerable gap in our theoretical and computational capability to extract such an understanding.

This thesis aims to address the aforementioned theoretical gap in one narrowly focused problem domain—information flow. Inferring the flows of information in healthy and diseased
states of the brain is essential in neuroscience because it could help us understand how the brain performs specific tasks. In particular, we require an understanding of information flow that enables us to (i) track the flows of one or more specific messages; (ii) capture how these
flows evolve over time, especially in feedback systems; (iii) draw meaningful interpretations about the underlying computations; and (iv) identify interventions that can modulate flows to treat brain diseases and disorders. Existing statistical tools used to infer information flows remain far from providing such insights.
This thesis provides a rigorous theoretical foundation for information flow, designed to address the aforementioned requirements. The main contribution of this thesis is a systematic framework called M-information flow, which comprises a model of the brain tied to computation and a formal definition of information flow that satisfies our
intuition. Through simulations of neural circuits, it is also shown how this framework can be applied in practice, and further, how we can obtain a more granular understanding of
information representation by quantifying the unique, redundant and synergistic components of information about a message.

This thesis also explores theoretical and empirical connections between M-information flow and the field of causal inference. Theoretically, alternative approaches to defining information flow using counterfactual measures are established. Empirically, experiments on artificial neural networks demonstrate that the proposed measure of flow can inform interventions in simple settings. The results of these experiments indicate that the
M-information flow framework can supply the interpretability needed for diagnosing and treating brain diseases and disorders.

Lastly, this thesis considers the proposed framework in the context of existing tools used to infer information flow, such as Granger Causality. A counterexample based on communication in feedback networks is presented, wherein Granger Causality fails to infer the intuitive direction of information flow. The M-information flow framework, however, correctly recovers the expected direction of flow, while also providing deeper insight into the nature of the communication strategy. The thesis concludes with a discussion of the limitations of the proposed framework, along with potential prescriptions for overcoming some of these limitations through advances in neurotechnology.
Department: Electrical and Computer Engineering
Degree: Doctor of Philosophy (PhD)