Combining Neural Population Recordings: Theory and Application
Modern electrophysiological and optical recording techniques allow for the simultaneous monitoring of large populations of neurons. However, current technologies are still limited in the total number of neurons they can simultaneously monitor, the brain areas in which they can be deployed, and their ability to stably record from a given set of neurons. This thesis develops machine learning theory and algorithms which address these limitations by combining neural population recordings across time and space. We combine neural recordings by leveraging two types of structure present in neural activity: clustering and low-dimensional structure.

We first use clustering in neural activity to develop self-recalibrating brain-computer interface (BCI) classifiers. Modern BCI classifiers require daily recalibration, which may present a burden to future patients in a clinical setting. We show that by exploiting the clustering in neural activity that arises in a classification setting, we can develop self-recalibrating classifiers capable of stable performance over 26 and 31 days of prerecorded data without any supervised recalibration or retraining.

In the remainder of the thesis, we develop theory and algorithms to exploit a second type of structure which commonly appears in neural recordings: low-dimensional structure. We first present novel matrix completion theory and an algorithm applicable to learning the full covariance matrix for a population of neurons recorded in overlapping blocks. The presented theory applies in general to completing low-rank symmetric positive semi-definite (SPSD) matrices, and we present sufficient and necessary conditions for this problem. We also show that matrix completion is possible under our sufficient conditions via nuclear norm minimization, a well-known matrix completion technique. These conditions are notable as they apply when the entries of a matrix are observed in a deterministic, structured manner, and they make no appeal to incoherence.
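The self-recalibration idea, carrying class identities across days by re-clustering new, unlabeled data, can be illustrated with a minimal sketch. This is not the thesis's classifier: the two-dimensional features, well-separated class means, uniform recording drift, and mean-seeded k-means below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3  # number of classes (e.g. reach targets)

# Day 1: labeled calibration data, one mean firing-rate pattern per class.
true_means = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
day1 = np.concatenate([m + rng.standard_normal((50, 2)) for m in true_means])
means = np.stack([day1[i * 50:(i + 1) * 50].mean(0) for i in range(k)])

# Day 2: the recording has drifted, and no labels are available.
drift = rng.standard_normal(2) * 0.5
day2 = np.concatenate([m + drift + rng.standard_normal((50, 2))
                       for m in true_means])

# Self-recalibration: re-cluster the day-2 data with k-means, seeding the
# clusters with yesterday's class means so that cluster identities (and
# hence class labels) carry over without any supervision.
for _ in range(20):
    labels = np.argmin(((day2[:, None] - means[None]) ** 2).sum(-1), axis=1)
    means = np.stack([day2[labels == c].mean(0) for c in range(k)])

acc = (labels == np.repeat(np.arange(k), 50)).mean()
print(f"unsupervised day-2 accuracy: {acc:.2f}")
```

Because the clusters move gradually relative to their separation, seeding each day's clustering with the previous day's means keeps the label assignment consistent over time.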
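The overlapping-block completion problem can be illustrated with a small numpy sketch. Note that this sketch uses an eigendecomposition-and-alignment construction rather than the nuclear norm minimization discussed above; the population size, rank, and block layout are illustrative, and exact recovery here relies on the overlap containing at least rank-many neurons in general position.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 12, 2
L = rng.standard_normal((n, r))
C = L @ L.T                          # full rank-r SPSD "covariance"

A = np.arange(0, 8)                  # first recorded block of neurons
B = np.arange(5, 12)                 # second block, overlapping A on 5..7
C_AA, C_BB = C[np.ix_(A, A)], C[np.ix_(B, B)]

def factor(M, r):
    """Rank-r symmetric factorization M = F F^T via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    top = np.argsort(w)[::-1][:r]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

F_A, F_B = factor(C_AA, r), factor(C_BB, r)

# The overlap rows of F_A and F_B describe the same neurons up to an
# orthogonal rotation of the latent space; solve the Procrustes problem
# to align the two local factorizations.
ovl = np.intersect1d(A, B)
iA, iB = np.searchsorted(A, ovl), np.searchsorted(B, ovl)
U, _, Vt = np.linalg.svd(F_B[iB].T @ F_A[iA])
R = U @ Vt                           # rotation taking F_B's basis onto F_A's

F = np.zeros((n, r))
F[A] = F_A
F[B] = F_B @ R                       # overlap rows agree after alignment
C_hat = F @ F.T                      # completed full covariance

print(np.allclose(C_hat, C))
```

The never-jointly-observed block `C[0:5, 8:12]` is filled in entirely by the aligned factors, which is only possible because the matrix is low rank and the blocks share enough neurons.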
We then extend our methods to problems beyond simply learning the joint covariance structure of a population of neurons. We describe how factor analysis (FA) models can be fit to the joint activity of an entire population of neurons recorded in blocks. Once fit, these models can be used to infer the low-dimensional state of the entire population at different moments in time, even when neural activity from only a subpopulation is observable at any given time. Building from our matrix completion theory, we present an intuitive set of necessary conditions for fitting FA models in such settings. We then validate our techniques in three application scenarios. First, we develop a self-recalibrating BCI regression algorithm, which is capable of maintaining steady performance in the face of simulated electrode instabilities. Second, we demonstrate that the progression of the low-dimensional state of a population of neurons can be tracked in a learning setting, even when it is impossible to record from a fixed set of neurons over the entire period of interest. Finally, we apply our methods to the activity of neural populations in two brain areas to identify subspaces potentially important for inter-areal communication, even when it is only possible to record from one neuron at a time in one of the areas.
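The kind of inference described above, reading out the shared low-dimensional state from whichever subpopulation happens to be observed, can be sketched for a standard FA model. The loading matrix below is drawn at random rather than fit to data, and all dimensions are illustrative; the posterior-mean formula is the standard one for an FA model with an isotropic latent prior.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 3                              # neurons, latent dimensions
Lam = rng.standard_normal((n, d))         # FA loadings (assumed already fit)
Psi = np.diag(rng.uniform(0.1, 0.5, n))   # private (per-neuron) noise

# Simulate one time point: a single latent state z drives all n neurons.
z = rng.standard_normal(d)
y = Lam @ z + rng.multivariate_normal(np.zeros(n), Psi)

def infer_latent(y_obs, obs):
    """Posterior mean E[z | y_obs] using only the observed neurons."""
    Lo = Lam[obs]
    Po = Psi[np.ix_(obs, obs)]
    S = Lo @ Lo.T + Po                    # marginal covariance of observed set
    return Lo.T @ np.linalg.solve(S, y_obs)

obs = np.arange(0, 12)                    # only a subpopulation is recorded
z_hat = infer_latent(y[obs], obs)
print(np.round(z_hat, 2))                 # estimate of the population's state
```

Because every neuron's activity reflects the same latent state, any sufficiently informative subpopulation yields an estimate of the full population's low-dimensional trajectory, which is what makes stitching across recording sessions possible.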