File(s) under embargo until file(s) become available.

Reason: patent application

Neural variability: structure, sources, control, and data augmentation

posted on 09.12.2021, 15:51 by Akash Umakantha
Variability is an important aspect of neural systems, both in the brain and in artificial networks. In the brain, neurons respond differently from trial to trial, even to repeated presentations of the exact same stimulus, and this variability is often correlated across neurons. Previous work has posited that shared trial-to-trial variability (i.e., correlated neuronal variability) is behaviorally relevant and could have important implications for computation and information encoding. In the first three sections of this thesis, I present work to further the understanding of shared variability in the brain. To better understand the structure of shared variability, we related pairwise neuronal correlations to population dimensionality reduction methods. To investigate volitional control of shared variability in non-motor brain areas, we designed a brain-computer interface for prefrontal cortex. Finally, to elucidate the sources of variability, we developed a method called pCCA-FA to partition the local (i.e., single brain area) and global (i.e., brain-wide) factors that contribute to shared variability.

Variability also plays an important role in learning, both in the brain and in artificial neural networks (i.e., deep learning). Data augmentation increases the size, quality, and variability of datasets to improve the training of deep learning models. In the final section, we empirically evaluated how different augmentation setups perform across model architectures for image classification. We introduced a new augmentation, called StyleAug, which outperforms other state-of-the-art augmentations for training vision transformers (ViTs).

Overall, this dissertation furthers the understanding of variability in both natural and artificial neural systems. For artificial neural networks, this work highlights that one should consider different types of training data variability (i.e., augmentations) for different model architectures. For neuroscience, this work advances the understanding of the structure of shared neuronal variability, its distinct sources, and the degree to which it can be controlled.




Degree Type



Neuroscience Institute

Degree Name

  • Doctor of Philosophy (PhD)


Matthew Smith, Byron Yu
