Browsing by Author "Yang, Qianli"
Item: Essential nonlinear properties in neural decoding (2018-06-04)
Yang, Qianli; Pitkow, Xaq

The sensory data about most natural task-relevant variables are confounded by task-irrelevant sensory variations, called nuisance variables. To be useful, the sensory signals that encode the relevant variables must be untangled from the nuisance variables through nonlinear recoding transformations before the brain can use or decode them to drive behavior. The information to be untangled is represented in the cortex by the activity of large populations of neurons, constituting a nonlinear population code. In this thesis I make three major contributions to theoretical neuroscience. First, I provide a new way of thinking about nonlinear population codes and nuisance variables, leading to a theory of nonlinear feedforward decoding of neural population activity. This theory obeys fundamental mathematical limitations on information content that are inherited from the sensory periphery, producing redundant codes when there are many more cortical neurons than primary sensory neurons. Second, and critically for experimental testing, I provide a theory that predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices if the brain uses its nonlinear population codes optimally: more informative patterns should be more correlated with choices. To validate this theory, I show that when primates discriminate between a wide and a narrow distribution from which oriented images could be sampled, quadratic statistics of primary visual cortex activity match this predicted pattern. Third, I contribute new concepts and methods to characterize behaviorally relevant nonlinear computation downstream of recorded neurons. Since many neural transformations can generate the same behavioral output, I define a new concept of equivalence classes for neural transformations based on the degeneracy of the decoding.
This suggests that we can understand neural transformations by picking a convenient nonlinear basis that approximates the actual neural transformation up to an equivalence relation given by the intrinsic uncertainty, instead of trying to reproduce the biophysical details. I then extend the concept of redundant codes to a more general scenario: when different subsets of neural response statistics contain limited information about the stimulus. This extension allows us to understand neural computation at the representational level: extracting representations for different subsets of nonlinear neural statistics, characterizing how these representations transform information about task-relevant variables, and studying the coarse-grained computations on these representations.

Item: Nonlinear neural codes (2015-12-03)
Yang, Qianli; Pitkow, Xaq; Aazhang, Behnaam; Johnson, Don H.; Baraniuk, Richard G.; Tolias, Andreas

Most natural task-relevant variables are encoded in the early sensory cortex in a form that can only be decoded nonlinearly. Yet despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the most relevant existing quantitative model of nonlinear codes is inconsistent with known architectural features of the brain. In particular, for large population sizes, such a code would contain more information than its sensory inputs, in violation of the data processing inequality. In this model, the noise correlation structure provides the population with an information content that scales with the size of the cortical population. Such a correlation structure could not arise in cortical populations that are much larger than their sensory input populations. Here we provide a better theory of nonlinear population codes that obeys the data processing inequality by generalizing recent work on information-limiting correlations in linear population codes.
Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. While these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is limited by correlated noise or by downstream suboptimality. Finally, we discuss simple sensory tasks likely to require approximately quadratic decoding, to which our theory applies.

Item: Revealing nonlinear neural decoding by analyzing choices (Springer Nature, 2021)
Yang, Qianli; Walker, Edgar; Cotton, R. James; Tolias, Andreas S.; Pitkow, Xaq

Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, describing redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. This relationship holds for optimal feedforward networks of modest complexity, when experiments are performed under natural nuisance variation.
We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.
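The central prediction shared by these abstracts, that under optimal decoding more informative activity patterns fluctuate in step with the choice, can be illustrated with a minimal linear-Gaussian sketch. This is not the authors' model: the tuning slopes, independent noise, and linear readout below are simplifying assumptions chosen only to make the qualitative effect visible (in the papers the relevant "statistics" are nonlinear, e.g. quadratic, functions of neural activity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n response statistics carry different amounts of
# information about a scalar stimulus s, via different tuning slopes f.
n, trials = 5, 20000
f = np.array([0.1, 0.3, 0.6, 1.0, 1.5])  # larger slope = more informative
sigma = 1.0                              # independent noise, for simplicity

s = rng.normal(0.0, 1.0, trials)                       # stimulus per trial
r = s[:, None] * f + rng.normal(0.0, sigma, (trials, n))

# Optimal linear readout for independent Gaussian noise: w proportional
# to f / sigma^2. The readout's output stands in for the "choice".
w = f / sigma**2

# Choice correlation of each statistic: correlate the noise part of each
# statistic with the noise part of the decoder's estimate.
noise_r = r - s[:, None] * f
noise_est = noise_r @ w
C = np.array([np.corrcoef(noise_r[:, k], noise_est)[0, 1]
              for k in range(n)])

# Under optimal decoding, C_k tracks each statistic's sensitivity f_k/sigma,
# so choice correlations increase with informativeness.
print(C)
```

Analytically, with independent noise the correlation of statistic k with the optimal estimate is w_k / sqrt(sum_j w_j^2), i.e. proportional to its sensitivity, which is the monotonic "more informative, more choice-correlated" pattern the theory tests for in data.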