Essential nonlinear properties in neural decoding
Abstract
Sensory data about most natural task-relevant variables are confounded by task-irrelevant sensory variations, called nuisance variables. To be useful, the sensory signals that encode the relevant variables must be untangled from the nuisance variables through nonlinear recoding transformations before the brain can use or decode them to drive behavior. The information to be untangled is represented in the cortex by the activity of large populations of neurons, constituting a nonlinear population code.
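As a toy illustration of this untangling problem (a minimal sketch with assumed model responses, not an analysis from the thesis), the simulation below mixes a relevant variable with a sign-flipping nuisance in two model neurons: no linear readout recovers the relevant variable, while a simple quadratic recoding does.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 20000
s = rng.uniform(-1, 1, n_trials)                 # task-relevant variable (e.g. orientation)
g = rng.choice([-1.0, 1.0], n_trials)            # nuisance variable (e.g. stimulus polarity)

# Two model neurons whose responses mix the relevant variable with the nuisance.
r1 = g * (1.0 + 0.8 * s) + 0.05 * rng.standard_normal(n_trials)
r2 = g * (1.0 - 0.8 * s) + 0.05 * rng.standard_normal(n_trials)

def readout_r2(features, target):
    """Variance explained by a least-squares readout of the given features."""
    X = np.c_[features, np.ones(len(target))]
    w = np.linalg.lstsq(X, target, rcond=None)[0]
    return 1.0 - np.var(target - X @ w) / np.var(target)

# A linear readout fails: every linear combination of r1, r2 is scrambled by the nuisance.
print("linear readout    R^2:", round(readout_r2(np.c_[r1, r2], s), 3))
# A quadratic recoding untangles it: r1^2 - r2^2 is proportional to s, independent of g.
print("quadratic readout R^2:", round(readout_r2(np.c_[r1**2, r2**2, r1*r2], s), 3))
```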
In this thesis I provide three major contributions to theoretical neuroscience.
First, I provide a new way of thinking about nonlinear population codes and nuisance variables, leading to a theory of nonlinear feedforward decoding of neural population activity. This theory obeys fundamental mathematical limitations on information content that are inherited from the sensory periphery, producing redundant codes when there are many more cortical neurons than primary sensory neurons.
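A minimal sketch of this inherited-information limit (an assumed random feedforward expansion, not the thesis model): decoding error from a large cortical population saturates with population size, because the available information was already capped by the noisy sensory periphery, so the extra cortical neurons are redundant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_periph, n_cortex, n_trials = 20, 500, 20000

stim = rng.uniform(-1.0, 1.0, n_trials)                      # task-relevant variable
pref = np.linspace(-1.0, 1.0, n_periph)                      # peripheral tuning preferences
periph = np.exp(-(stim[:, None] - pref[None, :])**2 / 0.1)   # mean peripheral responses
periph += 0.2 * rng.standard_normal(periph.shape)            # peripheral noise: the information bottleneck

W = rng.standard_normal((n_periph, n_cortex)) / np.sqrt(n_periph)
cortex = np.tanh(periph @ W)                                  # feedforward nonlinear expansion

def decode_mse(X, y, lam=1e-3):
    """Mean squared error of a regularized linear readout, fit on half the trials."""
    n = len(y)
    Xb = np.c_[X, np.ones(n)]
    tr, te = slice(0, n // 2), slice(n // 2, n)
    w = np.linalg.solve(Xb[tr].T @ Xb[tr] + lam * np.eye(Xb.shape[1]), Xb[tr].T @ y[tr])
    return np.mean((Xb[te] @ w - y[te]) ** 2)

# Decoding error saturates long before all 500 cortical neurons are used:
# the extra neurons are redundant because the information was capped at the periphery.
for k in (10, 50, 100, 250, 500):
    print(f"{k:4d} cortical neurons -> MSE {decode_mse(cortex[:, :k], stim):.4f}")
print(f"  periphery ({n_periph} neurons) -> MSE {decode_mse(periph, stim):.4f}")
```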
Second, and critically for experimental testing, I provide a theory that predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices if the brain uses its nonlinear population codes optimally: more informative activity patterns should be more strongly correlated with choices. To validate this theory, I show that when primates discriminate whether oriented images were sampled from a wide or a narrow distribution, quadratic statistics of primary visual cortex activity match this predicted pattern.
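The sketch below illustrates the predicted relationship in a toy version of such a task (all model parameters are assumptions, not the thesis analysis): for an optimal readout of quadratic statistics, each statistic's correlation with the decision variable should equal its own sensitivity divided by the population sensitivity, so more informative statistics are more choice-correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neur, n_trials = 8, 40000
sig_wide, sig_narrow = 1.0, 0.6                                 # widths of the two stimulus distributions

cat = rng.integers(0, 2, n_trials)                              # 0 = narrow, 1 = wide
s = rng.standard_normal(n_trials) * np.where(cat == 1, sig_wide, sig_narrow)
gain = rng.uniform(0.5, 1.5, n_neur)
r = gain[None, :] * s[:, None] + 0.5 * rng.standard_normal((n_trials, n_neur))

# Quadratic statistics r_i * r_j (including squares); linear statistics are useless here
# because the responses have zero mean under both categories.
iu = np.triu_indices(n_neur)
q = (r[:, :, None] * r[:, None, :])[:, iu[0], iu[1]]

# Optimal linear readout of the quadratic statistics drives the choice.
mu0, mu1 = q[cat == 0].mean(0), q[cat == 1].mean(0)
Sigma = np.cov(np.vstack([q[cat == 0] - mu0, q[cat == 1] - mu1]).T)
w = np.linalg.solve(Sigma, mu1 - mu0)
dec = q @ w                                                     # decision variable

# Predicted choice correlation: each statistic's d' divided by the population d'.
d_k = (mu1 - mu0) / np.sqrt(np.diag(Sigma))
d_pop = np.sqrt((mu1 - mu0) @ w)
predicted_cc = d_k / d_pop

# Measured choice correlation, computed within category to remove stimulus-driven covariation.
def corr_within(x, y, mask):
    x, y = x[mask] - x[mask].mean(0), y[mask] - y[mask].mean()
    return (x * y[:, None]).mean(0) / (x.std(0) * y.std())

measured_cc = 0.5 * (corr_within(q, dec, cat == 0) + corr_within(q, dec, cat == 1))
print(np.corrcoef(predicted_cc, measured_cc)[0, 1])             # should be close to 1 under these assumptions
```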
Third, I contribute new concepts and methods for characterizing behaviorally relevant nonlinear computation downstream of recorded neurons. Since many different neural transformations can generate the same behavioral output, I define equivalence classes of neural transformations based on the degeneracy of the decoding. This suggests that we can understand a neural transformation by picking a convenient nonlinear basis that approximates it up to an equivalence relation set by the intrinsic uncertainty, instead of trying to reproduce the biophysical details. I then extend the concept of redundant codes to a more general scenario, in which different subsets of neural response statistics contain limited information about the stimulus. This extension allows us to understand neural computation at the representational level: extracting representations for different subsets of nonlinear neural statistics, characterizing how these representations transform information about task-relevant variables, and studying the coarse-grained computations on these representations.
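A minimal sketch of the equivalence-class idea (illustrative assumptions only, not the thesis definition): two different nonlinear transformations of the same noisy population can be treated as equivalent when their decoded outputs disagree by much less than the intrinsic decoding uncertainty set by neural noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neur, n_trials = 12, 20000
s = rng.uniform(-1, 1, n_trials)                                   # task-relevant variable
pref = np.linspace(-1, 1, n_neur)
r = np.exp(-(s[:, None] - pref[None, :])**2 / 0.08) \
    + 0.3 * rng.standard_normal((n_trials, n_neur))                # noisy population responses

def readout(features, target):
    """Least-squares readout; returns the decoded estimate on all trials."""
    X = np.c_[features, np.ones(len(target))]
    w = np.linalg.lstsq(X, target, rcond=None)[0]
    return X @ w

# One "actual" downstream transformation: a saturating nonlinearity of the responses.
est_actual = readout(np.tanh(2.0 * r), s)
# A convenient stand-in basis: linear plus quadratic features of the same responses.
iu = np.triu_indices(n_neur)
quad = (r[:, :, None] * r[:, None, :])[:, iu[0], iu[1]]
est_basis = readout(np.c_[r, quad], s)

intrinsic_rmse = np.sqrt(np.mean((est_basis - s) ** 2))            # uncertainty set by neural noise
basis_gap = np.sqrt(np.mean((est_basis - est_actual) ** 2))        # disagreement between the two transforms
# The two transformations count as equivalent when their disagreement is well below
# the intrinsic uncertainty, i.e. when basis_gap << intrinsic_rmse.
print(f"intrinsic uncertainty: {intrinsic_rmse:.3f}   basis disagreement: {basis_gap:.3f}")
```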
Citation
Yang, Qianli. "Essential nonlinear properties in neural decoding." (2018) Diss., Rice University. https://hdl.handle.net/1911/105815.