Inferring Implicit Inference

Date
2019-12-05
Abstract

One of the biggest challenges in theoretical neuroscience is to understand how the collective activity of neuronal populations generates behaviorally relevant computations. Repeating patterns of structure and function in the cerebral cortex suggest that the brain employs a repeating set of elementary, or “canonical,” computations. Neural representations, however, are distributed, so defining these canonical computations remains an open challenge: the relevant operations are only indirectly related to single-neuron transformations. In this thesis, I present a theory-driven mathematical framework for inferring canonical computations from large-scale neural measurements. This work is motivated by one important class of cortical computation, probabilistic inference.

In the first part of the thesis, I develop the Neural Message Passing theory, which posits that the brain has a structured internal model of the world, and that it approximates probabilistic inference on this model using nonlinear message-passing implemented by recurrently connected neural population codes.
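
To give a concrete flavor of such a computation, the sketch below implements a generic damped mean-field update for a pairwise binary model in Python. It is an illustrative toy rather than the thesis's population-code formulation; the coupling matrix, evidence vector, and damping constant are assumptions chosen only for the example.

```python
import numpy as np

def mean_field_update(mu, J, h, n_iters=50, damping=0.5):
    """Damped naive mean-field inference for a pairwise binary model.

    mu : initial mean estimates of the latent variables, shape (n,)
    J  : symmetric coupling matrix with zero diagonal, shape (n, n)
    h  : local evidence (external field), shape (n,)

    Each sweep is a nonlinear "message": every variable pools its
    neighbors' current estimates through the couplings and squashes
    the result through tanh, as a recurrent network might.
    """
    for _ in range(n_iters):
        target = np.tanh(h + J @ mu)                 # nonlinear message-passing step
        mu = (1 - damping) * mu + damping * target   # damped recurrent update
    return mu

# Toy example: a chain of three coupled variables with evidence on the first.
J = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.8],
              [0.0, 0.8, 0.0]])
h = np.array([0.5, 0.0, 0.0])
print(mean_field_update(np.zeros(3), J, h))
```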

In the second part of the thesis, I present Inferring Implicit Inference, a principled framework, based on the theory of neural message passing, for inferring canonical computations from large-scale neural data. This general data-analysis framework simultaneously finds (i) the neural representation of the relevant variables, (ii) the interactions between these latent variables that define the brain's internal model of the world, and (iii) the canonical message functions that specify the implicit computations.
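
The sketch below illustrates what "simultaneously finds" could look like in the simplest possible setting: a single least-squares objective over a hypothetical linear embedding U, a latent coupling matrix J, and two message-function parameters theta, fit to placeholder data with scipy. The names, shapes, and the specific message function are assumptions for illustration only; the estimation procedure developed in the thesis is more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy version of the joint estimation problem: from noisy
# recordings R (T x N) and stimuli S (T x K), recover (i) a linear neural
# embedding U, (ii) pairwise interactions J between K latent variables, and
# (iii) two parameters theta of a shared "canonical" message function that
# drives the latent dynamics. All names and shapes are illustrative.

rng = np.random.default_rng(0)
T, N, K = 100, 12, 4            # time points, neurons, latent variables

def message_fn(x, J, s, theta):
    # One canonical update, applied identically to every latent variable.
    return np.tanh(theta[0] * (J @ x) + theta[1] * s)

def unpack(params):
    U = params[:N * K].reshape(N, K)
    J = params[N * K:N * K + K * K].reshape(K, K)
    theta = params[-2:]
    return U, J, theta

def loss(params, R, S):
    U, J, theta = unpack(params)
    x, err = np.zeros(K), 0.0
    for t in range(T):
        x = message_fn(x, J, S[t], theta)       # latent dynamics
        err += np.sum((R[t] - U @ x) ** 2)      # mismatch with recordings
    return err / T

# With real data, R and S would come from experiments; placeholders here.
R, S = rng.normal(size=(T, N)), rng.normal(size=(T, K))
params0 = rng.normal(scale=0.1, size=N * K + K * K + 2)
fit = minimize(loss, params0, args=(R, S), method="L-BFGS-B",
               options={"maxiter": 25})
U_hat, J_hat, theta_hat = unpack(fit.x)
print("final loss:", fit.fun)
```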

As a concrete demonstration of this framework, I analyze artificial neural recordings generated by a model brain that implicitly implements advanced mean-field inference. Given external inputs and noisy neural activity from the model brain, I successfully estimate the latent dynamics and canonical parameters that explain the simulated measurements. Analysis of these models reveals features of experimental design that are required to successfully extract canonical computations from neural data. In this first example application, I use a simple polynomial basis to characterize the latent canonical transformations. While this construction matches the true model, it is unlikely to capture a real brain's nonlinearities efficiently. To address this, I develop a more general and flexible variant of the framework, based on Graph Neural Networks, to infer approximate inference computations with a known neural embedding.
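
As one way to picture the polynomial parameterization, the toy function below expands a variable's own estimate and a pooled neighbor estimate in monomials up to a fixed degree, with unknown coefficients standing in for the canonical transformation to be estimated. The function name, pooling rule, and degree are assumptions for illustration; the Graph Neural Network variant would replace this fixed basis with a learned message network.

```python
import numpy as np

def polynomial_message(x_i, x_neighbors, coeffs, degree=2):
    """Hypothetical polynomial basis for one canonical message.

    The new estimate for variable i is a weighted sum of monomials in its
    own current estimate and the mean of its neighbors' estimates; `coeffs`
    are the unknowns the framework would estimate from neural data.
    """
    pooled = np.mean(x_neighbors)
    features = np.array([x_i ** p * pooled ** q
                         for p in range(degree + 1)
                         for q in range(degree + 1 - p)])
    return float(coeffs @ features)

# Toy usage: the degree-2 basis has 6 monomials (1, y, y^2, x, xy, x^2),
# where x is the variable's own estimate and y is the pooled neighbor estimate.
coeffs = np.array([0.0, 0.6, 0.0, 0.3, 0.1, 0.0])
print(polynomial_message(0.2, [0.5, -0.1, 0.4], coeffs))
```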

Finally, I develop a computational pipeline to analyze large-scale recordings from the mouse visual cortex in response to naturalistic stimuli designed to highlight the influence of lateral connectivity. This first practical application of the framework did not reveal any compelling influences of lateral connectivity. However, these preliminary results provide valuable insight into which assumptions in our underlying models, and which aspects of experimental design, should be refined to reveal canonical properties of the brain's distributed nonlinear computations.

Degree
Doctor of Philosophy
Type
Thesis
Keywords
neural message passing, probabilistic inference, probabilistic population codes
Citation

Vasudeva Raju, Rajkumar. "Inferring Implicit Inference." (2019) Diss., Rice University. https://hdl.handle.net/1911/107811.

Rights
Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.