Browsing by Author "Vasudeva Raju, Rajkumar"

Now showing 1 - 2 of 2
    Inference by Reparameterization using Neural Population Codes
    (2015-12-04) Vasudeva Raju, Rajkumar; Pitkow, Xaq; Aazhang, Behnaam; Ernst, Philip; Josic, Kresimir
    Behavioral experiments on humans and animals suggest that the brain performs probabilistic inference to interpret its environment. Here we present a general-purpose, biologically plausible implementation of approximate inference based on Probabilistic Population Codes (PPCs). PPCs are distributed neural representations of probability distributions that are capable of implementing marginalization and cue-integration in a biologically plausible way. By connecting multiple PPCs together, we can naturally represent multivariate probability distributions, and capture the conditional dependency structure by setting those connections as in a probabilistic graphical model. To perform inference in general graphical models, one convenient and often accurate algorithm is Loopy Belief Propagation (LBP), a ‘message-passing’ algorithm that uses local marginalization and integration operations to perform approximate inference efficiently even for complex models. In LBP, a message from one node to a neighboring node is a function of incoming messages from all neighboring nodes, except the recipient. This exception renders it neurally implausible because neurons cannot readily send many different signals to many different target neurons. Interestingly, however, LBP can be reformulated as a sequence of Tree-based Re-Parameterization (TRP) updates on the graphical model which re-factorizes a portion of the probability distribution. Although this formulation still implicitly has the message exclusion problem, we show this can be circumvented by converting the algorithm to a nonlinear dynamical system with auxiliary variables and a separation of time-scales. By combining these ideas, we show that a network of PPCs can represent multivariate probability distributions and implement the TRP updates for the graphical model to perform probabilistic inference. Simulations with Gaussian graphical models demonstrate that the performance of the PPC-based neural network implementation of TRP updates for probabilistic inference is comparable to the direct evaluation of LBP, and thus provides a compelling substrate for general, probabilistic inference in the brain.
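The message exclusion this abstract describes, where each outgoing message pools inputs from every neighbor except the recipient, is easiest to see in code. Below is a minimal sketch of loopy belief propagation for a scalar Gaussian graphical model, the model class used in the paper's simulations. The function name, example graph, and parameters are illustrative assumptions, not code from the paper.

```python
import numpy as np

def gaussian_lbp(J, h, iters=50):
    """Loopy BP for a scalar Gaussian model p(x) ~ exp(-x'Jx/2 + h'x).

    J : (n, n) symmetric precision matrix; nonzero off-diagonals define edges.
    h : (n,) potential vector.
    Returns approximate marginal means and variances.
    """
    n = len(h)
    P = np.zeros((n, n))  # P[i, j]: precision of the message i -> j
    m = np.zeros((n, n))  # m[i, j]: potential of the message i -> j
    nbrs = [[j for j in range(n) if j != i and J[i, j] != 0] for i in range(n)]
    for _ in range(iters):
        P_new, m_new = np.zeros_like(P), np.zeros_like(m)
        for i in range(n):
            for j in nbrs[i]:
                # Pool the node potential with messages from every neighbor
                # of i EXCEPT the recipient j -- the exclusion the abstract
                # identifies as neurally implausible.
                a = J[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                b = h[i] + sum(m[k, i] for k in nbrs[i] if k != j)
                P_new[i, j] = -J[i, j] ** 2 / a
                m_new[i, j] = -J[i, j] * b / a
        P, m = P_new, m_new
    prec = np.diag(J) + P.sum(axis=0)  # marginal precision at each node
    mean = (h + m.sum(axis=0)) / prec  # marginal mean at each node
    return mean, 1.0 / prec

# Usage: a 3-node chain (a tree), where loopy BP is exact.
J = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
h = np.array([1.0, 0.0, -1.0])
mean, var = gaussian_lbp(J, h)
print(np.allclose(mean, np.linalg.solve(J, h)))  # True
```

On a tree-structured precision matrix, as in the usage example, these updates converge to the exact marginals; on loopy graphs the fixed-point means remain exact whenever the iteration converges, while the variances are generally approximate. This is the regime the paper's TRP reformulation targets.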
    Inferring Implicit Inference
    (2019-12-05) Vasudeva Raju, Rajkumar; Pitkow, Xaq
One of the biggest challenges in theoretical neuroscience is to understand how the collective activity of neuronal populations generates behaviorally relevant computations. Repeating patterns of structure and function in the cerebral cortex suggest that the brain employs a repeating set of elementary or “canonical” computations. Neural representations, however, are distributed, so it remains an open challenge to define these canonical computations, because the relevant operations are only indirectly related to single-neuron transformations. In this thesis, I present a theory-driven mathematical framework for inferring canonical computations from large-scale neural measurements. This work is motivated by one important class of cortical computation, probabilistic inference. In the first part of the thesis, I develop the Neural Message Passing theory, which posits that the brain has a structured internal model of the world, and that it approximates probabilistic inference on this model using nonlinear message-passing implemented by recurrently connected neural population codes. In the second part of the thesis, I present Inferring Implicit Inference, a principled framework for inferring canonical computations from large-scale neural data that is based on the theory of neural message passing. This general data analysis framework simultaneously finds (i) the neural representation of relevant variables, (ii) interactions between these latent variables that define the brain's internal model of the world, and (iii) canonical message-functions that specify the implicit computations. As a concrete demonstration of this framework, I analyze artificial neural recordings generated by a model brain that implicitly implements advanced mean-field inference. Given external inputs and noisy neural activity from the model brain, I successfully estimate the latent dynamics and canonical parameters that explain the simulated measurements. Analysis of these models reveals certain features of experiment design required to successfully extract canonical computations from neural data. In this first example application, I used a simple polynomial basis to characterize the latent canonical transformations. While this construction matched the true model, it is unlikely to capture a real brain's nonlinearities efficiently. To address this, I develop a general, flexible variant of the framework based on Graph Neural Networks, to infer approximate inference algorithms with a known neural embedding. Finally, I develop a computational pipeline to analyze large-scale recordings from the mouse visual cortex generated in response to naturalistic stimuli designed to highlight the influence of lateral connectivity. The first practical application of this framework did not reveal any compelling influences of lateral connectivity. However, these preliminary results provide valuable insights into which assumptions in our underlying models and which aspects of experiment design should be refined to reveal canonical properties of the brain's distributed nonlinear computations.
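As a toy illustration of the fitting step this abstract describes, characterizing latent canonical transformations with a simple polynomial basis, the sketch below simulates low-dimensional latent dynamics driven by known external inputs and recovers the polynomial coefficients by least squares. The quadratic dynamics, the basis, and all variable names are assumptions for illustration; the full framework additionally estimates the neural embedding and the interaction structure, which this sketch takes as given.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth latent dynamics standing in for canonical message updates:
# x_{t+1} = A x_t + B (x_t)^2 + u_t + noise, with known external input u_t.
# A and B play the role of the "canonical parameters" to recover.
n, T = 3, 5000
A = 0.8 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = 0.05 * rng.standard_normal((n, n))
u = 0.3 * rng.standard_normal((T, n))           # known external inputs
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ (x[t] ** 2) + u[t] + 0.05 * rng.standard_normal(n)

# Degree-2 polynomial basis (elementwise, no cross terms, for brevity).
Phi = np.hstack([x[:-1], x[:-1] ** 2])          # (T-1, 2n) regressors
Y = x[1:] - u[:-1]                              # next-step latents, inputs removed

# Least-squares estimate of the latent transformation's coefficients.
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
A_hat, B_hat = coef[:n].T, coef[n:].T
print(np.abs(A_hat - A).max(), np.abs(B_hat - B).max())  # small recovery errors
```

The same recipe generalizes: a richer family of message functions simply means a richer regressor matrix, which is, roughly, where the thesis' Graph Neural Network variant replaces the fixed polynomial basis with a learned one.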