Browsing by Author "Boominathan, Lokesh"
Now showing 1 - 2 of 2
Item (Embargo): Cost of Computation (2024-08-06)
Boominathan, Lokesh; Pitkow, Xaq

The brain's computations are constrained by factors such as the metabolic cost of neural activity and signal noise. In this thesis, we investigate how the brain performs complex tasks under such constraints, focusing on two specific tasks. First, we analyze how the brain makes inferences from ambiguous sensory information. This task involves optimizing inference performance while accounting for the energy cost of transmitting reliable information between different cortical regions. We found that for sensory inputs that are sufficiently predictable, it is advantageous to send predictions from higher to lower cortical areas to conserve energy; when signals are harder to predict, it is best to send the raw sensory input directly from lower to higher cortical regions. We demonstrate how the predictability required for sending predictions changes under different computational constraints. Second, we explore a task where attentiveness is required to earn rewards but incurs a cost, and we ask how the brain balances reducing attention costs against obtaining rewards. To do this, we propose a reinforcement learning-based normative model that determines how to strategically deploy attention and how that strategy varies with task utility and signal statistics. Our model suggests that efficient attention involves alternating blocks of high and low attention. In extreme cases, where sensory input is very weak during low-attention states, high attention is deployed rhythmically.

Item: Inference as Control predicts Phase transitions in when Feedback is useful (2021-08-09)
Boominathan, Lokesh; Pitkow, Xaq

Sensory observations about the world are invariably ambiguous, so inference about the world's latent variables is an important computation for the brain. However, computational constraints limit the performance of these computations.
These constraints include energetic costs for neural activity and noise in every channel. Efficient coding is a prominent theory describing how limited resources can best be used. In one incarnation, it leads to a theory of predictive coding, in which predictions are subtracted from signals, reducing the cost of sending what is already known. This theory does not, however, account for the costs or noise associated with the predictions themselves. Here we offer a theory that accounts for both feedforward and feedback costs, and for noise in all computations. We formulate this inference problem as message-passing on a graph, in which feedback is viewed as a control signal that aims to maximize how well an inference tracks a target state while minimizing the costs of computation. We apply this novel formulation of inference as control to the canonical problem of inferring the hidden scalar state of a linear dynamical system with Gaussian variability. Our theory predicts the gain of optimal predictive feedback and how it is incorporated into the inference computation. We show that the optimal feedback gain depends non-monotonically on both the computational parameters and the world dynamics, and we reveal phase transitions in whether feedback provides any utility in optimal inference under computational costs.
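The canonical problem named in the abstract, inferring the hidden scalar state of a linear dynamical system with Gaussian variability, can be illustrated with a standard scalar Kalman filter. The following is a toy sketch, not the thesis model: it omits the feedback costs and noise that the thesis adds, and all numerical parameter values are illustrative assumptions. It does show the baseline trade-off the abstracts describe: the filter gain sets how much the estimate relies on predictions versus raw observations, and predictable dynamics favor a smaller gain.

```python
import numpy as np

def steady_state_gain(a, q, r, iters=1000):
    """Steady-state Kalman gain for scalar dynamics x_{t+1} = a x_t + w_t,
    observation y_t = x_t + v_t, with process variance q and obs variance r."""
    p = 1.0
    for _ in range(iters):
        p_pred = a * a * p + q       # predicted state variance
        k = p_pred / (p_pred + r)    # weight given to the raw observation
        p = (1.0 - k) * p_pred       # posterior variance after the update
    return k

def run_filter(x, y, a, k):
    """Fixed-gain filter: estimate = prediction + k * (observation - prediction).
    Returns the mean squared estimation error."""
    xhat = np.zeros_like(x)
    for t in range(1, len(x)):
        pred = a * xhat[t - 1]
        xhat[t] = pred + k * (y[t] - pred)
    return np.mean((x - xhat) ** 2)

# Simulate the linear dynamical system (parameter values are assumptions).
rng = np.random.default_rng(0)
a, q, r, T = 0.9, 0.1, 0.5, 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0.0, np.sqrt(r))

# Predictable world (small q): small gain, lean on predictions.
# Unpredictable world (large q): large gain, lean on raw sensory input.
k_low, k_high = steady_state_gain(a, 0.01, r), steady_state_gain(a, 1.0, r)

# Using the optimal steady-state gain beats ignoring observations entirely.
k_opt = steady_state_gain(a, q, r)
mse_opt, mse_none = run_filter(x, y, a, k_opt), run_filter(x, y, a, 0.0)
```

The thesis goes further by charging a cost for the feedback messages themselves and adding noise to them, which is what produces the phase transitions in whether feedback is worth sending at all; this sketch only sets up the underlying inference problem.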