Browsing by Author "Ecker, Alexander S."
Now showing 1 - 2 of 2
Item: Deep convolutional models improve predictions of macaque V1 responses to natural images (Public Library of Science, 2019)
Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Gatys, Leon A.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S.
Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain, and it supports the idea of explaining early visual cortex based on high-level functional goals.

Item: Diverse task-driven modeling of macaque V4 reveals functional specialization towards semantic tasks (Public Library of Science, 2024)
Cadena, Santiago A.; Willeke, Konstantin F.; Restivo, Kelli; Denfield, George; Sinz, Fabian H.; Bethge, Matthias; Tolias, Andreas S.; Ecker, Alexander S.
Responses to natural stimuli in area V4 (a mid-level area of the visual ventral stream) are well predicted by features from convolutional neural networks (CNNs) trained on image classification. This result has been taken as evidence for the functional role of V4 in object classification. However, we currently do not know if and to what extent V4 plays a role in solving other computational objectives. Here, we investigated normative accounts of V4 (and V1 for comparison) by predicting macaque single-neuron responses to natural images from the representations extracted by 23 CNNs trained on different computer vision tasks, including semantic, geometric, 2D, and 3D types of tasks. We found that V4 was best predicted by semantic classification features and exhibited high task selectivity, while the choice of task was less consequential to V1 performance. Consistent with traditional characterizations of V4 function that show its high-dimensional tuning to various 2D and 3D stimulus directions, we found that diverse non-semantic tasks explained aspects of V4 function that are not captured by individual semantic tasks. Nevertheless, jointly considering the features of a pair of semantic classification tasks was sufficient to yield one of our top V4 models, solidifying V4's main functional role in semantic processing and suggesting that V4's selectivity to 2D or 3D stimulus properties found by electrophysiologists can result from semantic functional goals.
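The transfer-learning approach described in the first item can be illustrated with a minimal sketch: features from a CNN pre-trained on object recognition are frozen, and only a per-neuron readout is fit to predict spike counts with a Poisson loss. This is not the authors' code; the VGG-19 layer cut, image size, neuron count, and synthetic data below are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

n_neurons = 166                                     # hypothetical number of recorded V1 neurons
images = torch.rand(32, 3, 64, 64)                  # stand-in batch of natural image crops
spikes = torch.poisson(torch.ones(32, n_neurons))   # stand-in spike counts

# Frozen pre-trained feature extractor, truncated after an intermediate conv block.
backbone = vgg19(weights="IMAGENET1K_V1").features[:12].eval()
for p in backbone.parameters():
    p.requires_grad_(False)

with torch.no_grad():
    feats = backbone(images)          # (batch, channels, height, width)
feats = feats.flatten(start_dim=1)

# Trainable readout: linear map from features to non-negative firing rates.
readout = nn.Sequential(nn.Linear(feats.shape[1], n_neurons), nn.Softplus())
optimizer = torch.optim.Adam(readout.parameters(), lr=1e-3)
loss_fn = nn.PoissonNLLLoss(log_input=False)

for step in range(100):
    rates = readout(feats)
    loss = loss_fn(rates, spikes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the readout is trained, such a model needs far fewer stimulus presentations than a network fit end-to-end, which matches the abstract's point about reduced experimental time.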
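The task-comparison analysis described in the second item can likewise be sketched as a model-ranking loop: features from networks trained on different tasks are each used to predict the same recorded responses, and the candidate models are ranked by held-out prediction accuracy. The two torchvision backbones, the ridge readout, and the synthetic responses below are stand-in assumptions for the 23 task-specific encoders used in the study.

```python
import numpy as np
import torch
from torchvision.models import resnet50, vgg16
from sklearn.linear_model import Ridge

n_neurons = 100                               # hypothetical V4 population size
train_imgs = torch.rand(200, 3, 64, 64)       # stand-in natural images
test_imgs = torch.rand(50, 3, 64, 64)
train_resp = np.random.rand(200, n_neurons)   # stand-in trial-averaged responses
test_resp = np.random.rand(50, n_neurons)

def extract(backbone, images):
    """Flattened activations from a frozen backbone truncated before its head."""
    trunk = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()
    with torch.no_grad():
        return trunk(images).flatten(start_dim=1).numpy()

# Stand-ins for encoders trained on different computer-vision tasks.
candidates = {
    "task_A_encoder": resnet50(weights=None),
    "task_B_encoder": vgg16(weights=None),
}

scores = {}
for name, backbone in candidates.items():
    X_tr, X_te = extract(backbone, train_imgs), extract(backbone, test_imgs)
    readout = Ridge(alpha=1.0).fit(X_tr, train_resp)
    pred = readout.predict(X_te)
    # Mean correlation between predicted and observed held-out responses across neurons.
    corr = [np.corrcoef(pred[:, i], test_resp[:, i])[0, 1] for i in range(n_neurons)]
    scores[name] = float(np.mean(corr))

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Keeping the readout identical across candidates is what makes the comparison informative: differences in held-out accuracy can then be attributed to the task-trained feature spaces rather than to the readout.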