Deep convolutional models improve predictions of macaque V1 responses to natural images

dc.citation.articleNumber: e1006897
dc.citation.issueNumber: 4
dc.citation.journalTitle: PLoS Computational Biology
dc.citation.volumeNumber: 15
dc.contributor.author: Cadena, Santiago A.
dc.contributor.author: Denfield, George H.
dc.contributor.author: Walker, Edgar Y.
dc.contributor.author: Gatys, Leon A.
dc.contributor.author: Tolias, Andreas S.
dc.contributor.author: Bethge, Matthias
dc.contributor.author: Ecker, Alexander S.
dc.date.accessioned: 2021-12-17T20:08:21Z
dc.date.available: 2021-12-17T20:08:21Z
dc.date.issued: 2019
dc.description.abstract: Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1 and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain and it supports the idea of explaining early visual cortex based on high-level functional goals.
dc.identifier.citation: Cadena, Santiago A., Denfield, George H., Walker, Edgar Y., et al. "Deep convolutional models improve predictions of macaque V1 responses to natural images." PLoS Computational Biology 15, no. 4 (2019). Public Library of Science. https://doi.org/10.1371/journal.pcbi.1006897.
dc.identifier.digital: document-1
dc.identifier.doi: https://doi.org/10.1371/journal.pcbi.1006897
dc.identifier.uri: https://hdl.handle.net/1911/111877
dc.language.iso: eng
dc.publisher: Public Library of Science
dc.rights: This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Deep convolutional models improve predictions of macaque V1 responses to natural images
dc.type: Journal article
dc.type.dcmi: Text
dc.type.publication: publisher version
Files
Original bundle
Name: document-1.pdf
Size: 3.36 MB
Format: Adobe Portable Document Format
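
The abstract above contrasts two deep-learning approaches to predicting V1 spiking activity: a transfer-learning model that reads out neural responses from the features of a network pre-trained on object recognition, and data-driven CNNs trained end-to-end. The sketch below illustrates only the general structure of the first idea, under assumptions that are not taken from the paper: a torchvision VGG-16 backbone, an arbitrary intermediate layer, and a simple fully connected readout trained with a Poisson loss. It is a minimal illustration, not the authors' implementation.

```python
# Minimal sketch of a transfer-learning V1 model: frozen pre-trained CNN features
# plus a trainable readout predicting spike counts. Backbone, layer index, readout
# form, and hyperparameters are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torchvision.models as models


class PretrainedFeatureReadout(nn.Module):
    def __init__(self, n_neurons, image_size=64, layer_idx=16):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")  # older torchvision: pretrained=True
        # Keep only the first `layer_idx` modules of the convolutional stack and freeze them.
        self.features = nn.Sequential(*list(vgg.features.children())[:layer_idx])
        for p in self.features.parameters():
            p.requires_grad = False
        # Infer the flattened feature dimension from a dummy image of the given size.
        with torch.no_grad():
            n_features = self.features(torch.zeros(1, 3, image_size, image_size)).numel()
        self.readout = nn.Linear(n_features, n_neurons)

    def forward(self, images):
        feats = self.features(images).flatten(start_dim=1)
        # Exponential output nonlinearity keeps predicted firing rates positive.
        return torch.exp(self.readout(feats))


def train_step(model, optimizer, images, spike_counts):
    """One optimization step with a Poisson negative log-likelihood on spike counts."""
    optimizer.zero_grad()
    rates = model(images)
    loss = nn.functional.poisson_nll_loss(rates, spike_counts, log_input=False)
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage (shapes only; real data would be recorded responses to natural images):
# model = PretrainedFeatureReadout(n_neurons=100)
# optimizer = torch.optim.Adam(model.readout.parameters(), lr=1e-3)
# loss = train_step(model, optimizer, images_batch, counts_batch)
```

Only the readout parameters are optimized here, which reflects why the abstract notes that the pre-trained feature space needs substantially less experimental data than training a full network end-to-end.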