Browsing by Author "Anselmi, Fabio"
Item: Robust deep learning object recognition models rely on low frequency information in natural images (PLOS, 2023)
Li, Zhe; Caro, Josue Ortega; Rusak, Evgenia; Brendel, Wieland; Bethge, Matthias; Anselmi, Fabio; Patel, Ankit B.; Tolias, Andreas S.; Pitkow, Xaq
Machine learning models have difficulty generalizing to data outside the distribution they were trained on. In particular, vision models are usually vulnerable to adversarial attacks or common corruptions, to which the human visual system is robust. Recent studies have found that regularizing machine learning models to favor brain-like representations can improve model robustness, but it is unclear why. We hypothesize that the increased model robustness is partly due to the low spatial frequency preference inherited from the neural representation. We tested this hypothesis with several frequency-oriented analyses, including the design and use of hybrid images to probe model frequency sensitivity directly. We also examined many other publicly available robust models that were trained on adversarial images or with data augmentation, and found that all of these robust models show a greater preference for low spatial frequency information. We show that preprocessing by blurring can serve as a defense mechanism against both adversarial attacks and common corruptions, further confirming our hypothesis and demonstrating the utility of low spatial frequency information in robust object recognition.
(Sketches of the hybrid-image probe and the blur defense appear after this listing.)

Item: Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks (Frontiers Media S.A., 2022)
Karantzas, Nikos; Besier, Emma; Ortega Caro, Josue; Pitkow, Xaq; Tolias, Andreas S.; Patel, Ankit B.; Anselmi, Fabio
Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency-bias hypothesis further, we develop an algorithm that learns modulatory masks highlighting the input frequencies essential for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations of the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias, but we find that this bias also depends on direction in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
(A sketch of the mask-learning objective appears below, after the sketches for the first item.)
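
The first item probes model frequency sensitivity with hybrid images, which splice the low frequencies of one image onto the high frequencies of another. A minimal sketch of that construction, assuming numpy and two grayscale images of equal shape; the radial cutoff and its default value are illustrative choices, not the paper's exact protocol:

```python
import numpy as np

def low_pass_mask(shape, cutoff):
    """Boolean mask keeping frequencies within `cutoff` (cycles/pixel) of DC."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]  # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]  # horizontal frequencies
    return np.sqrt(fy**2 + fx**2) <= cutoff

def hybrid_image(img_low, img_high, cutoff=0.05):
    """Combine the low frequencies of `img_low` with the high frequencies
    of `img_high`. A model that assigns the hybrid to `img_low`'s class
    is relying on low spatial frequency information."""
    mask = low_pass_mask(img_low.shape, cutoff)
    f_low = np.fft.fft2(img_low) * mask        # keep only low frequencies
    f_high = np.fft.fft2(img_high) * (~mask)   # keep only high frequencies
    return np.real(np.fft.ifft2(f_low + f_high))
```

Sweeping `cutoff` and recording where a model's prediction flips from the low-frequency source to the high-frequency source gives a direct read-out of its frequency preference.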
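
The same item reports that blurring inputs before classification can defend against adversarial attacks and common corruptions. A minimal sketch of such a preprocessing defense, assuming a PyTorch classifier `model` and image batches in NCHW format; the kernel size and sigma are illustrative, not tuned values from the paper:

```python
import torch
import torchvision.transforms.functional as TF

def blurred_predict(model, images, kernel_size=5, sigma=1.0):
    """Low-pass the input with a Gaussian blur before classifying;
    high-frequency adversarial perturbations are attenuated."""
    blurred = TF.gaussian_blur(
        images, kernel_size=[kernel_size, kernel_size], sigma=[sigma, sigma]
    )
    with torch.no_grad():
        return model(blurred).argmax(dim=1)
```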
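
The second item learns Fourier-domain modulatory masks by imposing invariance of the network's output under modulations of the input frequencies. A minimal sketch under stated assumptions: PyTorch, one learnable mask per input resolution, a KL-based invariance term and an L1 sparsity term, all of which are illustrative stand-ins rather than the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def masked_input(x, mask):
    """Apply a real-valued mask to the 2D spectrum of each channel."""
    spec = torch.fft.fft2(x)                 # per-channel 2D FFT
    return torch.fft.ifft2(spec * mask).real

def mask_loss(model, x, mask, sparsity_weight=0.01):
    """Invariance term: masked and clean inputs should yield the same
    predictions. Sparsity term: prefer masks passing few frequencies."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    p_masked = F.log_softmax(model(masked_input(x, mask)), dim=1)
    invariance = F.kl_div(p_masked, p_clean, reduction="batchmean")
    sparsity = mask.abs().mean()
    return invariance + sparsity_weight * sparsity

# Usage sketch (H, W and the optimizer settings are assumptions):
H, W = 224, 224
mask = torch.nn.Parameter(torch.ones(H, W))  # start from the identity mask
# opt = torch.optim.Adam([mask], lr=1e-2)
# loss = mask_loss(model, batch, mask); loss.backward(); opt.step()
```

After training, the surviving entries of `mask` highlight the essential input frequencies, and `masked_input(x, mask)` renders the texture-like images the abstract describes.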