NeuroView: Explainable Deep Network Decision Making

Date
2022-07-06
Abstract

Deep neural networks (DNs) provide superhuman performance in numerous computer vision tasks, yet it remains unclear exactly which of a DN's units contribute to a particular decision. A DN's prediction cannot be explained in a formal mathematical manner that specifies how each parameter contributes to the decision. NeuroView is a new family of DN architectures that are explainable by design. Each member of the family is derived from a standard DN architecture by concatenating all of its unit activations and feeding them into a global linear classifier. The resulting architecture establishes a direct, causal link between the state of each unit and the classification decision. We validate NeuroView on multiple datasets and classification tasks and show that its performance is on par with that of a typical DN. We also inspect how its unit-to-class mapping aids in understanding the decision-making process. In this thesis, we propose applying NeuroView to other architectures, such as convolutional and recurrent neural networks, to show how it can provide additional understanding in applications that require more explanation.
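The sketch below illustrates the idea described in the abstract: gather the activations of every unit in a backbone network, concatenate them, and feed them to a single global linear classifier whose weights directly link units to classes. The backbone, the choice of ReLU outputs as "units," and the flattening scheme are illustrative assumptions, not the thesis's exact architecture.

```python
# Minimal NeuroView-style sketch (assumptions noted above), in PyTorch.
import torch
import torch.nn as nn


class NeuroViewSketch(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self._activations = []
        # Record every post-ReLU activation via forward hooks
        # (assumption: "units" are post-nonlinearity activations).
        for module in backbone.modules():
            if isinstance(module, nn.ReLU):
                module.register_forward_hook(self._save_activation)
        # Global linear classifier over the concatenated activations;
        # its weight matrix gives a per-unit, per-class contribution.
        self.classifier = nn.Linear(feature_dim, num_classes)

    def _save_activation(self, module, inputs, output):
        # Flatten each unit's activation map; spatial pooling could be used instead.
        self._activations.append(output.flatten(start_dim=1))

    def forward(self, x):
        self._activations = []
        _ = self.backbone(x)  # run the backbone only to populate the hooks
        feats = torch.cat(self._activations, dim=1)
        return self.classifier(feats)


# Hypothetical usage with a toy backbone on 32x32 RGB inputs.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 30 * 30, 16), nn.ReLU(),
)
model = NeuroViewSketch(backbone, feature_dim=8 * 30 * 30 + 16, num_classes=10)
logits = model(torch.randn(2, 3, 32, 32))  # shape: (2, 10)
```

Because the classifier is linear in the concatenated activations, each unit's contribution to a class score is simply its activation times the corresponding classifier weight, which is the direct unit-to-class link the abstract refers to.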

Description
Degree
Doctor of Philosophy
Type
Thesis
Keywords
deep learning, explainability, interpretability, computer vision
Citation

Barberan, CJ. "NeuroView: Explainable Deep Network Decision Making." (2022) Diss., Rice University. https://hdl.handle.net/1911/113400.

Rights
Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.