Advisor: Baraniuk, Richard G.
Author: Balestriero, Randall
Title: Max-Affine Splines Insights Into Deep Learning
Type: Thesis (Diss., Rice University)
Date issued: May 2021
Citation: Balestriero, Randall. "Max-Affine Splines Insights Into Deep Learning." (2021) Diss., Rice University. https://hdl.handle.net/1911/110439.
URI: https://hdl.handle.net/1911/110439
Keywords: deep learning; deep networks; affine splines
Format: application/pdf
Language: English
Rights: Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.

Abstract: We build a rigorous bridge between deep networks (DNs) and approximation theory via spline functions and operators. Our key result is that a large class of DNs can be written as a composition of max-affine spline operators (MASOs), which provide a powerful portal through which to view and analyze their inner workings. For instance, conditioned on the spline partition region containing the input signal, the output of a MASO DN can be written as a simple affine transformation of the input. Studying the geometry of those regions yields novel insights into different regularization techniques, layer configurations, and initialization schemes. Going further, the spline viewpoint provides precise geometric insights in various domains, such as the characterization of the manifold generated by deep generative networks, the understanding of deep network pruning as a means of simplifying the DN input-space partition, and the relationship between different nonlinearities (e.g., the ReLU and the Sigmoid Gated Linear Unit) as simply corresponding to different MASO region-membership inference algorithms. The spline partition of the input signal space that is implicitly induced by a MASO directly links DNs to the theory of vector quantization (VQ) and $K$-means clustering, which opens up new geometric avenues to study how DNs organize signals in a hierarchical fashion.
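The abstract's central claim, that conditioned on the spline region containing the input, a MASO DN reduces to an affine map, can be made concrete on a toy network. The following is a minimal sketch of that idea for a two-layer ReLU network (our illustration, not code from the thesis); the weights, dimensions, and helper names are assumptions made for the example.

```python
# Minimal sketch of the MASO view: on the spline region containing x
# (indexed here by the ReLU on/off pattern at x), the network output
# equals an affine map A @ x + b. Illustrative only; all names and
# dimensions are assumptions, not from the thesis.
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def region_affine_map(x):
    """Return (A, b) of the affine map the network computes on the
    spline region containing x."""
    mask = (W1 @ x + b1 > 0).astype(float)   # region membership code
    A = W2 @ (mask[:, None] * W1)            # fold the ReLU mask into W1
    b = W2 @ (mask * b1) + b2
    return A, b

x = rng.normal(size=3)
A, b = region_affine_map(x)
assert np.allclose(forward(x), A @ x + b)

# A small enough perturbation typically stays inside the same region,
# where the very same (A, b) still reproduces the network output.
x_near = x + 1e-6 * rng.normal(size=3)
assert np.allclose(forward(x_near), A @ x_near + b)
```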