Browsing by Author "Baraniuk, Richard G."
Now showing 1 - 2 of 2
Item New Theory and Methods for Signals in Unions of Subspaces (2014-09-18) Dyer, Eva Lauren; Baraniuk, Richard G.; Koushanfar, Farinaz; Allen, Genevera; Sabharwal, Ashutosh

The rapid development and availability of cheap storage and sensing devices have quickly produced a deluge of high-dimensional data. While the dimensionality of modern datasets continues to grow, our saving grace is that these data often exhibit low-dimensional structure that can be exploited to compress, organize, and cluster massive collections of data. Signal models such as linear subspace models remain among the most widely used models for high-dimensional data; however, in many settings of interest, finding a single global model that captures all the relevant structure in the data is not possible. An alternative to learning a global model is therefore to learn a hybrid model, or union of low-dimensional subspaces, that models different subsets of signals in the dataset as living on distinct subspaces. This thesis develops new methods and theory for learning union-of-subspaces models and for exploiting multi-subspace structure in a wide range of signal processing and data analysis tasks. The main contributions of this thesis include new methods and theory for: (i) decomposing and subsampling datasets consisting of signals on unions of subspaces, (ii) subspace clustering for learning union-of-subspaces models, and (iii) exploiting multi-subspace structure in order to accelerate distributed computing and signal processing on massive collections of data. I demonstrate the utility of the proposed methods in a number of important imaging and computer vision applications, including illumination-invariant face recognition, segmentation of hyperspectral remote sensing data, and compression of video and light field data arising in 3D scene modeling and analysis.

Item The Recurrent Neural Tangent Kernel (2022-05-09) Alemohammad, Sina; Baraniuk, Richard G.

The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network (RNN). In this thesis we introduce and study the Recurrent Neural Tangent Kernel (RNTK), which provides new insights into the behavior of overparametrized RNNs. A key property of the RNTK that should greatly benefit practitioners is its ability to compare inputs of different length. To this end, we characterize how the RNTK weights different time steps to form its output under different initialization parameters and nonlinearity choices. Experiments on a synthetic dataset and 56 real-world datasets demonstrate that the RNTK offers significant performance gains over other kernels, including standard NTKs, across a wide array of data sets.
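To make the union-of-subspaces model in the first abstract concrete, below is a minimal sketch of fitting such a model with the classic K-subspaces heuristic: alternate between assigning each point to the subspace with the smallest projection residual and refitting each subspace basis by SVD. This is a textbook baseline, not the thesis's algorithms, and all function and parameter names (`k_subspaces`, `n_subspaces`, `dim`) are illustrative.

```python
# Minimal K-subspaces sketch (illustrative baseline, not the thesis's method).
import numpy as np

def k_subspaces(X, n_subspaces, dim, n_iter=50, seed=0):
    """X: (n_samples, ambient_dim) data; returns labels and a list of orthonormal bases."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    # Initialize each subspace with random orthonormal columns.
    bases = [np.linalg.qr(rng.standard_normal((D, dim)))[0] for _ in range(n_subspaces)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment step: residual of projecting each point onto each subspace.
        residuals = np.stack(
            [np.linalg.norm(X - X @ U @ U.T, axis=1) for U in bases], axis=1
        )
        new_labels = residuals.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Update step: refit each basis from the top singular vectors of its points.
        for k in range(n_subspaces):
            pts = X[labels == k]
            if len(pts) >= dim:
                _, _, Vt = np.linalg.svd(pts, full_matrices=False)
                bases[k] = Vt[:dim].T
    return labels, bases

# Toy usage: points drawn from two 2-D subspaces of R^10.
rng = np.random.default_rng(1)
U1 = np.linalg.qr(rng.standard_normal((10, 2)))[0]
U2 = np.linalg.qr(rng.standard_normal((10, 2)))[0]
X = np.vstack([rng.standard_normal((100, 2)) @ U1.T, rng.standard_normal((100, 2)) @ U2.T])
labels, _ = k_subspaces(X, n_subspaces=2, dim=2)
```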
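The RNTK in the second abstract is derived analytically in the infinite-width limit; as a rough illustration of the underlying object only, here is a sketch of the *empirical* (finite-width) neural tangent kernel of a small RNN, where a kernel entry is the inner product of output gradients with respect to all parameters. Because the RNN is applied step by step, the kernel can compare sequences of different length. The architecture, initialization scale, and all names here are assumptions for the sketch, not the thesis's RNTK formula.

```python
# Empirical (finite-width) NTK of a toy RNN, computed with JAX autodiff.
import jax
import jax.numpy as jnp

def init_params(key, n_hidden, n_in):
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "W_h": jax.random.normal(k1, (n_hidden, n_hidden)) / jnp.sqrt(n_hidden),
        "W_x": jax.random.normal(k2, (n_hidden, n_in)) / jnp.sqrt(n_in),
        "v": jax.random.normal(k3, (n_hidden,)) / jnp.sqrt(n_hidden),
    }

def rnn_output(params, x):
    """x: (T, n_in) sequence; scalar output read from the last hidden state."""
    h = jnp.zeros(params["W_h"].shape[0])
    for t in range(x.shape[0]):
        h = jnp.tanh(params["W_h"] @ h + params["W_x"] @ x[t])
    return params["v"] @ h

def empirical_ntk(params, x1, x2):
    # Kernel entry: inner product of parameter gradients of the two outputs.
    g1 = jax.grad(rnn_output)(params, x1)
    g2 = jax.grad(rnn_output)(params, x2)
    return sum(
        jnp.vdot(a, b)
        for a, b in zip(jax.tree_util.tree_leaves(g1), jax.tree_util.tree_leaves(g2))
    )

params = init_params(jax.random.PRNGKey(0), n_hidden=256, n_in=3)
x_short = jax.random.normal(jax.random.PRNGKey(1), (5, 3))  # length-5 sequence
x_long = jax.random.normal(jax.random.PRNGKey(2), (9, 3))   # length-9 sequence
print(empirical_ntk(params, x_short, x_long))                # kernel value across lengths
```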