Browsing by Author "Mousavi, Ali"
Now showing 1 - 4 of 4
Item: Consistent parameter estimation for LASSO and approximate message passing (Institute of Mathematical Statistics, 2018) Mousavi, Ali; Maleki, Arian; Baraniuk, Richard G.
This paper studies the optimal tuning of the regularization parameter in LASSO and the threshold parameters in approximate message passing (AMP). Considering a model in which the design matrix and noise are zero-mean i.i.d. Gaussian, we propose a data-driven approach for estimating the regularization parameter of LASSO and the threshold parameters in AMP. Our estimates are consistent: they converge in probability to their asymptotically optimal values as $$n$$, the number of observations, and $$p$$, the ambient dimension of the sparse vector, grow to infinity while $$n/p$$ converges to a fixed number $$\delta$$. As a byproduct of our analysis, we shed light on the asymptotic properties of the solution paths of LASSO and AMP.

Item: Data-Driven Computational Sensing (2018-04-30) Mousavi, Ali; Baraniuk, Richard G.
Great progress has been made on sensing, perception, and signal processing over the last decades through the design of algorithms matched to the underlying physics and statistics of the task at hand. However, a host of difficult problems remain where the physics-based approach comes up short; for example, unrealistic image models stunt the performance of MRI and other computational imaging systems. Fortunately, the big data age has enabled the development of new kinds of machine learning algorithms that augment our understanding of the physics with models learned from large amounts of training data. In this thesis, we overview three increasingly integrated physics+data algorithms for solving the kinds of inverse problems encountered in computational sensing. At the lowest level, data can be used to automatically tune the parameters of an optimization algorithm, improving its inferential and computational performance.
At the next level, data can be used to learn a more realistic signal model that boosts the performance of an iterative recovery algorithm. At the highest level, data can be used to train a deep network to encapsulate the complete underlying physics of the sensing problem (i.e., not just the signal model but also the forward model that maps signals into measurements). We show that moving up the physics+data hierarchy increasingly exploits training data and boosts performance accordingly.

Item: Signal recovery via deep convolutional networks (2021-04-20) Baraniuk, Richard G.; Mousavi, Ali; Rice University; United States Patent and Trademark Office
Real-world data may not be sparse in a fixed basis, and current high-performance recovery algorithms are slow to converge, which limits compressive sensing (CS) to either non-real-time applications or scenarios where massive back-end computing is available. Presented herein are embodiments for improving CS by developing a new signal recovery framework that uses a deep convolutional neural network (CNN) to learn the inverse transformation from measurement signals. When trained on a set of representative images, the network learns both a representation for the signals and an inverse map approximating a greedy or convex recovery algorithm. Implementations on real data indicate that some embodiments closely approximate the solution produced by state-of-the-art CS recovery algorithms, yet are hundreds of times faster in run time.

Item: Topics on LASSO and Approximate Message Passing (2014-04-25) Mousavi, Ali; Baraniuk, Richard G.; Veeraraghavan, Ashok; Zhang, Yin
This thesis studies the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements. We consider the recovery of the signal $$x_o \in \mathbb{R}^N$$ from $$n$$ random and noisy linear observations $$y = Ax_o + w$$, where $$A$$ is the measurement matrix and $$w$$ is the noise.
The LASSO estimate of $$x_o$$ is given by the solution to the optimization problem $$\hat{x}_{\lambda} = \arg \min_x \frac{1}{2} \|y-Ax\|_2^2 + \lambda \|x\|_1$$. Despite major progress in the theoretical analysis of the LASSO solution, little is known about its behavior as a function of the regularization parameter $$\lambda$$. In this thesis we study two questions in the asymptotic setting (i.e., where $$N \rightarrow \infty$$ and $$n \rightarrow \infty$$ while the ratio $$n/N$$ converges to a fixed number in $$(0,1)$$): (i) how does the size of the active set $$\|\hat{x}_\lambda\|_0/N$$ behave as a function of $$\lambda$$, and (ii) how does the mean square error $$\|\hat{x}_{\lambda} - x_o\|_2^2/N$$ behave as a function of $$\lambda$$? We then employ these results in a new, reliable algorithm for solving LASSO based on approximate message passing (AMP). Furthermore, we propose a parameter-free AMP algorithm that sets the threshold parameter at each iteration in a fully automatic way, without any information about the signal to be reconstructed and without any tuning from the user. We show that the proposed method attains the minimum reconstruction error in the least number of iterations. Our method is based on applying the Stein unbiased risk estimate (SURE) along with a modified gradient descent to find the optimal threshold in each iteration. Motivated by the connections between AMP and LASSO, it can be employed to find the solution of the LASSO for the optimal regularization parameter. To the best of our knowledge, this is the first work concerning parameter tuning that obtains the smallest MSE in the least number of iterations with theoretical guarantees.
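The soft-thresholding iteration that underlies both LASSO solvers and AMP can be sketched as follows. This is a minimal ISTA (iterative soft-thresholding) implementation, not the thesis's AMP algorithm — it omits AMP's Onsager correction term and uses a fixed regularization value `lam` rather than the SURE-based automatic tuning described above. The problem sizes and `lam` value are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator: the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(A, y, lam, n_iters=500):
    """Minimize 0.5*||y - Ax||_2^2 + lam*||x||_1 via ISTA."""
    # Step size 1/L, where L = ||A||_2^2 is the gradient's Lipschitz constant.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)          # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy instance: sparse x_o, i.i.d. Gaussian A, light noise (illustrative values).
rng = np.random.default_rng(0)
n, N, k = 50, 100, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_o = np.zeros(N)
x_o[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_o + 0.01 * rng.standard_normal(n)
x_hat = ista_lasso(A, y, lam=0.05)
```

Full AMP differs in that the residual carries an extra Onsager correction proportional to the active-set size, which makes the effective noise at each iteration approximately Gaussian; the thesis's contribution is choosing the per-iteration threshold (here fixed at `lam / L`) automatically via SURE.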