Topics on LASSO and Approximate Message Passing
dc.contributor.advisor | Baraniuk, Richard G. | en_US |
dc.contributor.committeeMember | Veeraraghavan, Ashok | en_US |
dc.contributor.committeeMember | Zhang, Yin | en_US |
dc.creator | Mousavi, Ali | en_US |
dc.date.accessioned | 2014-09-22T19:57:53Z | en_US |
dc.date.available | 2014-09-22T19:57:53Z | en_US |
dc.date.created | 2014-05 | en_US |
dc.date.issued | 2014-04-25 | en_US |
dc.date.submitted | May 2014 | en_US |
dc.date.updated | 2014-09-22T19:57:54Z | en_US |
dc.description.abstract | This thesis studies the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements. We consider the recovery of the signal $$x_o \in \mathbb{R}^N$$ from $$n$$ random and noisy linear observations $$y = Ax_o + w$$, where $$A$$ is the measurement matrix and $$w$$ is the noise. The LASSO estimate is the solution to the optimization problem $$\hat{x}_{\lambda} = \arg \min_x \frac{1}{2} \|y - Ax\|_2^2 + \lambda \|x\|_1$$. Despite major progress in the theoretical analysis of the LASSO solution, little is known about its behavior as a function of the regularization parameter $$\lambda$$. In this thesis we study two questions in the asymptotic setting (i.e., where $$N \rightarrow \infty$$ and $$n \rightarrow \infty$$ while the ratio $$n/N$$ converges to a fixed number in $$(0,1)$$): (i) how does the size of the active set $$\|\hat{x}_\lambda\|_0/N$$ behave as a function of $$\lambda$$, and (ii) how does the mean square error $$\|\hat{x}_{\lambda} - x_o\|_2^2/N$$ behave as a function of $$\lambda$$? We then employ these results in a new, reliable algorithm for solving the LASSO based on approximate message passing (AMP). Furthermore, we propose a parameter-free AMP algorithm that sets the threshold parameter at each iteration in a fully automatic way, without any information about the signal to be reconstructed and without any tuning from the user. We show that the proposed method attains the minimum reconstruction error in the fewest iterations. Our method applies Stein's unbiased risk estimate (SURE) together with a modified gradient descent to find the optimal threshold at each iteration. Motivated by the connections between AMP and the LASSO, it can also be employed to find the LASSO solution at the optimal regularization parameter. To the best of our knowledge, this is the first parameter-tuning work that obtains the smallest MSE in the fewest iterations with theoretical guarantees. | en_US |
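To make the AMP-with-SURE idea in the abstract concrete, here is a minimal sketch in Python (NumPy) of AMP for the LASSO, where the soft-threshold level at each iteration is chosen by minimizing SURE. This is not the thesis's implementation: the thesis tunes the threshold with a modified gradient descent on SURE, whereas this sketch uses a simple grid search, and the problem sizes, noise level, and function names (soft, sure_soft, amp_sure) are illustrative assumptions.

```python
# Illustrative sketch of AMP for the LASSO with a SURE-tuned threshold.
# Assumption: grid search over candidate thresholds stands in for the
# thesis's modified gradient descent on SURE.
import numpy as np

def soft(u, tau):
    """Soft-thresholding denoiser: eta(u; tau) = sign(u) * max(|u| - tau, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def sure_soft(v, tau, sigma2, N):
    """SURE of the soft-threshold estimate for v = x + sigma * Gaussian noise."""
    r = np.minimum(np.abs(v), tau)
    return np.sum(r**2) + 2.0 * sigma2 * np.sum(np.abs(v) > tau) - N * sigma2

def amp_sure(y, A, iters=30):
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        v = x + A.T @ z                # pseudo-data, approx. x_o + effective noise
        sigma2 = np.sum(z**2) / n      # AMP estimate of the effective noise level
        grid = np.linspace(0.0, 3.0 * np.sqrt(sigma2), 50)
        tau = grid[np.argmin([sure_soft(v, t, sigma2, N) for t in grid])]
        x = soft(v, tau)
        onsager = (z / n) * np.count_nonzero(x)   # Onsager correction term
        z = y - A @ x + onsager
    return x

# Toy usage: k-sparse signal, i.i.d. Gaussian measurement matrix.
rng = np.random.default_rng(0)
N, n, k = 1000, 400, 50
x_o = np.zeros(N)
x_o[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)
y = A @ x_o + 0.01 * rng.standard_normal(n)
x_hat = amp_sure(y, A)
print("per-coordinate MSE:", np.mean((x_hat - x_o)**2))
```

The Onsager term $$z^t \|x^{t+1}\|_0 / n$$ is what distinguishes AMP from plain iterative soft thresholding: it keeps the effective noise in the pseudo-data $$v$$ approximately Gaussian, which is what justifies applying SURE at each iteration.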
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Mousavi, Ali. "Topics on LASSO and Approximate Message Passing." (2014) Master’s Thesis, Rice University. https://hdl.handle.net/1911/77216. | en_US |
dc.identifier.uri | https://hdl.handle.net/1911/77216 | en_US |
dc.language.iso | eng | en_US |
dc.rights | Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder. | en_US |
dc.subject | LASSO | en_US |
dc.subject | Sparsity | en_US |
dc.subject | Message passing | en_US |
dc.title | Topics on LASSO and Approximate Message Passing | en_US |
dc.type | Thesis | en_US |
dc.type.material | Text | en_US |
thesis.degree.department | Electrical and Computer Engineering | en_US |
thesis.degree.discipline | Engineering | en_US |
thesis.degree.grantor | Rice University | en_US |
thesis.degree.level | Masters | en_US |
thesis.degree.name | Master of Science | en_US |