Browsing by Author "Lai, Ming-Jun"
Item: Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm (2012-01)
Lai, Ming-Jun; Yin, Wotao

This paper studies the models of minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as those of minimizing $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$ under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ almost whenever $\alpha\ge 10\|x^0\|_\infty$. The same relation also holds between minimizing $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$, provided $\alpha\ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax=b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties on $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms. (A minimal sketch of this iteration appears after the listing.)

Item: Low-Rank Matrix Recovery using Unconstrained Smoothed-Lq Minimization (2011-09)
Lai, Ming-Jun; Xu, Yangyang; Yin, Wotao

A low-rank matrix can be recovered from a small number of its linear measurements. As a special case, the matrix completion problem aims to recover the matrix from a subset of its entries. Such problems share many common features with those of recovering sparse vectors. In this paper, we extend nonconvex Lq minimization and iteratively reweighted algorithms from recovering sparse vectors to recovering low-rank matrices. Unlike most existing work, this work focuses on unconstrained Lq minimization, for which we show several advantages on noisy measurements and/or approximately low-rank matrices. Based on results of Daubechies, DeVore, Fornasier, and Güntürk (2010) for constrained Lq minimization, we start with a preliminary yet novel analysis of unconstrained Lq minimization for sparse vectors, which covers convergence, an error bound, and local convergence rates. The algorithm and analysis are then extended to the recovery of low-rank matrices. The algorithm is compared with existing state-of-the-art algorithms and shows superior performance on recovering low-rank matrices with fast-decaying singular values from incomplete measurements. (A sketch of the reweighted update also appears after the listing.)
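The linearized Bregman algorithm from the first item has a compact description: it is gradient ascent on the Lagrange dual of minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax=b$, where each dual step passes through a soft-thresholding. Below is a minimal sketch in that form; the step size rule, stopping test, and demo problem sizes are our assumptions, not taken from the paper.

```python
import numpy as np

def shrink(z, t):
    """Soft-thresholding: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def linearized_bregman(A, b, alpha, n_iter=20000, tol=1e-10):
    """Linearized Bregman iteration for
        min ||x||_1 + 1/(2*alpha) * ||x||_2^2   s.t.  A x = b,
    written as dual gradient ascent:
        x^k     = alpha * shrink(A^T y^k, 1)
        y^{k+1} = y^k + h * (b - A x^k).
    """
    y = np.zeros(A.shape[0])
    x = np.zeros(A.shape[1])
    h = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)  # safe dual step size (assumed rule)
    for _ in range(n_iter):
        x = alpha * shrink(A.T @ y, 1.0)
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        y = y + h * r
    return x

# Demo: recover a sparse x0, choosing alpha by the paper's alpha >= 10*||x0||_inf rule.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x = linearized_bregman(A, b, alpha=10 * np.max(np.abs(x0)))
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```

Per the abstract, this iteration converges globally linearly whenever a nonzero solution exists, with no requirement on $A$.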
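The nuclear-norm model $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$ admits the same iteration with soft-thresholding replaced by singular value shrinkage, since the proximal map of the nuclear norm thresholds singular values. A sketch under that substitution, assuming a generic measurement map `Aop` with adjoint `At` and a step size `h` chosen as in the vector case (all three names are our stand-ins, not notation from the paper):

```python
import numpy as np

def svt(Z, t):
    """Singular value soft-thresholding: the prox of t * ||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def linearized_bregman_matrix(Aop, At, b, alpha, h, n_iter=3000):
    """Matrix analogue of the vector iteration:
        X^k     = alpha * svt(At(y^k), 1)
        y^{k+1} = y^k + h * (b - Aop(X^k)),
    where Aop maps matrices to measurement vectors and At is its adjoint.
    """
    y = np.zeros_like(b)
    X = None
    for _ in range(n_iter):
        X = alpha * svt(At(y), 1.0)
        y = y + h * (b - Aop(X))
    return X
```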
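For the second item, the iteratively reweighted scheme is easiest to see in the vector case that the paper analyzes first. Below is a minimal sketch for one standard smoothed form of unconstrained Lq minimization, $\min_x \lambda \sum_i (x_i^2+\epsilon)^{q/2} + \frac{1}{2}\|Ax-b\|_2^2$: each pass freezes the weights $w_i=(x_i^2+\epsilon)^{q/2-1}$ and solves the resulting linear system. The parameter defaults and the schedule for shrinking $\epsilon$ are our assumptions; the paper's exact smoothed objective, and its matrix extension (which builds the weights from the singular values of $X$), may differ in details.

```python
import numpy as np

def irls_lq(A, b, q=0.5, lam=1e-3, eps0=1.0, n_iter=100):
    """Iteratively reweighted least squares for the unconstrained
    smoothed-Lq model
        min_x  lam * sum_i (x_i**2 + eps)**(q/2) + 0.5 * ||A x - b||_2**2.
    Setting the gradient to zero with the weights w frozen gives the
    positive-definite system  (lam * q * diag(w) + A^T A) x = A^T b.
    """
    AtA, Atb = A.T @ A, A.T @ b
    x = Atb.copy()                   # simple warm start (assumed choice)
    eps = eps0
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (q / 2.0 - 1.0)
        x = np.linalg.solve(lam * q * np.diag(w) + AtA, Atb)
        eps = max(eps * 0.7, 1e-12)  # gradually tighten the smoothing (assumed schedule)
    return x
```

Shrinking $\epsilon$ across passes keeps early iterations well conditioned while letting later ones approach the nonsmooth Lq objective, which is the usual motivation for smoothed reweighting schemes of this kind.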