Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

Date
2012-01
Abstract

This paper studies the models of minimizing ||x||_1 + (1/(2α))||x||_2^2 where x is a vector, as well as those of minimizing ||X||_* + (1/(2α))||X||_F^2 where X is a matrix and ||X||_* and ||X||_F are the nuclear and Frobenius norms of X, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing ||x||_1 and ||X||_* under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector x_0, minimizing ||x||_1 + (1/(2α))||x||_2^2 returns (nearly) the same solution as minimizing ||x||_1 almost whenever α ≥ 10||x_0||_∞. The same relation also holds between minimizing ||X||_* + (1/(2α))||X||_F^2 and minimizing ||X||_* for recovering a (nearly) low-rank matrix X_0, if α ≥ 10||X_0||_2. Furthermore, we show that the linearized Bregman algorithm for minimizing ||x||_1 + (1/(2α))||x||_2^2 subject to Ax = b enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties of A. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
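
Illustration (not part of the original abstract): the following is a minimal NumPy sketch of the linearized Bregman iteration for minimizing ||x||_1 + (1/(2α))||x||_2^2 subject to Ax = b, the algorithm whose global linear convergence the report analyzes. The problem sizes, the step-size rule, the stopping test, and the demo data are illustrative assumptions, not specifics taken from the report; only the choice α ≥ 10||x_0||_∞ follows the abstract.

import numpy as np

def shrink(v, t):
    # Componentwise soft-thresholding: sign(v) * max(|v| - t, 0).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, tau, max_iter=20000, tol=1e-8):
    # Linearized Bregman iteration (sketch):
    #   v^{k+1} = v^k + tau * A^T (b - A x^k)
    #   x^{k+1} = alpha * shrink(v^{k+1}, 1)
    n = A.shape[1]
    v = np.zeros(n)
    x = np.zeros(n)
    for _ in range(max_iter):
        v = v + tau * (A.T @ (b - A @ x))
        x = alpha * shrink(v, 1.0)
        if np.linalg.norm(A @ x - b) <= tol * max(np.linalg.norm(b), 1.0):
            break
    return x

# Hypothetical demo: an underdetermined Gaussian system with a k-sparse x0.
rng = np.random.default_rng(0)
m, n, k = 80, 200, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0
alpha = 10 * np.max(np.abs(x0))                    # alpha >= 10*||x0||_inf, per the abstract
tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)    # conservative step size (assumption)
x = linearized_bregman(A, b, alpha, tau)
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))

With α chosen this large, the recovered x should (nearly) coincide with the solution of plain ||x||_1 minimization, which is the behavior the recovery guarantees in the abstract describe.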

Type
Technical report
Citation

Lai, Ming-Jun and Yin, Wotao. "Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm." (2012) https://hdl.handle.net/1911/102192.
