Browsing by Author "Wen, Zaiwen"
Now showing 1 - 16 of 16
Item: A Curvilinear Search Method for p-Harmonic Flows on Spheres (2008-01)
Goldfarb, Donald; Wen, Zaiwen; Yin, Wotao
The problem of finding p-harmonic flows arises in a wide range of applications, including micromagnetics, liquid crystal theory, directional diffusion, and chromaticity denoising. In this paper, we propose a curvilinear search method for minimizing p-harmonic energies over spheres. Starting from a flow (map) on the unit sphere, our method searches along a curve that lies on the sphere, in a manner similar to a standard inexact line search descent method. We show that our method is globally convergent if the step length satisfies the Armijo-Wolfe conditions. Computational tests demonstrate the efficiency of the proposed method and of a variant that uses Barzilai-Borwein steps.

Item: A Feasible Method for Optimization with Orthogonality Constraints (2010-11)
Wen, Zaiwen; Yin, Wotao
Minimization with orthogonality constraints (e.g., X'X = I) and/or spherical constraints (e.g., ||x||_2 = 1) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but also numerically expensive to preserve during iterations. To deal with these difficulties, we propose a Crank-Nicolson-like update scheme that preserves the constraints and, based on it, develop curvilinear search algorithms with lower per-iteration cost than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, our method exactly solves a decomposition formulation for the SDP relaxation.
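As an illustration of the constraint-preserving Crank-Nicolson-like update just described, here is a minimal numpy sketch. The gradient G and step size tau are arbitrary stand-ins; for large n, a practical implementation would apply the Sherman-Morrison-Woodbury identity so that only a small 2k x 2k system is inverted, as the paper's low-cost variants do.

```python
import numpy as np

def cayley_step(X, G, tau):
    """One constraint-preserving update: if X has orthonormal columns,
    so does Y(tau) = (I + tau/2 A)^{-1} (I - tau/2 A) X,
    where A = G X' - X G' is skew-symmetric (a Cayley transform of A)."""
    n = X.shape[0]
    A = G @ X.T - X @ G.T                 # skew-symmetric by construction
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * tau * A, (I - 0.5 * tau * A) @ X)

rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((6, 2)))   # feasible start: X'X = I
G = rng.standard_normal((6, 2))                    # stand-in for a gradient
Y = cayley_step(X, G, tau=0.1)
print(np.allclose(Y.T @ Y, np.eye(2)))             # constraint is preserved
```

Because the Cayley transform of a skew-symmetric matrix is orthogonal, the new point stays exactly on the constraint manifold for every step size, which is what enables curvilinear search along tau.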
For polynomial optimization, nearest correlation matrix estimation, and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842% to the best known solution on the largest problem "256c" in QAPLIB can be reached in 5 minutes on a typical laptop.

Item: Accelerating Convergence by Augmented Rayleigh-Ritz Projections For Large-Scale Eigenpair Computation (2016-01)
Wen, Zaiwen; Zhang, Yin
Iterative algorithms for large-scale eigenpair computation are mostly based on subspace projections, which consist of two main steps: a subspace update (SU) step that generates bases for approximate eigenspaces, followed by a Rayleigh-Ritz (RR) projection step that extracts approximate eigenpairs. A predominant methodology for the SU step makes use of Krylov subspaces, building orthonormal bases piece by piece in a sequential manner. On the other hand, block methods, such as the classic (simultaneous) subspace iteration, allow higher levels of concurrency than is reachable by Krylov subspace methods, but may suffer from slow convergence. In this work, we analyze the rate of convergence for a simple block algorithmic framework that combines an augmented Rayleigh-Ritz (ARR) procedure with the subspace iteration. Our main results are Theorem 4.5 and its corollaries, which show that the ARR procedure can significantly accelerate convergence. Our analysis offers useful guidelines for designing and implementing practical algorithms from this framework.

Item: Alternating Direction Augmented Lagrangian Methods for Semidefinite Programming (2009-12)
Wen, Zaiwen; Goldfarb, Donald; Yin, Wotao
We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form.
At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints, then the dual slack variables, and finally the primal variables, keeping the other variables fixed in each minimization. Convergence is proved using a fixed-point argument. A multiple-splitting algorithm is then proposed to handle SDPs with inequality and positivity constraints directly, without transforming them to equality constraints in standard form. Finally, numerical results for frequency assignment, maximum stable set, and binary integer quadratic programming problems are presented to demonstrate the robustness and efficiency of our algorithm.

Item: An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors (2011-01)
Xu, Yangyang; Yin, Wotao; Wen, Zaiwen; Zhang, Yin
This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to two existing problems, nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone: by taking advantage of both nonnegativity and low rank, its results can be superior to those of either problem alone. Our algorithm minimizes a non-convex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorizations while accessing only half of the matrix entries.
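For a feel of the nonnegative factorization-and-completion task just described, here is a much-simplified sketch that alternates Lee-Seung-style multiplicative updates (which keep the factors nonnegative by construction) with re-filling of the unobserved entries. This is not the paper's alternating direction augmented Lagrangian method, and the dimensions, rank, and sampling rate below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 30, 20, 3
M = rng.random((m, k)) @ rng.random((k, n))   # nonnegative rank-k target
mask = rng.random((m, n)) < 0.5               # which entries are observed
eps = 1e-12

X = rng.random((m, k)) + 0.1
Y = rng.random((k, n)) + 0.1
Z = np.where(mask, M, X @ Y)                  # complete Z with the current fit
err0 = np.linalg.norm((X @ Y - M)[mask])
for _ in range(500):
    # multiplicative updates: nonnegativity is preserved automatically
    X *= (Z @ Y.T) / (X @ (Y @ Y.T) + eps)
    Y *= (X.T @ Z) / ((X.T @ X) @ Y + eps)
    Z = np.where(mask, M, X @ Y)              # observed entries stay fixed
err = np.linalg.norm((X @ Y - M)[mask])
print(f"observed-entry residual: {err0:.3e} -> {err:.3e}")
```

Each update is nonincreasing for the fitting objective with Z fixed, and the Z-step is the exact minimizer over Z subject to the observed entries, so the residual on the observed entries decreases monotonically.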
On tasks of recovering incomplete grayscale and hyperspectral images, the results of the proposed algorithm have overall better quality than those of two recent algorithms for matrix completion.

Item: An Efficient Gauss-Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations (2014-05)
Liu, Xin; Wen, Zaiwen; Zhang, Yin
We derive and study a Gauss-Newton method for computing the symmetric low-rank product (SLRP) XX^T, where X ∈ R^{n×k} for k < n […]

Item: Augmented Lagrangian Alternating Direction Method for Matrix Separation Based on Low-Rank Factorization (2011-01)
Shen, Yuan; Wen, Zaiwen; Zhang, Yin
The matrix separation problem aims to separate a low-rank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclear-norm minimization models have been proposed for matrix separation and proved to yield exact separations under suitable conditions. These models, however, typically require computing a full or partial singular value decomposition (SVD) at every iteration, which becomes increasingly costly as matrix dimensions and rank grow. To improve scalability, in this paper we propose and investigate an alternative approach based on solving a non-convex, low-rank factorization model by an augmented Lagrangian alternating direction method. Numerical studies indicate that the effectiveness of the proposed model is limited to problems where the sparse matrix does not dominate the low-rank one in magnitude, though this limitation can be alleviated by certain data pre-processing techniques.
On the other hand, extensive numerical results show that, within its applicability range, the proposed method is generally much faster than nuclear-norm minimization algorithms and often provides better recoverability.

Item: Block Algorithms with Augmented Rayleigh-Ritz Projections for Large-Scale Eigenpair Computation (2015-06)
Wen, Zaiwen; Zhang, Yin
Most iterative algorithms for eigenpair computation consist of two main steps: a subspace update (SU) step that generates bases for approximate eigenspaces, followed by a Rayleigh-Ritz (RR) projection step that extracts approximate eigenpairs. So far the predominant methodology for the SU step is based on Krylov subspaces, building orthonormal bases piece by piece in a sequential manner. In this work, we investigate block methods in the SU step that allow a higher level of concurrency than is reachable by Krylov subspace methods. To achieve competitive speed, we propose an augmented Rayleigh-Ritz (ARR) procedure and analyze its rate of convergence under realistic conditions. Combining this ARR procedure with a set of polynomial accelerators, as well as a few other techniques such as continuation and deflation, we construct a block algorithm designed to reduce the number of RR steps and elevate concurrency in the SU steps. Extensive computational experiments are conducted in Matlab on a representative set of test problems to evaluate the performance of two variants of our algorithm against two well-established, high-quality eigensolvers, ARPACK and FEAST.
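The two-step SU/RR structure described above can be seen in a bare-bones simultaneous subspace iteration with a Rayleigh-Ritz extraction. The paper's ARR augmentation, polynomial accelerators, continuation, and deflation are all omitted here, and the test matrix with its decaying spectrum is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 5
# symmetric test matrix with a known, geometrically decaying spectrum
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
vals = 1.5 ** -np.arange(n)
A = (Q * vals) @ Q.T                   # A = Q diag(vals) Q'

X = rng.standard_normal((n, k))        # initial block of k vectors
for _ in range(100):
    X, _ = np.linalg.qr(A @ X)         # SU step: block (simultaneous) iteration
# RR projection step: extract approximate eigenpairs from span(X)
H = X.T @ A @ X
w, V = np.linalg.eigh(H)               # Ritz values ...
ritz_vecs = X @ V                      # ... and Ritz vectors
print(np.allclose(np.sort(w)[::-1], vals[:k]))
```

The SU step is dominated by matrix-block products and is easy to parallelize; the RR step (forming H and its small eigendecomposition) is the sequential bottleneck the ARR work aims to invoke less often.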
Numerical results, obtained on a many-core computer without explicit code parallelization, show that when computing a relatively large number of eigenpairs, the performance of our algorithms is competitive with, and frequently superior to, that of the two state-of-the-art eigensolvers.

Item: Decentralized Jointly Sparse Optimization by Reweighted Lq Minimization (2012-02)
Ling, Qing; Wen, Zaiwen; Yin, Wotao
A set of vectors (or signals) is jointly sparse if their nonzero entries are commonly supported on a small subset of locations. Consider a network of agents that collaboratively recover a set of jointly sparse vectors. This paper proposes novel decentralized algorithms that recover these vectors with every agent running a recovery algorithm while neighboring agents exchange only their estimates of the joint support, not their data. The agents obtain their solutions by taking advantage of the joint sparsity structure while keeping their data private. As such, the proposed approach finds applications in distributed (compressive) sensing, decentralized event detection, and collaborative data mining. We use a non-convex minimization model and propose algorithms that alternate between support-estimate consensus and signal-estimate update. The latter step is based on reweighted Lq iterations, where q can be 1 or 2. We numerically compare the proposed decentralized algorithms with existing centralized and decentralized algorithms. Simulation results demonstrate that the proposed decentralized approaches have strong recovery performance and converge reasonably fast.

Item: Dynamic Compressive Spectrum Sensing for Cognitive Radio Networks (2011-01)
Yin, Wotao; Wen, Zaiwen; Li, Shuyi; Meng, Jia (Jasmine); Han, Zhu
In recently proposed collaborative compressive sensing, the cognitive radios (CRs) sense the occupied spectrum channels by measuring linear combinations of channel powers, instead of sweeping a set of channels sequentially.
The measurements are reported to the fusion center, where the occupied channels are recovered by compressive sensing algorithms. In this paper, we study a method of dynamic compressive sensing, which continuously measures channel powers and recovers the occupied channels in a dynamic environment. While standard compressive sensing algorithms must recover multiple occupied channels, a dynamic algorithm only needs to recover the most recent change, which is either a newly occupied channel or a released one. On the other hand, the dynamic algorithm must recover the change just in time. Therefore, we propose a least-squares based algorithm, which is equivalent to l0 minimization. We demonstrate its fast speed and robustness to noise. Simulation results demonstrate the effectiveness of the proposed scheme.

Item: An Efficient Gauss-Newton Algorithm for Symmetric Low-Rank Product Matrix Approximations (Society for Industrial and Applied Mathematics, 2015)
Liu, Xin; Wen, Zaiwen; Zhang, Yin
We derive and study a Gauss-Newton method for computing a symmetric low-rank product $XX^{T}$, where $X \in \mathbb{R}^{n\times k}$ for $k < n$ […]

Item: Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions (2012-03)
Liu, Xin; Wen, Zaiwen; Zhang, Yin
In many data-intensive applications, the use of principal component analysis (PCA) and other related techniques is ubiquitous for dimension reduction, data mining, or other transformational purposes. Such transformations often require efficiently, reliably, and accurately computing dominant singular value decompositions (SVDs) of large unstructured matrices. In this paper, we propose and study a subspace optimization technique to significantly accelerate the classic simultaneous iteration method. We analyze the convergence of the proposed algorithm and numerically compare it with several state-of-the-art SVD solvers under the MATLAB environment.
Extensive computational results show that on a wide range of large unstructured matrices, the proposed algorithm can often provide improved efficiency or robustness over existing algorithms.

Item: Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions (SIAM, 2013)
Liu, Xin; Wen, Zaiwen; Zhang, Yin
In many data-intensive applications, the use of principal component analysis and other related techniques is ubiquitous for dimension reduction, data mining, or other transformational purposes. Such transformations often require efficiently, reliably, and accurately computing dominant singular value decompositions (SVDs) of large and dense matrices. In this paper, we propose and study a subspace optimization technique for significantly accelerating the classic simultaneous iteration method. We analyze the convergence of the proposed algorithm and numerically compare it with several state-of-the-art SVD solvers under the MATLAB environment. Extensive computational results show that on a wide range of large unstructured dense matrices, the proposed algorithm can often provide improved efficiency or robustness over existing algorithms.

Item: Solving a Low-Rank Factorization Model for Matrix Completion by a Non-linear Successive Over-Relaxation Algorithm (2010-03)
Wen, Zaiwen; Yin, Wotao; Zhang, Yin
The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization, which requires computing singular value decompositions -- a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity for solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least-squares problem per iteration. Convergence of this nonlinear SOR algorithm is analyzed.
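The per-iteration least-squares structure of this factorization model can be sketched as plain alternating least squares on a completed matrix Z. This omits the paper's over-relaxation weighting (it corresponds to relaxation weight 1), and the problem sizes and sampling rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 40, 30, 2
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k target
mask = rng.random((m, n)) < 0.6                                # sampled entries

X = rng.standard_normal((m, k))
Y = rng.standard_normal((k, n))
Z = np.where(mask, M, 0.0)             # observed entries of M, zeros elsewhere
for _ in range(200):
    # each half-step is one small linear least-squares solve
    X = np.linalg.lstsq(Y.T, Z.T, rcond=None)[0].T   # fit X to Z given Y
    Y = np.linalg.lstsq(X, Z, rcond=None)[0]         # fit Y to Z given X
    Z = np.where(mask, M, X @ Y)       # enforce agreement on observed entries
rel_err = np.linalg.norm(X @ Y - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.3e}")
```

No SVD is ever computed: each iteration costs two k-column least-squares solves plus a matrix product, which is what makes this approach attractive at large scale compared to nuclear-norm solvers.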
Numerical results show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than nuclear-norm minimization algorithms.

Item: Trace-Penalty Minimization for Large-scale Eigenspace Computation (2013-02)
Wen, Zaiwen; Yang, Chao; Liu, Xin; Zhang, Yin
The Rayleigh-Ritz (RR) procedure, including orthogonalization, constitutes a major bottleneck in computing relatively high-dimensional eigenspaces of large sparse matrices. Although the operations involved in RR steps can be parallelized to an extent, their parallel scalability, limited by some inherent sequentiality, is lower than that of dense matrix-matrix multiplications. The primary motivation of this paper is to develop a methodology that reduces the use of the RR procedure in exchange for matrix-matrix multiplications. We propose an unconstrained penalty model and establish its equivalence to the eigenvalue problem. This model enables us to deploy gradient-type algorithms heavily dominated by dense matrix-matrix multiplications. Although the proposed algorithm does not necessarily reduce the total number of arithmetic operations, it leverages highly optimized operations on modern high-performance computers to achieve parallel scalability. Numerical results based on a preliminary implementation, parallelized using OpenMP, show that our approach is promising.

Item: Trust, But Verify: Fast and Accurate Signal Recovery from 1-bit Compressive Measurements (2010-11)
Laska, Jason N.; Wen, Zaiwen; Yin, Wotao; Baraniuk, Richard G.
The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit.
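The single-bit measurement model is simple to state in code; the dimensions and sparsity below are arbitrary, and the recovery step (the RSS algorithm) is not sketched here. Note that the bits are invariant to positive scaling of the signal, which is why 1-bit CS can recover signals only up to amplitude, conventionally normalized to the unit sphere.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 200, 120, 5                    # dimension, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                   # amplitude is lost; fix ||x||_2 = 1

Phi = rng.standard_normal((m, n))        # random measurement matrix
y = np.sign(Phi @ x)                     # each measurement is a single bit
print(sorted(set(y)))                    # bit values only, no magnitudes
```

In hardware terms, each measurement reduces to a sign comparator on the analog value Phi @ x, in place of a multi-bit analog-to-digital conversion.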
This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross non-linearities applied to the measurements. In this paper we introduce a new algorithm -- restricted-step shrinkage (RSS) -- to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves a higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for non-convex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest.
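Returning to the item "Trace-Penalty Minimization for Large-scale Eigenspace Computation" above: the unconstrained penalty model it describes is commonly written as f(X) = (1/2) tr(X'AX) + (mu/4) ||X'X - I||_F^2, which favors the eigenspace of the smallest eigenvalues of A while penalizing loss of orthonormality. The sketch below runs plain gradient descent on this model; the penalty parameter mu, the step size, and the test matrix are arbitrary choices, and the paper's actual algorithms are more sophisticated. Every operation is a dense matrix-matrix product, which is the point of the approach.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, mu, step = 60, 4, 10.0, 1e-2
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                       # symmetric test matrix

def f(X):
    """Penalty model: trace term plus orthogonality penalty."""
    P = X.T @ X - np.eye(k)
    return 0.5 * np.trace(X.T @ A @ X) + 0.25 * mu * np.linalg.norm(P) ** 2

def grad(X):
    # gradient of f: A X + mu X (X'X - I); matrix-matrix products only
    return A @ X + mu * X @ (X.T @ X - np.eye(k))

X = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal start
f0 = f(X)
for _ in range(500):
    X -= step * grad(X)                 # no orthogonalization, no RR step
print(f(X) < f0)                        # objective decreased
```

No Rayleigh-Ritz projection or re-orthogonalization appears inside the loop; in the paper's framework, RR is invoked only sparingly to extract eigenpairs from the computed (approximately orthonormal) basis.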