Parallel Sparse Optimization
Abstract
This thesis proposes parallel and distributed algorithms for solving very large-scale sparse optimization problems on computer clusters and clouds. Many problems in modern applications, such as compressive sensing, machine learning, and signal and image processing, involve large-scale data and can be modeled as sparse optimization problems. These problems are so large that they can no longer be processed on single workstations running single-threaded computing approaches; moving to parallel, distributed, or cloud computing becomes a viable option. I propose two approaches for solving these problems. The first is a distributed implementation of a class of efficient proximal-linear methods for solving convex optimization problems, which takes advantage of the separability of the terms in the objective. The second is a parallel greedy coordinate descent method (GRock), which greedily chooses several entries to update in parallel in each iteration. I establish the convergence of GRock and explain why it often performs exceptionally well for sparse optimization. Extensive numerical results on a computer cluster and on Amazon EC2 demonstrate the efficiency and elasticity of my algorithms.
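To illustrate the greedy coordinate descent idea behind GRock described above, here is a minimal single-process sketch, assuming a standard lasso objective min ½‖Ax − b‖² + λ‖x‖₁. The function name `grock_sketch`, the merit rule, and the parameter `P` (number of coordinates updated per iteration) are illustrative assumptions, not the thesis's actual implementation; in particular, the thesis's distributed version, convergence conditions on P, and merit criterion may differ.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grock_sketch(A, b, lam, P=2, iters=300):
    """Illustrative greedy coordinate descent for
    min 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration computes the one-coordinate minimizer for every
    entry, scores the candidate changes, and applies only the P
    highest-scoring updates -- the greedy, parallelizable step.
    (Hypothetical sketch; not the thesis's actual GRock code.)
    """
    m, n = A.shape
    x = np.zeros(n)
    col_norms = (A ** 2).sum(axis=0)          # per-coordinate curvature
    for _ in range(iters):
        r = A @ x - b                         # residual
        g = A.T @ r                           # gradient of the smooth part
        # Candidate exact one-coordinate minimizers (all computable in parallel).
        cand = soft_threshold(x - g / col_norms, lam / col_norms)
        d = cand - x
        # Merit score: size of the proposed change, weighted by curvature
        # (an assumed proxy for the predicted objective decrease).
        merit = np.abs(d) * np.sqrt(col_norms)
        top = np.argsort(merit)[-P:]          # greedily pick the P best entries
        x[top] = cand[top]                    # update only those entries
    return x
```

In a distributed setting, the coordinate scoring naturally partitions across machines; only the selected indices and their updates need to be communicated each round, which is what makes the greedy selection attractive for clusters.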
Citation
Peng, Zhimin. "Parallel Sparse Optimization." (2013) Master’s Thesis, Rice University. https://hdl.handle.net/1911/77447.