Parallel Sparse Optimization
dc.contributor.advisor | Yin, Wotao | en_US |
dc.contributor.committeeMember | Zhang, Yin | en_US |
dc.contributor.committeeMember | Baraniuk, Richard G. | en_US |
dc.creator | Peng, Zhimin | en_US |
dc.date.accessioned | 2014-10-08T14:57:12Z | en_US |
dc.date.available | 2014-10-08T14:57:12Z | en_US |
dc.date.created | 2013-12 | en_US |
dc.date.issued | 2013-08-27 | en_US |
dc.date.submitted | December 2013 | en_US |
dc.date.updated | 2014-10-08T14:57:12Z | en_US |
dc.description.abstract | This thesis proposes parallel and distributed algorithms for solving very large-scale sparse optimization problems on computer clusters and clouds. Many modern application problems from compressive sensing, machine learning, and signal and image processing involve large-scale data and can be modeled as sparse optimization problems. These problems are at such a large scale that they can no longer be processed on single workstations running single-threaded computing approaches; moving to parallel, distributed, or cloud computing becomes a viable option. I propose two approaches for solving these problems. The first is a distributed implementation of a class of efficient proximal linear methods for solving convex optimization problems, which takes advantage of the separability of the terms in the objective. The second is a parallel greedy coordinate descent method (GRock), which greedily chooses several entries to update in parallel in each iteration. I establish the convergence of GRock and explain why it often performs exceptionally well for sparse optimization. Extensive numerical results on a computer cluster and Amazon EC2 demonstrate the efficiency and elasticity of my algorithms. | en_US |
dc.format.mimetype | application/pdf | en_US |
dc.identifier.citation | Peng, Zhimin. "Parallel Sparse Optimization." (2013) Master’s Thesis, Rice University. <a href="https://hdl.handle.net/1911/77447">https://hdl.handle.net/1911/77447</a>. | en_US |
dc.identifier.uri | https://hdl.handle.net/1911/77447 | en_US |
dc.language.iso | eng | en_US |
dc.rights | Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder. | en_US |
dc.subject | Sparse optimization | en_US |
dc.subject | Parallel computing | en_US |
dc.subject | Distributed computing | en_US |
dc.subject | Prox-linear methods | en_US |
dc.subject | GRock | en_US |
dc.subject | Applied Math | en_US |
dc.title | Parallel Sparse Optimization | en_US |
dc.type | Thesis | en_US |
dc.type.material | Text | en_US |
thesis.degree.department | Computational and Applied Mathematics | en_US |
thesis.degree.discipline | Engineering | en_US |
thesis.degree.grantor | Rice University | en_US |
thesis.degree.level | Masters | en_US |
thesis.degree.name | Master of Arts | en_US |