Gradient Methods for Convex Minimization: Better Rates Under Weaker Conditions
Date
2013
Authors
Zhang, Hui; Yin, Wotao
Abstract
The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly assumed ones. We relax the common gradient Lipschitz-continuity condition and strong convexity condition to ones that hold only over certain line segments. Specifically, we establish complexities O(R/eps) and O(sqrt(R/eps)) for the ordinary and accelerated gradient methods, respectively, assuming that ∇f is Lipschitz continuous with constant R over the line segment joining x and x - ∇f(x)/R for each x in the domain of f. We then improve these to O((R/ν) log(1/eps)) and O(sqrt(R/ν) log(1/eps)) for functions f that also satisfy the secant inequality (∇f(x))^T(x - x*) ≥ ν||x - x*||^2 for each x in the domain of f and its projection x* onto the minimizer set of f. The secant condition is also shown to be necessary for the geometric decay of the solution error. Not only are the relaxed conditions met by more functions, but the restrictions also give a smaller R and a larger ν than their unrestricted counterparts, and thus lead to better complexity bounds. We apply these results to sparse optimization and demonstrate a faster algorithm.
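For illustration, the following is a minimal sketch of the ordinary gradient method referred to in the abstract, with iteration x_{k+1} = x_k - ∇f(x_k)/R. The least-squares objective, its random data, and the use of the global Lipschitz constant as a stand-in for the restricted constant R are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gradient_descent(grad_f, x0, R, num_iters=1000):
    """Ordinary gradient method: x_{k+1} = x_k - (1/R) * grad_f(x_k).

    Here R plays the role of the (restricted) Lipschitz constant of grad_f
    over the segments [x, x - grad_f(x)/R]; under the abstract's conditions
    this iteration attains the stated O(R/eps) complexity.
    """
    x = x0.copy()
    for _ in range(num_iters):
        x = x - grad_f(x) / R
    return x

# Hypothetical example: f(x) = 0.5 * ||A x - b||^2 with gradient A^T (A x - b).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)
    grad_f = lambda x: A.T @ (A @ x - b)
    # Global Lipschitz constant ||A^T A||_2, an upper bound on the restricted R.
    R = np.linalg.norm(A.T @ A, 2)
    x_hat = gradient_descent(grad_f, np.zeros(10), R, num_iters=2000)
    print("residual norm:", np.linalg.norm(A @ x_hat - b))
```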
Citation
Zhang, Hui and Yin, Wotao. "Gradient Methods for Convex Minimization: Better Rates Under Weaker Conditions." (2013) https://hdl.handle.net/1911/102216.