An Adaptive Riemannian Gradient Method Without Function Evaluations

In this paper we propose an adaptive gradient method for optimization on Riemannian manifolds. The update rule for the stepsizes relies only on gradient evaluations. Assuming that the objective function is bounded from below and that its gradient field is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations that the …
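
The excerpt does not reproduce the paper's update rule. As a point of reference only, the sketch below shows a well-known stepsize that likewise needs only gradient evaluations, the Barzilai-Borwein rule, in the Euclidean special case (so retractions and transports are trivial); the function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def adaptive_gradient(grad, x0, n_iters=100, t0=1e-2):
    """Gradient descent with a gradient-only adaptive stepsize.

    Minimal Euclidean sketch (R^n with the identity retraction). The
    stepsize is a Barzilai-Borwein-type rule built from gradient
    differences alone, never function values; it illustrates the idea
    of a function-evaluation-free stepsize, not the paper's rule.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    t = t0
    for _ in range(n_iters):
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g      # iterate and gradient differences
        sy = s @ y
        if sy > 1e-12:                   # safeguard: keep the stepsize positive
            t = (s @ s) / sy             # BB1 stepsize: no f evaluations needed
        x, g = x_new, g_new
    return x

# Usage on a toy quadratic f(x) = 0.5 * x^T A x, whose gradient is A x.
A = np.diag([1.0, 10.0, 100.0])
x_star = adaptive_gradient(lambda x: A @ x, np.ones(3))
```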

Two efficient gradient methods with approximately optimal stepsizes based on regularization models for unconstrained optimization

It is widely accepted that the stepsize is of great significance to gradient methods. Two efficient gradient methods with approximately optimal stepsizes, based mainly on regularization models, are proposed for unconstrained optimization. More precisely, if the objective function is not close to a quadratic function on the line segment between the current and latest iterates, …
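
The excerpt cuts off before the models are stated, so as background only: an "approximately optimal stepsize" is commonly obtained by exactly minimizing a simple model of $f$ along $-g_k$. With a quadratic model this gives

$$m_k(\alpha) = f(x_k) - \alpha\,\|g_k\|^2 + \tfrac{\alpha^2}{2}\, g_k^\top B_k g_k, \qquad \alpha_k = \operatorname*{arg\,min}_{\alpha > 0} m_k(\alpha) = \frac{\|g_k\|^2}{g_k^\top B_k g_k},$$

where $g_k = \nabla f(x_k)$ and $B_k$ is a Hessian approximation. Regularization models of the kind named in the title typically augment such a model with a cubic term (e.g., $\tfrac{\sigma_k}{6}\alpha^3\|g_k\|^3$) when $f$ is far from quadratic along the segment; whether these match the paper's exact models is not visible from the excerpt.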

An inexact strategy for the projected gradient algorithm in vector optimization problems on variable ordered spaces

Variable order structures model situations in which the comparison between two points depends on a point-to-cone map. In this paper, an inexact projected gradient method for solving smooth constrained vector optimization problems on variable ordered spaces is presented. It is shown that every accumulation point of the generated sequence satisfies the first-order necessary optimality …
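
For readers new to variable order structures, one common formalization (assuming the usual variable domination setup; the paper's precise definitions are not shown in the excerpt) compares points through a cone that depends on the reference point:

$$u \preceq_x v \;\Longleftrightarrow\; v - u \in K(x),$$

where $K(x)\subseteq\mathbb{R}^m$ is the closed convex cone assigned to $x$ by the point-to-cone map $K$. The constant choice $K(x)\equiv\mathbb{R}^m_+$ recovers the classical componentwise (Pareto) order.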

The Fastest Known Globally Convergent First-Order Method for Minimizing Strongly Convex Functions

We design and analyze a novel gradient-based algorithm for unconstrained convex optimization. When the objective function is $m$-strongly convex and its gradient is $L$-Lipschitz continuous, the iterates and function values converge linearly to the optimum at rates $\rho$ and $\rho^2$, respectively, where $\rho = 1-\sqrt{m/L}$. These are the fastest known guaranteed linear convergence rates for …
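
A worked instance of the claimed rates: for condition number $L/m = 100$,

$$\rho = 1 - \sqrt{m/L} = 1 - \tfrac{1}{10} = 0.9, \qquad \rho^2 = 0.81,$$

so distances to the optimum shrink by a factor $0.9$ per iteration and function-value gaps by $0.81$, whereas plain gradient descent with stepsize $2/(L+m)$ only contracts iterates at rate $(L-m)/(L+m) = 99/101 \approx 0.980$.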

Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data

We consider the problem of minimizing a smooth convex objective function over the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert into our algorithm an inertial term, which is able to take …
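
A generic sketch of how such a scheme can look (function names and parameter schedules are illustrative assumptions, not the paper's choices): each step combines an inertial extrapolation with a gradient step on the penalized objective $f + \beta_k g$, where $\beta_k \to \infty$ so that the penalty eventually enforces membership in $\operatorname{argmin} g$.

```python
import numpy as np

def inertial_gradient_penalty(grad_f, grad_g, x0, n_iters=2000,
                              step0=1e-2, inertia=0.3, beta0=1.0):
    """Generic sketch of an inertial gradient-penalty scheme:

        x_{k+1} = x_k + inertia*(x_k - x_{k-1})
                  - lam_k * (grad_f(x_k) + beta_k * grad_g(x_k)),

    with beta_k increasing and lam_k decreasing so that lam_k * beta_k
    stays bounded. The schedules below are illustrative only.
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        beta_k = beta0 * (k + 1)        # growing penalty parameter
        lam_k = step0 / (k + 1)         # vanishing stepsize; lam_k*beta_k is constant
        y = x + inertia * (x - x_prev)  # inertial extrapolation
        x_prev, x = x, y - lam_k * (grad_f(x) + beta_k * grad_g(x))
    return x

# Usage: minimize f(x) = ||x||^2 over argmin g with g(x) = 0.5*(x1 + x2 - 1)^2;
# the solution is (0.5, 0.5).
sol = inertial_gradient_penalty(lambda x: 2 * x,
                                lambda x: (x.sum() - 1) * np.ones(2),
                                np.zeros(2))
```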

On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions

We consider the gradient (or steepest) descent method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also extend the result to a noisy …
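
On a strongly convex quadratic, the exact line-search stepsize is available in closed form, which makes the scheme easy to reproduce; a minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def steepest_descent_exact_ls(A, b, x0, n_iters=50):
    """Steepest descent with exact line search on the convex quadratic
    f(x) = 0.5 * x'Ax - b'x, with A symmetric positive definite.

    Along the direction -g, the exact minimizer is alpha = g'g / g'Ag.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = A @ x - b                    # gradient of the quadratic
        if g @ g < 1e-20:                # already at the minimizer
            break
        alpha = (g @ g) / (g @ (A @ g))  # exact line-search stepsize
        x = x - alpha * g
    return x

# The classical rate on quadratics is ((L-m)/(L+m))^2 per step in
# function value, the worst case being attained by a quadratic.
A = np.diag([1.0, 10.0])
x_star = steepest_descent_exact_ls(A, np.array([1.0, 1.0]), np.zeros(2))
```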

A First Order Method for Finding Minimal Norm-Like Solutions of Convex Optimization Problems

We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set which is itself the optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, and establish …
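
A concrete member of this problem class, illustrating the setting rather than the paper's minimal norm gradient method: among all least-squares solutions $x \in \operatorname{argmin} \|Ax-b\|^2$, find the one of minimal Euclidean norm. For this instance the answer is the pseudoinverse solution, a useful sanity check for any solver.

```python
import numpy as np

# Rank-deficient system, so argmin ||Ax - b||^2 is an affine set
# rather than a single point; the inner objective is ||x||^2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
b = np.array([1.0, 2.0])

x_min_norm = np.linalg.pinv(A) @ b  # minimal-norm element of the solution set
```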