An Adaptive Riemannian Gradient Method Without Function Evaluations

In this paper we propose an adaptive gradient method for optimization on Riemannian manifolds. The update rule for the stepsizes relies only on gradient evaluations. Assuming that the objective function is bounded from below and that its gradient field is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations that the …
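For a rough picture of the setting (not the stepsize rule proposed in the paper, which is not shown in this excerpt), the sketch below performs Riemannian gradient steps on the unit sphere: the Riemannian gradient is the tangent-space projection of the Euclidean gradient, and a retraction maps the updated point back onto the manifold. The test function, the fixed stepsize t, and all names here are illustrative placeholders.

```python
import numpy as np

def sphere_rayleigh_grad(A, x):
    """Euclidean gradient of the toy objective f(x) = x^T A x."""
    return 2.0 * A @ x

def riemannian_grad(x, egrad):
    """Project the Euclidean gradient onto the tangent space of the sphere at x."""
    return egrad - (x @ egrad) * x

def retract(x, v):
    """Retraction onto the unit sphere: move along v, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)                  # symmetric test matrix
x = rng.standard_normal(n)
x /= np.linalg.norm(x)               # start on the manifold

t = 0.1                              # placeholder stepsize, NOT the adaptive rule of the paper
for _ in range(50):
    g = riemannian_grad(x, sphere_rayleigh_grad(A, x))
    x = retract(x, -t * g)

print("final Riemannian gradient norm:",
      np.linalg.norm(riemannian_grad(x, sphere_rayleigh_grad(A, x))))
```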

Adaptive Third-Order Methods for Composite Convex Optimization

In this paper we propose third-order methods for composite convex optimization problems in which the smooth part is a three-times continuously differentiable function with Lipschitz continuous third-order derivatives. The methods are adaptive in the sense that they do not require the knowledge of the Lipschitz constant. Trial points are computed by the inexact minimization of …
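For context, one common way such trial points are defined in the tensor-methods literature (the exact model minimized in the paper is cut off in this excerpt, and the scaling of the regularization term follows one convention among several) is as inexact minimizers of a regularized third-order Taylor model of the smooth part $f$ plus the nonsmooth term $h$ of the composite objective $F = f + h$:

$$ x_{k+1} \approx \arg\min_{y}\; f(x_k) + \sum_{i=1}^{3}\frac{1}{i!}\,D^{i}f(x_k)[y-x_k]^{i} + \frac{M_k}{4!}\,\|y-x_k\|^{4} + h(y), $$

where the parameter $M_k$ is adjusted adaptively in place of the unknown Lipschitz constant.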

An Adaptive Trust-Region Method Without Function Evaluations

In this paper we propose an adaptive trust-region method for smooth unconstrained optimization. The update rule for the trust-region radius relies only on gradient evaluations. Assuming that the gradient of the objective function is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations required by the proposed method to generate approximate …
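For reference, at each iteration a trust-region method computes a trial step by (approximately) solving the standard subproblem below; what distinguishes the method above is that the radius $\Delta_k$ is updated from gradient information alone, a rule not reproduced in this excerpt:

$$ \min_{s\in\mathbb{R}^n}\; m_k(s) = f(x_k) + \nabla f(x_k)^{T}s + \tfrac{1}{2}\,s^{T}B_k s \quad \text{subject to } \|s\|\le \Delta_k, $$

with $B_k$ a symmetric model Hessian.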

Quadratic Regularization Methods with Finite-Difference Gradient Approximations

This paper presents two quadratic regularization methods with finite-difference gradient approximations for smooth unconstrained optimization problems. One method is based on forward finite-difference gradients, while the other is based on central finite-difference gradients. In both methods, the accuracy of the gradient approximations and the regularization parameter in the quadratic models are jointly adjusted using a …
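The two gradient approximations referred to above are the standard forward- and central-difference formulas. The sketch below evaluates them on a toy function; the fixed stepsizes h are only illustrative, whereas the paper adjusts the accuracy jointly with the regularization parameter.

```python
import numpy as np

def forward_diff_grad(f, x, h=1e-8):
    """Forward differences: one extra function evaluation per coordinate, O(h) error."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def central_diff_grad(f, x, h=1e-5):
    """Central differences: two function evaluations per coordinate, O(h^2) error."""
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

f = lambda x: np.sum(x**2) + np.sin(x[0])   # toy smooth objective
x0 = np.array([0.5, -1.0])
print(forward_diff_grad(f, x0))
print(central_diff_grad(f, x0))
```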

A Cubic Regularization of Newton’s Method with Finite-Difference Hessian Approximations

In this paper, we present a version of the Cubic Regularization of Newton’s method for unconstrained nonconvex optimization, in which the Hessian matrices are approximated by forward finite-difference Hessians. The regularization parameter of the cubic models and the accuracy of the Hessian approximations are jointly adjusted using a nonmonotone line-search criterion. Assuming that the …
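A minimal sketch of a forward finite-difference Hessian built column by column from gradient evaluations, the kind of approximation the method above relies on; the fixed stepsize h and the symmetrization are standard choices here, not the paper's coupled adjustment rule based on the nonmonotone line search.

```python
import numpy as np

def fd_hessian(grad, x, h=1e-6):
    """Approximate the Hessian via forward differences of the gradient, then symmetrize."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h     # j-th Hessian column
    return 0.5 * (H + H.T)                   # enforce symmetry of the approximation

# toy gradient of f(x) = x0^2 + sin(x0) + x1^4
grad = lambda x: np.array([2.0 * x[0] + np.cos(x[0]), 4.0 * x[1]**3])
print(fd_hessian(grad, np.array([0.3, 1.0])))
```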

Worst-case evaluation complexity of derivative-free nonmonotone line search methods for solving nonlinear systems of equations

In this paper we study a class of derivative-free nonmonotone line search methods for solving nonlinear systems of equations, which includes the method N-DF-SANE proposed in (IMA J. Numer. Anal. 29: 814–825, 2009). These methods correspond to derivative-free optimization methods applied to the minimization of a suitable merit function. Assuming that the mapping defining the …
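The merit function typically used in this setting (and in the DF-SANE family) is the squared residual norm, so that solving the system amounts to driving the merit function to zero; whether the paper uses exactly this merit function is not visible in the excerpt:

$$ f(y) \;=\; \|F(y)\|^{2}, \qquad f(y) = 0 \iff F(y) = 0 . $$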

A Generalized Worst-Case Complexity Analysis for Non-Monotone Line Searches

We study the worst-case complexity of a non-monotone line search framework that covers a wide variety of known techniques published in the literature. In this framework, the non-monotonicity is controlled by a sequence of nonnegative parameters. We obtain complexity bounds to achieve approximate first-order optimality even when this sequence is not summable.
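As one concrete instance of such a framework (a sketch assuming a Li–Fukushima-style acceptance test, not necessarily the exact condition analyzed in the paper), the nonnegative parameter nu_k below relaxes the usual Armijo sufficient-decrease requirement:

```python
def nonmonotone_backtracking(f, x, d, g, nu_k, c1=1e-4, beta=0.5, alpha0=1.0, max_iter=50):
    """Backtrack until f(x + a*d) <= f(x) + nu_k + c1*a*(g . d), or the budget runs out."""
    fx = f(x)
    slope = c1 * sum(gi * di for gi, di in zip(g, d))   # relaxed descent term
    alpha = alpha0
    for _ in range(max_iter):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        if f(trial) <= fx + nu_k + alpha * slope:        # nonmonotone Armijo-type test
            return alpha
        alpha *= beta                                    # shrink the stepsize
    return alpha

# toy usage on f(x) = (x0 - 1)^2 + 2*x1^2 with a steepest-descent direction
f = lambda x: (x[0] - 1.0)**2 + 2.0 * x[1]**2
x = [3.0, -2.0]
g = [2.0 * (x[0] - 1.0), 4.0 * x[1]]
d = [-gi for gi in g]
print(nonmonotone_backtracking(f, x, d, g, nu_k=0.1))
```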

On Inexact Solution of Auxiliary Problems in Tensor Methods for Convex Optimization

In this paper we study the auxiliary problems that appear in $p$-order tensor methods for unconstrained minimization of convex functions with $\nu$-Hölder continuous $p$th derivatives. This type of auxiliary problem corresponds to the minimization of a $(p+\nu)$-order regularization of the $p$th-order Taylor approximation of the objective. For the case $p=3$, we consider the use …
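Written out, such an auxiliary problem takes the following form, where $\Phi_{x,p}$ denotes the $p$th-order Taylor polynomial of $f$ around $x$ and $H>0$ is a regularization parameter; the scaling of the regularization term follows one common convention and may differ from the one used in the paper:

$$ \min_{y\in\mathbb{R}^n}\;\; \Phi_{x,p}(y) + \frac{H}{p+\nu}\,\|y-x\|^{p+\nu}, \qquad \Phi_{x,p}(y) = f(x) + \sum_{i=1}^{p}\frac{1}{i!}\,D^{i}f(x)[y-x]^{i}. $$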

Tensor Methods for Finding Approximate Stationary Points of Convex Functions

In this paper we consider the problem of finding $\epsilon$-approximate stationary points of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We present tensor methods with and without acceleration. Specifically, we show that the non-accelerated schemes take at most $\mathcal{O}(\epsilon^{-1/(p+\nu-1)})$ iterations to reduce the norm of the gradient of the objective below …
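To make the stated bound concrete: for $p=2$ and $\nu=1$ (Lipschitz continuous Hessians), the non-accelerated rate above instantiates to

$$ \mathcal{O}\!\left(\epsilon^{-1/(p+\nu-1)}\right) = \mathcal{O}\!\left(\epsilon^{-1/2}\right) $$

iterations to drive the gradient norm below $\epsilon$.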

On the Complexity of an Augmented Lagrangian Method for Nonconvex Optimization

In this paper we study the worst-case complexity of an inexact Augmented Lagrangian method for nonconvex constrained problems. Assuming that the penalty parameters are bounded, we prove a complexity bound of $\mathcal{O}(|\log(\epsilon)|)$ outer iterations for the algorithm to generate an $\epsilon$-approximate KKT point, for $\epsilon\in (0,1)$. When the penalty parameters are unbounded, we prove …
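For orientation, for an equality-constrained problem $\min f(x)$ subject to $c(x)=0$, the function minimized inexactly in each outer iteration of such methods is the standard augmented Lagrangian below; the paper's safeguarded multiplier and penalty-parameter updates are not reproduced here:

$$ L_{\rho}(x,\lambda) \;=\; f(x) + \lambda^{T}c(x) + \frac{\rho}{2}\,\|c(x)\|^{2}. $$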