TRFD: A derivative-free trust-region method based on finite differences for composite nonsmooth optimization

In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form \(f(x)=h(F(x))\), where \(F\) is a black-box function assumed to have a Lipschitz continuous Jacobian, and \(h\) is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of \(F\) via forward …
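As a minimal sketch of the key building block, here is a forward finite-difference Jacobian approximation in the spirit described above; the function name, the fixed stepsize \(h\), and the NumPy setting are our own illustrative assumptions, not the paper's notation.

```python
import numpy as np

def forward_fd_jacobian(F, x, h=1e-8):
    """Approximate the Jacobian of a black-box F at x by forward differences.

    Column j is (F(x + h*e_j) - F(x)) / h, so building the full
    Jacobian costs n extra evaluations of F on top of F(x).
    """
    Fx = np.asarray(F(x), dtype=float)
    n = x.size
    J = np.empty((Fx.size, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        J[:, j] = (np.asarray(F(x + step), dtype=float) - Fx) / h
    return J
```

In a trust-region framework of this kind, such a \(J\) would typically feed a convex model \(h(F(x)+Js)\) minimized over a ball of radius \(\Delta\).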

Worst-case evaluation complexity of a derivative-free quadratic regularization method

This short paper presents a derivative-free quadratic regularization method for unconstrained minimization of a smooth function with Lipschitz continuous gradient. At each iteration, trial points are computed by minimizing a quadratic regularization of a local model of the objective function. The models are based on forward finite-difference gradient approximations. By using a suitable acceptance condition …
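Schematically, with a forward-difference gradient the trial point of a quadratic regularization step has a closed form; the names below and the fixed stepsize \(h\) are illustrative assumptions, and the paper's acceptance condition and parameter updates are omitted.

```python
import numpy as np

def forward_fd_grad(f, x, h=1e-8):
    # Forward-difference gradient: n extra evaluations of f beyond f(x).
    fx = f(x)
    g = np.empty_like(x, dtype=float)
    for i in range(x.size):
        step = np.zeros_like(x, dtype=float)
        step[i] = h
        g[i] = (f(x + step) - fx) / h
    return g

def trial_point(f, x, sigma, h=1e-8):
    # Minimizer of the model f(x) + g^T s + (sigma/2)||s||^2 is s = -g/sigma.
    return x - forward_fd_grad(f, x, h) / sigma
```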

An Adaptive Riemannian Gradient Method Without Function Evaluations

In this paper we propose an adaptive gradient method for optimization on Riemannian manifolds. The update rule for the stepsizes relies only on gradient evaluations. Assuming that the objective function is bounded from below and that its gradient field is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations that the …
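To fix ideas, one gradient-only step on a manifold might look as follows; `grad` and `retract` are assumed callables (a Riemannian gradient and a retraction), and the Lipschitz-style stepsize `1/L_est` is a placeholder for the paper's adaptive rule, which likewise uses no function values.

```python
def gradient_step(grad, retract, x, L_est):
    """One illustrative Riemannian gradient step with stepsize 1/L_est.

    grad(x) returns the Riemannian gradient at x; retract(x, v) maps
    the tangent vector v at x back onto the manifold. Adapting L_est
    from successive gradients would require a vector transport, which
    this sketch omits.
    """
    return retract(x, -grad(x) / L_est)
```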

Adaptive Third-Order Methods for Composite Convex Optimization

In this paper we propose third-order methods for composite convex optimization problems in which the smooth part is a three-times continuously differentiable function with Lipschitz continuous third-order derivatives. The methods are adaptive in the sense that they do not require the knowledge of the Lipschitz constant. Trial points are computed by the inexact minimization of …
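For orientation, trial points in methods of this type are usually inexact minimizers of a fourth-order regularization of the third-order Taylor expansion of the smooth part; a schematic form (our notation, with \(\psi\) the convex, possibly nonsmooth part and \(M_k\) the adaptively adjusted parameter) is
\[
x_{k+1} \approx \arg\min_{s}\; f(x_k) + \nabla f(x_k)^{T}s + \tfrac{1}{2}\nabla^{2}f(x_k)[s]^{2} + \tfrac{1}{6}\nabla^{3}f(x_k)[s]^{3} + \tfrac{M_k}{4!}\|s\|^{4} + \psi(x_k+s).
\]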

An Adaptive Trust-Region Method Without Function Evaluations

In this paper we propose an adaptive trust-region method for smooth unconstrained optimization. The update rule for the trust-region radius relies only on gradient evaluations. Assuming that the gradient of the objective function is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations required by the proposed method to generate approximate …
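The trust-region subproblem itself is the classical one, \(\min_s\, g^{T}s + \tfrac{1}{2}s^{T}Bs\) subject to \(\|s\|\le\Delta\); only the radius update, omitted here, avoids function evaluations. A sketch of the standard Cauchy step for that subproblem, with our own names:

```python
import numpy as np

def cauchy_step(g, B, delta):
    # Minimize g^T s + 0.5 s^T B s along -g within the ball ||s|| <= delta,
    # assuming g is nonzero.
    gnorm = np.linalg.norm(g)
    gBg = g @ (B @ g)
    tau = 1.0 if gBg <= 0 else min(gnorm**3 / (delta * gBg), 1.0)
    return -(tau * delta / gnorm) * g
```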

Quadratic Regularization Methods with Finite-Difference Gradient Approximations

This paper presents two quadratic regularization methods with finite-difference gradient approximations for smooth unconstrained optimization problems. One method is based on forward finite-difference gradients, while the other is based on central finite-difference gradients. In both methods, the accuracy of the gradient approximations and the regularization parameter in the quadratic models are jointly adjusted using a …
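The two approximations trade cost against accuracy: forward differences need \(n\) extra function evaluations and are \(O(h)\) accurate, while central differences need \(2n\) evaluations and are \(O(h^2)\) accurate. A minimal sketch (names are ours):

```python
import numpy as np

def grad_forward(f, x, h):
    # O(h) accurate; n extra evaluations beyond f(x).
    fx = f(x)
    return np.array([(f(x + h * e) - fx) / h for e in np.eye(x.size)])

def grad_central(f, x, h):
    # O(h^2) accurate; 2n evaluations, and f(x) itself is not needed.
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(x.size)])
```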

A Cubic Regularization of Newton’s Method with Finite-Difference Hessian Approximations

In this paper, we present a version of the Cubic Regularization of Newton’s method for unconstrained nonconvex optimization, in which the Hessian matrices are approximated by forward finite-difference Hessians. The regularization parameter of the cubic models and the accuracy of the Hessian approximations are jointly adjusted using a nonmonotone line-search criterion. Assuming that the …
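A minimal sketch of a forward finite-difference Hessian built from gradient calls; the helper name, the stepsize, and the final symmetrization are our assumptions, not necessarily the paper's choices.

```python
import numpy as np

def forward_fd_hessian(grad, x, h=1e-6):
    # Column j approximates the j-th Hessian column from gradient differences.
    g0 = grad(x)
    n = x.size
    H = np.empty((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        H[:, j] = (grad(x + step) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize the generally nonsymmetric approximation
```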

Worst-case evaluation complexity of derivative-free nonmonotone line search methods for solving nonlinear systems of equations

In this paper we study a class of derivative-free nonmonotone line search methods for solving nonlinear systems of equations, which includes the method N-DF-SANE proposed in (IMA J. Numer. Anal. 29: 814–825, 2009). These methods correspond to derivative-free optimization methods applied to the minimization of a suitable merit function. Assuming that the mapping defining the …
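Concretely, the merit function for \(F(x)=0\) is usually \(f(x)=\tfrac12\|F(x)\|^2\), and a DF-SANE-style nonmonotone test compares the trial value against the maximum of recent merit values, relaxed by a summable forcing term. The sketch below uses our own names and follows that pattern; it is not necessarily the exact condition analyzed in the paper.

```python
import numpy as np

def merit(F, x):
    # Standard least-squares merit function for the system F(x) = 0.
    r = np.asarray(F(x), dtype=float)
    return 0.5 * float(r @ r)

def accept(f_trial, recent_f, eta_k, gamma, alpha, f_k):
    # Nonmonotone test: allow the trial value to exceed the max of recent
    # merit values by eta_k, minus a decrease proportional to alpha^2.
    return f_trial <= max(recent_f) + eta_k - gamma * alpha**2 * f_k
```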

A Generalized Worst-Case Complexity Analysis for Non-Monotone Line Searches

We study the worst-case complexity of a non-monotone line search framework that covers a wide variety of known techniques published in the literature. In this framework, the non-monotonicity is controlled by a sequence of nonnegative parameters. We obtain complexity bounds to achieve approximate first-order optimality even when this sequence is not summable.
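Schematically, such frameworks accept a trial point when its value does not exceed the current one by more than a slack \(\nu_k \ge 0\), after accounting for a sufficient-decrease term; the sketch below is our paraphrase of this pattern, not a quote from the paper.

```python
def accepts(f_trial, f_k, nu_k, decrease_k):
    # nu_k >= 0 controls the non-monotonicity; the complexity bounds
    # discussed above cover the case where the nu_k are not summable.
    return f_trial <= f_k + nu_k - decrease_k
```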

On Inexact Solution of Auxiliary Problems in Tensor Methods for Convex Optimization

In this paper we study the auxiliary problems that appear in \(p\)-order tensor methods for unconstrained minimization of convex functions with \(\nu\)-Hölder continuous \(p\)th derivatives. This type of auxiliary problem corresponds to the minimization of a \((p+\nu)\)-order regularization of the \(p\)th-order Taylor approximation of the objective. For the case \(p=3\), we consider the use …
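In common notation, the auxiliary problem in question reads
\[
\min_{s\in\mathbb{R}^n}\; T_p(x;s) + \frac{M}{p+\nu}\,\|s\|^{p+\nu},
\]
where \(T_p(x;s)\) is the \(p\)th-order Taylor polynomial of the objective around \(x\) and \(M>0\) is the regularization parameter; the normalization of the last term varies across the literature, so take this display as schematic rather than as the paper's exact formulation.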