An Accelerated Minimal Gradient Method with Momentum for Convex Quadratic Optimization

In this article we address the problem of minimizing a strictly convex quadratic function using a novel iterative method. The new algorithm is based on Nesterov's well-known accelerated gradient method. At each iteration of our scheme, the new point is computed by performing a line search along a search direction given by a linear … Read more
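
The abstract is truncated before the search direction is specified, so the following sketch is a generic illustration only: a gradient-plus-momentum direction with an exact line search for a strictly convex quadratic $f(x)=\tfrac12 x^TAx-b^Tx$. The fixed momentum weight and the descent safeguard are assumptions, not the authors' scheme.

```python
import numpy as np

def momentum_linesearch_qp(A, b, x0, tol=1e-8, max_iter=1000):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)
    by stepping along a momentum-augmented direction with exact line search.
    A generic illustration, not the method proposed in the paper."""
    x = np.asarray(x0, float).copy()
    x_prev = x.copy()
    beta = 0.9                        # assumed fixed momentum weight
    for _ in range(max_iter):
        g = A @ x - b                 # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * (x - x_prev)  # direction: gradient plus momentum
        gd = g @ d
        if gd >= 0:                   # safeguard: fall back to steepest descent
            d, gd = -g, -(g @ g)
        alpha = -gd / (d @ A @ d)     # exact minimizer of f along d
        x_prev, x = x, x + alpha * d
    return x
```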

The Impact of Noise on Evaluation Complexity: The Deterministic Trust-Region Case

Intrinsic noise in evaluations of the objective function and its derivatives may cause premature termination of optimization algorithms. Evaluation complexity bounds that take this situation into account are presented in the framework of a deterministic trust-region method. The results show that the presence of intrinsic noise may dominate these bounds, in contrast with what is known for methods in … Read more
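
To see why noise forces a different stopping rule, here is a bare-bones deterministic trust-region loop with Cauchy-point steps whose termination test compares the measured gradient norm against an assumed noise bound; the name noise_g and the factor 2 are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def noisy_trust_region(f, g, x0, eps_g=1e-6, noise_g=0.0, max_iter=200):
    """Basic trust-region method with a Cauchy (steepest-descent) step.
    f and g return noisy function and gradient values; noise_g is an assumed
    bound on the gradient noise. Once ||g(x)|| falls near noise_g, decrease
    can no longer be certified, which is the premature-termination issue."""
    x, delta = np.asarray(x0, float), 1.0
    for _ in range(max_iter):
        gx = g(x)
        gnorm = np.linalg.norm(gx)
        if gnorm <= max(eps_g, 2.0 * noise_g):  # noise-aware stopping test
            break
        s = -(delta / gnorm) * gx               # Cauchy step on the linear model
        pred = delta * gnorm                    # predicted decrease of the model
        ared = f(x) - f(x + s)                  # actual (noisy) decrease
        if ared / pred >= 0.1:                  # accept step, enlarge radius
            x = x + s
            delta = min(2.0 * delta, 100.0)
        else:                                   # reject step, shrink radius
            delta *= 0.5
    return x
```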

Hölder Gradient Descent and Adaptive Regularization Methods in Banach Spaces for First-Order Points

This paper considers the optimization of smooth nonconvex functionals in smooth infinite-dimensional spaces. A Hölder gradient descent algorithm is first proposed for finding approximate first-order points of regularized polynomial functionals. This method is then applied to analyze the evaluation complexity of an adaptive regularization method which searches for approximate first-order points of functionals with $\beta$-Hölder … Read more
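
For intuition in the finite-dimensional Euclidean setting (the paper itself works in Banach spaces): if the gradient is $\beta$-Hölder continuous with constant $L$, the descent-lemma upper bound is minimized by the step length $t = (\|g\|^{1-\beta}/L)^{1/\beta}$, which reduces to the familiar $1/L$ when $\beta = 1$. A minimal sketch under these assumptions:

```python
import numpy as np

def holder_gradient_descent(grad, x0, L, beta, tol=1e-8, max_iter=10000):
    """Gradient descent for a beta-Hoelder continuous gradient:
    ||grad(x) - grad(y)|| <= L ||x - y||**beta, with 0 < beta <= 1.
    The step length minimizes the Hoelder descent-lemma upper bound
    f(x) - t*||g||^2 + L*t^(1+beta)*||g||^(1+beta)/(1+beta).
    A finite-dimensional illustration, not the paper's Banach-space method."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        t = (gnorm ** (1.0 - beta) / L) ** (1.0 / beta)  # optimal model step
        x = x - t * g
    return x
```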

Scalable adaptive cubic regularization methods

Adaptive cubic regularization (ARC) methods for unconstrained optimization compute steps from linear systems involving a shifted Hessian, in the spirit of the Levenberg-Marquardt and trust-region methods. The standard approach consists in performing an iterative search for the shift, akin to solving the secular equation in trust-region methods. Such a search requires computing the Cholesky factorization of … Read more
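
The shift search described above can be sketched as scalar root finding: the subproblem's first-order condition is $(H+\lambda I)s=-g$ with $\lambda=\sigma\|s\|$, and $\lambda \mapsto \lambda-\sigma\|s(\lambda)\|$ is monotone, so bisection works. A small dense-matrix sketch (the so-called hard case is not handled) of the standard approach the paper aims to replace:

```python
import numpy as np

def arc_step_via_shift(H, g, sigma, tol=1e-10, max_iter=200):
    """Solve the ARC subproblem min_s g^T s + 0.5 s^T H s + (sigma/3)||s||^3
    via its optimality condition (H + lam*I) s = -g with lam = sigma*||s||.
    Dense factorizations and bisection on the shift; a small-scale sketch of
    the standard approach, ignoring the hard case where g is orthogonal to
    the eigenspace of the smallest eigenvalue."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]
    lo = max(0.0, -lam_min) + 1e-12          # shift keeping H + lam*I definite
    phi = lambda lam: lam - sigma * np.linalg.norm(
        np.linalg.solve(H + lam * np.eye(n), -g))
    hi = lo + 1.0
    while phi(hi) < 0:                       # grow bracket until sign change
        hi *= 2.0
    for _ in range(max_iter):                # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return np.linalg.solve(H + 0.5 * (lo + hi) * np.eye(n), -g)
```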

Further developments of methods for traversing regions of non-convexity in optimization problems

This paper continues to address one of its authors' obsessions: the well-known problem of dealing with non-convexity during the minimization of a nonlinear function f(x) by Newton-like methods. It builds on some proposals made by the present authors in “A Comparison of methods for traversing regions of non-convexity in optimization problems” (Numerical Algorithms … Read more

The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization

Minimax optimization has become a central tool for modern machine learning with applications in generative adversarial networks, robust optimization, reinforcement learning, etc. These applications are often nonconvex-nonconcave, but the existing theory is unable to identify and deal with the fundamental difficulties posed by nonconvex-nonconcave structures. In this paper, we study the classic proximal point method … Read more
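
For a concrete instance, on a quadratic minimax objective the proximal point update is a single linear solve, since the saddle operator is linear. The following toy sketch iterates the exact resolvent $z_{k+1}=(I+\eta F)^{-1}z_k$; it illustrates the method studied, not the paper's nonconvex-nonconcave analysis.

```python
import numpy as np

def proximal_point_minimax(A, B, C, z0, eta=0.5, num_iter=100):
    """Proximal point iterations for the quadratic minimax problem
        min_x max_y  0.5 x^T A x + x^T B y - 0.5 y^T C y.
    The saddle operator F(x, y) = (A x + B y, -B^T x + C y) is linear, so
    each prox step solves (I + eta*M) z_{k+1} = z_k exactly."""
    nx, ny = A.shape[0], C.shape[0]
    M = np.block([[A, B], [-B.T, C]])
    K = np.eye(nx + ny) + eta * M       # resolvent system matrix
    z = np.asarray(z0, float)
    for _ in range(num_iter):
        z = np.linalg.solve(K, z)       # z_{k+1} = (I + eta*F)^{-1} z_k
    return z[:nx], z[nx:]
```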

Strong Evaluation Complexity of An Inexact Trust-Region Algorithm for Arbitrary-Order Unconstrained Nonconvex Optimization

A trust-region algorithm using inexact function and derivative values is introduced for solving unconstrained smooth optimization problems. This algorithm uses high-order Taylor models and allows the search for strong approximate minimizers of arbitrary order. The evaluation complexity of finding a $q$-th-order approximate minimizer using this algorithm is then shown, under standard conditions, to be $\mathcal{O}\big(\min_{j\in\{1,\ldots,q\}}\epsilon_j^{-(q+1)}\big)$ … Read more

A Noise-Tolerant Quasi-Newton Method for Unconstrained Optimization

This paper describes an extension of the BFGS and L-BFGS methods for the minimization of a nonlinear function whose evaluations are subject to errors. This work is motivated by applications that contain computational noise, employ low-precision arithmetic, or are subject to statistical noise. The classical BFGS and L-BFGS methods can fail in such circumstances because the updating procedure … Read more
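
One common way the updating procedure is protected against noise is to accept a curvature pair (s, y) only when $y^Ts$ safely dominates the worst-case noise contribution. The sketch below uses this generic skip test with an assumed gradient-noise bound eps_g; it is not necessarily the mechanism proposed in the paper.

```python
import numpy as np

def bfgs_update_noise_tolerant(H, s, y, eps_g=0.0):
    """Apply the inverse-BFGS update
        H <- (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,
    but skip it when the curvature y^T s is not safely positive relative to
    an assumed gradient-noise level eps_g. A generic safeguard for
    illustration; the paper's own noise-control mechanism may differ."""
    ys = y @ s
    # With gradient noise up to eps_g, y^T s is trustworthy only if it
    # dominates the worst-case noise contribution ~ 2*eps_g*||s||.
    if ys <= 2.0 * eps_g * np.linalg.norm(s):
        return H                     # skip: curvature information unreliable
    rho = 1.0 / ys
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```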

A family of optimal weighted conjugate-gradient-type methods for strictly convex quadratic minimization

We introduce a family of weighted conjugate-gradient-type methods, for strictly convex quadratic functions, whose parameters are determined by a minimization model based on a convex combination of the objective function and its gradient norm. This family includes the classical linear conjugate gradient method and the recently published delayed weighted gradient method as the extreme cases … Read more
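
One extreme case named in the abstract, the classical linear conjugate gradient method, in a minimal sketch (the delayed weighted gradient method and the weighted family itself are not reproduced here):

```python
import numpy as np

def linear_cg(A, b, x0, tol=1e-10, max_iter=None):
    """Classical linear conjugate gradient for min 0.5 x^T A x - b^T x with
    A symmetric positive definite; one extreme member of the weighted
    conjugate-gradient-type family described in the abstract."""
    x = np.asarray(x0, float).copy()
    r = b - A @ x                  # residual = negative gradient
    p = r.copy()
    rr = r @ r
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p  # conjugate the next direction
        rr = rr_new
    return x
```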

An improved randomized algorithm with noise level tuning for large-scale noisy unconstrained DFO problems

In this paper, a new randomized solver (called VRDFON) for noisy unconstrained derivative-free optimization (DFO) problems is discussed. Its complexity in the presence of noise for nonconvex functions is studied. Two effective ingredients of VRDFON are an improved derivative-free line-search algorithm with many heuristic enhancements, and quadratic models in adaptively determined subspaces. Numerical results … Read more
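
VRDFON's subspace models and heuristics are beyond a short sketch, but the flavor of a randomized derivative-free line search can be conveyed by a generic expand-and-shrink scheme along random directions; everything below is an illustrative assumption, not the solver itself.

```python
import numpy as np

def random_dfo_linesearch(f, x0, sigma0=1.0, sigma_min=1e-8,
                          max_iter=1000, seed=0):
    """Randomized derivative-free search: draw a random unit direction, try
    steps of size +/- sigma, double the step while the (noisy) objective
    decreases, and shrink sigma when neither direction helps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx, sigma = f(x), sigma0
    for _ in range(max_iter):
        if sigma < sigma_min:
            break
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        improved = False
        for t in (sigma, -sigma):        # probe both directions
            while True:                  # expanding line search
                x_new = x + t * d
                f_new = f(x_new)
                if f_new >= fx:
                    break
                x, fx, improved = x_new, f_new, True
                t *= 2.0
        sigma = sigma * 2.0 if improved else sigma * 0.5
    return x, fx
```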