Parametric complexity analysis for a class of first-order Adagrad-like algorithms

A class of algorithms for optimization in the presence of noise is presented that does not require evaluation of the objective function. The class generalizes the well-known Adagrad method. The complexity of this class is then analyzed as a function of its parameters, and it is shown that some methods of the class enjoy …
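A minimal sketch of the kind of Adagrad-like, objective-function-free step such a class is built on is given below (in Python): only gradients are ever evaluated, and the exponent mu plays the role of one class parameter, with mu = 1/2 recovering a deterministic Adagrad-style scaling. The name adagrad_like_offo, its arguments and the specific scaling are illustrative assumptions, not the paper's parametrization.

import numpy as np

def adagrad_like_offo(grad, x0, alpha=1.0, mu=0.5, zeta=1e-2, iters=200):
    # Objective-function-free iteration: only grad() is called, never f().
    # mu is the scaling exponent; mu = 0.5 mimics a deterministic Adagrad.
    x = np.asarray(x0, dtype=float)
    v = np.full_like(x, zeta)          # running sum of squared gradient entries
    for _ in range(iters):
        g = grad(x)
        v += g * g                     # accumulate a curvature proxy
        x -= alpha * g / v**mu         # componentwise scaled gradient step
    return x

# toy usage on a convex quadratic
A = np.diag([1.0, 10.0])
print(adagrad_like_offo(lambda x: A @ x, [3.0, -2.0]))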

First-Order Objective-Function-Free Optimization Algorithms and Their Complexity

A class of algorithms for unconstrained nonconvex optimization is considered in which the value of the objective function is never computed. The class contains a deterministic version of the first-order Adagrad method typically used for the minimization of noisy functions, but also allows the use of second-order information when available. The rate of convergence of methods in …

Stochastic trust-region and direct-search methods: A weak tail bound condition and reduced sample sizing

Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially non-smooth function whose values can only be estimated via stochastic …
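As a rough illustration of the setting (not the paper's algorithm or its weak tail-bound condition), the toy direct-search loop below works only with sample-average estimates of a noisy oracle and accepts a poll point under a sufficient-decrease test; stochastic_direct_search, its parameters and the fixed sample size n_samples are all hypothetical choices.

import numpy as np

def stochastic_direct_search(noisy_f, x0, step=1.0, n_samples=30,
                             c=1e-2, max_iter=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    est = lambda z: np.mean([noisy_f(z, rng) for _ in range(n_samples)])
    x = np.asarray(x0, dtype=float)
    fx = est(x)
    D = np.vstack([np.eye(len(x)), -np.eye(len(x))])      # positive spanning set
    for _ in range(max_iter):
        improved = False
        for d in D:                                        # poll step
            y = x + step * d
            fy = est(y)
            if fy < fx - c * step**2:                      # sufficient decrease on estimates
                x, fx, improved = y, fy, True
                break
        step = 2.0 * step if improved else 0.5 * step      # expand or contract
    return x

# toy usage: quadratic with additive Gaussian noise
print(stochastic_direct_search(lambda z, rng: np.sum(z**2) + 0.01 * rng.standard_normal(),
                               [2.0, -1.5]))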

Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

In this work, we study the performance of the sub-gradient method (SubGM) on a natural nonconvex and nonsmooth formulation of low-rank matrix recovery with $\ell_1$-loss, where the goal is to recover a low-rank matrix from a limited number of measurements, a subset of which may be grossly corrupted with noise. We study a scenario where the …
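A bare-bones version of such a sub-gradient iteration is sketched below under simplifying assumptions (symmetric sensing matrices, a geometrically decaying stepsize, the exactly-parameterized symmetric case); the paper's over-parameterized setting and its precise initialization and stepsize choices are not reproduced, and subgm_matrix_recovery is an illustrative name.

import numpy as np

def subgm_matrix_recovery(A, b, n, r, steps=500, eta0=0.1, decay=0.99, seed=0):
    # min_U (1/m) * sum_i | <A_i, U U^T> - b_i |  with U of shape (n, r)
    rng = np.random.default_rng(seed)
    U = 1e-3 * rng.standard_normal((n, r))                 # small initialization
    m = len(b)
    eta = eta0
    for _ in range(steps):
        X = U @ U.T
        residuals = np.array([np.sum(Ai * X) - bi for Ai, bi in zip(A, b)])
        # subgradient of the l1 loss with respect to U (A_i assumed symmetric)
        G = sum(np.sign(ri) * 2.0 * Ai @ U for ri, Ai in zip(residuals, A)) / m
        U -= eta * G
        eta *= decay                                       # geometric stepsize decay
    return U

# toy usage: noiseless measurements of a rank-2 matrix
rng = np.random.default_rng(1)
n, r, m = 10, 2, 80
Ustar = rng.standard_normal((n, r))
A = [(M + M.T) / 2.0 for M in rng.standard_normal((m, n, n))]
b = np.array([np.sum(Ai * (Ustar @ Ustar.T)) for Ai in A])
Uhat = subgm_matrix_recovery(A, b, n, r)
print(np.linalg.norm(Uhat @ Uhat.T - Ustar @ Ustar.T))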

Two efficient gradient methods with approximately optimal stepsizes based on regularization models for unconstrained optimization

It is widely accepted that the stepsize is of great significance to gradient methods. Two efficient gradient methods with approximately optimal stepsizes, based mainly on regularization models, are proposed for unconstrained optimization. More precisely, if the objective function is not close to a quadratic function on the line segment between the current and latest iterates, …
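To make the dichotomy concrete, the heavily simplified sketch below takes a BB-type stepsize when a crude test suggests the objective is close to quadratic on the latest segment, and a regularized fallback stepsize otherwise; the test, the fallback formula and gm_approx_optimal_step are placeholders rather than the stepsize rules proposed in the paper.

import numpy as np

def gm_approx_optimal_step(f, grad, x0, iters=100, tau=0.1, sigma=1.0):
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - 1e-3 * g_prev                             # short first step
    for _ in range(iters):
        g = grad(x)
        if not np.any(g):                                  # stationary point reached
            break
        s, y = x - x_prev, g - g_prev
        sy = float(s @ y)
        # crude "close to quadratic" test: the secant model reproduces
        # the observed decrease f(x_prev) - f(x) reasonably well
        pred = -float(g_prev @ s) - 0.5 * sy
        actual = f(x_prev) - f(x)
        quadratic_like = sy > 0 and abs(actual - pred) <= tau * abs(actual)
        if quadratic_like:
            alpha = float(s @ s) / sy                      # BB1 stepsize from the quadratic model
        else:                                              # stepsize from a regularized model
            alpha = np.linalg.norm(s) / (np.linalg.norm(y) + sigma * np.linalg.norm(s))
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# toy usage on an ill-conditioned quadratic
Q = np.diag([1.0, 100.0])
print(gm_approx_optimal_step(lambda z: 0.5 * z @ Q @ z, lambda z: Q @ z, [1.0, 1.0]))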

A minibatch stochastic Quasi-Newton method adapted for nonconvex deep learning problems

In this study, we develop a limited-memory nonconvex quasi-Newton (QN) method tailored to deep learning (DL) applications. Since the stochastic nature of (sampled) function information in minibatch processing can affect the performance of QN methods, three strategies are employed to overcome this issue. These involve a novel progressive trust-region radius update (suitable for stochastic …
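For context, the limited-memory BFGS two-loop recursion from which such methods build the quasi-Newton direction is sketched below; the paper's minibatch sampling, progressive trust-region radius update and the other strategies are not reproduced here, and lbfgs_direction is simply an illustrative helper.

import numpy as np

def lbfgs_direction(g, s_list, y_list):
    # Two-loop recursion: returns the search direction -H*g built from the
    # stored curvature pairs (s_i, y_i), oldest first in s_list / y_list.
    q = np.asarray(g, dtype=float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
        rho = 1.0 / float(y @ s)
        a = rho * float(s @ q)
        q -= a * y
        alphas.append((rho, a))
    if s_list:                                             # initial Hessian scaling
        s, y = s_list[-1], y_list[-1]
        q *= float(s @ y) / float(y @ y)
    for (rho, a), (s, y) in zip(reversed(alphas), zip(s_list, y_list)):  # oldest to newest
        b = rho * float(y @ q)
        q += (a - b) * s
    return -q

# toy usage with a single curvature pair
print(lbfgs_direction(np.array([1.0, 2.0]),
                      [np.array([0.1, 0.0])], [np.array([0.2, 0.1])]))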

An adaptive regularization algorithm for unconstrained optimization with inexact function and derivatives values

An adaptive regularization algorithm for unconstrained nonconvex optimization is proposed that is capable of handling inexact objective-function and derivative values, and also of providing approximate minimizers of arbitrary order. In comparison with a similar algorithm proposed in Cartis, Gould and Toint (2022), its distinguishing feature is that it is based on controlling the relative error between …
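A stripped-down adaptive (cubic) regularization loop of the kind being extended is sketched below, assuming possibly inexact oracles f, grad and hess; the relative-error control that distinguishes the paper's algorithm, and its arbitrary-order optimality measures, are omitted, and arp_step / ar2_inexact are hypothetical names.

import numpy as np
from scipy.optimize import minimize

def arp_step(fx, g, H, sigma):
    # Approximately minimize the regularized model
    #   m(s) = fx + g's + 0.5 s'Hs + (sigma/3)*||s||^3
    model = lambda s: fx + g @ s + 0.5 * s @ H @ s + sigma / 3.0 * np.linalg.norm(s) ** 3
    return minimize(model, np.zeros_like(g)).x

def ar2_inexact(f, grad, hess, x0, sigma=1.0, eta=0.1, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        fx, g, H = f(x), grad(x), hess(x)                  # possibly inexact values
        s = arp_step(fx, g, H, sigma)
        pred = -(g @ s + 0.5 * s @ H @ s)                  # decrease predicted by the model
        rho = (fx - f(x + s)) / max(pred, 1e-16)
        if rho >= eta:                                     # successful: accept, relax sigma
            x, sigma = x + s, max(0.5 * sigma, 1e-8)
        else:                                              # unsuccessful: increase regularization
            sigma *= 2.0
    return x

# toy usage on the Rosenbrock function (exact oracles)
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
g = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2), 200*(z[1] - z[0]**2)])
h = lambda z: np.array([[2 - 400*z[1] + 1200*z[0]**2, -400*z[0]], [-400*z[0], 200.0]])
print(ar2_inexact(f, g, h, [-1.2, 1.0]))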

Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques

A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in expectation) an $\epsilon$-approximate minimizer of arbitrary order $q > 0$ in at most $O(\epsilon^{-(q+1)})$ inexact evaluations of the function and …
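As a toy illustration of trust-region iterations driven by noisy values (not the paper's probabilistic accuracy conditions, subsampling rules or complexity bound), the loop below averages a noisy oracle to estimate function values and takes a Cauchy-point step; noisy_trust_region, n_samples and the update constants are assumptions made for the sake of the example.

import numpy as np

def cauchy_point(g, H, delta):
    # Minimizer of the quadratic model along -g within the trust region.
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    t = delta / gnorm
    gHg = float(g @ H @ g)
    if gHg > 0:
        t = min(t, float(g @ g) / gHg)
    return -t * g

def noisy_trust_region(noisy_f, grad, hess, x0, delta=1.0, eta=0.1,
                       n_samples=20, iters=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    est = lambda z: np.mean([noisy_f(z, rng) for _ in range(n_samples)])
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g, H = grad(x), hess(x)
        s = cauchy_point(g, H, delta)
        pred = -(g @ s + 0.5 * s @ H @ s)                  # model decrease
        rho = (est(x) - est(x + s)) / max(pred, 1e-16)     # noisy ratio test
        if rho >= eta:
            x, delta = x + s, 2.0 * delta                  # successful: accept, expand radius
        else:
            delta *= 0.5                                   # unsuccessful: shrink radius
    return x

# toy usage: noisy quadratic with exact derivatives
Q = np.diag([1.0, 5.0])
print(noisy_trust_region(lambda z, rng: 0.5 * z @ Q @ z + 0.01 * rng.standard_normal(),
                         lambda z: Q @ z, lambda z: Q, [3.0, -2.0]))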

OPM, a collection of Optimization Problems in Matlab

OPM is a small collection of CUTEst unconstrained and bound-constrained nonlinear optimization problems, which can be used in Matlab for testing optimization algorithms directly (i.e. without installing additional software).

Convergence Analysis of Block Majorize-Minimize Subspace Approaches

Majorization-Minimization (MM) is a class of efficient and effective optimization algorithms that benefit from solid theoretical foundations. MM methods have shown a great ability to tackle challenging optimization problems efficiently in signal processing, image processing, inverse problems and machine learning. When processing large amounts of data or variables, as may happen in 3D image processing, …
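A classical single-block MM example (not the block majorize-minimize subspace scheme analysed here) may help fix ideas: each sqrt term of a smoothed l1 penalty is majorized by a quadratic tangent at the current iterate, and the resulting surrogate is minimized in closed form; mm_smoothed_lasso and its parameters are illustrative.

import numpy as np

def mm_smoothed_lasso(A, b, lam=0.1, eps=1e-6, iters=50):
    # Minimize 0.5*||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + eps) by MM:
    # sqrt(u^2 + eps) <= sqrt(v^2 + eps) + (u^2 - v^2) / (2*sqrt(v^2 + eps)),
    # so each iteration solves a weighted ridge problem.
    x = np.zeros(A.shape[1])
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = 1.0 / np.sqrt(x**2 + eps)                      # curvature of the majorant
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)   # minimize the surrogate
        # each surrogate minimization decreases the original objective
    return x

# toy usage: sparse recovery from noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))
xstar = np.zeros(15)
xstar[:3] = [2.0, -1.0, 0.5]
b = A @ xstar + 0.01 * rng.standard_normal(40)
print(np.round(mm_smoothed_lasso(A, b), 2))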