A linesearch-based derivative-free method for noisy black-box problems

In this work we consider unconstrained optimization problems. The objective function is known through a zeroth order stochastic oracle that gives an estimate of the true objective function. To solve these problems, we propose a derivative-free algorithm based on extrapolation techniques. Under reasonable assumptions we are able to prove convergence properties for the proposed algorithm. … Read more
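As a rough illustration of the ingredients named above, the sketch below runs a coordinate-wise linesearch with an extrapolation (step-doubling) phase driven only by noisy function estimates. The oracle noise level, the sufficient-decrease constant `gamma`, and the halving/doubling rules are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def noisy_oracle(f, x, sigma=1e-3, rng=np.random.default_rng(0)):
    # Zeroth-order stochastic oracle: true value plus Gaussian noise.
    return f(x) + sigma * rng.normal()

def linesearch_extrapolation(f, x, d, alpha, gamma=1e-6):
    """Try step alpha along d; if the noisy estimates show sufficient
    decrease, keep extrapolating (doubling) the step."""
    fx = noisy_oracle(f, x)
    if noisy_oracle(f, x + alpha * d) <= fx - gamma * alpha**2:
        while noisy_oracle(f, x + 2 * alpha * d) <= fx - gamma * (2 * alpha)**2:
            alpha *= 2                     # expansion (extrapolation) phase
        return alpha
    return 0.0                             # linesearch failure along d

def df_method(f, x0, alpha0=1.0, iters=200):
    x, n = x0.astype(float), len(x0)
    alphas = np.full(n, alpha0)            # one tentative step size per coordinate
    for _ in range(iters):
        for i in range(n):                 # cycle over directions +e_i / -e_i
            d = np.zeros(n); d[i] = 1.0
            for s in (+1.0, -1.0):
                step = linesearch_extrapolation(f, x, s * d, alphas[i])
                if step > 0:
                    x += step * s * d
                    alphas[i] = step
                    break
            else:
                alphas[i] *= 0.5           # both directions failed: shrink step
    return x

# toy usage: noisy quadratic, minimizer at (1, 1, 1)
print(df_method(lambda z: np.sum((z - 1.0) ** 2), np.zeros(3)))
```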

On Relatively Smooth Optimization over Riemannian Manifolds

We study optimization over Riemannian embedded submanifolds, where the objective function is relatively smooth in the ambient Euclidean space. Such problems have broad applications but are still largely unexplored. We introduce two Riemannian first-order methods, namely the retraction-based and projection-based Riemannian Bregman gradient methods, by incorporating the Bregman distance into the update steps. The retraction-based … Read more
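The projection-based update can be pictured as a Bregman (mirror) step in the ambient Euclidean space followed by a metric projection back onto the submanifold. The sketch below does this on the unit sphere with the kernel h(y) = ¼‖y‖⁴ + ½‖y‖², a common choice for relatively smooth quartic objectives; the kernel, the step size, and the eigenvector toy problem are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def grad_h(y):
    # Gradient of the Bregman kernel h(y) = 0.25*||y||^4 + 0.5*||y||^2.
    return (np.dot(y, y) + 1.0) * y

def grad_h_inverse(z):
    # Solve grad_h(y) = z: y = z/(r^2 + 1), where r = ||y|| solves r^3 + r = ||z||.
    nz = np.linalg.norm(z)
    roots = np.roots([1.0, 0.0, 1.0, -nz])
    r = roots[np.isreal(roots)].real.max()      # unique real nonnegative root
    return z / (r**2 + 1.0)

def proj_sphere(y):
    # Metric projection onto the unit sphere (the embedded submanifold here).
    return y / np.linalg.norm(y)

def projection_based_bregman_step(grad_f, x, t):
    """One projection-based Bregman gradient step: mirror step with kernel h
    in the ambient space, then projection back onto the manifold."""
    y = grad_h_inverse(grad_h(x) - t * grad_f(x))
    return proj_sphere(y)

# toy usage: leading eigenvector of A via min -x^T A x over the sphere
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); A = A + A.T
x = proj_sphere(rng.normal(size=5))
for _ in range(500):
    x = projection_based_bregman_step(lambda v: -2 * A @ v, x, t=0.05)
print(x @ A @ x, np.max(np.linalg.eigvalsh(A)))
```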

Primal-dual global convergence of an augmented Lagrangian method under the error bound condition

This work investigates global convergence properties of a safeguarded augmented Lagrangian method applied to nonlinear programming problems, with an emphasis on the role of constraint qualifications in ensuring boundedness of the Lagrange multiplier estimates, also known as dual sequences. When functions with locally Lipschitz continuous derivatives define the constraint set, we prove that the Error … Read more
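A safeguarded augmented Lagrangian iteration of the kind studied here can be sketched as follows: the subproblem is minimized approximately, the first-order multiplier update is projected onto a bounded box (the safeguard that keeps the dual sequence bounded), and the penalty parameter grows when feasibility does not improve enough. The box radius, the penalty update rule, and the use of SciPy's L-BFGS-B as the inner solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def safeguarded_alm(f, grad_f, c, jac_c, x0, mu0=10.0, lam_max=1e4, iters=20):
    """Sketch of a safeguarded augmented Lagrangian method for
    min f(x) s.t. c(x) = 0."""
    x, lam, mu = np.asarray(x0, float), np.zeros(len(c(x0))), mu0
    prev_viol = np.inf
    for _ in range(iters):
        def aug_lag(z):
            cz = c(z)
            return f(z) + lam @ cz + 0.5 * mu * cz @ cz
        def aug_lag_grad(z):
            cz = c(z)
            return grad_f(z) + jac_c(z).T @ (lam + mu * cz)
        x = minimize(aug_lag, x, jac=aug_lag_grad, method="L-BFGS-B").x
        cx = c(x)
        # first-order multiplier update, safeguarded by projection onto a box
        lam = np.clip(lam + mu * cx, -lam_max, lam_max)
        viol = np.linalg.norm(cx, ord=np.inf)
        if viol > 0.5 * prev_viol:      # insufficient progress: increase penalty
            mu *= 10.0
        prev_viol = viol
    return x, lam

# toy usage: min x1 + x2 s.t. x1^2 + x2^2 - 2 = 0, solution (-1, -1), multiplier 1/2
x, lam = safeguarded_alm(
    f=lambda z: z[0] + z[1],
    grad_f=lambda z: np.array([1.0, 1.0]),
    c=lambda z: np.array([z[0]**2 + z[1]**2 - 2.0]),
    jac_c=lambda z: np.array([[2 * z[0], 2 * z[1]]]),
    x0=np.array([0.5, -0.2]),
)
print(x, lam)
```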

On the Convergence and Complexity of Proximal Gradient and Accelerated Proximal Gradient Methods under Adaptive Gradient Estimation

In this paper, we propose a proximal gradient method and an accelerated proximal gradient method for solving composite optimization problems, where the objective function is the sum of a smooth and a convex, possibly nonsmooth, function. We consider settings where the smooth component is either a finite-sum function or an expectation of a stochastic function, … Read more
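For the finite-sum setting, a minimal sketch of one proximal gradient step with a sub-sampled gradient is given below; the geometrically growing batch size stands in for adaptive gradient estimation and is an assumption, as are the step size and the l1 regularizer used as the nonsmooth part.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_prox_grad(grad_i, n, x0, lam=0.1, step=0.1,
                         batch0=4, growth=1.3, iters=100, seed=0):
    """Sketch of a proximal gradient method for a finite-sum smooth part plus
    lam*||x||_1, where the gradient is estimated from a sample whose size grows
    over the iterations (one simple form of adaptive estimation)."""
    rng = np.random.default_rng(seed)
    x, batch = np.asarray(x0, float), float(batch0)
    for _ in range(iters):
        S = rng.choice(n, size=min(int(batch), n), replace=False)
        g = np.mean([grad_i(i, x) for i in S], axis=0)   # sub-sampled gradient
        x = soft_threshold(x - step * g, step * lam)     # proximal gradient step
        batch *= growth                                  # enlarge the sample
    return x

# toy usage: lasso-type problem (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=200)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
print(stochastic_prox_grad(grad_i, n=200, x0=np.zeros(10)))
```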

Faster stochastic cubic regularized Newton methods with momentum

Cubic regularized Newton (CRN) methods have attracted significant research interest because they offer stronger solution guarantees and lower iteration complexity than first-order methods. With the rise of the big-data era, there is growing interest in developing stochastic cubic regularized Newton (SCRN) methods that do not require exact gradient and Hessian evaluations. In this paper, we propose faster SCRN … Read more
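One natural way momentum can enter such a scheme is by smoothing the sub-sampled gradient and Hessian estimates before the cubic step; the sketch below does exactly that with an exponential moving average and an inexact gradient-descent solver for the cubic subproblem. The placement of the momentum, the averaging weight, and the subproblem solver are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def cubic_subproblem(g, H, sigma, steps=300, lr=0.02):
    # Approximately minimize m(s) = g^T s + 0.5 s^T H s + (sigma/3)||s||^3
    # by plain gradient descent on the model (a simple inexact solver).
    s = np.zeros_like(g)
    for _ in range(steps):
        s -= lr * (g + H @ s + sigma * np.linalg.norm(s) * s)
    return s

def scrn_momentum(grad_i, hess_i, n, x0, sigma=1.0, beta=0.9,
                  batch=32, iters=50, seed=0):
    """Sketch of a stochastic cubic regularized Newton iteration in which the
    sub-sampled gradient/Hessian estimates are smoothed with a momentum
    (exponential moving average) term before each cubic step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    g_bar, H_bar = np.zeros_like(x), np.zeros((len(x), len(x)))
    for _ in range(iters):
        S = rng.choice(n, size=min(batch, n), replace=False)
        g = np.mean([grad_i(i, x) for i in S], axis=0)
        H = np.mean([hess_i(i, x) for i in S], axis=0)
        g_bar = beta * g_bar + (1 - beta) * g    # momentum on gradient estimate
        H_bar = beta * H_bar + (1 - beta) * H    # momentum on Hessian estimate
        x = x + cubic_subproblem(g_bar, H_bar, sigma)
    return x
```

The averaging reduces the variance of the sub-sampled derivatives without enlarging the batch, which is the usual rationale for combining momentum with SCRN-type updates.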

Sub-sampled Trust-Region Methods with Deterministic Worst-Case Complexity Guarantees

In this paper, we develop and analyze sub-sampled trust-region methods for solving finite-sum optimization problems. These methods employ subsampling strategies to approximate the gradient and Hessian of the objective function, significantly reducing the overall computational cost. We propose a novel adaptive procedure for deterministically adjusting the sample size used for gradient (or gradient and Hessian) … Read more
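The sketch below shows a sub-sampled trust-region loop in which the gradient and Hessian are averaged over a sample, the step is the Cauchy point of the sampled model, and the sample size is doubled whenever a step is rejected; this doubling rule is a placeholder for the paper's adaptive procedure, and the Cauchy-point subproblem solver is likewise an illustrative simplification.

```python
import numpy as np

def cauchy_point(g, H, radius):
    # Minimizer of the quadratic model along -g, restricted to the trust region.
    t = radius / np.linalg.norm(g)
    gHg = g @ H @ g
    if gHg > 0:
        t = min(t, (g @ g) / gHg)
    return -t * g

def subsampled_trust_region(f_i, grad_i, hess_i, n, x0, radius=1.0,
                            batch=8, eta=0.1, iters=100, seed=0):
    """Sketch of a sub-sampled trust-region method for a finite-sum objective."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for _ in range(iters):
        S = rng.choice(n, size=min(batch, n), replace=False)
        g = np.mean([grad_i(i, x) for i in S], axis=0)
        H = np.mean([hess_i(i, x) for i in S], axis=0)
        if np.linalg.norm(g) < 1e-10:
            break
        s = cauchy_point(g, H, radius)
        pred = -(g @ s + 0.5 * s @ H @ s)                       # model decrease
        ared = np.mean([f_i(i, x) - f_i(i, x + s) for i in S])  # sampled decrease
        if pred > 0 and ared / pred >= eta:
            x, radius = x + s, 2.0 * radius     # accept the step, expand region
        else:
            radius *= 0.5                       # reject: shrink the region ...
            batch = min(2 * batch, n)           # ... and enlarge the sample
    return x

# toy usage: least squares with f_i(x) = 0.5*(a_i^T x - b_i)^2
rng = np.random.default_rng(3)
A = rng.normal(size=(300, 4)); b = A @ np.array([1.0, -2.0, 0.5, 3.0])
print(subsampled_trust_region(
    lambda i, x: 0.5 * (A[i] @ x - b[i])**2,
    lambda i, x: (A[i] @ x - b[i]) * A[i],
    lambda i, x: np.outer(A[i], A[i]),
    n=300, x0=np.zeros(4)))
```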

Recursive Bound-Constrained AdaGrad with Applications to Multilevel and Domain Decomposition Minimization

Two noise-tolerant OFFO (Objective-Function-Free Optimization) algorithms are presented that handle bound constraints and inexact gradients and use second-order information when available. The first is a multilevel method exploiting a hierarchical description of the problem, and the second is a domain-decomposition method covering the standard additive Schwarz decompositions. Both are generalizations of the first-order AdaGrad … Read more
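In the unstructured single-level case, bound-constrained AdaGrad reduces to a projected diagonal-scaling iteration, sketched below. The clipping-based box projection and the toy problem are the only ingredients beyond textbook AdaGrad, and no objective-function values are used, consistent with the OFFO setting; the multilevel and domain-decomposition generalizations are not reflected here.

```python
import numpy as np

def projected_adagrad(grad, x0, lower, upper, step=1.0, eps=1e-8, iters=500):
    """Sketch of a bound-constrained AdaGrad iteration: diagonal step sizes
    built from accumulated squared (possibly inexact) gradients, followed by
    projection onto the box [lower, upper]."""
    x = np.clip(np.asarray(x0, float), lower, upper)
    acc = np.zeros_like(x)                     # running sum of squared gradients
    for _ in range(iters):
        g = grad(x)
        acc += g * g
        x = x - step * g / np.sqrt(acc + eps)  # AdaGrad diagonal scaling
        x = np.clip(x, lower, upper)           # projection onto the bounds
    return x

# toy usage: min ||x - c||^2 over the box [0, 1]^3, with c outside the box
c = np.array([2.0, -1.0, 0.5])
print(projected_adagrad(lambda x: 2 * (x - c), np.full(3, 0.5), 0.0, 1.0))
```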

A first-order method for nonconvex-nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition

We study a class of nonconvex–nonconcave minimax problems in which the inner maximization problem satisfies a local Kurdyka–Łojasiewicz (KL) condition that may vary with the outer minimization variable. In contrast to the global KL or Polyak–Łojasiewicz (PL) conditions commonly assumed in the literature—which are significantly stronger and often too restrictive in practice—this local KL condition … Read more
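A generic first-order template for such problems alternates a few ascent steps on the inner maximization with one descent step on the outer variable; the sketch below shows that template on a toy saddle problem. The fixed inner iteration count and the step sizes are illustrative assumptions and do not reflect how the paper exploits the local KL condition.

```python
import numpy as np

def minimax_first_order(grad_x, grad_y, x0, y0, eta_x=0.02, eta_y=0.1,
                        inner_steps=20, outer_iters=300):
    """Generic first-order sketch for min_x max_y f(x, y): the inner problem is
    solved inexactly by a few ascent steps in y, then x takes one descent step
    using the gradient evaluated at the approximate inner solution."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(outer_iters):
        for _ in range(inner_steps):
            y = y + eta_y * grad_y(x, y)     # inexact inner maximization
        x = x - eta_x * grad_x(x, y)         # outer descent step
    return x, y

# toy usage: f(x, y) = 0.5*x^2 + x*y - 0.5*y^2, saddle point at the origin
gx = lambda x, y: x + y
gy = lambda x, y: x - y
print(minimax_first_order(gx, gy, np.array([1.0]), np.array([-1.0])))
```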

General Perturbation Resilient Dynamic String-Averaging for Inconsistent Problems with Superiorization

In this paper we introduce a General Dynamic String-Averaging (GDSA) iterative scheme and investigate its convergence properties in the inconsistent case, that is, when the input operators do not have a common fixed point. The Dynamic String-Averaging Projection (DSAP) algorithm itself was introduced in a 2013 paper, where its strong convergence and bounded perturbation resilience were … Read more
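A single string-averaging step applies each string (an ordered list of projection or other operators) sequentially and then averages the string end-points with convex weights; in the dynamic version the strings and weights may change between iterations. The sketch below shows one such step with half-space projections; the specific sets, strings, and weights are illustrative and static, not the GDSA scheme itself.

```python
import numpy as np

def string_averaging_step(x, strings, weights):
    """One string-averaging step: apply the operators of each string in order,
    then take the weighted average of the string end-points."""
    endpoints = []
    for string in strings:
        y = x
        for op in string:
            y = op(y)
        endpoints.append(y)
    return sum(w * e for w, e in zip(weights, endpoints))

def proj_halfspace(a, b):
    # Projection onto the half-space {z : a^T z <= b}
    return lambda z: z - max(0.0, (a @ z - b) / (a @ a)) * a

# toy usage: feasibility problem with three half-spaces
P1 = proj_halfspace(np.array([1.0, 0.0]), 1.0)
P2 = proj_halfspace(np.array([0.0, 1.0]), 1.0)
P3 = proj_halfspace(np.array([-1.0, -1.0]), -1.5)
strings = [[P1, P2], [P3]]          # two strings of projection operators
weights = [0.5, 0.5]

x = np.array([3.0, 3.0])
for _ in range(100):
    x = string_averaging_step(x, strings, weights)
print(x)
```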