An adaptive line-search-free multiobjective gradient method and its iteration-complexity analysis

This work introduces an Adaptive Line-Search-Free Multiobjective Gradient (AMG) method for solving smooth multiobjective optimization problems. The proposed approach automatically adjusts stepsizes based on steepest descent directions, promoting robustness with respect to stepsize choice while maintaining low computational cost. The method is specifically tailored to the multiobjective setting and does not rely on function evaluations, … Read more
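
For context only (this is the standard construction, not a claim about the paper's specific rule): in smooth multiobjective optimization with objectives \(f_1,\dots,f_m\), the steepest descent direction at \(x\) is usually defined, following Fliege and Svaiter, as the solution of
\[
d(x) \;=\; \operatorname*{arg\,min}_{d\in\mathbb{R}^n}\ \max_{i=1,\dots,m}\ \nabla f_i(x)^{\top} d \;+\; \tfrac{1}{2}\|d\|^{2},
\]
with \(d(x)=0\) precisely at Pareto critical points; a line-search-free scheme then only has to prescribe how far to move along \(d(x_k)\), without ever evaluating the objectives.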

Riemannian optimization with finite-difference gradient approximations

Derivative-free Riemannian optimization (DFRO) aims to minimize an objective function using only function evaluations, under the constraint that the decision variables lie on a Riemannian manifold. The rapid increase in problem dimensions over the years calls for computationally cheap DFRO algorithms, that is, algorithms requiring as few function evaluations and retractions as possible. We propose … Read more
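
To make the oracle count concrete (a standard construction used as illustration, not the paper's specific estimator): on a \(d\)-dimensional manifold \(M\) with retraction \(R_x\), a forward-difference approximation of the Riemannian gradient at \(x\) is
\[
\widehat{\operatorname{grad}} f(x) \;=\; \sum_{i=1}^{d} \frac{f\big(R_x(h\,e_i)\big)-f(x)}{h}\, e_i,
\]
where \(\{e_1,\dots,e_d\}\) is an orthonormal basis of the tangent space \(T_xM\) and \(h>0\) is the finite-difference stepsize, so a single such approximation already costs \(d\) function evaluations and \(d\) retractions.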

Complexity of quadratic penalty methods with adaptive accuracy under a PL condition for the constraints

We study the quadratic penalty method (QPM) for smooth nonconvex optimization problems with equality constraints. Assuming the constraint violation satisfies the PL condition near the feasible set, we derive sharper worst-case complexity bounds for obtaining approximate first-order KKT points. When the objective and constraints are twice continuously differentiable, we show that QPM equipped with a … Read more
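
For reference (standard definitions, not specific to this paper): for \(\min_x f(x)\) subject to \(c(x)=0\), the quadratic penalty subproblem with parameter \(\rho>0\) is
\[
\min_x\ \phi_\rho(x) \;=\; f(x) + \frac{\rho}{2}\,\|c(x)\|^{2},
\]
and a PL condition on the constraint violation \(v(x)=\tfrac{1}{2}\|c(x)\|^{2}\) near the feasible set means \(\|\nabla v(x)\|^{2} \geq 2\mu\, v(x)\) for some \(\mu>0\), which rules out spurious nonzero stationary values of the penalty term in that region.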

A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization

We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of \(\mathcal{O}\!\left(n\left(\frac{L}{\sigma}\epsilon\right)^{-2}\right)\) function evaluations for the method to reach an \(\left(\frac{L}{\sigma}\epsilon\right)\)-approximate stationary point, where \(n\) is the … Read more
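
The factor \(n\) in the bound is the per-iteration cost of the gradient model; a standard forward-difference approximation (shown here only for illustration) is
\[
[g_k]_i \;=\; \frac{f(x_k + h\,e_i) - f(x_k)}{h}, \qquad i=1,\dots,n,
\]
so building one model takes about \(n\) additional function evaluations (the value \(f(x_k)\) being already available), and multiplying the iteration bound by this cost yields the evaluation bound above.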

On the Complexity of Lower-Order Implementations of Higher-Order Methods

In this work, we propose a method for minimizing nonconvex functions with Lipschitz continuous \(p\)th-order derivatives, for any \(p \geq 1\). The method, however, requires derivative information only up to order \((p-1)\), since the \(p\)th-order derivatives are approximated via finite differences. To ensure oracle efficiency, instead of computing finite-difference approximations at every iteration, we reuse … Read more
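
As a concrete instance (stated only to illustrate the idea, for \(p=2\)): the Hessian can be approximated column by column from gradient differences,
\[
\nabla^{2} f(x)\, e_i \;\approx\; \frac{\nabla f(x + h\,e_i) - \nabla f(x)}{h}, \qquad i=1,\dots,n,
\]
so a second-order scheme can be driven by a first-order oracle alone, at the cost of \(n\) extra gradient evaluations each time the approximation is rebuilt rather than reused.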

A Riemannian AdaGrad-Norm Method

We propose a manifold AdaGrad-Norm method (\textsc{MAdaGrad}), which extends the norm version of AdaGrad (AdaGrad-Norm) to Riemannian optimization. In contrast to line-search schemes, which may require several exponential map computations per iteration, \textsc{MAdaGrad} requires only one. Assuming the objective function $f$ has Lipschitz continuous Riemannian gradient, we show that the method requires at most $\mathcal{O}(\varepsilon^{-2})$ … Read more
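
For orientation, the Euclidean AdaGrad-Norm recursion that is being extended reads
\[
b_{k+1}^{2} \;=\; b_{k}^{2} + \|\nabla f(x_k)\|^{2}, \qquad x_{k+1} \;=\; x_k - \frac{\eta}{b_{k+1}}\,\nabla f(x_k);
\]
a natural Riemannian analogue (sketched here as an assumption about the general shape of the update, not its exact form in the paper) replaces \(\nabla f(x_k)\) by \(\operatorname{grad} f(x_k) \in T_{x_k}M\) and applies the step through a single exponential map, \(x_{k+1} = \operatorname{Exp}_{x_k}\!\big(-\tfrac{\eta}{b_{k+1}}\operatorname{grad} f(x_k)\big)\), consistent with the one-map-per-iteration cost mentioned above.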

Active-set Newton-MR methods for nonconvex optimization problems with bound constraints

This paper presents active-set methods for minimizing nonconvex twice continuously differentiable functions subject to bound constraints. Within the faces of the feasible set, we employ descent methods with Armijo line search, utilizing approximate Newton directions obtained through the Minimum Residual (MINRES) method. To escape the faces, we investigate the use of the Spectral Projected Gradient (SPG) … Read more
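
For reference, the SPG step used to leave a face typically has the form (standard formulation, not quoted from the paper):
\[
x_{k+1} \;=\; x_k + \alpha_k\, d_k, \qquad d_k \;=\; P_{\Omega}\!\big(x_k - \lambda_k \nabla f(x_k)\big) - x_k,
\]
where \(P_{\Omega}\) is the projection onto the box \(\Omega=\{x : \ell \le x \le u\}\) (computed componentwise, hence essentially free), \(\lambda_k\) is the spectral (Barzilai-Borwein) stepsize, and \(\alpha_k\) is obtained by a nonmonotone line search.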

Sub-sampled Trust-Region Methods with Deterministic Worst-Case Complexity Guarantees

In this paper, we develop and analyze sub-sampled trust-region methods for solving finite-sum optimization problems. These methods employ subsampling strategies to approximate the gradient and Hessian of the objective function, significantly reducing the overall computational cost. We propose a novel adaptive procedure for deterministically adjusting the sample size used for gradient (or gradient and Hessian) … Read more
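
For a finite-sum objective \(f(x)=\frac{1}{N}\sum_{i=1}^{N} f_i(x)\), the sub-sampled derivatives take the usual form (recalled for illustration)
\[
g_k \;=\; \frac{1}{|\mathcal{S}_k|}\sum_{i\in\mathcal{S}_k} \nabla f_i(x_k), \qquad B_k \;=\; \frac{1}{|\mathcal{H}_k|}\sum_{i\in\mathcal{H}_k} \nabla^{2} f_i(x_k),
\]
with sample sets \(\mathcal{S}_k,\mathcal{H}_k \subseteq \{1,\dots,N\}\); the adaptive procedure then governs how \(|\mathcal{S}_k|\) (and, when Hessians are sampled, \(|\mathcal{H}_k|\)) is increased deterministically along the iterations.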

Fully Adaptive Zeroth-Order Method for Minimizing Functions with Compressible Gradients

We propose an adaptive zeroth-order method for minimizing differentiable functions with \(L\)-Lipschitz continuous gradients. The method is designed to exploit the potential compressibility of the gradient of the objective function, but it does not require knowledge of the approximate sparsity level \(s\) or of the Lipschitz constant \(L\) of the gradient. We show that … Read more
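
Two standard ingredients behind such methods, recalled only for context (the paper's estimator and sparsification may differ): a forward-difference zeroth-order estimate along a direction \(u\),
\[
g_h(x;u) \;=\; \frac{f(x+h\,u)-f(x)}{h}\,u,
\]
and gradient compressibility, meaning that \(\nabla f(x)\) is well approximated by \([\nabla f(x)]_s\), its best \(s\)-term (largest-magnitude entries) approximation; being fully adaptive means that neither \(s\) nor \(L\) has to be supplied to the algorithm.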

Universal nonmonotone line search method for nonconvex multiobjective optimization problems with convex constraints

In this work, we propose a general nonmonotone line-search method for nonconvex multiobjective optimization problems with convex constraints. At the \(k\)th iteration, the degree of nonmonotonicity is controlled by a vector \(\nu_{k}\) with nonnegative components. Different choices for \(\nu_{k}\) lead to different nonmonotone step-size rules. Assuming that the sequence \(\left\{\nu_{k}\right\}_{k\geq 0}\) is summable, and that … Read more
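
To make the role of \(\nu_k\) concrete (a generic instance of such a rule, not necessarily the paper's exact condition): with objectives \(f_1,\dots,f_m\) and a descent direction \(d_k\), a nonmonotone Armijo-type test accepts the stepsize \(\alpha_k\) when
\[
f_i(x_k + \alpha_k d_k) \;\le\; f_i(x_k) + \nu_k^{(i)} + c\,\alpha_k\, \nabla f_i(x_k)^{\top} d_k, \qquad i=1,\dots,m,
\]
for some \(c\in(0,1)\); taking \(\nu_k=0\) recovers the monotone rule, while summability of \(\{\nu_k\}_{k\geq 0}\) keeps the accumulated relaxation bounded.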