Stochastic Three Points Method with an Inexact Oracle and Its Application to Steady-State Optimization

We consider unconstrained derivative-free optimization problems in which only inexact function evaluations are available. Specifically, we study the setting where the oracle returns function values whose inexactness is partially controllable: the error is bounded linearly by a user-specified accuracy parameter, but the proportionality constant is unknown. This framework captures optimization problems arising from approximate simulations or … Read more
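The abstract is truncated, but the classical Stochastic Three Points (STP) template it builds on is simple to sketch. Below is a minimal Python illustration of the three-point comparison step with an inexact oracle whose error is bounded by an unknown constant times `delta`; the stepsize schedule, function names, and all parameters are illustrative choices of ours, not the paper's.

```python
import numpy as np

def stp_inexact(f_delta, x0, alpha0=1.0, delta=1e-6, n_iters=500, seed=0):
    """STP with an inexact oracle f_delta(x, delta): compare the current
    point with two trial points along a random direction, keep the best."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        alpha = alpha0 / np.sqrt(k + 1)       # illustrative diminishing stepsize
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)                # random direction on the unit sphere
        candidates = [x, x + alpha * s, x - alpha * s]
        vals = [f_delta(c, delta) for c in candidates]
        x = candidates[int(np.argmin(vals))]  # keep the best of the three points
    return x

# toy oracle: quadratic plus a perturbation bounded by (unknown constant) * delta
f_delta = lambda x, d: 0.5 * float(x @ x) + 3.0 * d * np.sin(1e3 * float(x.sum()))
x = stp_inexact(f_delta, np.ones(10))
```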

A unified framework for inexact adaptive stepsizes in the gradient methods, the conjugate gradient methods and the quasi-Newton methods for strictly convex quadratic optimization

Inexact adaptive stepsizes for the conjugate gradient method and the quasi-Newton method are rarely studied. The exact stepsizes in the gradient method, the conjugate gradient method, and the quasi-Newton method for strictly convex quadratic optimization admit a unified framework, while a unified framework for inexact adaptive stepsizes in the gradient method, the conjugate gradient … Read more
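For context, the unified exact-stepsize formula the abstract alludes to is easy to state: for a strictly convex quadratic $f(x) = \tfrac{1}{2}x^\top A x - b^\top x$, exact line search along any direction $d$ gives $\alpha = -g^\top d / (d^\top A d)$ with $g = Ax - b$; the three methods differ only in how $d$ is chosen. A minimal sketch of ours (not the paper's inexact variants):

```python
import numpy as np

def exact_stepsize(A, g, d):
    # exact line-search stepsize for a strictly convex quadratic
    return -(g @ d) / (d @ (A @ d))

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)      # strictly convex quadratic: 0.5 x^T A x - b^T x
b = rng.standard_normal(5)
x = np.zeros(5)
for _ in range(50):
    g = A @ x - b
    d = -g                       # gradient method; a CG or quasi-Newton direction fits here too
    x = x + exact_stepsize(A, g, d) * d
```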

Accuracy Certificates for Convex Optimization at Accelerated Rates via Primal-Dual Averaging

Many works in convex optimization provide rates for achieving a small primal gap. However, this quantity is typically unavailable in practice. In this work, we show that solving a regularized surrogate with algorithms based on simple primal-dual averaging provides non-asymptotic convergence guarantees for a computable optimality certificate. We first analyze primal and dual methods based … Read more
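As a rough illustration of how averaging yields a computable certificate, consider the regularized surrogate $F(x) = f(x) + \tfrac{\mu}{2}\|x - x_0\|^2$: averaging the linearizations of $f$ at the iterates gives a model whose minimum $L_k$ has a closed form and lower-bounds $\min F$, so $F(\bar{x}_k) - L_k$ is a computable, nonnegative optimality certificate. The sketch below uses a generic dual-averaging iteration and is our own simplification, not the paper's algorithms or rates.

```python
import numpy as np

def certificate_demo(f, grad, x0, mu=1.0, n_iters=500):
    x = x0.copy()
    xbar = np.zeros_like(x0)
    g_avg = np.zeros_like(x0)      # running average of subgradients g_i
    lin_avg = 0.0                  # running average of f(x_i) - g_i^T x_i
    for k in range(1, n_iters + 1):
        g = grad(x)
        g_avg += (g - g_avg) / k
        lin_avg += (f(x) - g @ x - lin_avg) / k
        x = x0 - g_avg / mu        # dual-averaging iterate: argmin of the averaged model
        xbar += (x - xbar) / k     # averaged primal iterate
    # lower bound: minimum of q(y) = lin_avg + g_avg^T y + (mu/2)||y - x0||^2 <= min F
    z = x0 - g_avg / mu
    L = lin_avg + g_avg @ z + 0.5 * mu * (z - x0) @ (z - x0)
    F = lambda y: f(y) + 0.5 * mu * (y - x0) @ (y - x0)
    return xbar, F(xbar) - L       # computable gap: F(xbar) - min F <= F(xbar) - L

f = lambda x: np.abs(x).sum()
grad = lambda x: np.sign(x)        # a subgradient of the l1 norm
xbar, gap = certificate_demo(f, grad, x0=np.ones(5))
```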

A semi-smooth Newton method for the nonlinear conic problem with generalized simplicial cones

In this work, we develop and analyze a semi-smooth Newton method for the general nonlinear conic programming problem. In particular, we study the problem with a generalized simplicial cone, i.e., the image of a symmetric cone under a linear mapping. We generalize Robinson’s normal equations to a conic setting, yielding what we call the … Read more
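To fix ideas, here is a minimal semi-smooth Newton sketch of ours (not the paper's generalized construction) for the normal-map style equation $x - P_K(x - F(x)) = 0$ with $K = \mathbb{R}^n_+$, the simplest simplicial cone: the projection is $\max(\cdot, 0)$ and its generalized Jacobian is a diagonal 0/1 matrix.

```python
import numpy as np

def ssn_orthant(F, JF, x0, tol=1e-10, max_iter=50):
    """Semi-smooth Newton for x - max(x - F(x), 0) = 0 on the nonnegative orthant."""
    x = x0.copy()
    n = x.size
    for _ in range(max_iter):
        y = x - F(x)
        r = x - np.maximum(y, 0.0)           # residual of the fixed-point equation
        if np.linalg.norm(r) < tol:
            break
        D = (y > 0).astype(float)            # an element of the generalized Jacobian of P_K
        # Jacobian of the residual: I - diag(D) (I - JF(x))
        J = np.eye(n) - D[:, None] * (np.eye(n) - JF(x))
        x = x - np.linalg.solve(J, r)
    return x

# toy monotone map F(x) = A x - b; solves the complementarity problem over R^2_+
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -2.0])
x = ssn_orthant(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```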

A unified convergence theory for adaptive first-order methods in the nonconvex case, including AdaNorm, full and diagonal AdaGrad, Shampoo and Muon

A unified framework for first-order optimization algorithms for nonconvex unconstrained optimization is proposed that uses adaptively preconditioned gradients and includes popular methods such as full and diagonal AdaGrad, AdaNorm, as well as adaptive variants of Shampoo and Muon. This framework also allows combining heterogeneous geometries across different groups of variables while preserving a unified convergence … Read more
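One concrete instance of an adaptively preconditioned gradient update $x_{k+1} = x_k - \eta\, P_k^{-1} g_k$ is diagonal AdaGrad, sketched below; this toy code is ours, not the paper's framework, and all parameter values are illustrative.

```python
import numpy as np

def diagonal_adagrad(grad, x0, eta=0.5, eps=1e-8, n_iters=500):
    x = x0.copy()
    G = np.zeros_like(x0)                  # running sum of squared gradients
    for _ in range(n_iters):
        g = grad(x)
        G += g * g
        x -= eta * g / (np.sqrt(G) + eps)  # diagonal preconditioner P_k = diag(sqrt(G))
    return x

grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])  # badly scaled quadratic
x = diagonal_adagrad(grad, np.array([3.0, 1.0]))
```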

Complexity of an inexact stochastic SQP algorithm for equality constrained optimization

In this paper, we consider nonlinear optimization problems with a stochastic objective function and deterministic equality constraints. We propose an inexact two-stepsize stochastic sequential quadratic programming (SQP) algorithm and analyze its worst-case complexity under mild assumptions. The method utilizes a step decomposition strategy and handles stochastic gradient estimates by assigning different stepsizes to different components … Read more
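A hedged sketch of one step-decomposition iteration for $\min f(x)$ s.t. $c(x) = 0$: a normal step reduces constraint violation, a tangential step reduces the objective within the tangent space, and the two components receive different stepsizes (`beta_v`, `beta_u` below). The stepsize rules and all names are our simplifications, not the paper's algorithm.

```python
import numpy as np

def sqp_step(x, g_stoch, c, J, beta_v=1.0, beta_u=0.1):
    # normal step: least-squares Newton step toward c(x) = 0
    v = -J.T @ np.linalg.solve(J @ J.T, c)
    # tangential step: stochastic gradient projected onto the null space of J
    P = np.eye(x.size) - J.T @ np.linalg.solve(J @ J.T, J)
    u = -P @ g_stoch
    return x + beta_v * v + beta_u * u

# toy problem: min 0.5 ||x||^2  s.t.  x1 + x2 = 1, with noisy gradient estimates
rng = np.random.default_rng(0)
x = np.array([2.0, -1.0])
for _ in range(200):
    g = x + 0.01 * rng.standard_normal(2)   # stochastic objective gradient
    J = np.array([[1.0, 1.0]])
    c = np.array([x[0] + x[1] - 1.0])
    x = sqp_step(x, g, c, J)                # converges near (0.5, 0.5)
```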

A flexible block coordinate descent method for unconstrained optimization under Hölder continuity

In this work, we propose a flexible block coordinate method for unconstrained optimization problems under Hölder continuity assumptions. The method guarantees convergence to stationary points and has worst-case complexity results comparable to those obtained by single-block methods that assume Lipschitz or Hölder continuity. The approach is based on quadratic models of the objective function combined … Read more
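A minimal randomized block-coordinate sketch with an adaptively regularized quadratic model per block is shown below; the backtracking on the per-block parameter `L[b]` stands in for not knowing the Hölder constants. This illustration is ours, not the paper's exact method.

```python
import numpy as np

def block_cd(f, grad_block, x0, blocks, n_iters=300, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    L = np.ones(len(blocks))                   # per-block model parameters
    for _ in range(n_iters):
        b = rng.integers(len(blocks))
        idx = blocks[b]
        g = grad_block(x, idx)
        while True:
            x_trial = x.copy()
            x_trial[idx] -= g / L[b]           # minimizer of the block quadratic model
            # accept once the model's predicted decrease is realized
            if f(x_trial) <= f(x) - (g @ g) / (2.0 * L[b]) or L[b] > 1e12:
                x = x_trial
                L[b] = max(L[b] / 2.0, 1e-3)   # relax the estimate after success
                break
            L[b] *= 2.0                        # model too optimistic: regularize more
    return x

f = lambda x: np.sum(np.abs(x) ** 1.5)         # gradient is Hölder (nu = 0.5), not Lipschitz
grad_block = lambda x, idx: 1.5 * np.sign(x[idx]) * np.sqrt(np.abs(x[idx]))
x = block_cd(f, grad_block, np.array([2.0, -3.0, 1.0, 4.0]),
             blocks=[np.array([0, 1]), np.array([2, 3])])
```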

Adaptive Newton-CG methods with global and local analysis for unconstrained optimization with Hölder continuous Hessian

In this paper, we study Newton-conjugate gradient (Newton-CG) methods for minimizing a nonconvex function $f$ whose Hessian is $(H_f,\nu)$-Hölder continuous with modulus $H_f>0$ and exponent $\nu\in(0,1]$. Recently proposed Newton-CG methods for this problem \cite{he2025newton} adopt (i) non-adaptive regularization and (ii) a nested line-search procedure, where (i) often leads to inefficient early progress and the loss … Read more
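A generic adaptively regularized Newton-CG sketch (ours, not the paper's scheme): approximately solve $(H + \sigma I)d = -g$ with CG, stop early on negative curvature, and adapt $\sigma$ by trial and error instead of fixing it from the unknown Hölder modulus $H_f$.

```python
import numpy as np

def cg_solve(hv, g, sigma, max_iter=50, tol=1e-8):
    """CG for (H + sigma I) d = -g, where hv(v) computes H v."""
    d = np.zeros_like(g)
    r = -g.copy()                    # residual at d = 0
    p = r.copy()
    for _ in range(max_iter):
        Hp = hv(p) + sigma * p
        curv = p @ Hp
        if curv <= 0:                # negative curvature: fall back to steepest descent
            return -g if d @ g >= 0 else d
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def adaptive_newton_cg(f, grad, hess_vec, x0, n_iters=50):
    x, sigma = x0.copy(), 1.0
    for _ in range(n_iters):
        g = grad(x)
        d = cg_solve(lambda v: hess_vec(x, v), g, sigma)
        if f(x + d) < f(x) - 1e-4 * abs(g @ d):   # simple sufficient decrease test
            x, sigma = x + d, max(sigma / 2.0, 1e-8)
        else:
            sigma *= 4.0             # reject the step and regularize more
    return x

f = lambda x: np.sum(np.abs(x) ** 2.5)             # Hessian is Hölder continuous (nu = 0.5)
grad = lambda x: 2.5 * np.sign(x) * np.abs(x) ** 1.5
hess_vec = lambda x, v: 3.75 * np.sqrt(np.abs(x)) * v
x = adaptive_newton_cg(f, grad, hess_vec, np.array([2.0, -1.5]))
```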

Separable QCQPs and Their Exact SDP Relaxations

This paper studies exact semidefinite programming relaxations (SDPRs) for separable quadratically constrained quadratic programs (QCQPs). We consider the construction of a larger separable QCQP from multiple QCQPs with exact SDPRs. We show that exactness is preserved when such QCQPs are combined through a separable horizontal connection, where the coupling is induced through the right-hand-side parameters … Read more
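The building block whose exactness the paper studies is the standard Shor-style SDP relaxation of a single QCQP; the sketch below constructs it for a small ball-constrained instance, assuming the cvxpy modeling library. The construction and instance are generic, not the paper's new machinery.

```python
import cvxpy as cp
import numpy as np

n = 2
A0, b0 = np.eye(n), np.array([-1.0, 0.0])   # objective: x^T A0 x + 2 b0^T x
A1, b1, c1 = np.eye(n), np.zeros(n), 1.0    # constraint: x^T A1 x + 2 b1^T x <= c1

Y = cp.Variable((n + 1, n + 1), PSD=True)   # lifted variable [X x; x^T 1]
X, x = Y[:n, :n], Y[:n, n]
prob = cp.Problem(
    cp.Minimize(cp.trace(A0 @ X) + 2 * b0 @ x),
    [Y[n, n] == 1, cp.trace(A1 @ X) + 2 * b1 @ x <= c1],
)
prob.solve()
# For this single-ball instance the relaxation is exact (rank-one Y); the paper
# asks when exactness survives separable combinations of such blocks.
```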

A Gradient Sampling Algorithm for Noisy Nonsmooth Optimization

An algorithm is proposed, analyzed, and tested for minimizing locally Lipschitz objective functions that may be nonconvex and/or nonsmooth. The algorithm, which is built upon the gradient-sampling methodology, is designed specifically for cases when objective function and generalized gradient values may be subject to bounded, uncontrollable errors. Similarly to state-of-the-art guarantees for noisy smooth optimization … Read more
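A minimal gradient-sampling sketch of the underlying methodology (generic, not the proposed noisy variant): sample gradients in a ball around the iterate, compute the minimum-norm vector in their convex hull via a small QP over the simplex, and step against it with backtracking.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_hull(G):
    """Minimum-norm point in the convex hull of the rows of G (QP over the simplex)."""
    m = G.shape[0]
    obj = lambda w: np.dot(w @ G, w @ G)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons, method='SLSQP')
    return res.x @ G

def gradient_sampling(f, grad, x0, eps=0.1, m=10, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        pts = x + eps * rng.uniform(-1, 1, size=(m, x.size))
        G = np.array([grad(p) for p in np.vstack([x[None, :], pts])])
        g = min_norm_hull(G)               # stabilized descent direction
        if np.linalg.norm(g) < 1e-8:
            eps *= 0.5                     # approximate stationarity: shrink the radius
            continue
        t = 1.0
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g) and t > 1e-12:
            t *= 0.5                       # Armijo-style backtracking
        x = x - t * g
    return x

f = lambda x: np.abs(x[0]) + 10.0 * np.abs(x[1])   # nonsmooth test function
grad = lambda x: np.array([np.sign(x[0]), 10.0 * np.sign(x[1])])
x = gradient_sampling(f, grad, np.array([5.0, 3.0]))
```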