A flexible block coordinate descent method for unconstrained optimization under Hölder continuity

In this work, we propose a flexible block coordinate method for unconstrained optimization problems under Hölder continuity assumptions. The method guarantees convergence to stationary points and has worst-case complexity results comparable to those obtained by single-block methods that assume Lipschitz or Hölder continuity. The approach is based on quadratic models of the objective function combined …
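A generic block coordinate iteration alternates exact minimization of a quadratic model over each block of variables. The following toy sketch (a hypothetical two-block quadratic instance, not the paper's adaptive method) shows the basic mechanism:

```python
# Cyclic block coordinate descent on f(x, y) = x^2 + y^2 + x*y - 3x.
# Each block update exactly minimizes the quadratic model in that block
# (the model is exact here because f itself is quadratic).

def block_cd(x=0.0, y=0.0, iters=60):
    for _ in range(iters):
        x = (3.0 - y) / 2.0   # argmin over x with y fixed: 2x + y - 3 = 0
        y = -x / 2.0          # argmin over y with x fixed: 2y + x = 0
    return x, y

x, y = block_cd()
print(x, y)  # converges to the stationary point (2, -1)
```

Each cycle contracts the error by a fixed factor on this instance, which is the behavior the complexity analysis quantifies in the general Hölder setting.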

Adaptive Newton-CG methods with global and local analysis for unconstrained optimization with Hölder continuous Hessian

In this paper, we study Newton-conjugate gradient (Newton-CG) methods for minimizing a nonconvex function $f$ whose Hessian is $(H_f,\nu)$-Hölder continuous with modulus $H_f>0$ and exponent $\nu\in(0,1]$. Recently proposed Newton-CG methods for this problem \cite{he2025newton} adopt (i) non-adaptive regularization and (ii) a nested line-search procedure, where (i) often leads to inefficient early progress and the loss …
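In standard form, the $(H_f,\nu)$-Hölder continuity condition on the Hessian reads:

```latex
\[
\|\nabla^2 f(x) - \nabla^2 f(y)\| \;\le\; H_f\,\|x - y\|^{\nu}
\qquad \text{for all } x, y \in \mathbb{R}^n,
\]
```

so that $\nu = 1$ recovers the familiar Lipschitz continuity of the Hessian, while smaller $\nu$ allows rougher second-order behavior.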

Separable QCQPs and Their Exact SDP Relaxations

This paper studies exact semidefinite programming relaxations (SDPRs) for separable quadratically constrained quadratic programs (QCQPs). We consider the construction of a larger separable QCQP from multiple QCQPs with exact SDPRs. We show that exactness is preserved when such QCQPs are combined through a separable horizontal connection, where the coupling is induced through the right-hand-side parameters …

A Gradient Sampling Algorithm for Noisy Nonsmooth Optimization

An algorithm is proposed, analyzed, and tested for minimizing locally Lipschitz objective functions that may be nonconvex and/or nonsmooth. The algorithm, which is built upon the gradient-sampling methodology, is designed specifically for cases in which objective-function and generalized-gradient values may be subject to bounded, uncontrollable errors. Similar to state-of-the-art guarantees for noisy smooth optimization …
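The gradient-sampling methodology can be illustrated on a tiny example: sample gradients at nearby differentiable points, then take the minimum-norm element of their convex hull as a search direction. The sketch below (a toy illustration with a short Frank-Wolfe loop, not the paper's noise-tolerant algorithm) does this for f(x) = |x1| + |x2| near the nonsmooth point 0:

```python
# Gradient sampling sketch on f(x) = |x1| + |x2| near 0: sample
# gradients at nearby differentiable points, then find the minimum-norm
# element of their convex hull via Frank-Wolfe with exact line search.

def grad(x):  # gradient of f where it is differentiable
    return [1.0 if xi > 0 else -1.0 for xi in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def min_norm_hull(gs, iters=100):
    d = gs[0][:]
    for _ in range(iters):
        g = min(gs, key=lambda h: dot(d, h))          # Frank-Wolfe vertex
        diff = [a - b for a, b in zip(d, g)]
        denom = dot(diff, diff)
        if denom == 0.0:
            break
        t = max(0.0, min(1.0, dot(d, diff) / denom))  # exact line search
        d = [(1 - t) * a + t * b for a, b in zip(d, g)]
    return d

eps = 1e-3
samples = [grad([sx * eps, sy * eps]) for sx in (1, -1) for sy in (1, -1)]
g = min_norm_hull(samples)
print(g)  # → [0.0, 0.0]: 0 lies in the hull, certifying near-stationarity
```

When the sampled gradients themselves carry bounded errors, the min-norm point is perturbed accordingly, which is exactly the regime the analysis addresses.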

An objective-function-free algorithm for nonconvex stochastic optimization with deterministic equality and inequality constraints

An algorithm is proposed for solving optimization problems with a stochastic objective and deterministic equality and inequality constraints. This algorithm is objective-function-free in the sense that it uses only the objective’s gradient and never evaluates the function value. It is based on an adaptive selection of function-decreasing and constraint-improving iterations, the former using an Adagrad-type …
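The Adagrad-type mechanism referred to above scales each coordinate's stepsize by its accumulated squared gradients, so no function values are ever needed. A minimal unconstrained sketch (illustrating only this building block, not the paper's constrained method):

```python
# Adagrad-type iteration: stepsizes shrink with accumulated squared
# gradients; the objective function itself is never evaluated.
import math

def adagrad(grad, x0, lr=0.5, eps=1e-8, iters=200):
    x = list(x0)
    acc = [0.0] * len(x)
    for _ in range(iters):
        g = grad(x)
        for i in range(len(x)):
            acc[i] += g[i] ** 2                            # accumulate g_i^2
            x[i] -= lr * g[i] / (math.sqrt(acc[i]) + eps)  # scaled step
    return x

# gradient of f(x) = (x1 - 1)^2 + (x2 + 2)^2; note f is never called
x = adagrad(lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)], [0.0, 0.0])
```

Because the stepsizes adapt automatically, no line search (and hence no function evaluation) is required, which is what makes the approach attractive in the stochastic setting.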

Beyond binarity: Semidefinite programming for ternary quadratic problems

We study the ternary quadratic problem (TQP), a quadratic optimization problem with linear constraints where the variables take values in {0,±1}. While semidefinite programming (SDP) techniques are well established for {0,1}- and {±1}-valued quadratic problems, no dedicated integer semidefinite programming framework exists for the ternary case. In this paper, we introduce a ternary SDP formulation …
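The combinatorial problem that such a relaxation targets can be stated on a toy instance (hypothetical data, brute-forced here as a baseline an SDP bound would be compared against). Note that x ∈ {0,±1} iff x³ = x, the cubic analogue of the binary condition x² = x:

```python
# Brute-force a tiny ternary quadratic problem: minimize x'Qx subject
# to sum(x) = 1 with each x_i in {0, +1, -1}. Toy instance only.
from itertools import product

Q = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]

def quad(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

best = min((x for x in product((-1, 0, 1), repeat=3) if sum(x) == 1),
           key=quad)
print(best, quad(best))  # an optimal ternary point and its value 2
```

Enumeration is exponential in the number of variables, which is precisely why a tractable SDP formulation for the ternary case is of interest.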

On lifting strategies for optimal control problems

The representation of a function in a higher-dimensional space, often referred to as lifting, can be used to reduce problem complexity. We investigate how lifting affects the convergence properties of Newton-type methods. For the first time, we conduct a systematic comparison of several lifting strategies on a set of 40 optimal control problems. In addition, we …

Zeroth-Order Methods for Nonconvex-Strongly Concave Stochastic Minimax Problems with Decision-Dependent Distributions

Stochastic minimax problems with decision-dependent distributions (SMDD) have emerged as a crucial framework for modeling complex systems where data distributions drift in response to decision variables. Most existing methods for SMDD rely on an explicit functional relationship between the decision variables and the probability distribution. In this paper, we propose two sample-based zeroth-order algorithms, namely …
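A standard building block of sample-based zeroth-order methods is the two-point gradient estimator, which recovers gradient information from function values alone. A 2-D sketch (generic estimator, not the paper's SMDD algorithms):

```python
# Two-point zeroth-order gradient estimator:
#   g_hat = (n / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
# with u uniform on the unit sphere, so E[g_hat] ≈ grad f(x) for
# smooth f as delta -> 0. 2-D sketch with u on the unit circle.
import math, random

def f(x):
    return x[0] ** 2 + 2 * x[1] ** 2

def zo_grad(f, x, delta=1e-3, samples=40000, seed=0):
    rng = random.Random(seed)
    n = len(x)
    g = [0.0] * n
    for _ in range(samples):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        u = (math.cos(theta), math.sin(theta))   # uniform on unit circle
        xp = [xi + delta * ui for xi, ui in zip(x, u)]
        xm = [xi - delta * ui for xi, ui in zip(x, u)]
        scale = n * (f(xp) - f(xm)) / (2.0 * delta)
        for i in range(n):
            g[i] += scale * u[i] / samples
    return g

g = zo_grad(f, [1.0, 1.0])  # averages toward the true gradient (2, 4)
```

Such estimators need only evaluations of f, which is what lets the algorithms dispense with an explicit model of how the distribution depends on the decision variables.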

Bregman Regularized Proximal Point Methods for Computing Projected Solutions of Quasi-equilibrium Problems

In this paper, we propose two Bregman regularized proximal point methods that provide the flexibility to compute projected solutions of quasi-equilibrium problems. Each method requires one Bregman projection onto the feasible set and the solution of one regularized equilibrium problem per iteration. Under standard assumptions, we prove that the methods are well-defined and that the sequences they generate converge to a …
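A Bregman proximal step replaces the Euclidean distance in the classical proximal point iteration with a Bregman divergence D_h. For h(x) = Σ x_i log x_i (negative entropy) on the simplex and a linear cost, the step has a closed form, shown in this toy sketch (illustrative only, not the paper's quasi-equilibrium method):

```python
# Bregman proximal point step with negative entropy: for f(x) = <c, x>,
#   x+ = argmin_x  f(x) + (1/lam) * D_h(x, xk)   s.t. sum(x) = 1
# has the closed form x+_i ∝ xk_i * exp(-lam * c_i)  (a mirror step).
import math

def bregman_step(xk, c, lam=1.0):
    w = [xi * math.exp(-lam * ci) for xi, ci in zip(xk, c)]
    s = sum(w)
    return [wi / s for wi in w]       # renormalize onto the simplex

c = [1.0, 0.2, 0.7]          # toy linear cost (hypothetical data)
x = [1 / 3, 1 / 3, 1 / 3]    # start at the simplex barycenter
for _ in range(50):
    x = bregman_step(x, c)
# iterates stay feasible and concentrate on the min-cost coordinate
```

Choosing h to match the feasible set's geometry is what gives these methods their flexibility: the Bregman projection can be far cheaper than a Euclidean one.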

Preconditioned Proximal Gradient Methods with Conjugate Momentum: A Subspace Perspective

In this paper, we propose a descent method for composite optimization problems with linear operators. Specifically, we first design a structure-exploiting preconditioner tailored to the linear operator so that the resulting preconditioned proximal subproblem admits a closed-form solution through its dual formulation. However, such a structure-driven preconditioner may be poorly aligned with the local curvature …
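The idea of a preconditioner that keeps the proximal subproblem in closed form can be seen in the simplest diagonal case: under a diagonal metric P, the prox of lam*||x||_1 is still a componentwise soft-threshold, with per-coordinate threshold lam/P_i. A generic sketch (diagonal preconditioning only, not the paper's conjugate-momentum method):

```python
# Diagonally preconditioned proximal gradient for
#   min 0.5*||A x - b||^2 + lam*||x||_1
# with P = diag(A'A); the prox stays a componentwise soft-threshold.

def soft(z, t):
    return (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0

def prec_prox_grad(A, b, lam, iters=100):
    m, n = len(A), len(A[0])
    P = [sum(A[k][i] ** 2 for k in range(m)) for i in range(n)]  # diag(A'A)
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[k][j] * x[j] for j in range(n)) - b[k] for k in range(m)]
        g = [sum(A[k][i] * r[k] for k in range(m)) for i in range(n)]  # A'(Ax-b)
        x = [soft(x[i] - g[i] / P[i], lam / P[i]) for i in range(n)]
    return x

x = prec_prox_grad([[2.0, 0.0], [0.0, 1.0]], [2.0, 1.0], lam=0.1)
```

On this diagonal toy instance the preconditioner matches the curvature exactly; the paper's concern is precisely the case where a structure-driven preconditioner and the local curvature disagree.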