Swapping objectives accelerates Davis–Yin splitting

In this work, we investigate the application of Davis–Yin splitting (DYS) to convex optimization problems and demonstrate that swapping the roles of the two nonsmooth convex functions can result in a faster convergence rate. Such a swap typically yields a different sequence of iterates, but its impact on convergence behavior has been largely understudied or … Read more
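For context, the classical three-operator Davis–Yin iteration that the abstract refers to can be sketched in a few lines. This is a toy illustration with hypothetical proximal operators, not the paper's analysis; it only shows that exchanging the roles of the two nonsmooth terms changes the iterate sequence while both orderings reach the same minimizer:

```python
import numpy as np

def dys(prox_f, prox_g, grad_h, z0, gamma=0.5, iters=200):
    """Davis-Yin splitting for min f(x) + g(x) + h(x) with h smooth."""
    z = z0
    for _ in range(iters):
        x = prox_f(z)                              # backward step on f
        y = prox_g(2 * x - z - gamma * grad_h(x))  # backward step on g, forward on h
        z = z + y - x                              # governing-sequence update
    return prox_f(z)

# Toy problem: minimize 0.5*(x - 2)^2 over [0, 1]; the minimizer is x = 1.
grad_h = lambda x: x - 2.0
prox_le1 = lambda v: min(v, 1.0)   # projection onto {x <= 1}
prox_ge0 = lambda v: max(v, 0.0)   # projection onto {x >= 0}

x_a = dys(prox_le1, prox_ge0, grad_h, z0=0.0)  # one ordering of f and g
x_b = dys(prox_ge0, prox_le1, grad_h, z0=0.0)  # roles swapped
```

Both orderings converge to the constrained minimizer x = 1, but through different governing sequences z, which is exactly the degree of freedom the abstract studies.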

Optimized methods for composite optimization: a reduction perspective

Recent advances in convex optimization have leveraged computer-assisted proofs to develop optimized first-order methods that improve over classical algorithms. However, each optimized method is specially tailored for a particular problem setting, and it is a well-documented challenge to extend optimized methods to other settings due to their highly bespoke design and analysis. We provide a general … Read more

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more

Dual certificates of primal cone membership

We discuss optimization problems over convex cones in which membership is difficult to verify directly. In the standard theory of duality, vectors in the dual cone \(K^*\) are associated with separating hyperplanes and interpreted as certificates of non-membership in the primal cone \(K\). Complementing this perspective, we develop easily verifiable certificates of membership in \(K\) … Read more
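The standard non-membership certificate that this abstract contrasts with is easy to illustrate on the positive semidefinite cone, which is self-dual: a unit eigenvector \(v\) with \(v^\top X v < 0\) yields the dual element \(Y = vv^\top \in K^*\) separating \(X\) from the cone. A minimal sketch (the paper's contribution, membership certificates, is not shown here):

```python
import numpy as np

# The PSD cone is self-dual: X is in the cone iff <Y, X> >= 0 for all PSD Y.
# An eigenvector v with v^T X v < 0 gives Y = v v^T in the dual cone, a
# separating-hyperplane certificate that X is NOT in the primal cone.
X = np.array([[1.0, 2.0],
              [2.0, 1.0]])         # indefinite: eigenvalues 3 and -1
w, V = np.linalg.eigh(X)           # eigenvalues in ascending order
v = V[:, 0]                        # eigenvector for the smallest eigenvalue
Y = np.outer(v, v)                 # dual certificate of non-membership
violation = np.trace(Y @ X)        # equals v^T X v = lambda_min = -1
```

The certificate is verifiable without re-solving an eigenproblem: anyone can check that Y is PSD and that the inner product is negative.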

Lipschitz-Free Mirror Descent Methods for Non-Smooth Optimization Problems

The part of the convergence-rate analysis of the mirror descent method connected with the adaptive time-varying step-size rules of Alkousa et al. (MOTOR 2024, pp. 3-18) is corrected. Moreover, a Lipschitz-free mirror descent method that achieves weak ergodic convergence is presented, generalizing the convergence results of the mirror … Read more
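For context, mirror descent with the entropy mirror map and a gradient-norm-adaptive step size can be sketched as follows. This is a generic textbook variant with an assumed step-size schedule, not the corrected rule from the paper; it returns the weighted ergodic average that ergodic-convergence statements refer to:

```python
import numpy as np

def mirror_descent_simplex(grad, x0, stepsize, iters):
    """Mirror descent with the entropy mirror map (multiplicative updates),
    returning the weighted ergodic average of the iterates."""
    x = x0.copy()
    avg, total = np.zeros_like(x), 0.0
    for k in range(iters):
        g = grad(x)
        t = stepsize(k, g)
        x = x * np.exp(-t * g)      # gradient step in the dual (log) space
        x /= x.sum()                # Bregman projection onto the simplex
        avg, total = avg + t * x, total + t
    return avg / total

# Minimize f(x) = <c, x> over the probability simplex; the optimum puts all
# mass on the smallest entry of c (index 1 here).
c = np.array([0.9, 0.1, 0.5])
step = lambda k, g: 1.0 / (np.linalg.norm(g) * np.sqrt(k + 1))  # norm-adaptive
x_avg = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, step, 2000)
```

Dividing the step by the current gradient norm is one common way to avoid a global Lipschitz constant, which is the flavor of "Lipschitz-free" rule the title points at.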

Gradient Methods with Online Scaling Part I. Theoretical Foundations

This paper establishes the theoretical foundations of the online scaled gradient methods (OSGM), a framework that utilizes online learning to adapt stepsizes and provably accelerate first-order methods. OSGM quantifies the effectiveness of a stepsize by a feedback function motivated by a convergence measure and uses the feedback to adjust the stepsize through an online learning … Read more
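The feedback loop can be illustrated with a generic hypergradient-style sketch on a quadratic. Note the feedback used below is simply the derivative of the next function value with respect to the stepsize, not the feedback function OSGM defines; all names here are illustrative:

```python
import numpy as np

# Gradient descent whose stepsize t is itself adapted online.
A = np.diag([1.0, 10.0])                 # f(x) = 0.5 * x^T A x
grad = lambda x: A @ x
x = np.array([1.0, 1.0])
t, eta = 0.1, 0.01                       # initial stepsize, meta-stepsize
for _ in range(300):
    g = grad(x)
    x_new = x - t * g                    # gradient step with current stepsize
    # Feedback: d/dt f(x - t*g) = -g . grad(x_new); descend on t with rate eta.
    t = max(t + eta * (g @ grad(x_new)), 1e-4)
    x = x_new
```

The stepsize drifts toward a value matched to the local curvature without ever being told the smoothness constant, which is the behavior an online-learning analysis of such feedback makes precise.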

A symmetric extrapolated proximal alternating predictor-corrector method for saddle-point problems

The proximal alternating predictor-corrector (PAPC) method is a widely used first-order algorithm for solving convex-concave saddle-point problems involving both smooth and nonsmooth components. Unlike the primal-dual hybrid gradient (PDHG) method, which incorporates an extrapolation step with parameter $\theta \in (0,1]$ to improve convergence, the existing convergence analysis of PAPC has been limited to the case … Read more
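The PDHG extrapolation step that the abstract contrasts with can be sketched as follows; this is the classical Chambolle–Pock-style iteration on a toy one-dimensional lasso problem, not the PAPC variant the paper analyzes:

```python
import numpy as np

def pdhg(K, prox_g, prox_fstar, x, y, tau, sigma, theta=1.0, iters=500):
    """Primal-dual hybrid gradient for min_x f(Kx) + g(x), with the
    extrapolation step governed by theta in (0, 1]."""
    x_bar = x.copy()
    for _ in range(iters):
        y = prox_fstar(y + sigma * (K @ x_bar), sigma)  # dual ascent step
        x_next = prox_g(x - tau * (K.T @ y), tau)       # primal descent step
        x_bar = x_next + theta * (x_next - x)           # extrapolation
        x = x_next
    return x

# Toy problem: min_x 0.5*(x - 2)^2 + |x|, whose minimizer is x* = 1.
K = np.eye(1)
prox_g = lambda v, t: (v + 2 * t) / (1 + t)         # prox of 0.5*(x - 2)^2
prox_fstar = lambda v, s: np.clip(v, -1.0, 1.0)     # prox of f* = i_[-1,1]
x = pdhg(K, prox_g, prox_fstar, np.zeros(1), np.zeros(1),
         tau=0.5, sigma=0.5, theta=1.0)
```

The stepsizes satisfy the usual condition tau * sigma * ||K||^2 < 1; theta = 1 is the standard fully extrapolated choice, and the abstract's question is what range of theta the PAPC-style scheme tolerates.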

Multi-cut stochastic approximation methods for solving stochastic convex composite optimization

This paper considers the stochastic convex composite optimization problem and presents multi-cut stochastic approximation (SA) methods for solving it, whose models overestimate the objective function in expectation. The multi-cut model, obtained by taking the maximum of a finite number of linearizations of the stochastic objective function, provides a biased estimate of the objective function, with … Read more
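The deterministic multi-cut model is simple to state: the pointwise maximum of linearizations of a convex function, which underestimates the function and is exact at each cut point. A minimal sketch for context (the stochastic, biased variants the paper studies are not shown):

```python
import numpy as np

# Multi-cut (cutting-plane) model of a convex f: the max of its tangent lines.
f = lambda x: x ** 2
df = lambda x: 2 * x
centers = np.array([-1.0, 0.0, 1.5])     # points where cuts were generated

def model(x):
    cuts = f(centers) + df(centers) * (x - centers)   # one linearization per center
    return np.max(cuts)                                # pointwise maximum
```

By convexity each cut lies below f, so the model never exceeds f and matches it at every center; with noisy subgradients each cut is only an unbiased estimate of a tangent, and the max of such estimates is biased upward, which is the phenomenon the abstract exploits.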

An Adaptive and Parameter-Free Nesterov’s Accelerated Gradient Method for Convex Optimization

We propose AdaNAG, an adaptive accelerated gradient method based on Nesterov’s accelerated gradient method. AdaNAG is line-search-free, parameter-free, and achieves the accelerated convergence rates \( f(x_k) - f_\star = \mathcal{O}\left(1/k^2\right) \) and \( \min_{i\in\left\{1,\dots, k\right\}} \|\nabla f(x_i)\|^2 = \mathcal{O}\left(1/k^3\right) \) for an \( L \)-smooth convex function \( f \). We provide a Lyapunov analysis for … Read more
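For reference, the classical Nesterov accelerated gradient method that AdaNAG builds on can be sketched as follows. Unlike AdaNAG, this baseline is not parameter-free: it needs the smoothness constant L up front, which is precisely the requirement the paper removes:

```python
import numpy as np

def nag(grad, x0, L, iters):
    """Classical Nesterov accelerated gradient with the 1/k^2 rate
    (requires knowing the smoothness constant L)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                        # gradient step at y
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2       # momentum schedule
        y = x_next + (t - 1) / t_next * (x_next - x)    # extrapolation
        x, t = x_next, t_next
    return x

A = np.diag([1.0, 100.0])                # ill-conditioned quadratic
grad = lambda x: A @ (x - 1.0)           # minimizer at (1, 1)
x = nag(grad, np.zeros(2), L=100.0, iters=300)
```

On this quadratic the objective gap after k steps obeys the textbook bound \( 2L\|x_0 - x_\star\|^2 / (k+1)^2 \), which for the parameters above is about 4e-3.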

Steepest descent method using novel adaptive stepsizes for unconstrained nonlinear multiobjective programming

We propose new adaptive strategies to compute stepsizes for the steepest descent method to solve unconstrained nonlinear multiobjective optimization problems without employing any linesearch procedure. The resulting algorithms can be applied to a wide class of nonconvex unconstrained multi-criteria optimization problems satisfying a global Lipschitz continuity condition imposed on the gradients of all objectives. In … Read more
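For two objectives the classical multiobjective steepest descent direction, in the spirit of Fliege and Svaiter, has a closed form: the negative of the minimum-norm point in the convex hull of the two gradients. A minimal sketch for context (the paper's adaptive stepsize rules are not reproduced here):

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Steepest common descent direction for two objectives: the negative of
    the minimum-norm point in the convex hull of the two gradients."""
    diff = g1 - g2
    denom = diff @ diff
    # Minimize ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1] in closed form.
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d = common_descent_direction(g1, g2)   # descends for both objectives
```

Whenever the minimum-norm point is nonzero, the resulting d satisfies d . g_i < 0 for every objective, so a sufficiently small step decreases all objectives simultaneously; choosing that step without a linesearch is what the abstract's adaptive strategies address.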