General Perturbation Resilient Dynamic String-Averaging for Inconsistent Problems with Superiorization

In this paper we introduce a General Dynamic String-Averaging (GDSA) iterative scheme and investigate its convergence properties in the inconsistent case, that is, when the input operators do not have a common fixed point. The Dynamic String-Averaging Projection (DSAP) algorithm itself was introduced in a 2013 paper, where its strong convergence and bounded perturbation resilience were … Read more
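
As background, here is a minimal sketch of the plain string-averaging projection idea that DSAP and the GDSA scheme generalize: along each "string", projections onto the individual sets are applied successively, and the resulting endpoints are then averaged. The halfspaces, strings, and weights below are illustrative choices of mine, not taken from the paper.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {z : a @ z <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def string_averaging_step(x, strings, weights, halfspaces):
    """One string-averaging projection step: apply the projections along each
    string in sequence, then take a convex combination of the endpoints."""
    endpoints = []
    for string in strings:
        y = x
        for i in string:                 # successive projections along the string
            a, b = halfspaces[i]
            y = project_halfspace(y, a, b)
        endpoints.append(y)
    return sum(w * y for w, y in zip(weights, endpoints))

# Illustrative data: three halfspaces in R^2, two strings, equal weights.
halfspaces = [(np.array([1.0, 0.0]), 1.0),
              (np.array([0.0, 1.0]), 1.0),
              (np.array([1.0, 1.0]), 1.5)]
strings = [[0, 1], [2]]
weights = [0.5, 0.5]

x = np.array([3.0, 2.0])
for _ in range(50):
    x = string_averaging_step(x, strings, weights, halfspaces)
print(x)   # converges into the intersection of the three halfspaces
```

In the inconsistent case the sets have empty intersection, and what such iterations converge to, and how robust they are to perturbations, is precisely the question the paper addresses.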

Swapping objectives accelerates Davis-Yin splitting

In this work, we investigate the application of Davis–Yin splitting (DYS) to convex optimization problems and demonstrate that swapping the roles of the two nonsmooth convex functions can result in a faster convergence rate. Such a swap typically yields a different sequence of iterates, but its impact on convergence behavior has been largely understudied or … Read more
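
For orientation, a minimal sketch of the standard three-operator DYS iteration (not the paper's analysis) on a toy problem of my choosing; the "swap" in the title corresponds to exchanging which nonsmooth prox is applied first.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def davis_yin(prox_a, prox_b, grad_h, z, gamma, iters=300):
    """Davis-Yin splitting for min f + g + h with h smooth.
    prox_a acts on z directly, prox_b on the reflected, gradient-corrected point;
    exchanging prox_a and prox_b is the 'swap' discussed in the abstract."""
    for _ in range(iters):
        xa = prox_a(z, gamma)
        xb = prox_b(2 * xa - z - gamma * grad_h(xa), gamma)
        z = z + (xb - xa)
    return xa

# Illustrative problem (my choice, not from the paper):
#   min_x  alpha*||x||_1 + indicator(x >= 0) + 0.5*||x - c||^2
c = np.array([1.0, -2.0, 0.3])
alpha, gamma = 0.5, 1.0
prox_l1     = lambda v, t: soft_threshold(v, alpha * t)
prox_nonneg = lambda v, t: np.maximum(v, 0.0)
grad_h      = lambda x: x - c

sol_1 = davis_yin(prox_nonneg, prox_l1, grad_h, np.zeros(3), gamma)
sol_2 = davis_yin(prox_l1, prox_nonneg, grad_h, np.zeros(3), gamma)  # swapped roles
print(sol_1, sol_2)   # both converge, generally along different iterate sequences
```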

Optimized methods for composite optimization: a reduction perspective

Recent advances in convex optimization have leveraged computer-assisted proofs to develop optimized first-order methods that improve over classical algorithms. However, each optimized method is specially tailored for a particular problem setting, and it is a well-documented challenge to extend optimized methods to other settings due to their highly bespoke design and analysis. We provide a general … Read more

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more
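
One standard way to make the distinction above precise (notation mine; assuming the deterministic constraints are written as \( g(x) \le 0 \) and \( \bar{x} \) is the returned stochastic solution): an \(\epsilon\)-expectedly feasible solution only requires

\[
\mathbb{E}\big[f(\bar{x})\big] - f^{*} \le \epsilon
\qquad \text{and} \qquad
\mathbb{E}\big[\big\|[g(\bar{x})]_{+}\big\|\big] \le \epsilon,
\]

whereas near feasibility in the deterministic sense would require \( \big\|[g(\bar{x})]_{+}\big\| \le \epsilon \) to hold surely, not merely in expectation.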

Dual certificates of primal cone membership

We discuss easily verifiable cone membership certificates, that is, certificates consisting of vectors in the dual cone \(K^*\) that prove relations of the form \( b\in K \) for a convex cone \(K\). Vectors in the dual cone are usually associated with separating hyperplanes, and so they are interpreted as certificates of non-membership in the standard … Read more
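
As context for the "standard interpretation" mentioned here (a textbook fact, not the construction proposed in the paper), a dual vector always certifies non-membership:

\[
y \in K^{*} \ \text{ and } \ \langle y, b\rangle < 0 \quad\Longrightarrow\quad b \notin K,
\qquad \text{since } \langle y, x\rangle \ge 0 \ \text{ for all } x \in K.
\]

The paper's point is the reverse direction: dual vectors that certify membership \( b \in K \).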

Lipschitz-Free Mirror Descent Methods for Non-Smooth Optimization Problems

The part of the convergence-rate analysis of the mirror descent method concerned with the adaptive time-varying step-size rules due to Alkousa et al. (MOTOR 2024, pp. 3-18) is corrected. Moreover, a Lipschitz-free mirror descent method that achieves weak ergodic convergence is presented, generalizing the convergence results of the mirror … Read more
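
For orientation, a minimal sketch of mirror descent on the probability simplex with the entropy mirror map, a time-varying step-size sequence, and an ergodic (weighted-average) output; the step-size rule below is the classical \( 1/\sqrt{k} \) choice, not the adaptive rule analyzed in the paper, and the objective is an illustrative choice of mine.

```python
import numpy as np

def mirror_descent_simplex(subgrad, x0, steps):
    """Mirror descent on the probability simplex with the entropy mirror map
    (exponentiated-gradient updates) and an arbitrary time-varying step-size
    sequence; returns the step-size-weighted ergodic average of the iterates."""
    x = x0.copy()
    x_avg, weight = np.zeros_like(x), 0.0
    for eta in steps:
        g = subgrad(x)
        x = x * np.exp(-eta * g)   # multiplicative (entropy-prox) update
        x /= x.sum()               # renormalize onto the simplex
        x_avg += eta * x
        weight += eta
    return x_avg / weight

# Illustrative nonsmooth convex objective on the simplex: f(x) = max_i (A x)_i.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
subgrad = lambda x: A[np.argmax(A @ x)]             # a subgradient of the max
x0 = np.full(4, 0.25)
steps = [1.0 / np.sqrt(k) for k in range(1, 201)]   # classical 1/sqrt(k) rule
print(mirror_descent_simplex(subgrad, x0, steps))
```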

Gradient Methods with Online Scaling Part I. Theoretical Foundations

This paper establishes the theoretical foundations of online scaled gradient methods (OSGM), a framework that uses online learning to adapt stepsizes and provably accelerate first-order methods. OSGM quantifies the effectiveness of a stepsize by a feedback function motivated by a convergence measure and uses the feedback to adjust the stepsize through an online learning … Read more
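
As a rough caricature of the general idea (a stepsize adjusted by online learning from a per-iteration feedback signal), not of OSGM's actual feedback function or guarantees, one could write the following; the feedback used here, the next function value \( f(x - \eta g) \), and all constants are my illustrative choices.

```python
import numpy as np

def gd_with_online_stepsize(grad, x, eta=0.05, meta_lr=1e-3, iters=100):
    """Gradient descent whose scalar stepsize is adjusted online.
    Feedback signal (illustrative): the next function value f(x - eta*g), whose
    derivative w.r.t. eta is -<grad f(x_next), g>; the stepsize takes a small
    online (hyper)gradient step on this feedback at every iteration."""
    for _ in range(iters):
        g = grad(x)
        x_next = x - eta * g
        hypergrad = -grad(x_next) @ g            # d/d(eta) of f(x - eta*g)
        eta = max(eta - meta_lr * hypergrad, 1e-8)
        x = x_next
    return x, eta

# Toy quadratic f(x) = 0.5 x'Qx; the learned stepsize adapts to the curvature.
Q = np.diag([1.0, 2.0, 4.0])
grad = lambda x: Q @ x
x, eta = gd_with_online_stepsize(grad, np.ones(3))
print(x, eta)
```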

A symmetric extrapolated proximal alternating predictor-corrector method for saddle-point problems

The proximal alternating predictor-corrector (PAPC) method is a widely used first-order algorithm for solving convex-concave saddle-point problems involving both smooth and nonsmooth components. Unlike the primal-dual hybrid gradient (PDHG) method, which incorporates an extrapolation step with parameter $\theta \in (0,1]$ to improve convergence, the existing convergence analysis of PAPC has been limited to the case … Read more
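
For reference, a minimal sketch of the PDHG iteration with the extrapolation parameter \( \theta \) mentioned above (Chambolle–Pock form), applied to an illustrative saddle-point reformulation of \( \min_x \tfrac12\|x-b\|^2 + \|Kx\|_1 \); the PAPC method and the symmetric extrapolated variant proposed in the paper are not reproduced here.

```python
import numpy as np

def pdhg(K, b, tau, sigma, theta=1.0, iters=500):
    """PDHG with extrapolation parameter theta in (0,1] for
       min_x 0.5*||x - b||^2 + ||K x||_1,
    written as  min_x max_{||y||_inf <= 1}  0.5*||x - b||^2 + <K x, y>."""
    m, n = K.shape
    x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)
    for _ in range(iters):
        y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)        # dual prox (l_inf ball)
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)  # prox of 0.5*||. - b||^2
        x_bar = x_new + theta * (x_new - x)                    # extrapolation step
        x = x_new
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 5))
b = rng.standard_normal(5)
L = np.linalg.norm(K, 2)                  # operator norm; need tau*sigma*L^2 < 1
x = pdhg(K, b, tau=0.9 / L, sigma=0.9 / L, theta=1.0)
print(x)
```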

Multi-cut stochastic approximation methods for solving stochastic convex composite optimization

The development of a multi-cut stochastic approximation (SA) method for solving stochastic convex composite optimization (SCCO) problems has remained an open challenge. The difficulty arises from the fact that the stochastic multi-cut model, constructed as the pointwise maximum of individual stochastic linearizations, provides a biased estimate of the objective function, with the error being uncontrollable. … Read more
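
To fix ideas about the object in question (notation mine, not necessarily the paper's): with stochastic samples \( \xi_1,\dots,\xi_k \) and stochastic subgradients \( s_j = s(x_j;\xi_j) \), the multi-cut model after \( k \) iterations is the pointwise maximum of the collected stochastic linearizations,

\[
\Gamma_k(x) \;=\; \max_{1 \le j \le k} \Big\{ f(x_j;\xi_j) + \langle s_j,\, x - x_j \rangle \Big\}.
\]

Even if each individual linearization is unbiased, Jensen's inequality gives \( \mathbb{E}\big[\max_j(\cdot)\big] \ge \max_j \mathbb{E}[\cdot] \), so \( \Gamma_k \) is in general a biased estimate of the corresponding deterministic cutting-plane model, which is the difficulty the abstract alludes to.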

An Adaptive and Parameter-Free Nesterov’s Accelerated Gradient Method for Convex Optimization

We propose AdaNAG, an adaptive accelerated gradient method based on Nesterov’s accelerated gradient method. AdaNAG is line-search-free, parameter-free, and achieves the accelerated convergence rates \( f(x_k) - f_\star = \mathcal{O}\left(1/k^2\right) \) and \( \min_{i\in\left\{1,\dots, k\right\}} \|\nabla f(x_i)\|^2 = \mathcal{O}\left(1/k^3\right) \) for an \( L \)-smooth convex function \( f \). We provide a Lyapunov analysis for … Read more
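
For context, a minimal sketch of the classical fixed-stepsize Nesterov accelerated gradient method that AdaNAG builds on; AdaNAG's adaptive, line-search-free and parameter-free rules are not reproduced here, and the test problem is an illustrative choice of mine.

```python
import numpy as np

def nesterov_agm(grad, L, x0, iters=200):
    """Classical Nesterov accelerated gradient method for an L-smooth convex f,
    achieving f(x_k) - f_star = O(1/k^2) with the fixed stepsize 1/L."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                          # gradient step at the extrapolated point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum / extrapolation
        x, t = x_new, t_new
    return x

# Toy L-smooth convex objective: least squares.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2
print(nesterov_agm(grad, L, np.zeros(10)))
```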