An Adaptive and Parameter-Free Nesterov’s Accelerated Gradient Method for Convex Optimization

We propose AdaNAG, an adaptive accelerated gradient method based on Nesterov’s accelerated gradient method. AdaNAG is line-search-free, parameter-free, and achieves the accelerated convergence rates \( f(x_k) - f_\star = \mathcal{O}\left(1/k^2\right) \) and \( \min_{i\in\left\{1,\dots, k\right\}} \|\nabla f(x_i)\|^2 = \mathcal{O}\left(1/k^3\right) \) for an \( L \)-smooth convex function \( f \). We provide a Lyapunov analysis for … Read more
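For orientation, here is a minimal sketch of the classical Nesterov accelerated gradient baseline that AdaNAG builds on. The fixed stepsize \(1/L\) assumes \(L\) is known in advance, which is exactly the requirement a parameter-free method removes. The test function is an illustrative toy, not from the paper.

```python
import math

# Classical Nesterov accelerated gradient (NAG) on an ill-conditioned
# L-smooth convex quadratic f(x) = x1^2 + 10*x2^2, with L = 20.
# (Illustrative baseline only; AdaNAG itself adapts the stepsize.)

def f(x):
    return x[0] ** 2 + 10.0 * x[1] ** 2

def grad(x):
    return [2.0 * x[0], 20.0 * x[1]]

L = 20.0
x = [3.0, 1.0]          # current iterate x_k
x_prev = x[:]           # previous iterate x_{k-1}
t = 1.0                 # momentum parameter t_k

for _ in range(500):
    t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
    beta = (t - 1.0) / t_next
    y = [x[i] + beta * (x[i] - x_prev[i]) for i in range(2)]  # extrapolation
    g = grad(y)
    x_prev, x = x, [y[i] - g[i] / L for i in range(2)]        # gradient step at y
    t = t_next
```

The standard guarantee \(f(x_k) - f_\star \le 2L\|x_0 - x_\star\|^2/(k+2)^2\) already places the final suboptimality well below \(10^{-2}\) here.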

Steepest descent method using novel adaptive stepsizes for unconstrained nonlinear multiobjective programming

We propose new adaptive strategies to compute stepsizes for the steepest descent method to solve unconstrained nonlinear multiobjective optimization problems without employing any line-search procedure. The resulting algorithms can be applied to a wide class of nonconvex unconstrained multi-criteria optimization problems satisfying a global Lipschitz continuity condition imposed on the gradients of all objectives. In … Read more
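To illustrate the line-search-free adaptive-stepsize idea in the simplest single-objective setting (this is a generic sketch, not the paper's multiobjective algorithm): estimate the local Lipschitz constant of the gradient from two consecutive iterates and step with its inverse.

```python
# Adaptive-stepsize gradient descent without any line search:
# L is estimated from the last two gradients instead of being given.
# Toy objective (illustrative): f(x) = (x - 3)^2, so f'(x) = 2(x - 3).

def grad(x):
    return 2.0 * (x - 3.0)

x_prev, g_prev = 0.0, grad(0.0)
x = x_prev - 0.1 * g_prev            # bootstrap with a small fixed step
for _ in range(100):
    g = grad(x)
    if abs(x - x_prev) < 1e-15 or abs(g - g_prev) < 1e-15:
        break                        # stationary or flat: stop
    L_est = abs(g - g_prev) / abs(x - x_prev)   # local curvature estimate
    x_prev, g_prev = x, g
    x = x - g / L_est                # stepsize 1/L_est, no line search
```

On a quadratic the curvature estimate is exact after one step, so the iteration lands on the minimizer almost immediately.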

A Symmetric Primal-Dual method with two extrapolation steps for Composite Convex Optimization

Symmetry is a recurring feature in algorithms for monotone operator theory and convex optimization, particularly in problems involving the sum of two operators, as exemplified by the Peaceman–Rachford splitting scheme. However, in more general settings—such as composite optimization problems with three convex functions or structured convex-concave saddle-point formulations—existing algorithms often exhibit inherent asymmetry. In particular, … Read more
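The asymmetry the abstract alludes to is visible in the standard primal-dual hybrid gradient (Chambolle–Pock) method, which extrapolates the primal variable only. A minimal sketch on a toy 1-D problem (illustrative; not the two-extrapolation algorithm proposed in the paper):

```python
# PDHG / Chambolle-Pock on min_x |x| + (1/2)(x - b)^2, written as the
# saddle point min_x max_y x*y - f*(y) + g(x) with f = |.|, K = 1,
# so prox of sigma*f* is a clip to [-1, 1].  Note the single
# extrapolation step, on the primal variable only.

b = 3.0
tau = sigma = 0.9        # stepsizes, tau * sigma * ||K||^2 < 1
x = y = x_bar = 0.0

for _ in range(300):
    y = max(-1.0, min(1.0, y + sigma * x_bar))     # dual prox step
    x_new = (x - tau * y + tau * b) / (1.0 + tau)  # primal prox step
    x_bar = 2.0 * x_new - x                        # primal-only extrapolation
    x = x_new

# analytical solution by soft thresholding: x* = b - 1 = 2
```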

A double-accelerated proximal augmented Lagrangian method with applications in signal reconstruction

The Augmented Lagrangian Method (ALM), first proposed in 1969, remains a vital framework in large-scale constrained optimization. This paper addresses a linearly constrained composite convex minimization problem and presents a general proximal ALM that incorporates both Nesterov acceleration and relaxed acceleration, while allowing indefinite proximal terms. Under mild assumptions (potentially without requiring prior knowledge of … Read more
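For reference, the classical non-accelerated ALM iteration on a toy equality-constrained problem (illustrative sketch of the 1969 baseline, not the paper's accelerated proximal variant):

```python
# Augmented Lagrangian method on  min x^2  s.t.  x = 1.
# Each iteration minimizes the augmented Lagrangian in x,
# then takes a dual ascent step on the multiplier.

rho = 10.0               # penalty parameter
lam = 0.0                # multiplier estimate
x = 0.0
for _ in range(50):
    # x-update: argmin_x  x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2
    x = (rho - lam) / (2.0 + rho)
    lam += rho * (x - 1.0)          # multiplier update

# converges to x* = 1, lam* = -2 (stationarity: 2*x* + lam* = 0)
```

Here the multiplier error contracts by the factor \(2/(2+\rho)\) per iteration, so larger \(\rho\) speeds convergence at the cost of worse conditioning of the x-subproblem.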

Negative Stepsizes Make Gradient-Descent-Ascent Converge

Efficient computation of min-max problems is a central question in optimization, learning, games, and controls. Arguably the most natural algorithm is gradient-descent-ascent (GDA). However, since the 1970s, conventional wisdom has argued that GDA fails to converge even on simple problems. This failure spurred an extensive literature on modifying GDA with additional building blocks such as … Read more
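The classical failure the abstract cites is easy to reproduce: on the bilinear problem \(f(x,y) = xy\) (min over \(x\), max over \(y\), saddle point at the origin), simultaneous GDA with any constant positive stepsize spirals outward.

```python
import math

# Simultaneous gradient descent-ascent on f(x, y) = x * y.
# Each step multiplies the distance to the saddle point by
# sqrt(1 + eta^2) > 1, so the iterates diverge.

eta = 0.1
x, y = 1.0, 1.0
r0 = math.hypot(x, y)            # initial distance to the saddle
for _ in range(100):
    x, y = x - eta * y, y + eta * x   # simultaneous GDA step
r = math.hypot(x, y)             # final distance: strictly larger
```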

On the Acceleration of Proximal Bundle Methods

The proximal bundle method (PBM) is a fundamental and computationally effective algorithm for solving nonsmooth optimization problems. In this paper, we present the first variant of the PBM for smooth objectives, achieving an accelerated convergence rate of \(\mathcal{O}\!\left(\frac{1}{\sqrt{\epsilon}}\log\frac{1}{\epsilon}\right)\), where \(\epsilon\) is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of … Read more
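The PBM can be viewed as approximating, via a cutting-plane model, the proximal point iteration \(x_{k+1} = \operatorname{argmin}_z f(z) + \frac{1}{2t}(z - x_k)^2\). For \(f(x) = |x|\) that exact proximal step has a closed form (soft thresholding), shown here as a reference point (illustrative sketch, not the accelerated method from the paper):

```python
# Exact proximal point iteration on f(x) = |x|:
# prox of t*|.| shrinks x toward 0 by t, stopping at 0.

def prox_abs(x, t):
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

x, t = 3.0, 1.0
for _ in range(10):
    x = prox_abs(x, t)
# reaches the minimizer x* = 0 of |x| in finitely many steps: 3, 2, 1, 0
```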

A relaxed version of Ryu’s three-operator splitting method for structured nonconvex optimization

In this work, we propose a modification of Ryu’s splitting algorithm for minimizing the sum of three functions, where two of them are convex with Lipschitz continuous gradients, and the third is an arbitrary proper closed function that is not necessarily convex. The modification is essential to facilitate the convergence analysis, particularly in establishing a … Read more
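For context, a minimal sketch of a three-operator splitting iteration in this problem class, using the Davis–Yin scheme (a close relative of Ryu's method; this is not Ryu's algorithm or the paper's relaxed variant). It handles \(\min_x f(x) + g(x) + h(x)\) with two prox-friendly terms and one smooth term:

```python
# Davis-Yin three-operator splitting on a 1-D toy problem with
# f = |x| (prox-friendly), g = indicator of [0, inf) (prox = projection),
# and smooth h(x) = (1/2)*(x - 2)^2 with 1-Lipschitz gradient.

def prox_abs(x, t):                  # prox of t*|.| (soft threshold)
    return max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0)

gamma = 1.0                          # stepsize in (0, 2/L), here L = 1
z = 0.0                              # governing fixed-point variable
for _ in range(100):
    xg = max(z, 0.0)                 # prox of g: project onto [0, inf)
    # prox of f applied after a gradient step on h:
    xf = prox_abs(2.0 * xg - z - gamma * (xg - 2.0), gamma)
    z += xf - xg                     # fixed-point update

# minimizer of  x + (1/2)(x - 2)^2  over x >= 0 is x* = 1
```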

An iterative process for the feasibility-seeking problem with sets that are unions of convex sets

In this paper we deal with the feasibility-seeking problem for unions of convex sets (UCS) and propose an iterative process for its solution. Renewed interest in this problem stems from the fact that it was recently discovered to serve as a modeling approach in various fields of application and from the ongoing recent research efforts … Read more
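The classical building block in the purely convex case is alternating projections (von Neumann / POCS). A minimal sketch for two convex sets in the plane (illustrative only; the paper treats the harder case where each set is a union of convex sets):

```python
import math

# Alternating projections onto two convex sets in R^2:
# the unit disk and the half-plane { x : x_1 >= 0.9 }.
# The iterates converge to a point in the (nonempty) intersection.

def proj_disk(p):                     # project onto { x : ||x|| <= 1 }
    n = math.hypot(p[0], p[1])
    return p if n <= 1.0 else (p[0] / n, p[1] / n)

def proj_half(p):                     # project onto { x : x_1 >= 0.9 }
    return (max(p[0], 0.9), p[1])

p = (-2.0, 1.0)
for _ in range(500):
    p = proj_disk(proj_half(p))

# limit lies in the intersection: ||p|| <= 1 and p[0] close to 0.9
```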

Extending Exact Convex Relaxations of Quadratically Constrained Quadratic Programs

A convex relaxation of a quadratically constrained quadratic program (QCQP) is called exact if it has a rank-$1$ optimal solution that corresponds to an optimal solution of the QCQP. Given a QCQP whose convex relaxation is exact, this paper investigates the incorporation of additional quadratic inequality constraints under a non-intersecting quadratic constraint condition while maintaining … Read more

qpBAMM: a parallelizable ADMM approach for block-structured quadratic programs

Block-structured quadratic programs (QPs) frequently arise in the context of the direct approach to solving optimal control problems. For successful application of direct optimal control algorithms to many real-world problems, it is paramount that these QPs can be solved efficiently and reliably. Besides interior-point methods and active-set methods, ADMM-based quadratic programming approaches have gained popularity. … Read more
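The ADMM template such methods apply blockwise can be sketched in its smallest form: a single scalar block of a box-constrained QP, split as \(x = z\) with \(z\) carrying the box constraint. (A hypothetical toy to show the iteration shape, not the paper's algorithm.)

```python
# ADMM for the box-constrained QP  min (1/2)*P*x^2 + q*x  s.t. l <= x <= u,
# using the splitting x = z with the box constraint on z.

P, q = 2.0, -8.0        # unconstrained minimizer would be x = 2
l, u = 0.0, 1.0         # box clips the solution to x* = 1
rho = 1.0               # ADMM penalty parameter
x = z = w = 0.0         # w is the scaled dual variable

for _ in range(500):
    x = (rho * (z - w) - q) / (P + rho)   # x-update: small linear solve
    z = min(max(x + w, l), u)             # z-update: project onto the box
    w += x - z                            # scaled dual update
```

In the block-structured setting, the x-update decomposes into independent per-block linear solves, which is what makes the scheme parallelizable.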