Preconditioned Proximal Gradient Methods with Conjugate Momentum: A Subspace Perspective

In this paper, we propose a descent method for composite optimization problems with linear operators. Specifically, we first design a structure-exploiting preconditioner tailored to the linear operator so that the resulting preconditioned proximal subproblem admits a closed-form solution through its dual formulation. However, such a structure-driven preconditioner may be poorly aligned with the local curvature …
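As a rough illustration of a preconditioned proximal gradient step with a closed-form subproblem, here is a minimal sketch assuming a diagonal preconditioner and an ℓ1 regularizer (neither is specified in the abstract; the paper's operator-tailored preconditioner and dual derivation are not reproduced). With a diagonal metric the prox subproblem separates and is solved exactly by coordinate-wise soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # closed-form prox of t*|.| applied coordinate-wise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def precond_prox_grad(A, b, lam, P_diag, x0, iters=200):
    """Preconditioned proximal gradient for min 0.5||Ax-b||^2 + lam*||x||_1.
    With a diagonal preconditioner P (hypothetical choice, for illustration),
    the prox subproblem in the P-weighted norm is separable and admits the
    closed-form soft-thresholding solution used below."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        # one preconditioned proximal step: prox in the metric diag(P_diag)
        x = soft_threshold(x - grad / P_diag, lam / P_diag)
    return x
```

For descent, `P_diag` should dominate the curvature of the smooth part coordinate-wise (e.g. the diagonal of `A.T @ A` when that matrix is diagonal).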

Strong convergence, perturbation resilience and superiorization of Generalized Modular String-Averaging with infinitely many input operators

We study the strong convergence and bounded perturbation resilience of iterative algorithms based on the Generalized Modular String-Averaging (GMSA) procedure for infinite sequences of input operators under a general admissible control. These methods address a variety of feasibility-seeking problems in real Hilbert spaces, including the common fixed point problem and the convex feasibility problem. In …
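The basic string-averaging idea behind GMSA can be sketched with finitely many operators: compose the operators sequentially along each "string", then take a convex combination of the string outputs. This static, finite sketch is an assumption for illustration; the GMSA procedure in the paper allows infinitely many input operators and dynamic modular controls:

```python
import numpy as np

def string_averaging(operators, strings, weights, x0, iters=100):
    """Static string-averaging for convex feasibility.
    Each string is an ordered list of operator indices; operators are
    applied sequentially along a string, and the string endpoints are
    averaged with convex weights."""
    x = x0
    for _ in range(iters):
        outs = []
        for s in strings:
            y = x
            for i in s:
                y = operators[i](y)  # sequential composition along the string
            outs.append(y)
        x = sum(w * y for w, y in zip(weights, outs))  # convex combination
    return x
```

With metric projections onto convex sets as the operators, the iterates approach the intersection of the sets when it is nonempty.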

A Successive Proximal DC Penalty Method with an Application to Mathematical Programs with Complementarity Constraints

We develop a successive, proximal difference-of-convex (DC) function penalty method for solving DC programs with DC constraints. The proposed approach relies on a DC penalty function that measures the violation of constraints and leads to a penalty reformulation sharing the same solution set as the original problem. The resulting penalty problem is a DC program …
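The workhorse for the resulting penalty problem is the classical DC algorithm (DCA): write the objective as a difference of convex functions, linearize the concave part at the current point, and solve the remaining convex subproblem. A minimal unconstrained example on the double-well function f(x) = x^4 - 2x^2 (my choice, for illustration; the paper's method additionally handles DC constraints through a proximal penalty, which is not shown):

```python
import numpy as np

def dca_double_well(x0, iters=60):
    """DCA on f(x) = x^4 - 2x^2 with the DC decomposition
    g(x) = x^4 (convex), h(x) = 2x^2 (convex).
    Each step linearizes h at x_k and solves the convex subproblem
    argmin_x x^4 - h'(x_k)*x, whose first-order condition
    4x^3 - 4x_k = 0 gives the closed-form update x = cbrt(x_k)."""
    x = x0
    for _ in range(iters):
        x = np.cbrt(x)  # closed-form solution of the convex subproblem
    return x
```

From any nonzero start the iterates converge monotonically to a stationary point of f (here x = ±1, the two wells).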

On Stationarity Conditions and the Convergence of Augmented Lagrangian Methods for Generalized Nash Equilibrium Problems

In this work, we study stationarity conditions and constraint qualifications (CQs) tailored to Generalized Nash Equilibrium Problems (GNEPs) and analyze their relationships and implications for the global convergence of algorithms. We recall that GNEPs generalize Nash Equilibrium Problems (NEPs) in that the feasible strategy set of each player depends on the strategies chosen by the …

A Modified Projected Gradient Algorithm for Solving Quasiconvex Programming with Applications

In this manuscript, we introduce a novel projected gradient algorithm for solving quasiconvex optimization problems over closed convex sets. The key innovation of our new algorithm is an adaptive, parameter-free stepsize rule that requires no line search and avoids estimating constants such as the Lipschitz modulus. Unlike the recent self-adaptive approach given in [17], which typically produces …
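A line-search-free projected gradient iteration of this flavor can be sketched as follows. The stepsize below estimates a local inverse Lipschitz constant from successive iterates (a Barzilai-Borwein-type rule, my choice for illustration; the paper's stepsize rule may differ), and the feasible set is taken to be a box so the projection is a simple clip:

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]
    return np.clip(x, lo, hi)

def adaptive_projected_gradient(grad, x0, lo, hi, iters=100, lam0=1e-3):
    """Projected gradient with an adaptive, line-search-free stepsize.
    lam estimates a local inverse Lipschitz constant of grad from the
    last two iterates; no global constants need to be known."""
    x_prev, g_prev = x0, grad(x0)
    x = project_box(x_prev - lam0 * g_prev, lo, hi)
    for _ in range(iters):
        g = grad(x)
        dx, dg = x - x_prev, g - g_prev
        # BB-type stepsize; fall back to lam0 when the curvature test fails
        lam = np.dot(dx, dx) / np.dot(dx, dg) if np.dot(dx, dg) > 0 else lam0
        x_prev, g_prev = x, g
        x = project_box(x - lam * g, lo, hi)
    return x
```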

Efficient Warm-Start Strategies for Nash-based Linear Complementarity Problems via Bilinear Approximation

We present an effective warm-starting scheme for solving large linear complementarity problems (LCPs) arising from Nash equilibrium problems. The approach generates high-quality starting points that, when passed to the PATH solver, yield substantial reductions in computational time and variance. Our warm-start routine reformulates each agent’s LP using strong duality, leading to a master problem with …

Improved Analysis of Restarted Accelerated Gradient and Augmented Lagrangian Methods via Inexact Proximal Point Frameworks

This paper studies a class of double-loop (inner-outer) algorithms for convex composite optimization. For unconstrained problems, we develop a restarted accelerated composite gradient method that attains the optimal first-order complexity in both the convex and strongly convex settings. For linearly constrained problems, we introduce inexact augmented Lagrangian methods, including a basic method and an outer-accelerated …
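The inner-outer restart structure can be illustrated with the classical building block: run Nesterov's accelerated gradient for a fixed number of inner iterations, then restart the momentum from the latest iterate. The fixed restart period below is an assumption for illustration; the paper's restart schedule, tied to an inexact proximal-point outer loop, may be more elaborate:

```python
import numpy as np

def restarted_agd(grad, x0, L, restart_every=50, outer=10):
    """Restarted accelerated gradient for L-smooth f.
    Inner loop: standard Nesterov momentum with extrapolation weight
    (t_k - 1)/t_{k+1}. Outer loop: reset momentum (y and t) at the
    current iterate every restart_every steps."""
    x = x0
    for _ in range(outer):
        y, t = x, 1.0          # restart: momentum state reset
        x_prev = x
        for _ in range(restart_every):
            x_new = y - grad(y) / L
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x_prev)
            x_prev, t = x_new, t_new
        x = x_prev
    return x
```

On strongly convex problems, periodic restarts recover a linear rate without requiring knowledge of the strong convexity modulus.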

A General Penalty Method and a General Regularization Method for Cardinality-Constrained Optimization Problems

We consider cardinality-constrained optimization problems (CCOPs), which are general nonlinear programs with an additional constraint limiting the number of nonzero continuous variables. The continuous reformulation of CCOPs involves complementarity constraints, which pose significant theoretical and computational challenges. To address these difficulties, we propose and analyze two numerical solution approaches: a general penalty method and a …

Curvature-oriented variance reduction methods for nonconvex stochastic optimization

When pursuing an approximate second-order stationary point in nonconvex constrained stochastic optimization, is it possible to design a stochastic second-order method that achieves the same sample complexity order as in the unconstrained setting? To address this question, we first introduce Carme, a curvature-oriented variance reduction method designed for unconstrained nonconvex stochastic optimization. …
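For context, here is the standard variance-reduction template such methods build on: an SVRG-style gradient estimator that corrects a stochastic component gradient with a periodically refreshed full gradient at a snapshot point. This first-order sketch is my illustration only; it does not capture the curvature-oriented, second-order ingredients of the Carme method described in the abstract:

```python
import numpy as np

def svrg(grads, x0, step=0.1, epochs=5, inner=None, rng=None):
    """SVRG on a finite sum f(x) = (1/n) * sum_i f_i(x).
    grads is a list of component gradient functions. Each epoch stores a
    snapshot, computes the full gradient there, and runs inner steps with
    the variance-reduced estimator
        v = grad_i(x) - grad_i(snapshot) + full_grad."""
    rng = rng or np.random.default_rng(0)
    n = len(grads)
    inner = inner or n
    x = x0
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(g(snapshot) for g in grads) / n
        for _ in range(inner):
            i = rng.integers(n)  # sample one component uniformly
            v = grads[i](x) - grads[i](snapshot) + full_grad
            x = x - step * v
    return x
```

The estimator is unbiased and its variance vanishes as the iterates approach the snapshot, which is what permits constant stepsizes.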

Voronoi Conditional Gradient Method for Constrained Nonconvex Optimization

The Conditional Gradient method offers a computationally efficient, projection-free framework for constrained problems; however, in nonconvex settings it may converge to stationary points of low quality. We propose the Voronoi Conditional Gradient (VCG) method, a geometric heuristic that systematically explores the feasible region by constructing adaptive Voronoi partitions from previously discovered stationary points. VCG incrementally …
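The projection-free mechanism VCG builds on is the classical Frank-Wolfe step: a linear minimization oracle over the feasible set replaces the projection entirely. A minimal sketch over the probability simplex (my choice of feasible set; the Voronoi partitioning heuristic of the paper is not reproduced here):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    """Conditional Gradient (Frank-Wolfe) over the probability simplex.
    The linear minimization oracle min_{s in simplex} <grad, s> is solved
    exactly by the vertex e_i with i = argmin_i grad_i; no projection
    is ever computed, and all iterates stay feasible by construction."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LP oracle: best simplex vertex
        gamma = 2.0 / (k + 2.0)        # standard open-loop stepsize
        x = (1.0 - gamma) * x + gamma * s
    return x
```

For smooth convex objectives this iteration satisfies the classical O(1/k) guarantee on the optimality gap.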