Complexity of an inexact stochastic SQP algorithm for equality constrained optimization

In this paper, we consider nonlinear optimization problems with a stochastic objective function and deterministic equality constraints. We propose an inexact two-stepsize stochastic sequential quadratic programming (SQP) algorithm and analyze its worst-case complexity under mild assumptions. The method utilizes a step decomposition strategy and handles stochastic gradient estimates by assigning different stepsizes to different components …

An objective-function-free algorithm for nonconvex stochastic optimization with deterministic equality and inequality constraints

An algorithm is proposed for solving optimization problems with stochastic objective and deterministic equality and inequality constraints. This algorithm is objective-function-free in the sense that it only uses the objective’s gradient and never evaluates the function value. It is based on an adaptive selection of function-decreasing and constraint-improving iterations, the first ones using an Adagrad-type …

Preconditioned Proximal Gradient Methods with Conjugate Momentum: A Subspace Perspective

In this paper, we propose a descent method for composite optimization problems with linear operators. Specifically, we first design a structure-exploiting preconditioner tailored to the linear operator so that the resulting preconditioned proximal subproblem admits a closed-form solution through its dual formulation. However, such a structure-driven preconditioner may be poorly aligned with the local curvature …
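For context, the classical proximal gradient iteration underlying methods like the one above alternates a gradient step on the smooth part with a proximal step on the nonsmooth part. The sketch below is a generic textbook instance (lasso, with the soft-thresholding prox), not the preconditioned method of the paper; all names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: componentwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_lasso(A, b, lam, step, iters=500):
    # minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient:
    # x_{k+1} = prox_{step*lam*||.||_1}(x_k - step * grad f(x_k))
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With `A` the identity the minimizer is simply `soft_threshold(b, lam)`, which makes the iteration easy to sanity-check. A preconditioned variant would replace the Euclidean gradient step with a step in a preconditioner-induced metric, which is where the closed-form dual subproblem mentioned above comes in.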

Strong convergence, perturbation resilience and superiorization of Generalized Modular String-Averaging with infinitely many input operators

We study the strong convergence and bounded perturbation resilience of iterative algorithms based on the Generalized Modular String-Averaging (GMSA) procedure for infinite sequences of input operators under a general admissible control. These methods address a variety of feasibility-seeking problems in real Hilbert spaces, including the common fixed point problem and the convex feasibility problem. In …

A Successive Proximal DC Penalty Method with an Application to Mathematical Programs with Complementarity Constraints

We develop a successive, proximal difference-of-convex (DC) function penalty method for solving DC programs with DC constraints. The proposed approach relies on a DC penalty function that measures the violation of constraints and leads to a penalty reformulation sharing the same solution set as the original problem. The resulting penalty problem is a DC program …
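The DC programs mentioned above are typically attacked by linearizing the concave part at the current iterate and solving the resulting convex subproblem. The following is a minimal sketch of that standard DCA linearization idea, assuming an unconstrained toy objective g(x) − h(x); it is not the paper's penalty method, and the helper names are placeholders.

```python
def dca(grad_h, argmin_g_linear, x0, iters=50):
    # DCA for minimizing g(x) - h(x), both convex:
    # pick y_k in the subdifferential of h at x_k, then solve
    # x_{k+1} = argmin_x  g(x) - y_k * x   (a convex subproblem)
    x = x0
    for _ in range(iters):
        y = grad_h(x)
        x = argmin_g_linear(y)
    return x

# Toy instance: g(x) = x^2, h(x) = 2|x|, so g - h has minimizers at +/-1.
subgrad_h = lambda x: 2.0 if x > 0 else -2.0  # a subgradient of 2|x| (x != 0)
solve_sub = lambda y: y / 2.0                 # argmin_x x^2 - y*x
```

Starting from a positive point, the iteration lands on the minimizer x = 1 and stays there; the penalty method in the paper embeds subproblems of this type inside a successive penalization loop.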

On Stationary Conditions and the Convergence of Augmented Lagrangian Methods for Generalized Nash Equilibrium Problems

In this work, we study stationarity conditions and constraint qualifications (CQs) tailored to Generalized Nash Equilibrium Problems (GNEPs) and analyze their relationships and implications for the global convergence of algorithms. We recall that GNEPs generalize Nash Equilibrium Problems (NEPs) in that the feasible strategy set of each player depends on the strategies chosen by the …

A Modified Projected Gradient Algorithm for Solving Quasiconvex Programming with Applications

In this manuscript, we introduce a novel projected gradient algorithm for solving quasiconvex optimization problems over closed convex sets. The key innovation of our new algorithm is an adaptive, parameter-free stepsize rule that requires no line search and avoids estimating constants such as the Lipschitz modulus. Unlike the recent self-adaptive approach given in [17], which typically produces …
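As background, the basic projected gradient template that such methods refine is: take a gradient step, then project back onto the feasible set. The sketch below uses a fixed stepsize and a box projection purely for illustration; the adaptive, parameter-free stepsize rule described above is the paper's contribution and is not reproduced here.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def projected_gradient(grad, project, x0, step=0.1, iters=200):
    # Generic projected gradient iteration:
    # x_{k+1} = P_C(x_k - step * grad f(x_k))
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x
```

For the smooth convex test problem min ||x − c||^2 over [0, 1]^n, the iteration converges to the clipped point `clip(c, 0, 1)`, which gives a quick correctness check before swapping in a more sophisticated stepsize rule.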

Efficient Warm-Start Strategies for Nash-based Linear Complementarity Problems via Bilinear Approximation

We present an effective warm-starting scheme for solving large linear complementarity problems (LCPs) arising from Nash equilibrium problems. The approach generates high-quality starting points that, when passed to the PATH solver, yield substantial reductions in computational time and variance. Our warm-start routine reformulates each agent’s LP using strong duality, leading to a master problem with …

Improved Analysis of Restarted Accelerated Gradient and Augmented Lagrangian Methods via Inexact Proximal Point Frameworks

This paper studies a class of double-loop (inner-outer) algorithms for convex composite optimization. For unconstrained problems, we develop a restarted accelerated composite gradient method that attains the optimal first-order complexity in both the convex and strongly convex settings. For linearly constrained problems, we introduce inexact augmented Lagrangian methods, including a basic method and an outer-accelerated …

A General Penalty-Method and a General Regularization-Method for Cardinality-Constrained Optimization Problems

We consider cardinality-constrained optimization problems (CCOPs), which are general nonlinear programs with an additional constraint limiting the number of nonzero continuous variables. The continuous reformulation of CCOPs involves complementarity constraints, which pose significant theoretical and computational challenges. To address these difficulties, we propose and analyze two numerical solution approaches: a general penalty method and a …