Bregman Regularized Proximal Point Methods for Computing Projected Solutions of Quasi-equilibrium Problems

In this paper, we propose two Bregman regularized proximal point methods that provide flexibility in computing projected solutions of quasi-equilibrium problems. Each iteration of either method involves a single Bregman projection onto the feasible set and the solution of a regularized equilibrium problem. Under standard assumptions, we prove that the methods are well defined and that the sequences they generate converge to a …
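As a much-simplified illustration of a Bregman proximal step (not the paper's method), the sketch below uses the KL/entropy Bregman distance on the probability simplex, where the regularized subproblem has a closed form; the equilibrium bifunction is specialized to a plain convex objective, and all names are illustrative.

```python
import numpy as np

def kl_prox_step(x, grad, lam):
    """One Bregman (KL/entropy) proximal step on the probability simplex:
    x+ = argmin_y <grad, y> + (1/lam) * KL(y, x) over the simplex,
    which has the closed form x+ proportional to x * exp(-lam * grad)."""
    z = x * np.exp(-lam * grad)
    return z / z.sum()

# toy convex objective f(x) = 0.5*||x - c||^2 over the simplex
c = np.array([0.7, 0.2, 0.1])
x = np.ones(3) / 3
for _ in range(1000):
    x = kl_prox_step(x, x - c, lam=1.0)   # gradient of f at x is x - c
# x approaches c, which already lies in the simplex
```

The closed form is what makes Bregman (rather than Euclidean) regularization attractive on geometries like the simplex.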

Preconditioned Proximal Gradient Methods with Conjugate Momentum: A Subspace Perspective

In this paper, we propose a descent method for composite optimization problems with linear operators. Specifically, we first design a structure-exploiting preconditioner tailored to the linear operator so that the resulting preconditioned proximal subproblem admits a closed-form solution through its dual formulation. However, such a structure-driven preconditioner may be poorly aligned with the local curvature …
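A toy stand-in for the idea of a preconditioner chosen so that the proximal subproblem stays closed-form: with a diagonal preconditioner (built from Gershgorin row sums, which majorize $A^\top A$), the preconditioned prox of the $\ell_1$ norm is still a coordinate-wise soft-threshold. This is a sketch under those assumptions, not the paper's preconditioner.

```python
import numpy as np

def soft(v, t):
    """Coordinate-wise soft-thresholding (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def precond_prox_grad(A, b, lam, iters=2000):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1 with a
    diagonal preconditioner: diag(P) majorizes A^T A by Gershgorin's
    theorem, and the P-weighted prox of the l1 norm remains closed form."""
    P = np.abs(A.T @ A).sum(axis=1)    # Gershgorin row sums of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft(x - g / P, lam / P)   # preconditioned prox step
    return x
```

With a non-diagonal structured preconditioner the same closed-form property is what the dual formulation is used to recover.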

Strong convergence, perturbation resilience and superiorization of Generalized Modular String-Averaging with infinitely many input operators

We study the strong convergence and bounded perturbation resilience of iterative algorithms based on the Generalized Modular String-Averaging (GMSA) procedure for infinite sequences of input operators under a general admissible control. These methods address a variety of feasibility-seeking problems in real Hilbert spaces, including the common fixed point problem and the convex feasibility problem. In …
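A minimal finite-dimensional sketch of string averaging for convex feasibility (the GMSA framework generalizes this far beyond two operators and fixed weights): each "string" applies metric projections in sequence, and the iterate is the average of the string endpoints.

```python
import numpy as np

def project_ball(x, center, r):
    """Metric projection onto a Euclidean ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

# two intersecting balls; strings apply their projections in different orders
balls = [(np.array([0.0, 0.0]), 1.0), (np.array([1.5, 0.0]), 1.0)]
strings = [[0, 1], [1, 0]]
x = np.array([5.0, 4.0])
for _ in range(100):
    ends = []
    for s in strings:
        y = x
        for i in s:                       # traverse one string
            y = project_ball(y, *balls[i])
        ends.append(y)
    x = np.mean(ends, axis=0)             # average the string endpoints
# x lands in the intersection of the two balls
```

The modular/string structure is what makes these schemes parallelizable and resilient to bounded perturbations.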

Solving Chance Constrained Programs via a Penalty based Difference of Convex Approach

We develop two penalty-based difference-of-convex (DC) algorithms for solving chance constrained programs. First, leveraging a rank-based DC decomposition of the chance constraint, we propose a proximal penalty-based DC algorithm in the primal space that does not require a feasible initialization. Second, to improve numerical stability in general nonlinear settings, we …
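The rank-based DC fact behind such decompositions can be checked numerically: the $k$-th largest entry of a vector equals the difference of two convex "sum of top-$k$" functions, so a scenario-counting chance constraint ("the $k$-th largest violation is nonpositive") admits a DC representation amenable to DCA-type penalty methods. A small sanity check:

```python
import numpy as np

def top_k_sum(z, k):
    """Sum of the k largest entries of z -- a convex function of z."""
    return np.sort(z)[-k:].sum() if k > 0 else 0.0

rng = np.random.default_rng(1)
z = rng.normal(size=10)
k = 3
# k-th largest = T_k(z) - T_{k-1}(z): a difference of convex functions
kth_largest = np.sort(z)[-k]
assert np.isclose(top_k_sum(z, k) - top_k_sum(z, k - 1), kth_largest)
```

The decomposition matters because each convex piece can be linearized separately inside a DC algorithm.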

A Successive Proximal DC Penalty Method with an Application to Mathematical Programs with Complementarity Constraints

We develop a successive proximal difference-of-convex (DC) penalty method for solving DC programs with DC constraints. The proposed approach relies on a DC penalty function that measures the violation of the constraints and leads to a penalty reformulation sharing the same solution set as the original problem. The resulting penalty problem is a DC program …
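For the MPCC application, one natural DC penalty term (an illustration, not necessarily the paper's exact choice) is the complementarity residual $\min(x, y)$ for $x, y \ge 0$, which is itself a difference of convex functions:

```python
# min(x, y) = 0.5*(x + y) - 0.5*|x - y|: a difference of the convex
# functions 0.5*(x + y) and 0.5*|x - y|, so penalizing the residual of
# the complementarity condition 0 <= x, 0 <= y, x*y = 0 yields a DC
# penalty problem.
def dc_min(x, y):
    return 0.5 * (x + y) - 0.5 * abs(x - y)

print(dc_min(3.0, 0.0))   # complementary pair: zero penalty
print(dc_min(2.0, 1.0))   # violated pair: penalized by the smaller value
```

Because the penalty is DC, the penalized problem stays in the DC-programming class and can be attacked by successive convex linearizations.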

On the Complexity of Subgradient Methods for Trilevel Hierarchical Generalized Variational Inequalities

We study generalized variational inequalities with a three-level hierarchical structure. This setting extends nested GVI models beyond the bilevel case, for which $\mathcal{O}(\delta^{-4})$ complexity bounds are known for any prescribed positive tolerance $\delta$, to a fully three-level hierarchical structure. We analyze a projected averaged subgradient method combined with a Tikhonov-like regularization scheme. Under compactness, maximal …
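A one-level toy version of the regularization idea (the paper's setting is three-level and far more general): for a merely monotone operator, plain projected iterations can cycle, while adding a Tikhonov term $\varepsilon x$ restores strong monotonicity and drives the projected iterates to the regularized solution.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])            # skew: monotone, not strongly
F = lambda x: A @ x
proj = lambda x: x / max(1.0, np.linalg.norm(x))   # projection onto the unit ball

eps, t = 0.1, 0.1                                   # regularization and stepsize
x = np.array([1.0, 0.5])
avg = np.zeros(2)
for k in range(3000):
    x = proj(x - t * (F(x) + eps * x))   # Tikhonov-regularized projected step
    avg += (x - avg) / (k + 1)           # ergodic (averaged) iterate
# x approaches the regularized solution x* = 0
```

Driving $\varepsilon$ to zero along the iterations, as Tikhonov-like schemes do, recovers a solution of the original problem; a constant $\varepsilon$ is used here only to keep the sketch short.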

On Stationarity Conditions and the Convergence of Augmented Lagrangian Methods for Generalized Nash Equilibrium Problems

In this work, we study stationarity conditions and constraint qualifications (CQs) tailored to Generalized Nash Equilibrium Problems (GNEPs) and analyze their relationships and implications for the global convergence of algorithms. We recall that GNEPs generalize Nash Equilibrium Problems (NEPs) in that the feasible strategy set of each player depends on the strategies chosen by the …

A Multi-Secant Limited-Memory BFGS Method

We develop a multi-secant BFGS-like quasi-Newton updating scheme that adaptively selects the number of imposed secant conditions and naturally preserves positive definiteness of the Hessian approximation. A compact representation and a corresponding limited-memory formulation are also derived. Numerical stability is ensured via an unconventional damping technique that handles coordinate and gradient differences symmetrically. The practical relevance of the proposed method is demonstrated via …

A Modified Projected Gradient Algorithm for Solving Quasiconvex Programming with Applications

In this manuscript, we introduce a novel projected gradient algorithm for solving quasiconvex optimization problems over closed convex sets. The key innovation of the new algorithm is an adaptive, parameter-free stepsize rule that requires no line search and avoids estimating constants such as the Lipschitz modulus. Unlike the recent self-adaptive approach given in [17], which typically produces …
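One well-known adaptive, line-search-free rule in this spirit estimates the step from observed gradient differences (as in Malitsky and Mishchenko's adaptive gradient descent); the sketch below combines it with a projection, as a stand-in only, since the paper's exact rule is not reproduced here.

```python
import numpy as np

def adaptive_proj_grad(gradf, proj, x0, iters=500):
    """Projected gradient with a stepsize inferred from ||dx|| / ||dg||,
    so no Lipschitz constant is estimated and no line search is run."""
    x_prev, g_prev = x0, gradf(x0)
    lam_prev, theta = 1e-6, 0.0              # tiny safe initial step
    x = proj(x_prev - lam_prev * g_prev)
    for _ in range(iters):
        g = gradf(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dg > 0:
            lam = min(np.sqrt(1 + theta) * lam_prev,   # controlled growth
                      dx / (2 * dg))                   # local curvature bound
        else:
            lam = lam_prev                   # already (near-)stationary
        theta = lam / lam_prev
        x_prev, g_prev, lam_prev = x, g, lam
        x = proj(x - lam * g)
    return x

# box-constrained toy problem: min 0.5*||x - c||^2 over [0, 1]^2
c = np.array([2.0, 0.5])
sol = adaptive_proj_grad(lambda x: x - c,
                         lambda x: np.clip(x, 0.0, 1.0),
                         np.zeros(2))
```

The quotient $\|dx\|/(2\|dg\|)$ plays the role of an online inverse-Lipschitz estimate, which is what makes the rule parameter-free.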

Negative Curvature Methods with High-Probability Complexity Guarantees for Stochastic Nonconvex Optimization

This paper develops negative curvature methods for continuous nonlinear unconstrained optimization in stochastic settings, in which function, gradient, and Hessian information is available only through probabilistic oracles, i.e., oracles that return approximations of a certain accuracy and reliability. We introduce conditions on these oracles and design a two-step framework that systematically combines gradient and negative …
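A deterministic toy version of the two-step idea (exact derivatives stand in for the paper's probabilistic oracles): take a gradient step, and when the Hessian has a sufficiently negative eigenvalue, also step along the corresponding eigenvector with the descent sign. On a strict saddle, the curvature step is what provides escape.

```python
import numpy as np

def nc_iterate(x, grad, hess, alpha=0.1, iters=200):
    """Simplified gradient + negative-curvature iteration."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        w, V = np.linalg.eigh(H)              # ascending eigenvalues
        x = x - alpha * g                     # gradient step
        if w[0] < -1e-8:                      # negative curvature detected
            v = V[:, 0]
            v = -v if v @ g > 0 else v        # orient for descent
            x = x + alpha * (-w[0]) * v       # curvature step
    return x

# double well with a strict saddle at the origin:
# f(x) = 0.25*(x1^2 - 1)^2 + 0.5*x2^2
grad = lambda x: np.array([x[0] * (x[0]**2 - 1), x[1]])
hess = lambda x: np.diag([3 * x[0]**2 - 1, 1.0])
x = nc_iterate(np.zeros(2), grad, hess)
# pure gradient descent started at the exact saddle would stay put;
# the curvature step moves x toward a minimizer at x1 = +/-1, x2 = 0
```

In the stochastic setting the eigenpair and gradient are only approximate, which is exactly what the paper's oracle conditions and high-probability analysis address.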