Second order directional derivative of optimal solution function in parametric programming problem

In this paper, the second-order directional derivatives of the optimal value function and of the optimal solution function are obtained for a strongly stable parametric problem with non-unique Lagrange multipliers. Some properties of the Lagrange multipliers are proved. It is shown that the second-order directional derivative of the optimal solution function for the parametric problem can …
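
For orientation, the parametric program in question can be thought of in the standard form (our notation, not taken from the excerpt above)

\[
\varphi(t) \;=\; \min_{x \in \mathbb{R}^n} \; f(x,t)
\quad \text{s.t.} \quad g_i(x,t) \le 0, \; i = 1,\dots,m,
\]

with optimal value function \varphi and optimal solution function x(t). The object studied is then the second-order (parabolic) directional derivative of x(\cdot), in one common definition

\[
x''(t; d) \;=\; \lim_{s \downarrow 0} \frac{x(t + s d) - x(t) - s\, x'(t; d)}{\tfrac{1}{2} s^2},
\]

whose existence and computation become delicate when the Lagrange multipliers at x(t) are not unique.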

A Primal Approach to Facial Reduction for SDP Relaxations of Combinatorial Optimization Problems

We propose a novel facial reduction algorithm tailored to semidefinite programming relaxations of combinatorial optimization problems with quadratic objective functions. Our method leverages the specific structure of these relaxations, particularly the availability of feasible solutions that can often be generated efficiently in practice. By incorporating such solutions into the facial reduction process, we substantially simplify …
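
As a rough illustration of the primal idea (generic facial-reduction notation, not necessarily the authors' exact construction): if \hat X is a feasible point of the relaxation lying in the relative interior of the feasible set \mathcal{F}, then every feasible X satisfies \operatorname{range}(X) \subseteq \operatorname{range}(\hat X), so

\[
\mathcal{F} \;\subseteq\; \{\, V Y V^{\top} \,:\, Y \succeq 0 \,\}, \qquad V \text{ a basis of } \operatorname{range}(\hat X),
\]

and substituting X = V Y V^{\top} yields an equivalent SDP over smaller matrices for which strict feasibility can be restored. Feasible solutions that can be generated efficiently in practice give concrete candidates for \hat X, possibly after averaging several of them.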

Non-smooth stochastic gradient descent using smoothing functions

In this paper, we address stochastic optimization problems involving a composition of a non-smooth outer function and a smooth inner function, a formulation frequently encountered in machine learning and operations research. To deal with the non-differentiability of the outer function, we approximate the original non-smooth function using smoothing functions, which are continuously differentiable and approach …
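
A minimal sketch of the smoothing idea, in assumed notation: for a composite objective \mathbb{E}_\xi[\phi(g(x,\xi))] with \phi nonsmooth (say \phi = |\cdot|), one may work with a smooth surrogate such as

\[
\phi_\mu(t) \;=\; \sqrt{t^2 + \mu^2}, \qquad 0 \;\le\; \phi_\mu(t) - |t| \;\le\; \mu,
\]

so that stochastic gradients \nabla_x \phi_\mu(g(x,\xi)) = \phi_\mu'(g(x,\xi))\, \nabla_x g(x,\xi) exist everywhere and the approximation error is controlled by the smoothing parameter \mu; the specific smoothing functions used in the paper may of course differ.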

Cooperative vs Noncooperative Scenarios in multi-objective Potential games: the multi-portfolio context

We focus on multi-agent, multi-objective problems, particularly on those where the objectives admit a potential structure. We show that the solution to the potential multi-objective problem is always a noncooperative optimum for the multi-agent setting. Furthermore, we identify a class of problems for which every noncooperative solution can be computed via the potential problem. We …
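
For readers unfamiliar with the term, an exact potential structure in the single-objective case means there is one function P with

\[
u_i(x_i, x_{-i}) - u_i(y_i, x_{-i}) \;=\; P(x_i, x_{-i}) - P(y_i, x_{-i})
\qquad \text{for every player } i \text{ and strategies } x_i, y_i, x_{-i},
\]

so optimizers of the single potential P are Nash (noncooperative) equilibria of the game; the abstract's claim is the multi-objective analogue, where solutions of a potential multi-objective problem are noncooperative optima of the multi-agent problem.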

A new insight on the prediction-correction framework with applications to first-order methods

The prediction-correction framework developed in [B. He, Splitting Contraction Algorithm for Convex Optimization, Science Press, 2025] is a simple yet powerful tool for analyzing the convergence of diverse first-order optimization methods, including the Augmented Lagrangian Method (ALM) and the Alternating Direction Method of Multipliers (ADMM). In this paper, we propose a generalized prediction-correction framework featuring …
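
Roughly, in He's framework an iteration is split into a prediction producing \tilde w^k and a correction (a standard form is sketched below; the generalization proposed in the paper may relax parts of it):

\[
\text{(prediction)}\quad (w - \tilde w^k)^{\top} F(\tilde w^k) \;\ge\; (w - \tilde w^k)^{\top} Q\,(w^k - \tilde w^k) \quad \forall\, w \in \Omega,
\qquad
\text{(correction)}\quad w^{k+1} \;=\; w^k - M\,(w^k - \tilde w^k),
\]

where F is the operator of the variational-inequality reformulation of the convex problem and convergence is driven by algebraic conditions on the matrices Q and M; methods such as ALM and ADMM are recovered by particular choices of Q and M.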

A strongly convergent projection and contraction algorithm with extrapolations from the past

This paper introduces a projection and contraction-type algorithm that features an extrapolation from the past, reducing the two evaluations of the cost operator required by the original projection and contraction algorithm to a single evaluation at the current iteration. Strong convergence results of the proposed algorithm are proved in Hilbert spaces. Experimental results on testing …
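
One plausible reading of "extrapolation from the past", in generic variational-inequality notation with operator F, feasible set C and stepsize \lambda (not necessarily the authors' exact scheme), is

\[
y^{k} \;=\; P_C\bigl(x^k - \lambda\, F(y^{k-1})\bigr), \qquad
d^{k} \;=\; x^k - y^k - \lambda\bigl(F(y^{k-1}) - F(y^k)\bigr), \qquad
x^{k+1} \;=\; x^k - \gamma\, \beta_k\, d^{k},
\]

so that F(y^{k-1}) is reused from the previous iteration and F(y^k) is the only new operator evaluation, in contrast with the classical projection and contraction method, which evaluates F at both x^k and y^k.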

A first-order method for nonconvex-nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition

We study a class of nonconvex–nonconcave minimax problems in which the inner maximization problem satisfies a local Kurdyka–Łojasiewicz (KL) condition that may vary with the outer minimization variable. In contrast to the global KL or Polyak–Łojasiewicz (PL) conditions commonly assumed in the literature—which are significantly stronger and often too restrictive in practice—this local KL condition …
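
For orientation, one common way to state such a condition for the inner problem \max_y f(x,y) is (our phrasing; the paper's exact assumptions may differ):

\[
\Bigl(\max_{y'} f(x, y') - f(x, y)\Bigr)^{\theta} \;\le\; c(x)\,\bigl\|\nabla_y f(x, y)\bigr\|
\qquad \text{for all } y \text{ in a neighborhood of the inner solution set},
\]

with some exponent \theta (the case \theta = 1/2 with a uniform constant recovers the PL condition); "local" refers to the inequality holding only near the solution set, with a modulus c(x) that may depend on the outer variable x rather than being global and uniform.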

General Perturbation Resilient Dynamic String-Averaging for Inconsistent Problems with Superiorization

In this paper we introduce a General Dynamic String-Averaging (GDSA) iterative scheme and investigate its convergence properties in the inconsistent case, that is, when the input operators do not have a common fixed point. The Dynamic String-Averaging Projection (DSAP) algorithm itself was introduced in a 2013 paper, where its strong convergence and bounded perturbation resilience were …
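
To fix ideas, a dynamic string-averaging step built from operators T_1, \dots, T_N (in DSAP, projections onto the constraint sets) has the generic form

\[
x^{k+1} \;=\; \sum_{t=1}^{q_k} \omega_t^{k}\,\bigl(T_{i^{t}_{m_t}} \circ \cdots \circ T_{i^{t}_{1}}\bigr)(x^{k}),
\qquad \omega_t^{k} > 0,\;\; \sum_{t=1}^{q_k} \omega_t^{k} = 1,
\]

i.e. the operators are applied sequentially along each "string" (i^t_1, \dots, i^t_{m_t}) and the string endpoints are averaged, with strings and weights allowed to change from iteration to iteration; the GDSA scheme here concerns general input operators, possibly without a common fixed point, and perturbation-resilient (superiorized) versions of the iteration.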

Swapping objectives accelerates Davis-Yin splitting

In this work, we investigate the application of Davis–Yin splitting (DYS) to convex optimization problems and demonstrate that swapping the roles of the two nonsmooth convex functions can result in a faster convergence rate. Such a swap typically yields a different sequence of iterates, but its impact on convergence behavior has been largely understudied or …
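
Recall that, for \min_x f(x) + g(x) + h(x) with f, g nonsmooth convex and h smooth, one standard form of the DYS iteration with stepsize \gamma > 0 is

\[
x_f^{k} = \operatorname{prox}_{\gamma f}(z^k), \qquad
x_g^{k} = \operatorname{prox}_{\gamma g}\bigl(2 x_f^{k} - z^k - \gamma \nabla h(x_f^{k})\bigr), \qquad
z^{k+1} = z^k + x_g^{k} - x_f^{k},
\]

so exchanging which of the two nonsmooth functions is handled by the first prox and which by the second generally produces a different sequence of iterates; that asymmetry is the swap whose effect on the convergence rate is studied here.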

Preconditioning for rational approximation

In this paper, we show that minimax rational approximations can be enhanced by introducing a controlling parameter on the denominator of the rational function. This is implemented by adding a small set of linear constraints to the underlying optimization problem. The modification integrates naturally into approximation models formulated as linear programming problems. We demonstrate our …
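
One way such a denominator control could enter a linear-programming formulation (illustrative only; the paper's exact constraints are not shown in this excerpt): with p and q polynomials and sample points x_1, \dots, x_m, for a fixed error level \varepsilon the constraints

\[
-\varepsilon\, q(x_j) \;\le\; f(x_j)\, q(x_j) - p(x_j) \;\le\; \varepsilon\, q(x_j),
\qquad \delta \;\le\; q(x_j) \;\le\; 1, \qquad j = 1,\dots,m,
\]

are linear in the coefficients of p and q, and the lower bound \delta on the denominator plays the role of a controlling parameter; \varepsilon itself can then be driven down, e.g. by bisection, as in differential-correction-type methods.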