Supervised feature selection via multiobjective programming and its application in the medical field

In this study, we model the supervised feature selection problem using a novel approach: convex bi-objective optimization. Traditional methods have addressed this problem by maximizing relevance to class labels and minimizing redundancy among features. Recently, Wang et al. [30] formulated this problem as a single-objective convex optimization problem, yielding only a single solution. In contrast, we … Read more
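The relevance/redundancy trade-off mentioned above can be illustrated with a small sketch. Correlation-based scores and a greedy weighted-sum scalarization are common stand-ins here, not necessarily the paper's exact objectives; sweeping the weight `w` over (0, 1) traces different trade-offs between the two goals, which is the intuition behind a bi-objective (Pareto) formulation.

```python
import numpy as np

def relevance_redundancy(X, y):
    """Correlation-magnitude relevance (feature vs. label) and
    redundancy (feature vs. feature) scores; a common choice for the
    two objectives, not necessarily the paper's exact measures."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    n = len(y)
    rel = np.abs(Xc.T @ yc) / n          # |corr(x_j, y)|
    red = np.abs(Xc.T @ Xc) / n          # |corr(x_i, x_j)|
    return rel, red

def weighted_sum_select(rel, red, w, k):
    """Greedy weighted-sum scalarization: score = w * relevance
    - (1 - w) * mean redundancy to the already-selected set."""
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(len(rel)):
            if j in selected:
                continue
            r = np.mean([red[j, i] for i in selected]) if selected else 0.0
            score = w * rel[j] - (1 - w) * r
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# toy data: feature 1 duplicates feature 0; feature 2 is less relevant
# but carries independent information, so it wins at w = 0.5
yc = np.array([1.0, 1.0, -1.0, -1.0])
u = np.array([1.0, -1.0, 1.0, -1.0])
v = np.array([1.0, -1.0, -1.0, 1.0])
f0 = 0.9 * yc + np.sqrt(0.19) * u
X = np.column_stack([f0, f0, 0.6 * yc + 0.8 * v])
rel, red = relevance_redundancy(X, yc)
selected = weighted_sum_select(rel, red, w=0.5, k=2)
```

Note how the exact duplicate (feature 1) is skipped in favor of the weaker but non-redundant feature 2, which is precisely the behavior a single best-compromise solution cannot expose across all trade-offs.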

A semi-smooth Newton method for the nonlinear conic problem with generalized simplicial cones

In this work we develop and analyze a semi-smooth Newton method for the general nonlinear conic programming problem. In particular, we study the problem with a generalized simplicial cone, i.e., the image of a symmetric cone under a linear mapping. We generalize Robinson’s normal equations to a conic setting, yielding what we call the … Read more
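For context, the classical normal-map equation of Robinson, which the abstract says is being generalized to the conic setting, reads as follows for a variational inequality over a closed convex set $C$ (the paper's conic generalization may differ in detail):

```latex
F^{\mathrm{nor}}_{C}(z) \;=\; F\bigl(\Pi_{C}(z)\bigr) + z - \Pi_{C}(z) \;=\; 0,
```

where $\Pi_{C}$ is the Euclidean projection onto $C$; if $z^{*}$ solves the normal equation, then $x^{*} = \Pi_{C}(z^{*})$ solves the underlying variational inequality. Semi-smooth Newton methods exploit the fact that $\Pi_{C}$, while nonsmooth, is semi-smooth for many cones of interest.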

Complexity of an inexact stochastic SQP algorithm for equality constrained optimization

In this paper, we consider nonlinear optimization problems with a stochastic objective function and deterministic equality constraints. We propose an inexact two-stepsize stochastic sequential quadratic programming (SQP) algorithm and analyze its worst-case complexity under mild assumptions. The method utilizes a step decomposition strategy and handles stochastic gradient estimates by assigning different stepsizes to different components … Read more
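The step-decomposition idea with separate stepsizes can be sketched as follows. This is a simplified illustration, not the paper's exact scheme: the normal component reduces the constraint violation, the tangential component reduces the objective using a (possibly stochastic) gradient estimate, and each gets its own stepsize.

```python
import numpy as np

def sqp_decomposed_step(x, g_est, c, J, alpha_n, alpha_t):
    """One decomposed SQP-type step (sketch): normal step toward
    feasibility, tangential step for objective decrease, each with
    its own stepsize."""
    JJt = J @ J.T
    v = -J.T @ np.linalg.solve(JJt, c)                  # normal: Gauss-Newton toward c(x) = 0
    w = -g_est + J.T @ np.linalg.solve(JJt, J @ g_est)  # tangential: -g_est projected onto null(J)
    return x + alpha_n * v + alpha_t * w

# toy problem: min 0.5*||x||^2  s.t.  x1 + x2 = 1  (solution [0.5, 0.5]);
# exact gradients are used here for reproducibility; in the stochastic
# setting g_est would be a noisy estimate
J = np.array([[1.0, 1.0]])
x = np.array([2.0, 0.0])
for _ in range(300):
    x = sqp_decomposed_step(x, g_est=x, c=np.array([x.sum() - 1.0]),
                            J=J, alpha_n=1.0, alpha_t=0.1)
```

With `alpha_n = 1` the normal step restores feasibility of the linearized constraint in one shot, after which the tangential steps drive the iterate along the feasible set toward the minimizer.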

An objective-function-free algorithm for nonconvex stochastic optimization with deterministic equality and inequality constraints

An algorithm is proposed for solving optimization problems with stochastic objective and deterministic equality and inequality constraints. This algorithm is objective-function-free in the sense that it only uses the objective’s gradient and never evaluates the function value. It is based on an adaptive selection of function-decreasing and constraint-improving iterations, the first ones using an Adagrad-type … Read more
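The "objective-function-free" ingredient can be illustrated with a bare Adagrad-type update, which consumes only gradients and never evaluates f. This sketch shows just the unconstrained update; the paper's handling of equality and inequality constraints via constraint-improving iterations is not reproduced here.

```python
import numpy as np

def adagrad_step(x, g, accum, lr=0.5, eps=1e-8):
    """One Adagrad-type update: only the gradient g is used; the
    objective value f(x) is never evaluated."""
    accum = accum + g * g
    return x - lr * g / (np.sqrt(accum) + eps), accum

# toy run on f(x) = ||x||^2 (gradient 2x) with zero f-evaluations
x = np.array([3.0, -2.0])
accum = np.zeros_like(x)
for _ in range(500):
    x, accum = adagrad_step(x, 2.0 * x, accum)
```

Because the stepsize is scaled by the accumulated squared gradients, no Lipschitz constant or line search (both of which would need function values) is required.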

Preconditioned Proximal Gradient Methods with Conjugate Momentum: A Subspace Perspective

In this paper, we propose a descent method for composite optimization problems with linear operators. Specifically, we first design a structure-exploiting preconditioner tailored to the linear operator so that the resulting preconditioned proximal subproblem admits a closed-form solution through its dual formulation. However, such a structure-driven preconditioner may be poorly aligned with the local curvature … Read more
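The closed-form preconditioned prox step can be sketched for the simplest structured case, a diagonal preconditioner with an l1 regularizer; this is a stand-in for the paper's structure-exploiting preconditioner, chosen because the P-weighted l1-prox remains a coordinate-wise soft-threshold.

```python
import numpy as np

def prec_prox_grad_step(x, grad, P_diag, lam):
    """One preconditioned proximal gradient step for
    min f(x) + lam*||x||_1 with diagonal preconditioner P:
    the P-weighted prox is a closed-form soft-threshold with
    per-coordinate threshold lam / P_ii."""
    z = x - grad / P_diag                                # preconditioned forward step
    t = lam / P_diag                                     # coordinate-wise thresholds
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # weighted prox (backward step)

# sanity demo with f(x) = 0.5*||x - b||^2 (so P = I): one step from 0
# lands on the exact solution, the soft-threshold of b
b = np.array([3.0, -0.5, 1.2])
x = prec_prox_grad_step(np.zeros(3), grad=-b, P_diag=np.ones(3), lam=1.0)
```

The design question the abstract raises is visible even here: a preconditioner chosen for subproblem solvability (diagonal, here) need not match the local curvature of f, which is what motivates adding a momentum or subspace correction on top.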

Strong convergence, perturbation resilience and superiorization of Generalized Modular String-Averaging with infinitely many input operators

We study the strong convergence and bounded perturbation resilience of iterative algorithms based on the Generalized Modular String-Averaging (GMSA) procedure for infinite sequences of input operators under a general admissible control. These methods address a variety of feasibility-seeking problems in real Hilbert spaces, including the common fixed point problem and the convex feasibility problem. In … Read more
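The classical string-averaging sweep that GMSA generalizes (to modular strings and infinitely many input operators) can be sketched concretely for a two-set convex feasibility problem:

```python
import numpy as np

def string_averaging_sweep(x, strings, weights):
    """One sweep of classical string-averaging with finitely many
    operators: apply each string's operators sequentially from x,
    then take a convex combination of the strings' endpoints."""
    endpoints = []
    for ops in strings:
        z = x
        for T in ops:
            z = T(z)
        endpoints.append(z)
    return sum(w * e for w, e in zip(weights, endpoints))

# convex feasibility demo: projections onto A = {x : x[0] = 1} and
# B = {x : x[1] = 2}; both strings end in the intersection (1, 2)
PA = lambda x: np.array([1.0, x[1]])
PB = lambda x: np.array([x[0], 2.0])
x = string_averaging_sweep(np.array([5.0, -3.0]),
                           strings=[[PA, PB], [PB, PA]],
                           weights=[0.5, 0.5])
```

Running the strings in different orders and averaging is what makes the scheme amenable to parallelization and, as the abstract notes, to bounded-perturbation-resilience and superiorization analyses.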

A Successive Proximal DC Penalty Method with an Application to Mathematical Programs with Complementarity Constraints

We develop a successive, proximal difference-of-convex (DC) function penalty method for solving DC programs with DC constraints. The proposed approach relies on a DC penalty function that measures the violation of constraints and leads to a penalty reformulation sharing the same solution set as the original problem. The resulting penalty problem is a DC program … Read more
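The basic proximal DC iteration underlying such methods can be sketched as follows; the penalty term measuring constraint violation would be folded into the convex parts, which is not reproduced in this minimal example.

```python
import numpy as np

def proximal_dca(x0, subgrad_h, solve_subproblem, rho=1.0, iters=50):
    """Proximal DC algorithm for min g(x) - h(x), g and h convex:
    linearize the concave part -h via a subgradient s of h at x_k,
    then solve the convex proximal subproblem
        min_x g(x) - <s, x> + (rho/2)*||x - x_k||^2."""
    x = x0
    for _ in range(iters):
        s = subgrad_h(x)
        x = solve_subproblem(x, s, rho)
    return x

# toy DC program: min 0.5*x^2 - |x|  (g = 0.5*x^2, h = |x|); the
# subproblem is quadratic with closed form x = (s + rho*x_k)/(1 + rho)
x_star = proximal_dca(
    x0=3.0,
    subgrad_h=lambda x: 1.0 if x >= 0 else -1.0,
    solve_subproblem=lambda xk, s, rho: (s + rho * xk) / (1 + rho),
)
```

Starting from x0 = 3 the iterates halve their distance to the stationary point x = 1 at every step, illustrating why each convex subproblem only needs to be solved once per outer iteration.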

On Stationarity Conditions and the Convergence of Augmented Lagrangian Methods for Generalized Nash Equilibrium Problems

In this work, we study stationarity conditions and constraint qualifications (CQs) tailored to Generalized Nash Equilibrium Problems (GNEPs) and analyze their relationships and implications for the global convergence of algorithms. We recall that GNEPs generalize Nash Equilibrium Problems (NEPs) in that the feasible strategy set of each player depends on the strategies chosen by the … Read more

A Modified Projected Gradient Algorithm for Solving Quasiconvex Programming with Applications

In this manuscript, we introduce a novel projected gradient algorithm for solving quasiconvex optimization problems over closed convex sets. The key innovation of our new algorithm is an adaptive, parameter-free stepsize rule that requires no line search and avoids estimating constants such as the Lipschitz modulus. Unlike the recent self-adaptive approach given in [17], which typically produces … Read more
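A line-search-free adaptive stepsize of this general flavor can be sketched as follows. The rule below, which caps the stepsize by a local inverse-Lipschitz estimate built from successive gradients, is in the spirit of self-adaptive schemes and is not the exact rule proposed in the paper; the demo also uses a convex quadratic rather than a genuinely quasiconvex objective, purely for a verifiable outcome.

```python
import numpy as np

def projected_adaptive_gd(grad, proj, x0, iters=200, lam0=1e-3):
    """Projected gradient with an adaptive, line-search-free stepsize:
    lam may grow by sqrt(2) per iteration but is capped by
    ||dx|| / (2*||dg||), a local estimate of 1/(2L)."""
    x_prev, g_prev, lam = x0, grad(x0), lam0
    x = proj(x_prev - lam * g_prev)
    for _ in range(iters):
        g = grad(x)
        dx, dg = x - x_prev, g - g_prev
        ndg = np.linalg.norm(dg)
        if ndg > 0:
            lam = min(np.sqrt(2.0) * lam,
                      np.linalg.norm(dx) / (2.0 * ndg))
        x_prev, g_prev = x, g
        x = proj(x - lam * g)
    return x

# demo: min 0.5*||x - c||^2 over the box [0, 1]^3; the solution clips c
c = np.array([2.0, -1.0, 0.3])
x = projected_adaptive_gd(grad=lambda x: x - c,
                          proj=lambda x: np.clip(x, 0.0, 1.0),
                          x0=np.zeros(3))
```

No Lipschitz constant is supplied and no function values are compared: the stepsize is recovered on the fly from observed gradient differences, which is the practical appeal of such rules.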

Efficient Warm-Start Strategies for Nash-based Linear Complementarity Problems via Bilinear Approximation

We present an effective warm-starting scheme for solving large linear complementarity problems (LCPs) arising from Nash equilibrium problems. The approach generates high-quality starting points that, when passed to the PATH solver, yield substantial reductions in computational time and variance. Our warm-start routine reformulates each agent’s LP using strong duality, leading to a master problem with … Read more