Supervised feature selection via multiobjective programming and its application in the medical field

In this study, we model the supervised feature selection problem using a novel approach: convex bi-objective optimization. Traditional methods address this problem by maximizing relevance to the class labels while minimizing redundancy among features. Recently, Wang et al. [30] formulated the problem as a single-objective convex optimization, which yields only a single solution. In contrast, we …
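The relevance/redundancy trade-off that traditional methods scalarize can be sketched as follows. This is an illustrative greedy scalarization (mRMR-style) on synthetic data, not the paper's bi-objective formulation; the data, the weight `w`, and the helper `select_features` are all assumptions for the example.

```python
import numpy as np

# Toy data: the label depends on features 0 and 1 only (illustrative).
rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

C = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature |correlation| (redundancy)
rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])  # relevance to label

def select_features(w, k=3):
    """Greedily pick k features, scalarizing the two objectives:
    maximize  w * relevance - (1 - w) * redundancy."""
    chosen = [int(np.argmax(rel))]
    while len(chosen) < k:
        scores = {j: w * rel[j] - (1 - w) * C[j, chosen].mean()
                  for j in range(p) if j not in chosen}
        chosen.append(max(scores, key=scores.get))
    return sorted(chosen)

print(select_features(w=0.9))
```

Sweeping the weight `w` traces different trade-offs between the two objectives, which is the single-solution limitation a genuinely bi-objective model avoids.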

Second-Order Optimality Conditions for Bilevel Optimization Problems Using Parabolic Directional Derivatives

This paper studies the second-order properties of a class of inequality-constrained bilevel programming problems. In the seminal paper by Dempe (1992), first-order optimality conditions for the existence of solutions to bilevel optimization problems were derived using the first-order directional derivative of the optimal solution function of the lower-level problem. In this work, we prove that …
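For orientation, the problem class and the key derivative notion can be sketched in generic notation (assumed for illustration, not taken from the paper). An inequality-constrained bilevel program reads

\[
\min_{x}\; F(x, y(x)) \quad \text{s.t.} \quad G(x) \le 0, \qquad y(x) \in \operatorname{argmin}_{y} \{\, f(x, y) : g(x, y) \le 0 \,\},
\]

and the parabolic second-order directional derivative of a function \(\varphi\) at \(x\) for the direction pair \((d, w)\) is

\[
\varphi''(x; d, w) = \lim_{t \downarrow 0} \frac{\varphi(x + t d + \tfrac{1}{2} t^{2} w) - \varphi(x) - t\, \varphi'(x; d)}{\tfrac{1}{2} t^{2}}.
\]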

Convergence of the Frank-Wolfe Algorithm for Monotone Variational Inequalities

We consider the Frank-Wolfe algorithm for solving variational inequalities over compact, convex sets under a monotone \(C^1\) operator and vanishing, nonsummable step sizes. We introduce a continuous-time interpolation of the discrete iteration and use tools from dynamical systems theory to analyze its asymptotic behavior. This allows us to derive convergence results for the original discrete …
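A minimal sketch of the discrete iteration the paper analyzes, with the classic vanishing, nonsummable steps \( \gamma_k = 2/(k+2) \). For simplicity the operator here is the gradient of a convex quadratic over the probability simplex (a monotone special case chosen so convergence is easy to check), not the general \(C^1\) monotone operator of the paper.

```python
import numpy as np

# Frank-Wolfe for VI(F, C): seek x* in C with <F(x*), y - x*> >= 0 for all y in C.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T                         # symmetric PSD, so F below is monotone
b = rng.standard_normal(n)
F = lambda x: A @ x + b

x = np.full(n, 1.0 / n)             # start at the barycenter of the simplex
best_gap = np.inf
for k in range(5000):
    g = F(x)
    gap = g @ x - g.min()           # dual gap: max_{y in C} <F(x), x - y>
    best_gap = min(best_gap, gap)
    s = np.zeros(n)
    s[np.argmin(g)] = 1.0           # linear minimization oracle over the simplex
    x += (2.0 / (k + 2)) * (s - x)  # vanishing, nonsummable step sizes
print(best_gap)
```

The dual gap vanishes exactly at a solution of the variational inequality, so tracking its best value gives a cheap convergence certificate.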

Negative Momentum for Convex-Concave Optimization

This paper revisits momentum in the context of min-max optimization. Momentum is a celebrated mechanism for accelerating gradient dynamics in settings such as convex minimization, but its direct use in min-max optimization makes gradient dynamics diverge. Surprisingly, Gidel et al. (2019) showed that negative momentum can help restore convergence. However, despite these promising initial results and …
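The phenomenon is easy to see on the scalar bilinear game \( \min_x \max_y xy \): simultaneous gradient descent-ascent spirals outward, while alternating updates with a negative heavy-ball momentum contract toward the equilibrium \((0,0)\). The stepsize and momentum values below are illustrative choices, not parameters from the papers cited.

```python
# Scalar bilinear game min_x max_y x*y; the unique equilibrium is (0, 0).
eta, beta = 0.5, -0.5   # stepsize and (negative) momentum, illustrative values

# Simultaneous gradient descent-ascent: spirals outward (diverges).
x, y = 1.0, 1.0
for _ in range(400):
    x, y = x - eta * y, y + eta * x
sim_dist = (x * x + y * y) ** 0.5

# Alternating updates with heavy-ball negative momentum: contracts to (0, 0).
x, y = 1.0, 1.0
x_prev, y_prev = x, y
for _ in range(400):
    x_new = x - eta * y + beta * (x - x_prev)
    y_new = y + eta * x_new + beta * (y - y_prev)
    x_prev, y_prev, x, y = x, y, x_new, y_new
neg_dist = (x * x + y * y) ** 0.5

print(sim_dist, neg_dist)
```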

A unified framework for inexact adaptive stepsizes in the gradient methods, the conjugate gradient methods and the quasi-Newton methods for strictly convex quadratic optimization

Inexact adaptive stepsizes for the conjugate gradient method and the quasi-Newton method are rarely studied. The exact stepsizes in the gradient method, the conjugate gradient method, and the quasi-Newton method for strictly convex quadratic optimization share a unified framework, whereas a unified framework for inexact adaptive stepsizes in the gradient method, the conjugate gradient …
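The unified exact-stepsize framework rests on a single formula: for \( f(x) = \tfrac{1}{2}x^\top A x - b^\top x \) with \(A\) symmetric positive definite, the exact stepsize along any search direction \(d\) is \( \alpha = -(g^\top d)/(d^\top A d) \) with \( g = Ax - b \), which for steepest descent \( d = -g \) reduces to the classical Cauchy stepsize \( g^\top g / g^\top A g \). A minimal sketch for the steepest-descent instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)     # unique minimizer of 0.5 x'Ax - b'x

x = np.zeros(n)
for _ in range(500):
    g = A @ x - b                  # gradient
    if np.linalg.norm(g) < 1e-12:
        break
    d = -g                         # steepest-descent direction
    alpha = -(g @ d) / (d @ (A @ d))   # exact line-search stepsize
    x += alpha * d

err = np.linalg.norm(x - x_star)
print(err)
```

Plugging a conjugate gradient or quasi-Newton direction into the same `alpha` formula recovers the exact stepsizes of those methods, which is the unification the abstract refers to.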

Decomposition-Based Reformulation of Nonseparable Quadratic Expressions in Convex MINLP

In this paper, we present a reformulation technique for convex mixed-integer nonlinear programming (MINLP) problems with nonseparable quadratic terms. For each non-diagonal positive semidefinite matrix defining a quadratic expression in the problem, we show that an eigenvalue or \(LDL^T\) decomposition can be performed to transform the quadratic expression into convex, additively separable constraints. The reformulated problem …
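The eigenvalue route can be sketched numerically: with \( Q = V \operatorname{diag}(\lambda) V^\top \), the substitution \( z = V^\top x \) (added as linear constraints) turns the nonseparable \( x^\top Q x \) into the additively separable \( \sum_i \lambda_i z_i^2 \). The \(LDL^T\) route is analogous. This is only an identity check, not the full MINLP reformulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
Q = M @ M.T                        # positive semidefinite, non-diagonal
lam, V = np.linalg.eigh(Q)         # Q = V @ np.diag(lam) @ V.T

x = rng.standard_normal(n)
z = V.T @ x                        # change of variables, kept as linear constraints
diff = abs(x @ Q @ x - np.sum(lam * z**2))   # nonseparable vs. separable form
print(diff)
```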

Bregman Regularized Proximal Point Methods for Computing Projected Solutions of Quasi-equilibrium Problems

In this paper, we propose two Bregman regularized proximal point methods that provide flexibility to compute projected solutions of quasi-equilibrium problems. Each iteration of these methods requires one Bregman projection onto the feasible set and the solution of one regularized equilibrium problem. Under standard assumptions, we prove that the methods are well-defined and that the sequences they generate converge to a …
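One concrete ingredient can be illustrated in isolation: with \(h\) the negative entropy, the Bregman distance is \( D_h(x,y) = \sum_i x_i \log(x_i/y_i) - x_i + y_i \), and the Bregman projection of a positive vector \(y\) onto the probability simplex has the closed form \( y / \sum_i y_i \). This simplex example is my illustration of a Bregman projection; the paper's feasible sets and methods are more general.

```python
import numpy as np

def kl_div(x, y):
    """Bregman distance generated by the negative entropy h(x) = sum x_i log x_i."""
    return float(np.sum(x * np.log(x / y) - x + y))

def bregman_proj_simplex(y):
    """Bregman projection of y > 0 onto the probability simplex: y / sum(y)."""
    return y / y.sum()

y = np.array([0.5, 2.0])
p = bregman_proj_simplex(y)        # -> [0.2, 0.8]

# Brute-force check that p minimizes D_h(., y) over the simplex.
ts = np.linspace(1e-3, 1 - 1e-3, 999)
grid_best = min(kl_div(np.array([t, 1 - t]), y) for t in ts)
print(p, kl_div(p, y) <= grid_best + 1e-12)
```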

Normal cones and subdifferentials at infinity for convex analysis and optimization

Motivated by recent developments, this paper further investigates normal cones and subdifferentials at infinity within the framework of convex analysis. We establish fundamental properties of these constructions and derive basic calculus rules. The obtained results extend and refine existing concepts in variational analysis and nonsmooth optimization, providing new insights into the asymptotic structure of functions …

Preconditioned Proximal Gradient Methods with Conjugate Momentum: A Subspace Perspective

In this paper, we propose a descent method for composite optimization problems with linear operators. Specifically, we first design a structure-exploiting preconditioner tailored to the linear operator so that the resulting preconditioned proximal subproblem admits a closed-form solution through its dual formulation. However, such a structure-driven preconditioner may be poorly aligned with the local curvature …
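The general idea of a preconditioned proximal gradient step with a closed-form subproblem can be sketched in a much simpler setting than the paper's: for \( \min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1 \), a diagonal preconditioner \( D \succeq A^\top A \) (built via Gershgorin row sums) keeps the preconditioned prox a componentwise soft-threshold while guaranteeing monotone descent. The preconditioner choice here is my assumption for illustration, not the structure-exploiting one of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, lam = 20, 10, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
H = A.T @ A
d = np.abs(H).sum(axis=1)   # Gershgorin: D = diag(d) satisfies D >= H in the PSD order

def obj(x):
    r = A @ x - b
    return 0.5 * r @ r + lam * np.abs(x).sum()

x = np.zeros(n)
vals = [obj(x)]
for _ in range(300):
    g = A.T @ (A @ x - b)
    u = x - g / d                                           # preconditioned gradient step
    x = np.sign(u) * np.maximum(np.abs(u) - lam / d, 0.0)   # prox of lam*||.||_1 in the D-metric
    vals.append(obj(x))
print(vals[0], vals[-1])
```

Because `D` majorizes the Hessian `H`, each step minimizes an upper model of the objective, so the objective values decrease monotonically.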

Strong convergence, perturbation resilience and superiorization of Generalized Modular String-Averaging with infinitely many input operators

We study the strong convergence and bounded perturbation resilience of iterative algorithms based on the Generalized Modular String-Averaging (GMSA) procedure for infinite sequences of input operators under a general admissible control. These methods address a variety of feasibility-seeking problems in real Hilbert spaces, including the common fixed point problem and the convex feasibility problem. In …
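The flavor of string averaging for convex feasibility can be sketched with finitely many halfspace projections (a finite toy version of the infinite-operator setting; the strings, weights, and constraints below are my choices): within each string the projections are applied sequentially, and the string outputs are then averaged.

```python
import numpy as np

# Convex feasibility: find x with a_i . x <= b_i for all i.
halfspaces = [(np.array([1.0, 0.0]), 1.0),     # x <= 1
              (np.array([0.0, 1.0]), 1.0),     # y <= 1
              (np.array([-1.0, -1.0]), 0.0)]   # x + y >= 0

def proj(x, a, b):
    """Euclidean projection onto the halfspace {x : a.x <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

strings = [[0, 1], [2]]      # two strings of operator indices
weights = [0.5, 0.5]

x = np.array([5.0, -3.0])
for _ in range(500):
    outs = []
    for s in strings:
        y = x.copy()
        for i in s:          # apply the string's projections sequentially
            y = proj(y, *halfspaces[i])
        outs.append(y)
    x = sum(w * o for w, o in zip(weights, outs))   # average the string outputs

violation = max(a @ x - b for a, b in halfspaces)
print(x, violation)
```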