Tilt Stability on Riemannian Manifolds with Application to Convergence Analysis of Generalized Riemannian Newton Method

We generalize tilt stability, a fundamental concept in the perturbation analysis of optimization problems in Euclidean spaces, to the setting of Riemannian manifolds. We prove the equivalence of the following conditions: Riemannian tilt stability, Riemannian variational strong convexity, Riemannian uniform quadratic growth, local strong monotonicity of the Riemannian subdifferential, strong metric regularity of the Riemannian subdifferential, and positive …
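
For context, the classical Euclidean notion being generalized here is the one introduced by Poliquin and Rockafellar: a point \(\bar{x}\) with \(f(\bar{x})\) finite is a tilt-stable local minimizer of \(f\) if there exists \(\gamma > 0\) such that the mapping
\[
M_\gamma : v \;\mapsto\; \operatorname*{argmin}_{\|x - \bar{x}\| \le \gamma} \bigl\{ f(x) - \langle v, x \rangle \bigr\}
\]
is single-valued and Lipschitz continuous on a neighborhood of \(v = 0\) with \(M_\gamma(0) = \bar{x}\). Presumably the Riemannian version replaces the linear tilt \(\langle v, x \rangle\) by a suitable manifold analogue (e.g., via the exponential map or a retraction), but the truncated abstract does not spell this out.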

An efficient penalty decomposition algorithm for minimization over sparse symmetric sets

This paper proposes an improved quasi-Newton penalty decomposition algorithm for the minimization of continuously differentiable, possibly nonconvex, functions over sparse symmetric sets. The method approximately solves a sequence of penalty subproblems via a two-block decomposition scheme: the first subproblem, which involves no sparsity constraint, admits a closed-form solution, while the second subproblem is handled through an efficient …
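
As a rough, generic illustration of a two-block penalty decomposition loop (not the paper's algorithm: the feasible set is assumed here to be plain \(k\)-sparsity, the smooth block uses gradient steps rather than a quasi-Newton update, and all names below are placeholders):

```python
import numpy as np

def hard_threshold(y, k):
    """Projection onto {y : ||y||_0 <= k}: keep the k largest-magnitude entries."""
    z = np.zeros_like(y)
    idx = np.argsort(np.abs(y))[-k:]
    z[idx] = y[idx]
    return z

def penalty_decomposition(grad_f, x0, k, rho=1.0, rho_growth=2.0,
                          outer_iters=30, inner_iters=50, lr=1e-2):
    """Generic two-block penalty decomposition sketch for  min f(x)  s.t.  ||x||_0 <= k."""
    x = x0.copy()
    y = hard_threshold(x, k)
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Block 1: decrease f(x) + (rho/2)||x - y||^2 over x (plain gradient steps here).
            x = x - lr * (grad_f(x) + rho * (x - y))
            # Block 2: minimize over y under the sparsity constraint (closed form: projection).
            y = hard_threshold(x, k)
        rho *= rho_growth  # tighten the penalty so the two blocks are driven together
    return y
```

Replacing the gradient steps in the first block with a quasi-Newton update, and the plain sparsity constraint with a sparse symmetric set, presumably gives the general shape of the method the abstract describes.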

Projected Stochastic Momentum Methods for Nonlinear Equality-Constrained Optimization for Machine Learning

Two algorithms are proposed, analyzed, and tested for solving continuous optimization problems with nonlinear equality constraints. Each extends a stochastic momentum-based method from the unconstrained setting to a stochastic Newton-SQP-type algorithm for equality-constrained problems. One is an extension of the heavy-ball method and the other is an extension …
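
Very roughly, one iteration of a method in this spirit might look as follows. This is only a schematic composite-step sketch under assumed callables (stoch_grad, c, jac), not the paper's algorithm, and it omits step-size control, merit functions, and the stochastic analysis the paper is actually about:

```python
import numpy as np

def projected_heavy_ball_step(x, m, stoch_grad, c, jac, alpha=1e-2, beta=0.9):
    """One schematic iteration for  min f(x)  s.t.  c(x) = 0:  heavy-ball momentum on a
    stochastic gradient, with the step split into a normal component that restores
    feasibility and a tangential component projected onto the null space of the
    constraint Jacobian (SQP flavor)."""
    g = stoch_grad(x)                  # stochastic estimate of the gradient of f
    m = beta * m + g                   # heavy-ball momentum accumulation
    J = jac(x)                         # Jacobian of the equality constraints at x
    Jp = np.linalg.pinv(J)
    v = -Jp @ c(x)                     # normal (Newton-type) step toward feasibility
    P = np.eye(x.size) - Jp @ J        # orthogonal projector onto null(J)
    u = -alpha * (P @ m)               # tangential step along the projected momentum
    return x + u + v, m
```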

A single loop method for quadratic minmax optimization

We consider a quadratic minmax problem with coupled inner constraints and propose a method to compute a class of stationary points. To motivate the need to compute such stationary points, we first show that they are meaningful, in the sense that they can be locally optimal for our problem under suitable linear independence and second-order …
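
As a purely illustrative template (the paper's exact problem class is not reproduced here, and all matrices and vectors below are placeholders), a quadratic minmax problem with coupled inner constraints has the shape
\[
\min_{x} \;\max_{y}\; \tfrac{1}{2}x^\top P x + x^\top Q y - \tfrac{1}{2}y^\top R y + c^\top x + d^\top y
\quad \text{s.t.} \quad A x + B y \le b,
\]
where the constraint \(A x + B y \le b\) on the inner maximization couples \(y\) to the outer variable \(x\); it is this coupling that makes stationarity notions and single-loop algorithms delicate.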

First-order Methods for Unconstrained Vector Optimization Problems: A Unified Majorization-Minimization Perspective

In this paper, we develop a unified majorization-minimization scheme and convergence analysis with first-order surrogate functions for unconstrained vector optimization problems (VOPs). By selecting different surrogate functions, the unified method can be reduced to various existing first-order methods. The unified convergence analysis reveals that the slow convergence of the steepest descent method is primarily attributed …
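
As a small, scalar-objective illustration of the majorization-minimization template (the paper's vector-valued setting, where the surrogate must majorize every objective component simultaneously, is more involved):

```python
def mm(minimize_surrogate, x0, iters=100):
    """Generic majorization-minimization loop: at each iterate x_k, minimize a surrogate
    u(. ; x_k) that majorizes the objective and touches it at x_k."""
    x = x0
    for _ in range(iters):
        x = minimize_surrogate(x)
    return x

# With the first-order quadratic surrogate
#   u(x; x_k) = f(x_k) + <grad f(x_k), x - x_k> + (L/2) * ||x - x_k||^2,
# the surrogate minimizer is a plain gradient step with step size 1/L, so the MM
# scheme collapses to gradient (steepest) descent; other surrogate choices recover
# other first-order methods.
def quadratic_surrogate_step(grad_f, L):
    return lambda x: x - grad_f(x) / L
```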

Global Optimization for Combinatorial Geometry Problems Revisited in the Era of LLMs

Recent progress in LLM-driven algorithm discovery, exemplified by DeepMind’s AlphaEvolve, has produced new best-known solutions for a range of hard geometric and combinatorial problems. This raises a natural question: to what extent can modern off-the-shelf global optimization solvers match such results when the problems are formulated directly as nonlinear optimization problems (NLPs)? We revisit a …
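
A representative problem from this family (chosen here only for illustration; it is not claimed to be among the paper's benchmark instances) is packing \(n\) equal circles of maximal radius inside the unit square, which reads directly as the NLP
\[
\max_{r,\,(x_i,y_i)_{i=1}^{n}} \; r
\quad \text{s.t.} \quad (x_i - x_j)^2 + (y_i - y_j)^2 \ge 4r^2 \;\; (1 \le i < j \le n), \qquad r \le x_i,\, y_i \le 1 - r \;\; (1 \le i \le n),
\]
a small but highly multimodal nonconvex program of exactly the kind that can be handed to an off-the-shelf global solver.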

A Majorization-Minimization approach for multiclass classification in a big data scenario

This work presents a novel optimization approach for training linear classifiers in multiclass classification tasks, focusing on a regularized, smooth Weston-Watkins support vector machine (SVM) model. We propose a Majorization-Minimization (MM) algorithm to solve the resulting Lipschitz-differentiable optimization problem. To enhance the scalability of the algorithm on large datasets, we introduce an incremental …
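
For reference, the classical (nonsmooth) Weston-Watkins multiclass SVM objective for data \((x_i, y_i)_{i=1}^{N}\) with \(K\) classes and weight vectors \(w_1, \dots, w_K\) is
\[
\min_{w_1,\dots,w_K} \; \frac{\lambda}{2} \sum_{j=1}^{K} \|w_j\|^2 \;+\; \sum_{i=1}^{N} \sum_{j \ne y_i} \max\bigl(0,\; 1 - (w_{y_i} - w_j)^\top x_i \bigr),
\]
up to bias terms and scaling conventions; the smooth, Lipschitz-differentiable variant the abstract refers to presumably replaces the hinge terms \(\max(0,\cdot)\) with a differentiable surrogate, whose exact form is not reproduced here.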

The Maximum Clique Problem under Adversarial Uncertainty: a min-max approach

We analyze the problem of identifying large cliques in graphs that are affected by adversarial uncertainty. More specifically, we consider a new formulation, namely the adversarial maximum clique problem, which extends the classical maximum-clique problem to graphs with edges strategically perturbed by an adversary. The proposed mathematical model is thus formulated as a two-player zero-sum …
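
One way such a two-player zero-sum model can look (this is only an illustrative robust-clique template with an edge-deletion budget \(k\); the paper's uncertainty model and payoff structure may differ) is
\[
\min_{\substack{z \in \{0,1\}^{E} \\ \sum_{e \in E}(1 - z_e) \le k}} \;\; \max_{x \in \{0,1\}^{V}} \; \sum_{v \in V} x_v
\quad \text{s.t.} \quad x_u + x_v \le 1 \;\; \forall \{u,v\} \notin E, \qquad x_u + x_v \le 1 + z_{uv} \;\; \forall \{u,v\} \in E,
\]
where the adversary deletes at most \(k\) edges (setting \(z_e = 0\)) and the clique player then seeks the largest clique that survives in the perturbed graph.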

A speed up strategy for gradient methods

In this paper, we propose a new acceleration strategy for gradient-based methods applied to strictly convex Quadratic Programming (QP) problems. The strategy consists in performing, at selected iterations, minimization steps along alternative descent directions or even within low-dimensional affine subspaces. In particular, considering the contribution of the linear and quadratic part of the objective function …
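
A minimal sketch of this idea, assuming the simplest possible instantiation (exact line searches, a fixed schedule for the extra steps, and a two-dimensional subspace spanned by the current gradient and the last displacement); the paper's selection rule and its use of the linear/quadratic split of the objective are not reproduced:

```python
import numpy as np

def gradient_qp_with_subspace_steps(A, b, x0, iters=200, every=10):
    """Gradient method for the strictly convex QP  min 0.5*x'Ax - b'x,  augmented every
    `every` iterations with an exact minimization over the two-dimensional affine
    subspace  x_k + span{g_k, x_k - x_{k-1}}."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-12:
            break
        if k > 0 and (k + 1) % every == 0:
            # Exact minimization over a low-dimensional affine subspace: for a quadratic
            # this reduces to a tiny linear system in the subspace coefficients.
            D = np.column_stack([g, x - x_prev])
            alpha = np.linalg.lstsq(D.T @ A @ D, -D.T @ g, rcond=None)[0]
            x_new = x + D @ alpha
        else:
            step = (g @ g) / (g @ (A @ g))      # exact (Cauchy) line-search step
            x_new = x - step * g
        x_prev, x = x, x_new
    return x
```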

An Inexact Modified Quasi-Newton Method for Nonsmooth Regularized Optimization

We introduce iR2N, a modified proximal quasi-Newton method for minimizing the sum of a \(C^1\) function \(f\) and a lower semi-continuous, prox-bounded function \(h\), which permits inexact evaluations of \(f\), of \(\nabla f\), and of the relevant proximal operators. Both \(f\) and \(h\) may be nonconvex. In applications where the proximal operator of \(h\) is not …
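
Schematically, a proximal quasi-Newton iteration of this general flavor can be sketched as follows. This is not iR2N itself (it handles neither inexact evaluations of \(f\) and \(\nabla f\) nor quasi-Newton updating of the model Hessian); it is just a generic model-based outer loop with inexact inner proximal-gradient solves, under assumed callables grad_f, prox_h, and obj:

```python
import numpy as np

def prox_quasi_newton(grad_f, prox_h, obj, x0, B, outer=50, inner=10):
    """Generic proximal quasi-Newton-type sketch for  min f(x) + h(x):  at each iterate,
    the model  m_k(s) = <g_k, s> + 0.5*s'Bs + h(x_k + s)  is minimized only approximately
    by a few proximal-gradient inner steps.  B is a fixed symmetric positive-definite
    Hessian approximation here, for simplicity; prox_h(v, t) is assumed to return the
    proximal point of t*h at v, and obj(x) = f(x) + h(x)."""
    x = x0.copy()
    L = np.linalg.eigvalsh(B).max()          # Lipschitz constant of the smooth model part
    for _ in range(outer):
        g = grad_f(x)
        s = np.zeros_like(x)
        for _ in range(inner):               # inexact minimization of the model m_k
            u = s - (g + B @ s) / L          # gradient step on the smooth part of m_k
            s = prox_h(x + u, 1.0 / L) - x   # prox step, shifted back to s-coordinates
        if obj(x + s) < obj(x):              # crude decrease test (a stand-in for the
            x = x + s                        # usual actual/predicted reduction ratio)
    return x
```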