A Modified Projected Gradient Algorithm for Solving Quasiconvex Programming with Applications

In this manuscript, we introduce a novel projected gradient algorithm for solving quasiconvex optimization problems over closed convex sets. The key innovation of our new algorithm is an adaptive, parameter-free stepsize rule that requires no line search and avoids estimating constants such as the Lipschitz modulus. Unlike the recent self-adaptive approach given in [17], which typically produces … Read more
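For orientation, a vanilla projected gradient iteration with a simple diminishing stepsize can be sketched as below. This is a generic baseline, not the paper's adaptive, parameter-free rule; the ball constraint and quadratic test problem are illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def projected_gradient(grad, x0, steps=200):
    # Generic projected gradient with a diminishing 1/sqrt(k+1) stepsize
    # (illustrative only; the abstract's stepsize rule is different).
    x = x0
    for k in range(steps):
        x = project_ball(x - grad(x) / np.sqrt(k + 1))
    return x

# minimize ||x - c||^2 over the unit ball, with c outside the ball,
# so the solution lies on the boundary at [1, 0]
c = np.array([2.0, 0.0])
x_star = projected_gradient(lambda x: 2 * (x - c), np.zeros(2))
```

The diminishing stepsize avoids knowing the Lipschitz constant at the cost of slower convergence, which is exactly the trade-off adaptive rules aim to remove.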

Efficient Warm-Start Strategies for Nash-based Linear Complementarity Problems via Bilinear Approximation

We present an effective warm-starting scheme for solving large linear complementarity problems (LCPs) arising from Nash equilibrium problems. The approach generates high-quality starting points that, when passed to the PATH solver, yield substantial reductions in computational time and variance. Our warm-start routine reformulates each agent’s LP using strong duality, leading to a master problem with … Read more
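To make the object being solved concrete: an LCP(q, M) asks for z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. The sketch below is a plain projected Gauss-Seidel iteration, not the PATH solver; it only illustrates the kind of routine into which a warm-start point z0 would be fed. The test problem is an assumption.

```python
import numpy as np

def lcp_pgs(M, q, z0=None, iters=500):
    # Projected Gauss-Seidel for LCP(q, M): find z >= 0 with
    # w = M z + q >= 0 and z'w = 0. A good warm start z0 (the role
    # played by the abstract's scheme) can cut iterations sharply.
    n = len(q)
    z = np.zeros(n) if z0 is None else z0.copy()
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# tiny symmetric positive definite example; solution is z = (1/3, 1/3)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z = lcp_pgs(M, q)
```

Passing a near-solution as `z0` makes the first sweeps nearly stationary, which is the effect the warm-start scheme exploits at much larger scale.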

Improved Analysis of Restarted Accelerated Gradient and Augmented Lagrangian Methods via Inexact Proximal Point Frameworks

This paper studies a class of double-loop (inner-outer) algorithms for convex composite optimization. For unconstrained problems, we develop a restarted accelerated composite gradient method that attains the optimal first-order complexity in both the convex and strongly convex settings. For linearly constrained problems, we introduce inexact augmented Lagrangian methods, including a basic method and an outer-accelerated … Read more
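The inner loop of such a double-loop scheme can be illustrated with a fixed-frequency restarted accelerated gradient method. This is a generic Nesterov-style stand-in under an assumed known Lipschitz constant L, not the paper's inexact proximal point framework.

```python
import numpy as np

def agd_restart(grad, x0, L, restart_every=50, iters=200):
    # Accelerated gradient descent with periodic momentum restarts:
    # a simple stand-in for the restarted schemes the abstract analyzes.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for k in range(iters):
        if k % restart_every == 0:
            y, t = x.copy(), 1.0            # reset momentum
        x_new = y - grad(y) / L             # gradient step at extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# quadratic test: minimize 0.5 ||x||^2, so grad(x) = x and L = 1
x = agd_restart(lambda v: v, np.array([3.0, -4.0]), L=1.0)
```

Restarting discards stale momentum, which is what recovers the linear rate in the strongly convex case without knowing the strong convexity modulus.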

A General Penalty-Method and a General Regularization-Method for Cardinality-Constrained Optimization Problems

We consider cardinality-constrained optimization problems (CCOPs), which are general nonlinear programs with an additional constraint limiting the number of nonzero continuous variables. The continuous reformulation of CCOPs involves complementarity constraints, which pose significant theoretical and computational challenges. To address these difficulties, we propose and analyze two numerical solution approaches: a general penalty method and a … Read more
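As a point of comparison for what the continuous reformulation replaces, a common direct baseline for cardinality constraints is iterative hard thresholding, which projects onto {x : ||x||₀ ≤ s} by keeping the s largest entries. This is a standard heuristic, not the paper's penalty or regularization method; the least-squares test instance is an assumption.

```python
import numpy as np

def hard_threshold(x, s):
    # Projection onto {x : ||x||_0 <= s}: keep the s largest magnitudes
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def iht(A, b, s, step, iters=200):
    # Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= s.
    # A simple CCOP baseline; the abstract's methods instead act on a
    # continuous complementarity reformulation.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (b - A @ x), s)
    return x

# with A = I the method reduces to thresholding b itself
A = np.eye(3)
b = np.array([3.0, 0.5, 0.0])
x = iht(A, b, s=1, step=1.0)
```

The nonconvex projection is what makes the direct approach attractive per-iteration but hard to analyze, motivating the smoother penalty and regularization reformulations.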

Curvature-oriented variance reduction methods for nonconvex stochastic optimization

When pursuing an approximate second-order stationary point in nonconvex constrained stochastic optimization, is it possible to design a stochastic second-order method that achieves the same sample complexity order as in the unconstrained setting? To address this question, we first introduce Carme, a curvature-oriented variance reduction method designed for unconstrained nonconvex stochastic optimization. … Read more
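For context on what "variance reduction" means here, the classic SVRG estimator corrects a stochastic gradient with a periodically recomputed full gradient at a snapshot point. The sketch below is that generic first-order baseline, not Carme's curvature-oriented estimator; the finite-sum quadratic is an assumed test problem.

```python
import numpy as np

def svrg(grads, x0, step=0.1, epochs=20, inner=50, seed=0):
    # SVRG-style variance-reduced gradient descent on a finite sum:
    # g_i(x) - g_i(snapshot) + full_grad(snapshot) is unbiased and has
    # vanishing variance as x approaches the snapshot.
    rng = np.random.default_rng(seed)
    n = len(grads)
    full = lambda x: sum(g(x) for g in grads) / n
    x = x0.copy()
    for _ in range(epochs):
        snap, mu = x.copy(), full(x)        # snapshot and its full gradient
        for _ in range(inner):
            i = rng.integers(n)
            x = x - step * (grads[i](x) - grads[i](snap) + mu)
    return x

# finite sum of quadratics f_i(x) = 0.5 ||x - c_i||^2; minimizer is mean(c_i)
cs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda x, c=c: x - c for c in cs]
x = svrg(grads, np.array([5.0, 5.0]))
```

Second-order variants of this idea apply the same snapshot correction to Hessian-vector information, which is the territory the abstract's question concerns.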

Voronoi Conditional Gradient Method for Constrained Nonconvex Optimization

The Conditional Gradient method offers a computationally efficient, projection-free framework for constrained problems; however, in nonconvex settings it may converge to stationary points of low quality. We propose the Voronoi Conditional Gradient (VCG) method, a geometric heuristic that systematically explores the feasible region by constructing adaptive Voronoi partitions from previously discovered stationary points. VCG incrementally … Read more
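The base algorithm that VCG builds on is the classical Frank-Wolfe iteration, which replaces projections with a linear minimization oracle (LMO). The sketch below is that vanilla method with the standard 2/(k+2) stepsize; the Voronoi partitioning layer is not shown, and the ℓ₁-ball test problem is an assumption.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=1000):
    # Vanilla conditional gradient: projection-free, needing only a
    # linear minimization oracle over the feasible set C.
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))                  # argmin_{v in C} <grad(x), v>
        x = x + (2.0 / (k + 2)) * (s - x) # convex step keeps x feasible
    return x

def lmo_l1(g):
    # LMO over the unit l1 ball: a signed coordinate vertex
    i = np.argmax(np.abs(g))
    e = np.zeros_like(g)
    e[i] = -np.sign(g[i])
    return e

# minimize 0.5 ||x - c||^2 over the unit l1 ball; c lies inside the ball
c = np.array([0.3, 0.1])
x = frank_wolfe(lambda x: x - c, lmo_l1, np.zeros(2))
```

Because every iterate is a convex combination of vertices, feasibility is automatic; the price is the O(1/k) rate and the low-quality stationary points in nonconvex settings that motivate restarting from new regions.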

An objective-function-free algorithm for general smooth constrained optimization

A new algorithm for smooth constrained optimization is proposed that never computes the value of the problem’s objective function and that handles both equality and inequality constraints. The algorithm uses an adaptive switching strategy between a normal step aiming at reducing constraint infeasibility and a tangential step improving dual optimality, the latter being inspired by … Read more

Learning to Choose Branching Rules for Nonconvex MINLPs

Outer-approximation-based branch-and-bound is a common algorithmic framework for solving MINLPs (mixed-integer nonlinear programs) to global optimality, with branching variable selection critically influencing overall performance. In modern global MINLP solvers, it is unclear whether branching on fractional integer variables should be prioritized over spatial branching on (potentially continuous) variables that violate constraints, with different solvers … Read more

Projected Stochastic Momentum Methods for Nonlinear Equality-Constrained Optimization for Machine Learning

Two algorithms are proposed, analyzed, and tested for solving continuous optimization problems with nonlinear equality constraints. Each extends a stochastic momentum-based method from the unconstrained setting to a stochastic Newton-SQP-type algorithm for solving equality-constrained problems. One is an extension of the heavy-ball method and the other is an extension … Read more
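The unconstrained starting point for the first method is the classical heavy-ball iteration, sketched below with a deterministic gradient on an assumed quadratic test problem. The abstract's algorithms wrap this kind of momentum update inside stochastic SQP steps, which is not shown here.

```python
import numpy as np

def heavy_ball(grad, x0, step=0.1, beta=0.9, iters=500):
    # Classical heavy-ball momentum: the update reuses the previous
    # displacement x - x_prev scaled by beta. The abstract's methods
    # extend this (and Nesterov-style momentum) to stochastic SQP
    # steps for equality-constrained problems.
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        x, x_prev = x - step * grad(x) + beta * (x - x_prev), x
    return x

# minimize 0.5 ||x||^2, so grad(x) = x
x = heavy_ball(lambda v: v, np.array([4.0, -2.0]))
```

In the constrained extension, the momentum direction must additionally be composed with steps toward the constraint manifold, which is where the SQP machinery enters.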

Global Optimization for Combinatorial Geometry Problems Revisited in the Era of LLMs

Recent progress in LLM-driven algorithm discovery, exemplified by DeepMind’s AlphaEvolve, has produced new best-known solutions for a range of hard geometric and combinatorial problems. This raises a natural question: to what extent can modern off-the-shelf global optimization solvers match such results when the problems are formulated directly as nonlinear optimization problems (NLPs)? We revisit a … Read more