Projected Stochastic Momentum Methods for Nonlinear Equality-Constrained Optimization for Machine Learning

Two algorithms are proposed, analyzed, and tested for solving continuous optimization problems with nonlinear equality constraints. Each extends a stochastic momentum-based method from the unconstrained setting into a stochastic Newton-SQP-type algorithm for the equality-constrained setting. One is an extension of the heavy-ball method and the other is an extension … Read more
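
For intuition, a minimal sketch of how a heavy-ball-style momentum update can be attached to a Newton-SQP step for equality constraints (illustrative only; the step size \alpha, momentum parameter \beta, and matrices below are assumed notation, not the paper's exact scheme): with stochastic gradient estimate g_k \approx \nabla f(x_k), constraint Jacobian J_k, and Hessian approximation H_k, compute d_k from the SQP system and then take a momentum step,

    \begin{bmatrix} H_k & J_k^\top \\ J_k & 0 \end{bmatrix} \begin{bmatrix} d_k \\ y_k \end{bmatrix} = - \begin{bmatrix} g_k \\ c(x_k) \end{bmatrix}, \qquad x_{k+1} = x_k + \alpha\, d_k + \beta\,(x_k - x_{k-1}).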

Global Optimization for Combinatorial Geometry Problems Revisited in the Era of LLMs

Recent progress in LLM-driven algorithm discovery, exemplified by DeepMind’s AlphaEvolve, has produced new best-known solutions for a range of hard geometric and combinatorial problems. This raises a natural question: to what extent can modern off-the-shelf global optimization solvers match such results when the problems are formulated directly as nonlinear optimization problems (NLPs)? We revisit a … Read more
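
As an illustration of formulating such a geometric problem directly as an NLP (the specific instances studied in the paper are not listed in this excerpt; the packing example below is only a representative stand-in), packing n equal circles of common radius r in the unit square can be written as

    \max_{r,\,(x_i,y_i)}\; r \quad \text{s.t.}\quad (x_i - x_j)^2 + (y_i - y_j)^2 \ge 4r^2 \;\; (1 \le i < j \le n), \qquad r \le x_i \le 1 - r, \;\; r \le y_i \le 1 - r,

a small nonconvex NLP that an off-the-shelf global solver can attack directly.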

The Convexity Zoo: A Taxonomy of Function Classes in Optimization

The tractability of optimization problems depends critically on structural properties of the objective function. Convexity guarantees global optimality of local solutions and enables polynomial-time algorithms under mild assumptions, but many problems arising in modern applications—particularly in machine learning—are inherently nonconvex. Remarkably, a large class of such problems remains amenable to efficient optimization due to additional … Read more
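
One well-known example of such additional structure, given here only as an illustration (the excerpt does not list which function classes the taxonomy covers), is the Polyak-Łojasiewicz (PL) inequality

    \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^\star\bigr) \quad \text{for all } x,

under which gradient descent converges linearly to the global optimal value f^\star even when f is nonconvex.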

A Proximal-Gradient Method for Solving Regularized Optimization Problems with General Constraints

We propose, analyze, and test a proximal-gradient method for solving regularized optimization problems with general constraints. The method employs a decomposition strategy to compute trial steps and uses a merit function to determine step acceptance or rejection. Under various assumptions, we establish a worst-case iteration complexity result, prove that limit points are first-order KKT points, … Read more
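
For context, the classical proximal-gradient step for a regularized objective f(x) + r(x), before any constraint handling or decomposition of the kind proposed here, is

    x_{k+1} = \operatorname{prox}_{\alpha_k r}\bigl(x_k - \alpha_k \nabla f(x_k)\bigr), \qquad \operatorname{prox}_{\alpha r}(v) = \arg\min_x\; r(x) + \tfrac{1}{2\alpha}\,\|x - v\|^2,

where the step size \alpha_k and regularizer r are assumed notation; the proposed method augments a step of this type with a merit-function-based acceptance test.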

Robust optimality for nonsmooth mathematical programs with equilibrium constraints under data uncertainty

We develop a unified framework for robust nonsmooth optimization problems with equilibrium constraints (UNMPEC). As a foundation, we study a robust nonsmooth nonlinear program with uncertainty in both the objective function and the inequality constraints (UNP). Using Clarke subdifferentials, we establish Karush–Kuhn–Tucker (KKT)-type necessary optimality conditions under an extended no-nonzero-abnormal-multiplier constraint qualification (ENNAMCQ). When the … Read more
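
To fix ideas, a robust inequality constraint with uncertainty set U_i reads g_i(x, u_i) \le 0 for all u_i \in U_i, and a Clarke-subdifferential KKT-type condition for such a problem typically takes the form (illustrative only; the multipliers, worst-case parameters \bar u_i, and the exact statement in the paper may differ)

    0 \in \partial^{c} f(x^\star) + \sum_{i} \lambda_i\, \partial^{c}_x g_i(x^\star, \bar u_i), \qquad \lambda_i \ge 0, \quad \lambda_i\, g_i(x^\star, \bar u_i) = 0.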

Subsampled cubic regularization method with distinct sample sizes for function, gradient, and Hessian

We develop and study a subsampled cubic regularization method for finite-sum composite optimization problems, in which the function, gradient, and Hessian are estimated using possibly different sample sizes. By allowing each quantity to have its own sampling strategy, the proposed method offers greater flexibility to control the accuracy of the model components and to better … Read more
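
A sketch of the cubic-regularized model of the smooth finite-sum part, with separate sample sets S_0, S_1, S_2 for the function, gradient, and Hessian estimates (symbols are assumed notation for illustration), is

    m_k(s) = f_{S_0}(x_k) + \nabla f_{S_1}(x_k)^\top s + \tfrac{1}{2}\, s^\top \nabla^2 f_{S_2}(x_k)\, s + \tfrac{\sigma_k}{3}\,\|s\|^3, \qquad \nabla f_{S_1}(x_k) = \tfrac{1}{|S_1|} \sum_{i \in S_1} \nabla f_i(x_k),

with |S_0|, |S_1|, |S_2| chosen independently to trade off accuracy against per-iteration cost.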

Complexity of quadratic penalty methods with adaptive accuracy under a PL condition for the constraints

We study the quadratic penalty method (QPM) for smooth nonconvex optimization problems with equality constraints. Assuming the constraint violation satisfies the PL condition near the feasible set, we derive sharper worst-case complexity bounds for obtaining approximate first-order KKT points. When the objective and constraints are twice continuously differentiable, we show that QPM equipped with a … Read more
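
For reference, the quadratic penalty subproblem and the PL-type assumption on the constraint violation can be sketched as follows (constants and exact form are assumed for illustration): with penalty parameter \rho and constraint Jacobian J(x),

    \min_x\; \phi_\rho(x) = f(x) + \tfrac{\rho}{2}\,\|c(x)\|^2, \qquad \|J(x)^\top c(x)\|^2 \;\ge\; \mu\,\|c(x)\|^2 \;\; \text{for } x \text{ near the feasible set},

the second inequality being the PL condition for the constraint-violation function \tfrac{1}{2}\|c(x)\|^2 with modulus \mu > 0.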

A derivative-free trust-region approach for Low Order-Value Optimization problems

The Low Order-Value Optimization (LOVO) problem involves minimizing the minimum among a finite number of function values within a feasible set. LOVO has several practical applications, including robust parameter estimation, protein alignment, and portfolio optimization. In this work, we are interested in the constrained nonlinear optimization LOVO problem of minimizing the minimum between … Read more
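
In symbols, with functions f_1, \dots, f_r and feasible set \Omega (assumed notation), the LOVO problem described above is

    \min_{x \in \Omega}\; f_{\min}(x), \qquad f_{\min}(x) = \min\{\, f_1(x), \dots, f_r(x) \,\},

a problem whose objective is generally nonsmooth even when each f_i is smooth.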

On the Convergence of Constrained Gradient Method

The constrained gradient method (CGM) has recently been proposed to solve convex optimization and monotone variational inequality (VI) problems with general functional constraints. While existing literature has established convergence results for CGM, the assumptions employed therein are quite restrictive; in some cases, certain assumptions are mutually inconsistent, leading to gaps in the underlying analysis. This … Read more
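
For context, the problem class in question can be written in the standard forms (notation assumed for illustration): the convex program

    \min_x\; f(x) \quad \text{s.t.}\quad g_i(x) \le 0, \;\; i = 1, \dots, m,

and the monotone VI with the same functional constraints, namely finding x^\star with g_i(x^\star) \le 0 such that \langle F(x^\star),\, x - x^\star \rangle \ge 0 for every x satisfying g_i(x) \le 0.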