Fast and Simple Multiclass Data Segmentation: An Eigendecomposition and Projection-Free Approach

Graph-based machine learning has seen increased interest over the last decade, with many connections to other fields of applied mathematics. Learning based on partial differential equations, such as the phase-field Allen-Cahn equation, allows efficient handling of semi-supervised learning approaches on graphs. The numerical solution of the graph Allen-Cahn equation via a convexity splitting or …
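
For intuition, a minimal convexity-splitting step for a two-class graph Allen-Cahn equation might look as follows. This sketch uses a direct linear solve rather than the eigendecomposition- and projection-free scheme the paper develops, and the double-well potential $W(u) = (u^2-1)^2/4$ and the parameters `dt`, `eps`, `c` are illustrative assumptions:

```python
import numpy as np

def graph_allen_cahn_step(u, L, dt=0.1, eps=1.0, c=3.0):
    """One semi-implicit convexity-splitting step for the graph
    Allen-Cahn equation u_t = -eps*L*u - W'(u)/eps with the double-well
    W(u) = (u^2 - 1)^2 / 4 (illustrative choices throughout).
    The convex part (Laplacian plus c*u) is treated implicitly and the
    concave remainder explicitly, keeping the update stable for larger dt.
    """
    w_prime = u**3 - u                      # W'(u) for the double-well above
    rhs = u + dt * (c * u - w_prime / eps)
    A = np.eye(len(u)) + dt * (eps * L + c * np.eye(len(u)))
    return np.linalg.solve(A, rhs)

# Tiny example: a path graph on 4 nodes with noisy initial labels
Adj = np.diag([1.0, 1.0, 1.0], 1); Adj += Adj.T     # adjacency matrix
L = np.diag(Adj.sum(axis=1)) - Adj                  # graph Laplacian
u = np.array([1.0, 0.8, -0.7, -1.0])
for _ in range(50):
    u = graph_allen_cahn_step(u, L)
print(np.sign(u))   # two-class segmentation: [ 1.  1. -1. -1.]
```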

On constraint qualifications for lower-level sets and an augmented Lagrangian method

In this paper we consider an augmented Lagrangian method with general lower-level constraints, that is, where some of the constraints are penalized while others are kept as subproblem constraints. Motivated by some recent results on optimization problems on manifolds, we present a general theory of global convergence when a feasible approximate KKT point is found …
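
As a rough sketch of the general pattern (not the authors' algorithm): equality constraints $h(x)=0$ are penalized in the augmented Lagrangian, while a box lower-level constraint is passed directly to the subproblem solver. All names and parameter choices here are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, bounds, rho=10.0, iters=20):
    """Sketch of an augmented Lagrangian method with lower-level
    constraints: the equalities h(x) = 0 are penalized, while the box
    `bounds` is kept in the subproblem and handled by L-BFGS-B.
    """
    x = np.asarray(x0, float)
    lam = np.zeros(len(h(x)))
    for _ in range(iters):
        def L_aug(x):
            hx = h(x)
            return f(x) + lam @ hx + 0.5 * rho * hx @ hx
        x = minimize(L_aug, x, method="L-BFGS-B", bounds=bounds).x
        lam = lam + rho * h(x)     # first-order multiplier update
        rho *= 2.0                 # simple penalty increase
    return x, lam

# Minimize x0^2 + x1^2 subject to x0 + x1 = 1, with lower-level box [0,1]^2
f = lambda x: x @ x
h = lambda x: np.array([x[0] + x[1] - 1.0])
x, lam = augmented_lagrangian(f, h, [0.9, 0.1], [(0, 1), (0, 1)])
print(x)   # approximately [0.5, 0.5]
```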

Robust optimality for nonsmooth mathematical programs with equilibrium constraints under data uncertainty

We develop a unified framework for robust nonsmooth optimization problems with equilibrium constraints (UNMPEC). As a foundation, we study a robust nonsmooth nonlinear program with uncertainty in both the objective function and the inequality constraints (UNP). Using Clarke subdifferentials, we establish Karush–Kuhn–Tucker (KKT)-type necessary optimality conditions under an extended no-nonzero-abnormal-multiplier constraint qualification (ENNAMCQ). When the …
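
To fix ideas, a robust program of the kind described and a KKT-type condition phrased with Clarke subdifferentials $\partial^c$ might be written as below; the paper's precise formulation of (UNP) and of ENNAMCQ may differ:

```latex
% Illustrative form of the uncertain program (UNP); the paper's exact
% formulation may differ.
\min_{x \in \mathbb{R}^n}\ \max_{u \in \mathcal{U}} f(x,u)
\quad \text{s.t.} \quad g_i(x,v_i) \le 0 \ \ \forall v_i \in \mathcal{V}_i,\ i = 1,\dots,m.

% A robust KKT-type condition at \bar{x}, written with Clarke
% subdifferentials \partial^c: there exist multipliers \lambda_i \ge 0
% and worst-case scenarios \bar{u} \in \mathcal{U}, \bar{v}_i \in \mathcal{V}_i with
0 \in \partial^c_x f(\bar{x},\bar{u})
  + \sum_{i=1}^{m} \lambda_i\, \partial^c_x g_i(\bar{x},\bar{v}_i),
\qquad \lambda_i\, g_i(\bar{x},\bar{v}_i) = 0,\quad i = 1,\dots,m.
```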

New Results on the Polyak Stepsize: Tight Convergence Analysis and Universal Function Classes

In this paper, we revisit a classical adaptive stepsize strategy for gradient descent: the Polyak stepsize (PolyakGD), originally proposed in Polyak (1969). We study the convergence behavior of PolyakGD from two perspectives: tight worst-case analysis and universality across function classes. As our first main result, we establish the tightness of the known convergence rates of …
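
The update itself is easy to state: with the optimal value $f^\ast$ known, PolyakGD uses the stepsize $\gamma_k = (f(x_k) - f^\ast)/\|\nabla f(x_k)\|^2$. A minimal sketch, with an illustrative test function:

```python
import numpy as np

def polyak_gd(f, grad, x0, f_star, iters=100, tol=1e-12):
    """Gradient descent with the Polyak stepsize
    gamma_k = (f(x_k) - f*) / ||grad f(x_k)||^2,
    which presumes the optimal value f* is known."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        gap = f(x) - f_star
        if gap <= tol or g @ g == 0.0:
            break
        x = x - (gap / (g @ g)) * g
    return x

# Illustrative ill-conditioned quadratic with f* = 0
D = np.array([1.0, 10.0])
f = lambda x: 0.5 * D @ (x * x)
grad = lambda x: D * x
print(polyak_gd(f, grad, [1.0, 1.0], f_star=0.0))   # near [0, 0]
```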

Subsampled cubic regularization method with distinct sample sizes for function, gradient, and Hessian

We develop and study a subsampled cubic regularization method for finite-sum composite optimization problems, in which the function, gradient, and Hessian are estimated using possibly different sample sizes. By allowing each quantity to have its own sampling strategy, the proposed method offers greater flexibility to control the accuracy of the model components and to better …
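
A rough sketch of one such step, with independent subsamples of sizes `n_f`, `n_g`, `n_h` for the three quantities; the generic solver used for the cubic subproblem and all sampling choices are illustrative assumptions, not the paper's method:

```python
import numpy as np
from scipy.optimize import minimize

def subsampled_cubic_step(x, fs, grads, hesses, n_f, n_g, n_h, sigma, rng):
    """One step of subsampled cubic regularization with *distinct*
    sample sizes n_f, n_g, n_h for function, gradient, and Hessian.
    The cubic subproblem is solved crudely with a generic solver;
    everything here is a sketch, not the paper's implementation."""
    N = len(fs)
    Sf = rng.choice(N, n_f, replace=False)
    Sg = rng.choice(N, n_g, replace=False)
    Sh = rng.choice(N, n_h, replace=False)
    fx = np.mean([fs[i](x) for i in Sf])
    g = np.mean([grads[i](x) for i in Sg], axis=0)
    H = np.mean([hesses[i](x) for i in Sh], axis=0)
    model = lambda s: (fx + g @ s + 0.5 * s @ H @ s
                       + (sigma / 3.0) * np.linalg.norm(s) ** 3)
    s = minimize(model, np.zeros_like(x)).x
    return x + s

# Finite sum of quadratics f_i(x) = 0.5 * ||x - a_i||^2
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
fs = [lambda x, a=a: 0.5 * np.sum((x - a) ** 2) for a in A]
grads = [lambda x, a=a: x - a for a in A]
hesses = [lambda x: np.eye(3) for _ in A]
x = np.zeros(3)
for _ in range(20):
    x = subsampled_cubic_step(x, fs, grads, hesses, 30, 20, 10, 1.0, rng)
print(x)   # approaches the mean of the rows of A
```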

Exact Decentralized Optimization via Explicit $\ell_1$ Consensus Penalties

Consensus optimization enables autonomous agents to solve joint tasks through peer-to-peer exchanges alone. Classical decentralized gradient descent is appealing for its minimal state but fails to achieve exact consensus with fixed stepsizes unless additional trackers or dual variables are introduced. We revisit penalty methods and introduce a decentralized two-layer framework that couples an outer penalty-continuation …
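
A naive rendering of the two-layer idea, assuming subgradient inner iterations and doubling/halving schedules (neither of which is necessarily the paper's choice):

```python
import numpy as np

def decentralized_l1_penalty(grads, W, x0, rho=0.1, step=0.1,
                             outer=8, inner=200):
    """Sketch of a two-layer penalty scheme: an outer continuation loop
    increases the l1 consensus penalty rho while the inner loop runs
    decentralized subgradient steps on
        f_i(x_i) + rho * sum_{j ~ i} ||x_i - x_j||_1.
    W is a 0/1 adjacency matrix; the schedules below are illustrative
    and not the stepsize rule analyzed in the paper."""
    X = np.array(x0, float)                      # row i = agent i's copy
    for _ in range(outer):
        for _ in range(inner):
            G = np.array([grads[i](X[i]) for i in range(len(X))])
            for i in range(len(X)):
                for j in np.nonzero(W[i])[0]:
                    G[i] += rho * np.sign(X[i] - X[j])   # l1 subgradient
            X -= step * G
        rho, step = 2.0 * rho, 0.5 * step        # penalty continuation
    return X

# Three agents on a path graph; f_i(x) = 0.5 * (x - a_i)^2
a = np.array([[0.0], [1.0], [2.0]])
grads = [lambda x, ai=ai: x - ai for ai in a]
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(decentralized_l1_penalty(grads, W, a))     # rows near the optimum 1.0
```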

Complexity of quadratic penalty methods with adaptive accuracy under a PL condition for the constraints

We study the quadratic penalty method (QPM) for smooth nonconvex optimization problems with equality constraints. Assuming the constraint violation satisfies the PL condition near the feasible set, we derive sharper worst-case complexity bounds for obtaining approximate first-order KKT points. When the objective and constraints are twice continuously differentiable, we show that QPM equipped with a …
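
In outline, QPM approximately minimizes $P_\rho(x) = f(x) + \tfrac{\rho}{2}\|c(x)\|^2$ for an increasing sequence of penalties $\rho$. The sketch below tightens the inner tolerance with $\rho$ in a crude way; the paper's adaptive-accuracy rule is more refined:

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty(f, c, x0, rho=1.0, outer=8):
    """Sketch of QPM: approximately minimize the penalty function
    P_rho(x) = f(x) + (rho/2) * ||c(x)||^2 and increase rho.
    The inner tolerance shrinks as rho grows, in the spirit of
    adaptive accuracy only."""
    x = np.asarray(x0, float)
    for _ in range(outer):
        P = lambda x: f(x) + 0.5 * rho * np.sum(c(x) ** 2)
        x = minimize(P, x, method="BFGS", options={"gtol": 1.0 / rho}).x
        rho *= 10.0
    return x

# min x0 + x1  subject to  x0^2 + x1^2 - 2 = 0  (solution (-1, -1))
f = lambda x: x[0] + x[1]
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0])
print(quadratic_penalty(f, c, [1.5, 0.5]))
```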

A derivative-free trust-region approach for Low Order-Value Optimization problems

The Low Order-Value Optimization (LOVO) problem involves minimizing the minimum among a finite number of function values within a feasible set. LOVO has several practical applications, including robust parameter estimation, protein alignment, and portfolio optimization. In this work, we are interested in the constrained nonlinear optimization LOVO problem of minimizing the minimum between …
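
Concretely, the LOVO objective is $f_{\min}(x) = \min_i f_i(x)$. The toy below minimizes it with Nelder-Mead purely as a derivative-free stand-in; the paper instead builds a dedicated trust-region method:

```python
import numpy as np
from scipy.optimize import minimize

def lovo(fns):
    """LOVO objective: the minimum among a finite number of function
    values, f_min(x) = min_i f_i(x)."""
    return lambda x: min(f(x) for f in fns)

# Two candidate models; LOVO keeps whichever value is smaller at x.
f1 = lambda x: (x[0] - 1.0) ** 2 + 2.0
f2 = lambda x: (x[0] + 2.0) ** 2
obj = lovo([f1, f2])

# Nelder-Mead is only a derivative-free stand-in here; the paper builds
# a dedicated trust-region method for the constrained LOVO problem.
res = minimize(obj, x0=[-1.0], method="Nelder-Mead")
print(res.x, res.fun)   # near x = -2 with value 0 from this start
```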

On Solving Chance-Constrained Models with Gaussian Mixture Distribution

We study linear chance-constrained problems where the coefficients follow a Gaussian mixture distribution. We provide mixed-binary quadratic programs that give inner and outer approximations of the chance constraint based on piecewise linear approximations of the standard normal cumulative distribution function. We show that $O\left(\sqrt{\ln(1/\tau)/\tau}\right)$ pieces are sufficient to attain $\tau$-accuracy in the chance constraint. …
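
The role of the piecewise linear pieces is easy to see: under a mixture with weights $w_k$, the chance constraint reduces to a weighted sum of standard normal CDF values, $\sum_k w_k\,\Phi(\cdot) \ge 1-\epsilon$, and each $\Phi$ term can be bounded by piecewise linear under- and over-estimators. A small sketch of the under-estimation, with illustrative breakpoints rather than the paper's construction:

```python
import numpy as np
from math import erf, sqrt

phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

def pwl_cdf(breaks):
    """Piecewise linear interpolant of the standard normal CDF.
    Since the CDF is concave on [0, inf), its chords lie below it, so
    the interpolant under-estimates there, yielding an inner (safe)
    approximation of a constraint of the form sum_k w_k*Phi(...) >= 1-eps."""
    vals = np.array([phi(b) for b in breaks])
    return lambda z: np.interp(z, breaks, vals)

breaks = np.linspace(0.0, 4.0, 25)     # 25 illustrative breakpoints
approx = pwl_cdf(breaks)
zs = np.linspace(0.0, 4.0, 1001)
err = max(abs(phi(z) - approx(z)) for z in zs)
print(f"max interpolation error: {err:.2e}")
```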

On the Convergence of Constrained Gradient Method

The constrained gradient method (CGM) has recently been proposed to solve convex optimization and monotone variational inequality (VI) problems with general functional constraints. While the existing literature has established convergence results for CGM, the assumptions employed therein are quite restrictive; in some cases, certain assumptions are mutually inconsistent, leading to gaps in the underlying analysis. This …