The Maximum Clique Problem under Adversarial Uncertainty: a min-max approach

We analyze the problem of identifying large cliques in graphs that are affected by adversarial uncertainty. More specifically, we consider a new formulation, namely the adversarial maximum clique problem, which extends the classical maximum-clique problem to graphs with edges strategically perturbed by an adversary. The proposed mathematical model is thus formulated as a two-player zero-sum … Read more
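Since the abstract is truncated, the display below is only an illustrative min-max model, not necessarily the authors' formulation: one player selects a candidate clique through a binary indicator vector $x$, the adversary perturbs edges within a budgeted uncertainty set $\mathcal{Z}$, and pairs that are non-adjacent in the perturbed graph are penalized,
\[
\max_{x\in\{0,1\}^n}\ \min_{z\in\mathcal{Z}}\ \sum_{i=1}^n x_i \;-\; M \sum_{(i,j)\notin E(z)} x_i x_j ,
\]
where $E(z)$ denotes the perturbed edge set and $M>0$ is a penalty constant large enough to make non-cliques unattractive.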

An Inexact Modified Quasi-Newton Method for Nonsmooth Regularized Optimization

We introduce iR2N, a modified proximal quasi-Newton method for minimizing the sum of a \(C^1\) function \(f\) and a lower semi-continuous, prox-bounded function \(h\); the method permits inexact evaluations of \(f\), of \(\nabla f\), and of the relevant proximal operators. Both \(f\) and \(h\) may be nonconvex. In applications where the proximal operator of \(h\) is not … Read more
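The following is not the authors' iR2N but a minimal sketch of a generic proximal quasi-Newton iteration for $f + \lambda\|\cdot\|_1$ with a diagonal (spectral) Hessian approximation, so the scaled proximal step has a closed form; inexact evaluations and the paper's regularization update are not reproduced, and all names and tolerances are illustrative.

```python
import numpy as np

def prox_l1_scaled(v, d, lam):
    """Prox of lam*||x||_1 in the metric diag(d): coordinate-wise soft-threshold at lam/d."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / d, 0.0)

def prox_quasi_newton(grad, lam, x0, n_iter=100, tol=1e-8):
    """Generic proximal quasi-Newton sketch (not the paper's iR2N) for f(x) + lam*||x||_1."""
    x = x0.copy()
    g = grad(x)
    d = np.ones_like(x)                              # diagonal Hessian approximation
    for _ in range(n_iter):
        x_new = prox_l1_scaled(x - g / d, d, lam)    # scaled proximal step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, s) > 0:
            # spectral (Barzilai-Borwein-type) update of the diagonal metric
            d = np.full_like(x, max(np.dot(s, y) / np.dot(s, s), 1e-8))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x, g = x_new, g_new
    return x
```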

A Proximal-Gradient Method for Solving Regularized Optimization Problems with General Constraints

We propose, analyze, and test a proximal-gradient method for solving regularized optimization problems with general constraints. The method employs a decomposition strategy to compute trial steps and uses a merit function to determine step acceptance or rejection. Under various assumptions, we establish a worst-case iteration complexity result, prove that limit points are first-order KKT points, … Read more
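The abstract is truncated, so the following is only a rough illustration of how such schemes are often organized, not the authors' decomposition: a trial step is computed from a proximal linearization of the objective subject to linearized constraints and then accepted or rejected with an exact-penalty merit function,
\[
d_k \in \arg\min_{d}\ \nabla f(x_k)^T d + \tfrac{1}{2\alpha_k}\|d\|^2 + r(x_k + d)
\quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^T d = 0,
\qquad
\phi_\tau(x) = f(x) + r(x) + \tau \|c(x)\|,
\]
where $r$ is the regularizer, $c$ collects the general constraints, and $\tau>0$ is a penalty parameter.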

An Elementary Proof of the Near Optimality of LogSumExp Smoothing

We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln\left(\sum_{i=1}^{d}\exp(x_i)\right)$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must … Read more
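The stated $\ln(d)$ gap is easy to check numerically: the snippet below (an illustration, not code from the paper) verifies $\max_i x_i \le f(x) \le \max_i x_i + \ln(d)$ on random vectors, using the standard shift trick for numerical stability.

```python
import numpy as np

def logsumexp(x):
    """Numerically stable LogSumExp: ln(sum_i exp(x_i))."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

rng = np.random.default_rng(0)
for _ in range(1000):
    d = int(rng.integers(2, 50))
    x = rng.normal(scale=10.0, size=d)
    f, mx = logsumexp(x), np.max(x)
    # LogSumExp overestimates the max by at most ln(d)
    assert mx <= f <= mx + np.log(d) + 1e-12
print("bounds verified")
```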

Robust optimality for nonsmooth mathematical programs with equilibrium constraints under data uncertainty

We develop a unified framework for robust nonsmooth optimization problems with equilibrium constraints (UNMPEC). As a foundation, we study a robust nonsmooth nonlinear program with uncertainty in both the objective function and the inequality constraints (UNP). Using Clarke subdifferentials, we establish Karush–Kuhn–Tucker (KKT)-type necessary optimality conditions under an extended no-nonzero-abnormal-multiplier constraint qualification (ENNAMCQ). When the … Read more
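For orientation only (the precise statement is in the truncated part), a robust nonsmooth program of the kind described typically takes the worst-case form
\[
\min_{x}\ \max_{u\in\mathcal{U}} f(x,u)
\quad \text{s.t.} \quad g_i(x,v_i)\le 0 \ \ \forall\, v_i\in\mathcal{V}_i,\ i=1,\dots,m,
\]
with KKT-type conditions expressed through Clarke subdifferentials, e.g. $0\in \partial_x f(x^*,\bar u) + \sum_{i} \lambda_i\, \partial_x g_i(x^*,\bar v_i)$ for worst-case parameters $\bar u,\bar v_i$ and multipliers $\lambda_i\ge 0$; the paper's exact conditions and constraint qualification may differ.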

New Results on the Polyak Stepsize: Tight Convergence Analysis and Universal Function Classes

In this paper, we revisit a classical adaptive stepsize strategy for gradient descent: the Polyak stepsize (PolyakGD), originally proposed in Polyak (1969). We study the convergence behavior of PolyakGD from two perspectives: tight worst-case analysis and universality across function classes. As our first main result, we establish the tightness of the known convergence rates of … Read more
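For readers unfamiliar with PolyakGD, the stepsize at iteration $t$ is the classical $\gamma_t=\big(f(x_t)-f^\star\big)/\|\nabla f(x_t)\|^2$, which requires knowing the optimal value $f^\star$. A minimal sketch on a convex quadratic with $f^\star=0$ (illustrative only, not code from the paper):

```python
import numpy as np

def polyak_gd(f, grad, f_star, x0, n_iter=200):
    """Gradient descent with the Polyak stepsize gamma_t = (f(x_t) - f*) / ||grad f(x_t)||^2."""
    x = x0.copy()
    for _ in range(n_iter):
        g = grad(x)
        gnorm2 = np.dot(g, g)
        if gnorm2 == 0.0:
            break
        x = x - (f(x) - f_star) / gnorm2 * g
    return x

# Example: f(x) = 0.5 * x^T A x with A positive definite, so f* = 0 at x = 0.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = polyak_gd(f, grad, f_star=0.0, x0=np.ones(3))
print(f(x))  # close to 0
```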

Subsampled cubic regularization method with distinct sample sizes for function, gradient, and Hessian

We develop and study a subsampled cubic regularization method for finite-sum composite optimization problems, in which the function, gradient, and Hessian are estimated using possibly different sample sizes. By allowing each quantity to have its own sampling strategy, the proposed method offers greater flexibility to control the accuracy of the model components and to better … Read more
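As a rough illustration of the idea (the sampling rules and acceptance test are in the truncated part), for $F(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)$ one can picture a cubic-regularized model built from three independent index samples $\mathcal{S}_f,\mathcal{S}_g,\mathcal{S}_H$:
\[
m_k(s) = \frac{1}{|\mathcal{S}_f|}\sum_{i\in\mathcal{S}_f} f_i(x_k)
+ \Big(\frac{1}{|\mathcal{S}_g|}\sum_{i\in\mathcal{S}_g}\nabla f_i(x_k)\Big)^{T} s
+ \frac{1}{2}\, s^T\Big(\frac{1}{|\mathcal{S}_H|}\sum_{i\in\mathcal{S}_H}\nabla^2 f_i(x_k)\Big) s
+ \frac{\sigma_k}{3}\|s\|^3 ,
\]
which is minimized, possibly inexactly, to obtain the trial step; the paper's actual model and composite term are not reproduced here.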

Subgame Perfect Methods in Nonsmooth Convex Optimization

This paper considers nonsmooth convex optimization with either a subgradient or proximal operator oracle. In both settings, we identify algorithms that achieve subgame perfection, a recently introduced game-theoretic notion of optimality for algorithms. Subgame perfect algorithms meet a more stringent requirement than minimax optimality alone. Not only must they provide optimal uniform guarantees on … Read more

Preconditioned subgradient method for composite optimization: overparameterization and fast convergence

Composite optimization problems involve minimizing the composition of a smooth map with a convex function. Such objectives arise in numerous data science and signal processing applications, including phase retrieval, blind deconvolution, and collaborative filtering. The subgradient method achieves local linear convergence when the composite loss is well-conditioned. However, if the smooth map is, in a … Read more
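Below is a minimal sketch of one natural preconditioned step for $\min_x h(c(x))$ with $h$ convex and $c$ smooth, using a Gauss-Newton-style scaling $(J^T J + \varepsilon I)^{-1}$; this is an assumption made for illustration, and the preconditioner and stepsize rule analyzed in the paper may differ.

```python
import numpy as np

def preconditioned_subgradient(c, jac, subgrad_h, x0, step=1.0, n_iter=100, eps=1e-8):
    """Sketch of a preconditioned subgradient method for min_x h(c(x)).

    Plain subgradient:        x+ = x - step * J(x)^T v,  with v in the subdifferential of h at c(x).
    Preconditioned (sketch):  x+ = x - step * (J^T J + eps*I)^{-1} J^T v.
    """
    x = x0.copy()
    for _ in range(n_iter):
        J = jac(x)                       # Jacobian of the smooth map c at x
        v = subgrad_h(c(x))              # subgradient of the convex outer function h
        g = J.T @ v                      # composite subgradient
        P = J.T @ J + eps * np.eye(x.size)
        x = x - step * np.linalg.solve(P, g)
    return x
```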

Lyapunov-based Analysis on First Order Method for Composite Strong-Weak Convex Functions

Nesterov’s accelerated gradient (NAG) method improves upon the classical gradient descent algorithm by raising the convergence rate from $\mathcal{O}\left(\frac{1}{t}\right)$ to $\mathcal{O}\left(\frac{1}{t^2}\right)$ in convex optimization. This study examines the proximal gradient framework for additively separable composite functions with smooth and non-smooth components. We demonstrate that Nesterov’s accelerated proximal gradient (NAPG$_\alpha$) method attains a convergence rate of … Read more
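NAPG$_\alpha$ and the strong-weak convex setting are specific to the paper; the snippet below shows only the standard accelerated proximal gradient (FISTA-style) template that such analyses build on, for a smooth $f$ plus an $\ell_1$ term, with an illustrative momentum sequence.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accel_prox_grad(grad_f, L, lam, x0, n_iter=300):
    """Accelerated proximal gradient (FISTA-style) sketch for f(x) + lam*||x||_1.

    grad_f: gradient of the smooth part; L: its Lipschitz constant.
    The momentum sequence here is the classical one and may differ from NAPG_alpha.
    """
    x = y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - grad_f(y) / L, lam / L)   # proximal gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)        # momentum extrapolation
        x, t = x_new, t_new
    return x
```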