Constructing QCQP Instances Equivalent to Their SDP Relaxations

General quadratically constrained quadratic programs (QCQPs) are NP-hard and therefore challenging to solve. A popular approach to approximating QCQP solutions is to use semidefinite programming (SDP) relaxations. It is well known that the optimal value η of the SDP relaxation problem bounds the optimal value ζ of the QCQP from below, i.e., … Read more
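
For orientation, the standard (Shor) semidefinite relaxation has roughly the following shape; the notation below is generic and not necessarily the paper's.

```latex
% Generic QCQP and its Shor SDP relaxation (illustrative notation):
\zeta \;=\; \min_{x \in \mathbb{R}^n} \; x^\top Q_0 x + 2 q_0^\top x
\quad \text{s.t.} \quad x^\top Q_k x + 2 q_k^\top x + c_k \le 0, \quad k = 1, \dots, m,
\qquad
% replace xx^\top by a matrix variable X and relax X = xx^\top to X \succeq xx^\top:
\eta \;=\; \min_{x,\, X} \; \langle Q_0, X \rangle + 2 q_0^\top x
\quad \text{s.t.} \quad \langle Q_k, X \rangle + 2 q_k^\top x + c_k \le 0, \quad
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0.
```

The relaxation only enlarges the feasible set, which is why η ≤ ζ; the paper is concerned with constructing instances for which the two values coincide.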

A Study of First-Order Methods with a Probabilistic Relative-Error Gradient Oracle

This paper investigates the problem of minimizing a smooth function over a compact set with a probabilistic relative-error gradient oracle. With some probability the oracle succeeds and provides a relative-error approximation of the true gradient; otherwise it fails and returns an arbitrary vector, and the optimizer cannot distinguish successful from failed queries … Read more
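
A minimal sketch of such an oracle, assuming a success probability p and a relative error level delta (these names and defaults are illustrative, not the paper's notation):

```python
import numpy as np

def relative_error_gradient_oracle(grad_f, x, p=0.9, delta=0.1, rng=None):
    """Illustrative oracle: with probability p the query succeeds and returns g
    with ||g - grad_f(x)|| <= delta * ||grad_f(x)||; otherwise it fails and
    returns an arbitrary vector.  The caller cannot tell which case occurred."""
    rng = np.random.default_rng() if rng is None else rng
    true_g = grad_f(x)
    if rng.random() < p:
        # successful query: perturbation of norm at most delta * ||true_g||
        direction = rng.standard_normal(true_g.shape)
        direction /= max(np.linalg.norm(direction), 1e-16)
        return true_g + rng.random() * delta * np.linalg.norm(true_g) * direction
    # failed query: arbitrary vector, indistinguishable from a success
    return 10.0 * rng.standard_normal(true_g.shape)
```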

On liftings that improve convergence properties of Newton’s Method for Boundary Value Optimization Problems

The representation of a function in a higher-dimensional space is often referred to as lifting. Liftings can be used to reduce complexity. We are interested in the question of how liftings affect the local convergence of Newton’s method. We propose algorithms to construct liftings that potentially reduce the number of iterations via analysis of local … Read more
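
As a concrete illustration of lifting (a classical toy example in the spirit of lifted Newton methods, not taken from this paper), the root-finding problem x^16 = 2 can be attacked either in the original variable or on a lifted system with intermediate variables y1 = x^2, y2 = y1^2, y3 = y2^2:

```python
import numpy as np

def newton_plain(x=1.0, tol=1e-12, max_iter=100):
    """Newton's method on g(x) = x**16 - 2 in the original variable."""
    for k in range(max_iter):
        r = x**16 - 2.0
        if abs(r) < tol:
            return x, k
        x -= r / (16.0 * x**15)
    return x, max_iter

def newton_lifted(x=1.0, tol=1e-12, max_iter=100):
    """Newton's method on the lifted system y1 = x^2, y2 = y1^2, y3 = y2^2, y3^2 = 2."""
    z = np.array([x, x**2, x**4, x**8])          # consistent initialization
    for k in range(max_iter):
        x_, y1, y2, y3 = z
        if abs(x_**16 - 2.0) < tol:
            return x_, k
        r = np.array([x_**2 - y1, y1**2 - y2, y2**2 - y3, y3**2 - 2.0])
        J = np.array([[2*x_, -1.0,  0.0,  0.0],
                      [ 0.0, 2*y1, -1.0,  0.0],
                      [ 0.0,  0.0, 2*y2, -1.0],
                      [ 0.0,  0.0,  0.0, 2*y3]])
        z = z - np.linalg.solve(J, r)
    return z[0], max_iter

print(newton_plain(), newton_lifted())  # compare iteration counts of the two formulations
```

Comparing the two iteration counts shows how the choice of lifting changes the local behaviour of Newton's method, which is the effect the paper analyzes.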

A Sound Local Regret Methodology for Online Nonconvex Composite Optimization

Online nonconvex optimization addresses dynamic and complex decision-making problems arising in real-world tasks where the optimizer’s objective evolves with the intricate and changing nature of the underlying system. This paper studies an online nonconvex composite optimization model with limited first-order access, encompassing a wide range of practical scenarios. We define local regret using a … Read more
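
Since the teaser is cut off before the definition, it may help to recall one standard notion of local regret from the nonconvex online-optimization literature (the time-smoothed regret of Hazan, Singh, and Zhang, 2017); the paper defines its own variant for the composite, limited first-order access setting:

```latex
% Sliding-window (time-smoothed) local regret with window length w:
R_w(T) \;=\; \sum_{t=1}^{T} \bigl\| \nabla F_{t,w}(x_t) \bigr\|^{2},
\qquad
F_{t,w}(x) \;=\; \frac{1}{w} \sum_{i=0}^{w-1} f_{t-i}(x).
```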

An Interior-Point Algorithm for Continuous Nonlinearly Constrained Optimization with Noisy Function and Derivative Evaluations

An algorithm based on the interior-point methodology for solving continuous nonlinearly constrained optimization problems is proposed, analyzed, and tested. The distinguishing feature of the algorithm is that it presumes that only noisy values of the objective and constraint functions and their first-order derivatives are available. The algorithm is based on a combination of a previously … Read more
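
For orientation, interior-point methods of this kind typically solve a sequence of log-barrier subproblems of roughly the following form; this is a generic sketch, and the paper's contribution lies in handling noise in f, c, and their derivatives:

```latex
% Generic slack-and-barrier subproblem for inequality constraints c(x) \le 0:
\min_{x,\; s > 0} \;\; f(x) \;-\; \mu \sum_{i=1}^{m} \log s_i
\quad \text{s.t.} \quad c(x) + s = 0,
% solved approximately for a decreasing sequence of barrier parameters \mu \downarrow 0.
```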

Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent

This paper investigates the convergence properties of the hypergradient descent method (HDM), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of HDM using the online learning framework of [Gao24] and apply this analysis to develop new state-of-the-art adaptive gradient methods with empirical and … Read more
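
For readers unfamiliar with the heuristic, the classical hypergradient stepsize update (in the form popularized by Almeida et al. and, later, Baydin et al.) looks roughly as follows; this generic sketch is not one of the paper's analyzed variants:

```python
import numpy as np

def hypergradient_descent(grad_f, x0, alpha0=1e-3, beta=1e-4, iters=1000):
    """Classical hypergradient-descent heuristic: adapt the stepsize alpha online
    by descending on it using the hypergradient d f(x_t)/d alpha = -<g_t, g_{t-1}>."""
    x, alpha = np.asarray(x0, dtype=float), alpha0
    g_prev = np.zeros_like(x)
    for _ in range(iters):
        g = grad_f(x)
        alpha += beta * np.dot(g, g_prev)   # stepsize update along the hypergradient
        x -= alpha * g                      # gradient step with the adapted stepsize
        g_prev = g
    return x
```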

An Augmented Lagrangian Approach to Bi-Level Optimization via an Equilibrium Constrained Problem

Optimization problems involving equilibrium constraints capture diverse optimization settings such as bi-level optimization, min-max problems and games, and minimization subject to non-linear constraints. This paper introduces an Augmented Lagrangian approach with Hessian-vector product approximation to address an equilibrium constrained nonconvex nonsmooth optimization problem. The underlying model in particular captures various settings of bi-level optimization problems, … Read more
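
As a generic illustration (not the paper's exact model), a bi-level problem can be written with the lower-level stationarity condition as an equilibrium constraint and then penalized in an augmented Lagrangian; the derivatives of that constraint involve second derivatives of the lower-level objective, which is where Hessian-vector products enter:

```latex
% Illustrative equilibrium-constrained reformulation of a bi-level problem:
\min_{x,\, y} \; f(x, y)
\quad \text{s.t.} \quad G(x, y) := \nabla_y g(x, y) = 0,
\qquad
\mathcal{L}_{\rho}(x, y, \lambda) \;=\; f(x, y) + \lambda^{\top} G(x, y)
  + \tfrac{\rho}{2}\, \| G(x, y) \|^{2},
% where gradients of \mathcal{L}_{\rho} require products of \nabla^2 g with vectors,
% i.e., Hessian-vector products rather than full Hessians.
```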

Convergence of Descent Optimization Algorithms under Polyak-Lojasiewicz-Kurdyka Conditions

This paper develops a comprehensive convergence analysis for generic classes of descent algorithms in nonsmooth and nonconvex optimization under several conditions of the Polyak-Lojasiewicz-Kurdyka (PLK) type. Among other results, we prove the finite termination of generic algorithms under PLK conditions with lower exponents. Specifications are given to establish new convergence rates for inexact reduced … Read more
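
For context, conditions of the Polyak-Lojasiewicz-Kurdyka type referred to here include inequalities of the following shape (generic notation, not necessarily the paper's); the finite-termination results concern lower exponents in such inequalities:

```latex
% Polyak-Lojasiewicz inequality and a Kurdyka-Lojasiewicz-type inequality
% with exponent \theta (generic notation):
\tfrac{1}{2}\, \| \nabla f(x) \|^{2} \;\ge\; \mu \, \bigl( f(x) - f^{*} \bigr),
\qquad
\operatorname{dist}\bigl( 0, \partial f(x) \bigr) \;\ge\; c \, \bigl( f(x) - f^{*} \bigr)^{\theta},
\quad \theta \in (0, 1].
```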

prunAdag: an adaptive pruning-aware gradient method

A pruning-aware adaptive gradient method is proposed which classifies the variables into two sets before updating them using different strategies. This technique extends the “relevant/irrelevant” approach of Ding (2019) and Zimmer et al. (2022) and allows a posteriori sparsification of the solution of model parameter fitting problems. The new method is proved to be convergent … Read more
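
A toy sketch of such a two-set split follows; the relevance test and the two update rules below are placeholders chosen for illustration, not prunAdag's actual strategies:

```python
import numpy as np

def pruning_aware_step(x, g, step=1e-2, tau=1e-3, shrink=0.1):
    """Toy pruning-aware update: variables classified as 'irrelevant' (small
    magnitude) receive an extra shrinkage toward zero, while 'relevant'
    variables take an ordinary gradient step."""
    relevant = np.abs(x) > tau                 # crude relevance test on magnitude
    x_new = x.copy()
    x_new[relevant] -= step * g[relevant]                            # relevant set
    x_new[~relevant] -= step * g[~relevant] + shrink * x[~relevant]  # irrelevant set
    return x_new
```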

A necessary condition for the guarantee of the superiorization method

We study a method that involves principally convex feasibility-seeking and makes secondary efforts toward objective function value reduction. This is the well-known superiorization method (SM), in which the iterates of an asymptotically convergent iterative feasibility-seeking algorithm are perturbed by objective function nonascent steps. We investigate under what conditions a sequence generated by an SM … Read more
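
The basic superiorization template the abstract refers to can be sketched as follows, with a user-supplied feasibility-seeking operator and an objective gradient; the necessary condition studied in the paper is not reflected in this generic sketch:

```python
import numpy as np

def superiorized_sequence(feasibility_step, grad_phi, x0, iters=200, a=0.99):
    """Generic superiorization: perturb each feasibility-seeking step with a
    summable, objective-nonascent perturbation before applying the operator."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad_phi(x)
        norm = np.linalg.norm(g)
        v = -g / norm if norm > 0 else np.zeros_like(g)   # nonascent direction
        beta = a ** k                                      # summable step sizes
        x = feasibility_step(x + beta * v)                 # feasibility-seeking step
    return x
```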