On Sum-Rules for Second-Order Contingent Derivatives

We are concerned with contingent derivatives of set-valued mappings and with their second-order counterparts introduced by Ngai et al. Special attention is given to the development of new sum-rules for second-order contingent derivatives. To be precise, we want to find conditions under which the second-order contingent derivative of the sum of a smooth and a set-valued …
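For orientation, the contingent derivative of a set-valued map F at a point of its graph is defined through the contingent (Bouligand) cone to the graph, and for a continuously differentiable map g the first-order sum-rule takes the exact form below; this is a sketch of the classical first-order notions, not the paper's second-order statement:

\[
DF(\bar{x},\bar{y})(u) = \{\, v : (u,v) \in T_{\operatorname{gph} F}(\bar{x},\bar{y}) \,\},
\qquad
D(g+F)\bigl(\bar{x},\, g(\bar{x})+\bar{y}\bigr)(u) = \nabla g(\bar{x})\,u + DF(\bar{x},\bar{y})(u).
\]

The question the paper pursues is when an analogous formula survives at second order.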

Spherical Support Vector Machine for Interval-Valued Data

In this work we propose a generalization of the Spherical Support Vector Machine method, in which the separator is a sphere rather than a hyperplane, to interval-valued data. This type of data belongs to a more general class, known as Symbolic Data, in which features are described by sets, intervals, or histograms instead of classical arrays. This paradigm …
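For context, a common formulation of spherical separation for classical point data is the following, where the sphere's center c and radius R are the decision variables and C weights the slack penalties; this is background on the base model, while the paper's contribution is its extension to interval-valued inputs:

\[
\min_{c,\,R,\,\xi}\; R^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad
y_i\bigl(R^2 - \lVert x_i - c\rVert^2\bigr) \ge 1 - \xi_i,
\qquad \xi_i \ge 0,\quad i = 1,\dots,n.
\]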

Constructing QCQP Instances Equivalent to Their SDP Relaxations

General quadratically constrained quadratic programs (QCQPs) are challenging to solve as they are known to be NP-hard. A popular approach to approximating QCQP solutions is to use semidefinite programming (SDP) relaxations. It is well-known that the optimal value η of the SDP relaxation problem bounds the optimal value ζ of the QCQP from below, i.e., …
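Concretely, writing the QCQP in a standard form, the Shor relaxation replaces the rank-one matrix formed from x by a matrix variable X; every feasible x yields a feasible pair (x, X) with the same objective value, which gives η ≤ ζ (standard background, not the paper's construction):

\[
\zeta = \min_{x}\ x^{\mathsf T} Q_0\, x + 2 q_0^{\mathsf T} x
\quad\text{s.t.}\quad
x^{\mathsf T} Q_k\, x + 2 q_k^{\mathsf T} x + \gamma_k \le 0,\quad k = 1,\dots,m,
\]
\[
\eta = \min_{x,\,X}\ \langle Q_0, X\rangle + 2 q_0^{\mathsf T} x
\quad\text{s.t.}\quad
\langle Q_k, X\rangle + 2 q_k^{\mathsf T} x + \gamma_k \le 0,\quad k = 1,\dots,m,\qquad X \succeq x x^{\mathsf T}.
\]

Instances equivalent to their SDP relaxations are exactly those with η = ζ.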

A Study of First-Order Methods with a Probabilistic Relative-Error Gradient Oracle

This paper investigates the problem of minimizing a smooth function over a compact set with a probabilistic relative-error gradient oracle. The oracle succeeds with some probability, in which case it provides a relative-error approximation of the true gradient; otherwise it fails and returns an arbitrary vector. The optimizer cannot distinguish between successful and failed queries …
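A minimal sketch of such an oracle, with illustrative names and parameters (p for the success probability, delta for the relative-error bound); it captures the access model only, not the paper's algorithms:

```python
import numpy as np

def probabilistic_gradient_oracle(grad, x, p=0.9, delta=0.1, rng=None):
    """With probability p, return the true gradient up to relative error delta;
    otherwise return an arbitrary vector. The caller cannot tell which occurred."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad(x)
    if rng.random() < p:  # successful query: relative-error approximation
        noise = rng.standard_normal(g.shape)
        norm = np.linalg.norm(noise)
        if norm > 0:
            noise *= delta * np.linalg.norm(g) / norm
        return g + noise
    return rng.standard_normal(g.shape)  # failed query: arbitrary vector
```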

On Liftings that Improve Convergence Properties of Newton’s Method for Boundary Value Optimization Problems

The representation of a function in a higher-dimensional space is often referred to as lifting. Liftings can be used to reduce complexity. We are interested in the question of how liftings affect the local convergence of Newton’s method. We propose algorithms to construct liftings that potentially reduce the number of iterations via analysis of local …
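As a toy illustration of lifting (not the paper's construction), consider Newton's method on the scalar equation x^4 - 1 = 0 versus Newton's method on the lifted system obtained by introducing the auxiliary variable y = x^2, which makes every residual quadratic in the unknowns; all names below are illustrative:

```python
import numpy as np

# Original problem: solve f(x) = x**4 - 1 = 0 by Newton's method.
def newton_scalar(x, tol=1e-12, max_iter=100):
    for k in range(max_iter):
        fx = x**4 - 1.0
        if abs(fx) < tol:
            return x, k
        x -= fx / (4.0 * x**3)
    return x, max_iter

# A lifting: introduce y = x**2 and solve F(x, y) = (x**2 - y, y**2 - 1) = 0.
def newton_lifted(x, y, tol=1e-12, max_iter=100):
    z = np.array([x, y], dtype=float)
    for k in range(max_iter):
        F = np.array([z[0]**2 - z[1], z[1]**2 - 1.0])
        if np.linalg.norm(F) < tol:
            return z, k
        J = np.array([[2.0 * z[0], -1.0], [0.0, 2.0 * z[1]]])  # Jacobian of F
        z -= np.linalg.solve(J, F)
    return z, max_iter

print(newton_scalar(3.0))        # iterations needed on the original problem
print(newton_lifted(3.0, 9.0))   # iterations needed on the lifted problem
```

Comparing the two iteration counts from the same starting point illustrates the kind of effect a lifting can have on Newton's local behavior.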

A Sound Local Regret Methodology for Online Nonconvex Composite Optimization

Online nonconvex optimization addresses dynamic and complex decision-making problems arising in real-world tasks where the optimizer’s objective evolves with the intricate and changing nature of the underlying system. This paper studies an online nonconvex composite optimization model with limited first-order access, encompassing a wide range of practical scenarios. We define local regret using a …
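The paper defines its own local regret notion; for context, a widely used version (Hazan, Singh and Zhang, 2017) measures the squared norms of time-smoothed gradients over a window of length w:

\[
\mathfrak{R}_w(T) = \sum_{t=1}^{T}\bigl\lVert \nabla F_{t,w}(x_t)\bigr\rVert^{2},
\qquad
F_{t,w}(x) = \frac{1}{w}\sum_{i=0}^{w-1} f_{t-i}(x),
\]

where the f_t are the (nonconvex) losses revealed online.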

An Interior-Point Algorithm for Continuous Nonlinearly Constrained Optimization with Noisy Function and Derivative Evaluations

An algorithm based on the interior-point methodology for solving continuous nonlinearly constrained optimization problems is proposed, analyzed, and tested. The distinguishing feature of the algorithm is that it presumes that only noisy values of the objective and constraint functions and their first-order derivatives are available. The algorithm is based on a combination of a previously …
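For background, interior-point methods of this type revolve around the log-barrier subproblem for a problem with inequality constraints c(x) ≥ 0; in the setting of the paper, only noisy estimates of f, c, and their first derivatives enter these computations:

\[
\min_{x,\,s}\ f(x) - \mu \sum_{i=1}^{m} \ln s_i
\quad\text{s.t.}\quad c(x) - s = 0,\qquad s > 0,
\]

with the barrier parameter μ > 0 driven toward zero across subproblems.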

Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent

This paper investigates the convergence properties of the hypergradient descent method (HDM), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of HDM using the online learning framework of [Gao24] and apply this analysis to develop new state-of-the-art adaptive gradient methods with empirical and …
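In its simplest deterministic form, the hypergradient update adjusts the stepsize by a gradient step on the stepsize itself, using the inner product of consecutive gradients. A minimal sketch with illustrative parameter names (beta is the stepsize applied to the stepsize), not the paper's method:

```python
import numpy as np

def hypergradient_descent(grad, x0, alpha0=1e-3, beta=1e-6, n_steps=1000):
    """Gradient descent whose stepsize alpha is itself adapted online via
    alpha <- alpha + beta * <g_t, g_{t-1}> (the classic hypergradient update)."""
    x = np.asarray(x0, dtype=float)
    alpha, g_prev = alpha0, np.zeros_like(x)
    for _ in range(n_steps):
        g = grad(x)
        alpha += beta * float(np.dot(g, g_prev))  # hypergradient step on the stepsize
        x = x - alpha * g
        g_prev = g
    return x

# Toy usage on f(x) = 0.5 * ||x||^2, whose gradient is x:
x_final = hypergradient_descent(lambda x: x, np.ones(5))
print(x_final)
```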

An Augmented Lagrangian Approach to Bi-Level Optimization via an Equilibrium Constrained Problem

Optimization problems involving equilibrium constraints capture diverse optimization settings such as bi-level optimization, min-max problems and games, and minimization subject to nonlinear constraints. This paper introduces an Augmented Lagrangian approach with Hessian-vector product approximation to address an equilibrium-constrained nonconvex nonsmooth optimization problem. The underlying model in particular captures various settings of bi-level optimization problems, …
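Schematically, writing the equilibrium constraint as h(x, y) = 0 (for a bi-level problem with a smooth lower-level objective g, one may take h(x, y) = ∇_y g(x, y)), an augmented Lagrangian of the usual form is (a generic sketch, not the paper's exact scheme):

\[
\mathcal{L}_{\rho}(x, y, \lambda) = f(x, y) + \langle \lambda,\, h(x, y)\rangle + \frac{\rho}{2}\,\lVert h(x, y)\rVert^{2},
\]

whose minimization involves Jacobians of h, and hence second derivatives of g; this is where Hessian-vector products naturally arise.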

Convergence of Descent Optimization Algorithms under Polyak-Łojasiewicz-Kurdyka Conditions

This paper develops a comprehensive convergence analysis for generic classes of descent algorithms in nonsmooth and nonconvex optimization under several conditions of the Polyak-Łojasiewicz-Kurdyka (PLK) type. Among other results, we prove the finite termination of generic algorithms under the PLK conditions with lower exponents. Specifications are given to establish new convergence rates for inexact reduced …
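Conditions of this type are commonly written as gradient-domination inequalities with an exponent; one standard form, for a function f with minimal value f* and subdifferential ∂f, is

\[
d\bigl(0, \partial f(x)\bigr) \ge c\,\bigl(f(x) - f^{*}\bigr)^{\theta}, \qquad c > 0,
\]

where θ = 1/2 with ∂f(x) = {∇f(x)} recovers the classical Polyak-Łojasiewicz inequality; smaller exponents make the inequality stronger near minimizers, which is the regime behind finite-termination results.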