Adaptive Conditional Gradient Descent

Selecting an effective step-size is a fundamental challenge in first-order optimization, especially for problems with non-Euclidean geometries. This paper presents a novel adaptive step-size strategy for optimization algorithms that rely on linear minimization oracles, as used in the Conditional Gradient or non-Euclidean Normalized Steepest Descent algorithms. Using a simple heuristic to estimate a local Lipschitz …
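
The abstract truncates before the details, but the pattern it describes, a conditional gradient step whose step-size uses an on-the-fly Lipschitz estimate, can be sketched. Everything below (the l1-ball oracle, the secant-style estimator, the short-step rule) is an illustrative guess, not the paper's method:

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    """Linear minimization oracle over the l1 ball: a signed vertex."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def adaptive_fw(grad, x0, radius=1.0, iters=100):
    x, x_prev, g_prev, L = x0, None, None, 1.0
    for _ in range(iters):
        g = grad(x)
        if g_prev is not None:
            dx = np.linalg.norm(x - x_prev)
            if dx > 0:  # secant-style local Lipschitz estimate
                L = max(np.linalg.norm(g - g_prev) / dx, 1e-12)
        d = lmo_l1(g, radius) - x
        # "short step" from the smoothness upper bound, clipped to [0, 1]
        gamma = min(1.0, max(0.0, -(g @ d)) / (L * (d @ d) + 1e-12))
        x_prev, g_prev = x, g
        x = x + gamma * d
    return x

# Example: minimize 0.5*||x - b||^2 over the unit l1 ball.
b = np.array([0.5, -2.0, 0.3])
x = adaptive_fw(lambda x: x - b, np.zeros(3))
```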

On the Complexity of Lower-Order Implementations of Higher-Order Methods

In this work, we propose a method for minimizing nonconvex functions with Lipschitz continuous \(p\)th-order derivatives, for any \(p \geq 1\). The method, however, only requires derivative information up to order \((p-1)\), since the \(p\)th-order derivatives are approximated via finite differences. To ensure oracle efficiency, instead of computing finite-difference approximations at every iteration, we reuse …
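
A minimal sketch of the \(p = 2\) case, where second derivatives are rebuilt from gradient differences and the approximation is reused for several iterations; the refresh rule and the damping below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def fd_hessian(grad, x, h=1e-6):
    """Approximate the Hessian column-by-column from gradient differences."""
    n, g0 = x.size, grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize

def newton_like(grad, x, iters=20, refresh_every=4, damping=1e-3):
    H = None
    for k in range(iters):
        if k % refresh_every == 0:  # reuse the approximation between refreshes
            H = fd_hessian(grad, x)
        x = x - np.linalg.solve(H + damping * np.eye(x.size), grad(x))
    return x

# Example: f(x) = ||x||^2, so grad(x) = 2x and the true Hessian is 2I.
x = newton_like(lambda x: 2.0 * x, np.array([3.0, -1.0]))
```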

Progressively Sampled Equality-Constrained Optimization

An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which …
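
An illustrative sketch of the progressive-sampling idea (not the paper's algorithm): each subproblem handles the sampled constraint terms with a quadratic penalty, and successive subproblems use larger samples. The penalty method, growth schedule, and step-size rule are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_subproblem(f_grad, terms, x, mu, steps=200, lr=0.1):
    """Gradient descent on f(x) + (mu/2) * c_S(x)^2 over the sampled terms."""
    step = lr / (1.0 + mu)  # conservative step that shrinks with the penalty
    for _ in range(steps):
        c_val = np.mean([c(x) for c, _ in terms])
        c_grad = np.mean([g(x) for _, g in terms], axis=0)
        x = x - step * (f_grad(x) + mu * c_val * c_grad)
    return x

def progressive_sampling(f_grad, all_terms, x0, rounds=4):
    x, size, mu = x0, 2, 1.0
    for _ in range(rounds):
        idx = rng.choice(len(all_terms), size=min(size, len(all_terms)),
                         replace=False)
        x = solve_subproblem(f_grad, [all_terms[i] for i in idx], x, mu)
        size, mu = 2 * size, 10.0 * mu  # grow the sample, tighten the penalty
    return x

# Example: minimize ||x||^2 subject to mean_i(a_i @ x - b_i) = 0.
A, b = rng.normal(size=(8, 3)), rng.normal(size=8)
terms = [(lambda x, a=a, t=t: a @ x - t, lambda x, a=a: a)
         for a, t in zip(A, b)]
x = progressive_sampling(lambda x: 2.0 * x, terms, np.zeros(3))
```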

A Fast Newton Method Under Local Lipschitz Smoothness

A new, fast second-order method is proposed that achieves the optimal \(\mathcal{O}\left(|\log(\epsilon)|\,\epsilon^{-3/2}\right)\) complexity to obtain first-order \(\epsilon\)-stationary points. Crucially, this is deduced without assuming the standard global Lipschitz Hessian continuity condition, but only using an appropriate local smoothness requirement. The algorithm exploits Hessian information to compute a Newton step and a negative curvature step when …
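
The two-step pattern the abstract names, a Newton step when the Hessian is positive definite and a negative-curvature step otherwise, can be sketched as follows; the step-size rules and local-smoothness safeguards of the actual method are omitted:

```python
import numpy as np

def newton_or_negative_curvature(grad, hess, x, eps=1e-8, alpha=1.0):
    g, H = grad(x), hess(x)
    lam, V = np.linalg.eigh(H)          # eigenvalues in ascending order
    if lam[0] > eps:                    # Hessian positive definite: Newton step
        return x - np.linalg.solve(H, g)
    d = V[:, 0]                         # direction of most negative curvature
    if g @ d > 0:                       # orient the direction downhill
        d = -d
    return x + alpha * d

# Example: f(x) = x0^2 - x1^2 has a saddle at the origin; the step escapes it
# along the negative-curvature direction.
x = newton_or_negative_curvature(
    lambda x: np.array([2.0 * x[0], -2.0 * x[1]]),
    lambda x: np.diag([2.0, -2.0]),
    np.zeros(2))
```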

NonOpt: Nonconvex, Nonsmooth Optimizer

NonOpt, a C++ software package for minimizing locally Lipschitz objective functions, is presented. The software is intended primarily for minimizing objective functions that are nonconvex and/or nonsmooth. The package has implementations of two main algorithmic strategies: a gradient-sampling and a proximal-bundle method. Each algorithmic strategy can employ quasi-Newton techniques for accelerating convergence in practice. The …
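
For orientation, a sketch of the gradient-sampling idea (not NonOpt's C++ implementation): sample gradients near the iterate and use the least-norm element of their convex hull as a stabilized descent direction. The exact subproblem is a QP; a few projected-gradient steps stand in for it here:

```python
import numpy as np

rng = np.random.default_rng(1)

def least_norm_in_hull(G, steps=200):
    """Approximate the least-norm convex combination of the rows of G."""
    m = G.shape[0]
    M = G @ G.T
    step = 1.0 / (np.linalg.norm(M, 2) + 1e-12)  # 1 / Lipschitz constant
    w = np.full(m, 1.0 / m)
    for _ in range(steps):
        w = np.maximum(w - step * (M @ w), 1e-12)
        w /= w.sum()                    # crude simplex renormalization
    return G.T @ w

def gradient_sampling_step(grad, x, radius=1e-2, samples=10, lr=1e-1):
    pts = x + radius * rng.normal(size=(samples, x.size))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])
    return x - lr * least_norm_in_hull(G)

# Example: the nonsmooth f(x) = ||x||_1, whose subgradients are sign vectors.
x = np.array([1.0, -0.5])
for _ in range(20):
    x = gradient_sampling_step(np.sign, x)
```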

A Sound Local Regret Methodology for Online Nonconvex Composite Optimization

Online nonconvex optimization addresses dynamic and complex decision-making problems arising in real-world tasks where the optimizer’s objective evolves with the intricate and changing nature of the underlying system. This paper studies an online nonconvex composite optimization model with limited first-order access, encompassing a wide range of practical scenarios. We define local regret using a …
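
The definition is cut off, but for context, a standard local-regret notion from the online nonconvex literature (the sliding-window version of Hazan, Singh, and Zhang, 2017) averages recent losses and accumulates their gradient norms; the paper's composite, limited-access variant may differ:

\[
\mathfrak{R}_w(T) \;=\; \sum_{t=1}^{T} \bigl\lVert \nabla F_{t,w}(x_t) \bigr\rVert^2,
\qquad
F_{t,w}(x) \;=\; \frac{1}{w} \sum_{i=0}^{w-1} f_{t-i}(x),
\]

where \(w\) is the window length and \(f_s \equiv 0\) for \(s \leq 0\).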

An Augmented Lagrangian Approach to Bi-Level Optimization via an Equilibrium Constrained Problem

Optimization problems involving equilibrium constraints capture diverse optimization settings such as bi-level optimization, min-max problems and games, and minimization subject to nonlinear constraints. This paper introduces an Augmented Lagrangian approach with Hessian-vector product approximation to address an equilibrium-constrained nonconvex nonsmooth optimization problem. The underlying model in particular captures various settings of bi-level optimization problems, …
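
A hedged sketch of the augmented Lagrangian loop under a simplifying assumption: the equilibrium map \(g\) is a gradient field, so its Jacobian is symmetric and the products \(J(z)^\top w = J(z)w\) can be formed by finite differences of \(g\), mimicking the Hessian-vector product approximation mentioned above. The paper's setting is more general:

```python
import numpy as np

def al_inner(F_grad, g, z, lam, rho, inner=100, lr=0.5, h=1e-6):
    step = lr / (1.0 + rho)              # step shrinks with rho for stability
    for _ in range(inner):               # approximately minimize the AL in z
        gz = g(z)
        w = lam + rho * gz
        Jw = (g(z + h * w) - gz) / h     # finite-difference Jacobian-vector product
        z = z - step * (F_grad(z) + Jw)  # grad of F + lam^T g + (rho/2)||g||^2
    return z

def augmented_lagrangian(F_grad, g, z0, lam0, rho=1.0, outer=8):
    z, lam = z0.copy(), lam0.copy()
    for _ in range(outer):
        z = al_inner(F_grad, g, z, lam, rho)
        lam = lam + rho * g(z)           # multiplier update
        rho *= 2.0                       # gradually tighten the penalty
    return z, lam

# Toy run: minimize 0.5*||z - 1||^2 subject to the equilibrium condition
# g(z) = z = 0 (the gradient of 0.5*||z||^2); the solution is z = 0.
z, lam = augmented_lagrangian(lambda z: z - 1.0, lambda z: z,
                              np.zeros(2), np.zeros(2))
```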

A proximal-perturbed Bregman ADMM for solving nonsmooth and nonconvex optimization problems

In this paper, we focus on a linearly constrained composite minimization problem whose objective function is possibly nonsmooth and nonconvex. Unlike the traditional construction of the augmented Lagrangian function, we provide a proximal-perturbed augmented Lagrangian and then develop a new Bregman Alternating Direction Method of Multipliers (ADMM). Under mild assumptions, we show that the novel augmented …
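
The construction is truncated, but the ADMM skeleton it builds on can be illustrated on a toy splitting, min \(0.5\lVert x - c\rVert^2 + \lambda\lVert z\rVert_1\) subject to \(x - z = 0\), with a Euclidean Bregman proximal term \((\tau/2)\lVert x - x_k\rVert^2\) added to the x-update. This is a plain proximal/Bregman ADMM for orientation; the paper's proximal perturbation of the augmented Lagrangian itself is not reproduced here:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_admm(c, lam=0.1, rho=1.0, tau=0.5, iters=200):
    x = np.zeros_like(c)
    z = np.zeros_like(c)
    u = np.zeros_like(c)                 # scaled multiplier
    for _ in range(iters):
        # x-update: quadratic prox plus the Bregman term (tau/2)*||x - x_k||^2
        x = (c + rho * (z - u) + tau * x) / (1.0 + rho + tau)
        z = soft(x + u, lam / rho)       # z-update: l1 proximal step
        u = u + x - z                    # dual update on the constraint x = z
    return x, z

x, z = bregman_admm(np.array([1.5, -0.05, 0.3]))  # z ~ soft(c, lam)
```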

An adaptive relaxation-refinement scheme for multi-objective mixed-integer nonconvex optimization

In this work, we present an algorithm for computing an enclosure for multi-objective mixed-integer nonconvex optimization problems. In contrast to existing solvers for this type of problem, this algorithm is not based on a branch-and-bound scheme but rather relies on a relax-and-refine approach. While this is an established technique in single-objective optimization, several adaptations to …
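
As a single-objective, univariate stand-in for the relax-and-refine idea (the paper's multi-objective mixed-integer scheme is far more involved), the sketch below relaxes a Lipschitz function on a grid to get a lower bound and refines the grid, never branching, until the enclosure of the optimal value is tight:

```python
import numpy as np

def relax_and_refine(f, a, b, L, tol=1e-3):
    """Enclosure [lb, ub] of min f on [a, b] for an L-Lipschitz f (assumed)."""
    h = (b - a) / 4.0
    while True:
        t = np.linspace(a, b, int(round((b - a) / h)) + 1)
        ub = f(t).min()            # best grid point: valid upper bound
        lb = ub - L * h / 2.0      # Lipschitz relaxation: valid lower bound
        if ub - lb <= tol:
            return lb, ub
        h /= 2.0                   # refine the relaxation everywhere

# Example with Lipschitz constant max|3*cos(3x) + 0.5| = 3.5 on [0, 3].
lb, ub = relax_and_refine(lambda x: np.sin(3.0 * x) + 0.5 * x, 0.0, 3.0, L=3.5)
```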

Unifying nonlinearly constrained nonconvex optimization

Derivative-based iterative methods for nonlinearly constrained nonconvex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework, built on four common ingredients, that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, …
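
The two ingredients the abstract names explicitly, a direction strategy and a globalization mechanism, already illustrate the compositional pattern; the sketch below is a generic Python illustration, not Uno's C++ interface, and the remaining ingredients are omitted:

```python
import numpy as np

def steepest_descent_direction(grad, x):
    """One interchangeable 'ingredient': how to compute a descent direction."""
    return -grad(x)

def armijo_globalization(f, grad, x, d, beta=1e-4, shrink=0.5):
    """Another ingredient: a backtracking line search promoting convergence."""
    t, fx, slope = 1.0, f(x), grad(x) @ d
    while f(x + t * d) > fx + beta * t * slope:
        t *= shrink
    return x + t * d

def solve(f, grad, x, direction, globalize, iters=50):
    """Abstract driver: compose any direction strategy with any globalization."""
    for _ in range(iters):
        x = globalize(f, grad, x, direction(grad, x))
    return x

x = solve(lambda x: float(x @ x), lambda x: 2.0 * x, np.array([3.0, -1.0]),
          steepest_descent_direction, armijo_globalization)
```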