First-order penalty methods for bilevel optimization

In this paper we study a class of unconstrained and constrained bilevel optimization problems in which the lower-level part is a convex optimization problem, while the upper-level part is possibly a nonconvex optimization problem. In particular, we propose penalty methods for solving them, whose subproblems turn out to be a structured minimax problem and … Read more
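To fix ideas, a generic template (not the paper's exact formulation) for a bilevel problem with a convex lower level and its value-function penalty reformulation reads
\[
\min_{x,\,y}\; f(x,y) \quad \text{s.t.}\quad y \in \arg\min_{z}\; g(x,z),
\qquad\leadsto\qquad
\min_{x,\,y}\; f(x,y) + \rho\Bigl(g(x,y) - \min_{z}\, g(x,z)\Bigr),
\]
where \(g(x,\cdot)\) is convex and \(\rho>0\) is the penalty parameter; the inner minimization over \(z\) is what gives the penalty subproblem its minimax structure.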

Fixed-Point Automatic Differentiation of Forward–Backward Splitting Algorithms for Partly Smooth Functions

A large class of non-smooth practical optimization problems can be written as the minimization of a sum of smooth and partly smooth functions. We consider such structured problems which also depend on a parameter vector, and we study the problem of differentiating their solution mapping with respect to the parameter, which has far-reaching applications in sensitivity … Read more
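As a minimal illustration of the setting (an assumed lasso-type instance, not the paper's algorithm), the sketch below runs forward-backward splitting for \(\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1\) and propagates a derivative iterate with respect to the parameter \(\lambda\) through the fixed-point iteration; once the support stabilizes (the partly smooth regime), the derivative iterates settle as well.

```python
# Sketch only: forward-backward splitting for the lasso, with iterative
# differentiation of the iterates with respect to the regularization weight lam.
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def fb_splitting_with_derivative(A, b, lam, n_iter=500):
    n = A.shape[1]
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size <= 1 / Lipschitz constant
    x = np.zeros(n)                              # primal iterate
    dx = np.zeros(n)                             # d x_k / d lam, propagated alongside x_k
    for _ in range(n_iter):
        u = x - tau * A.T @ (A @ x - b)          # forward (gradient) step
        du = dx - tau * A.T @ (A @ dx)           # its derivative w.r.t. lam
        active = np.abs(u) > tau * lam           # support selected by the prox
        x = soft_threshold(u, tau * lam)         # backward (proximal) step
        # Chain rule through soft-thresholding: identity on the active set in u,
        # and derivative -tau * sign(u) there with respect to lam.
        dx = np.where(active, du - tau * np.sign(u), 0.0)
    return x, dx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
    x_star, dx_dlam = fb_splitting_with_derivative(A, b, lam=0.5)
```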

Parametric complexity analysis for a class of first-order Adagrad-like algorithms

A class of algorithms for optimization in the presence of noise is presented that does not require the evaluation of the objective function. This class generalizes the well-known Adagrad method. The complexity of this class is then analyzed as a function of its parameters, and it is shown that some methods of the class enjoy … Read more

First-Order Objective-Function-Free Optimization Algorithms and Their Complexity

A class of algorithms for unconstrained nonconvex optimization is considered where the value of the objective function is never computed. The class contains a deterministic version of the first-order Adagrad method typically used for the minimization of noisy functions, but it also allows the use of second-order information when available. The rate of convergence of methods in … Read more
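For concreteness, a minimal sketch of an Adagrad-like, objective-function-free iteration (an assumed generic form, not the exact class analyzed in the paper): only gradients are evaluated, never \(f(x_k)\).

```python
# Sketch of a deterministic Adagrad-like OFFO step: no objective evaluations.
import numpy as np

def adagrad_offo(grad, x0, n_iter=1000, eta=1.0, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    g2_sum = np.zeros_like(x)                    # running sum of squared gradients
    for _ in range(n_iter):
        g = grad(x)
        g2_sum += g ** 2
        x = x - eta * g / np.sqrt(eps + g2_sum)  # coordinate-wise Adagrad scaling
    return x

if __name__ == "__main__":
    # Example: the nonconvex Rosenbrock function, accessed through its gradient only.
    def rosen_grad(x):
        return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                         200 * (x[1] - x[0] ** 2)])
    print(adagrad_offo(rosen_grad, [-1.2, 1.0], n_iter=20000))
```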

Global Complexity Bound of a Proximal ADMM for Linearly-Constrained Nonseparable Nonconvex Composite Programming

This paper proposes and analyzes a dampened proximal alternating direction method of multipliers (DP.ADMM) for solving linearly-constrained nonconvex optimization problems where the smooth part of the objective function is nonseparable. Each iteration of DP.ADMM consists of: (i) a sequence of partial proximal augmented Lagrangian (AL) updates, (ii) an under-relaxed Lagrange multiplier update, and (iii) a … Read more
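For orientation, a generic (not the paper's exact) under-relaxed multiplier update for a constraint \(Ax + By = b\) in an augmented Lagrangian scheme with penalty parameter \(\beta>0\) takes the form
\[
\lambda^{k+1} = \lambda^{k} + \theta\,\beta\,\bigl(A x^{k+1} + B y^{k+1} - b\bigr),
\qquad \theta \in (0,1),
\]
where the factor \(\theta<1\) provides the dampening alluded to in the method's name.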

Frank-Wolfe and friends: a journey into projection-free first-order optimization methods

Invented some 65 years ago in a seminal paper by Marguerite Straus-Frank and Philip Wolfe, the Frank-Wolfe method has recently enjoyed a remarkable revival, fuelled by the need for fast and reliable first-order optimization methods in Data Science and other relevant application areas. This review tries to explain the success of this approach by illustrating the versatility … Read more
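As a reminder of the basic scheme the review builds on, here is a minimal sketch of the classical Frank-Wolfe iteration on the \(\ell_1\)-ball, whose linear minimization oracle returns a signed, scaled coordinate vector (illustrative example with an assumed least-squares objective):

```python
# Classical Frank-Wolfe (conditional gradient) on the l1-ball of radius r.
import numpy as np

def frank_wolfe_l1(grad, x0, r=1.0, n_iter=200):
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = grad(x)
        # Linear minimization oracle: argmin_{||s||_1 <= r} <g, s>
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -r * np.sign(g[i])
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        x = x + gamma * (s - x)          # convex combination keeps x feasible
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    grad = lambda x: A.T @ (A @ x - b)   # least-squares gradient
    print(frank_wolfe_l1(grad, np.zeros(10), r=2.0))
```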

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient

We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operations are matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized … Read more
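To make the core iteration concrete, the sketch below applies vanilla PDHG to a standard-form LP \(\min\{c^\top x : Ax=b,\ x\ge 0\}\) via its saddle-point formulation. This is only the textbook iteration (PDLP adds restarts, preconditioning, presolve and other enhancements on top), but it shows why each iteration costs little more than two matrix-vector products.

```python
# Vanilla PDHG for  min c^T x  s.t.  A x = b, x >= 0  (not the PDLP implementation).
import numpy as np

def pdhg_lp(c, A, b, n_iter=5000):
    m, n = A.shape
    tau = sigma = 0.9 / np.linalg.norm(A, 2)   # step sizes with tau*sigma*||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # projected primal step
        y = y + sigma * (b - A @ (2 * x_new - x))          # dual step with extrapolation
        x = x_new
    return x, y

if __name__ == "__main__":
    # Tiny LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at x = (1, 0)).
    c = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    x, y = pdhg_lp(c, A, b)
    print(x)
```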

Average Curvature FISTA for Nonconvex Smooth Composite Optimization Problems

A previous paper by the authors introduced an accelerated composite gradient (ACG) variant, namely AC-ACG, for solving nonconvex smooth composite optimization (N-SCO) problems. In contrast to other ACG variants, AC-ACG estimates the local upper curvature of the N-SCO problem by using the average of the observed upper-Lipschitz curvatures obtained during the previous iterations, and uses this estimation … Read more
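One plausible reading of the averaging idea (stated loosely here, not the paper's exact definitions): with an observed curvature at iteration \(j\) of
\[
C_j \;=\; \frac{2\bigl[f(x_{j+1}) - f(x_j) - \langle \nabla f(x_j),\, x_{j+1}-x_j\rangle\bigr]}{\|x_{j+1}-x_j\|^2},
\qquad
M_k \;=\; \frac{1}{k}\sum_{j=1}^{k} C_j,
\]
the running average \(M_k\) plays the role that a global Lipschitz constant would otherwise play in the ACG step size.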

FrankWolfe.jl: a high-performance and flexible toolbox for Frank-Wolfe algorithms and Conditional Gradients

We present FrankWolfe.jl, an open-source implementation of several popular Frank-Wolfe and Conditional Gradients variants for first-order constrained optimization. The package is designed with flexibility and high performance in mind, allowing for easy extension and relying on few assumptions regarding the user-provided functions. It leverages Julia's multiple dispatch feature and interfaces smoothly with generic linear optimization … Read more

Universal Conditional Gradient Sliding for Convex Optimization

In this paper, we present a first-order projection-free method, namely, the universal conditional gradient sliding (UCGS) method, for computing ε-approximate solutions to convex differentiable optimization problems. For objective functions with Hölder continuous gradients, we show that UCGS terminates with an ε-solution after at most \(\mathcal{O}\bigl((1/\varepsilon)^{2/(1+3\nu)}\bigr)\) gradient evaluations and \(\mathcal{O}\bigl((1/\varepsilon)^{4/(1+3\nu)}\bigr)\) linear objective optimizations, where … Read more
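Here \(\nu \in (0,1]\) denotes the Hölder exponent of the gradient: recall that \(\nabla f\) is Hölder continuous with exponent \(\nu\) and constant \(M_\nu\) if
\[
\|\nabla f(x) - \nabla f(y)\| \;\le\; M_\nu\,\|x-y\|^{\nu} \qquad \text{for all } x,\,y,
\]
so the case \(\nu = 1\) recovers the usual Lipschitz-gradient (smooth) setting.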