Inexact FISTA-like Methods with Adaptive Backtracking

Accelerated proximal gradient methods have become a useful tool in large-scale convex optimization, especially for variational regularization with non-smooth priors. Prevailing convergence analyses assume that the proximal and gradient steps are performed exactly. However, in some practical applications the proximal or the gradient steps must be computed inexactly, which can harm convergence speed … Read more
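
For reference, here is a minimal sketch of the exact-step baseline that such analyses relax: classical FISTA with backtracking on a LASSO instance, where both the gradient and the soft-thresholding prox are evaluated exactly. Function names and data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def soft_threshold(v, tau):
    """Exact prox of tau * ||.||_1 (the step that may only be available inexactly in practice)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_backtracking(A, b, lam, x0, L0=1.0, eta=2.0, iters=200):
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)      # smooth part of the objective
    grad = lambda x: A.T @ (A @ x - b)                # its exact gradient
    x_prev, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        g = grad(y)
        # Backtracking: increase L until the quadratic upper bound holds at the prox point.
        while True:
            x = soft_threshold(y - g / L, lam / L)
            d = x - y
            if f(x) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum/extrapolation step
        x_prev, t = x, t_next
    return x_prev

# Illustrative use:
# A = np.random.randn(50, 100); b = np.random.randn(50)
# x_hat = fista_backtracking(A, b, lam=0.1, x0=np.zeros(100))
```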

Performance Estimation for Smooth and Strongly Convex Sets

We extend recent computer-assisted design and analysis techniques for first-order optimization over structured functions, known as performance estimation, to apply to structured sets. We prove “interpolation theorems” for smooth and strongly convex sets with Slater points and bounded diameter, showing that a wide range of extremal questions amount to structured mathematical programs. Prior function interpolation theorems are recovered … Read more
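
For context, one common definition of a strongly convex set is sketched below; the precise definitions and the notion of set smoothness used in the paper may differ.

```latex
% One common definition (illustrative): a closed convex set C is
% \alpha-strongly convex if a curvature-sized ball term keeps points inside C.
\[
\gamma x + (1-\gamma)\, y + \tfrac{\alpha}{2}\,\gamma(1-\gamma)\,\|x-y\|^{2}\, z \in C
\quad \text{for all } x, y \in C,\ \gamma \in [0,1],\ \|z\| \le 1 .
\]
% Set smoothness is then typically phrased via a curvature (normal-vector)
% bound on the boundary, or dually via strong convexity of a polar object.
```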

Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. A major advantage of RPF-SFISTA is that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of … Read more
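
As a point of contrast, here is a minimal sketch of a generic adaptive-restart accelerated gradient loop (gradient-based restart, not RPF-SFISTA itself); it assumes the Lipschitz constant L is supplied, which is exactly the kind of knowledge the parameter-free methods above avoid.

```python
import numpy as np

def agd_with_restart(grad, x0, L, iters=500):
    """Nesterov-style accelerated gradient with a gradient-based restart test."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = grad(y)
        x_next = y - g / L                                # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_next = x_next + ((t - 1.0) / t_next) * (x_next - x)
        if g @ (x_next - x) > 0:                          # momentum opposes descent: restart
            y_next, t_next = x_next.copy(), 1.0
        x, y, t = x_next, y_next, t_next
    return x
```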

Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we consider a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. A basic double-proximal augmented Lagrangian method (DP-ALM) is developed, which enjoys a flexible dual stepsize and a proximal subproblem with a relatively smaller proximal parameter. By a novel prediction-correction reformulation of the proposed DP-ALM and by … Read more
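
For orientation, the standard linearly constrained model and a generic proximal augmented Lagrangian template are written below; DP-ALM refines such a template with two proximal terms and a flexible dual stepsize, so the display is schematic rather than the method itself.

```latex
% Standard model and augmented Lagrangian:
\[
\min_{x}\ f(x) \quad \text{s.t.} \quad Ax = b,
\qquad
\mathcal{L}_{\beta}(x,\lambda) = f(x) + \langle \lambda,\, Ax - b\rangle + \tfrac{\beta}{2}\,\|Ax - b\|^{2}.
\]
% Generic proximal ALM template (P is a proximal weighting, s a dual stepsize):
\[
x^{k+1} \in \operatorname*{arg\,min}_{x}\ \Big\{ \mathcal{L}_{\beta}(x,\lambda^{k}) + \tfrac{1}{2}\,\|x - x^{k}\|_{P}^{2} \Big\},
\qquad
\lambda^{k+1} = \lambda^{k} + s\,\beta\,(A x^{k+1} - b).
\]
```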

Scalable Projection-Free Optimization Methods via MultiRadial Duality Theory

Recent works have developed new projection-free first-order methods that use linesearches and normal vector computations to maintain feasibility. These oracles can be cheaper than orthogonal projection or linear optimization subroutines, but they have the drawback of requiring a known strictly feasible point with respect to which the linesearches are performed. In this work, we develop new … Read more
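
To illustrate the kind of oracle involved, here is a minimal bisection linesearch that finds a near-boundary point of a set C = {z : g(z) <= 0} along the segment from a known strictly feasible anchor e to a query point x. The names g, e, and x are illustrative assumptions, not the paper's notation.

```python
def boundary_linesearch(g, e, x, tol=1e-10):
    """Bisection along the segment [e, x] for the boundary of C = {z : g(z) <= 0},
    assuming g(e) < 0 (strictly feasible anchor) and g(x) > 0 (infeasible query).
    A normal vector at the returned point can then be taken from the gradient of g."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(e + mid * (x - e)) <= 0.0:
            lo = mid   # still feasible: move toward x
        else:
            hi = mid   # infeasible: pull back toward e
    return e + lo * (x - e)
```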

Accelerated Gradient Dynamics on Riemannian Manifolds: Faster Rate and Trajectory Convergence

In order to minimize a differentiable geodesically convex function, we study a second-order dynamical system on Riemannian manifolds with an asymptotically vanishing damping term of the form \(\alpha/t\). For positive values of \(\alpha\), convergence rates for the objective values and convergence of the trajectory are derived. We emphasize the crucial role of the curvature of the … Read more
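
Schematically, the Euclidean prototype of such dynamics and its Riemannian analogue (with covariant derivative and Riemannian gradient) read as follows; the exact system studied in the paper may carry additional structure.

```latex
% Euclidean prototype with vanishing damping \alpha/t:
\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0 ,
\]
% and its Riemannian analogue, written with the covariant derivative along the
% trajectory and the Riemannian gradient (schematic):
\[
\nabla_{\dot{x}(t)}\,\dot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \operatorname{grad} f(x(t)) = 0 .
\]
```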

Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. While, to this day, optimal rates of convergence are achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov’s algorithm), we show that our system is the first to feature closed-loop damping while exhibiting a rate arbitrarily close to the optimal one. … Read more
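
The distinction at stake can be written schematically: in open-loop damping the friction coefficient depends on time only, while in closed-loop damping it is fed back from the state. The paper's specific Lyapunov damping is not reproduced here.

```latex
% Open-loop damping: the friction coefficient depends on time only,
\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0 .
\]
% Closed-loop damping: the coefficient is fed back from the state (schematic;
% the paper's Lyapunov damping specifies a particular \gamma):
\[
\ddot{x}(t) + \gamma\big(x(t), \dot{x}(t)\big)\,\dot{x}(t) + \nabla f(x(t)) = 0 .
\]
```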

Self-concordant Smoothing for Large-Scale Convex Composite Optimization

We introduce a notion of self-concordant smoothing for minimizing the sum of two convex functions, one of which is smooth and the other possibly nonsmooth. The key highlight of our approach is a natural property of the resulting problem structure, which provides a variable-metric selection method and a step-length selection rule … Read more
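
For context, the composite model and the classical self-concordance inequality underlying the smoothing notion are recalled below; this is illustrative only, and the paper's smoothing framework is richer.

```latex
% Composite model:
\[
\min_{x}\ F(x) = f(x) + g(x), \qquad f \ \text{smooth convex}, \quad g \ \text{convex, possibly nonsmooth}.
\]
% Classical self-concordance inequality for a convex h, along any line
% t \mapsto x + t v with \varphi(t) = h(x + t v):
\[
|\varphi'''(t)| \le 2\,\varphi''(t)^{3/2} .
\]
```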

Information Complexity of Mixed-integer Convex Optimization

We investigate the information complexity of mixed-integer convex optimization under different types of oracles. We establish new lower bounds for the standard first-order oracle, improving upon the previous best known lower bound. This leaves only a lower-order linear term (in the dimension) as the gap between the lower and upper bounds. This is derived … Read more

Exact convergence rate of the last iterate in subgradient methods

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at each … Read more
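
For reference, a minimal generic subgradient loop that returns both the last iterate (the object analyzed here) and the classical best iterate for comparison; the step-size rule and the scaling constant R are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, R=1.0, iters=1000):
    """Generic subgradient method with normalized 1/sqrt(k) steps."""
    x = x0.copy()
    x_best, f_best = x.copy(), f(x)
    for k in range(1, iters + 1):
        g = subgrad(x)
        step = R / (np.linalg.norm(g) * np.sqrt(k) + 1e-16)  # normalized diminishing step
        x = x - step * g
        if f(x) < f_best:                                     # also track the best iterate
            x_best, f_best = x.copy(), f(x)
    return x, x_best   # last iterate, best iterate
```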