Primal-dual proximal bundle and conditional gradient methods for convex problems

This paper studies the primal-dual convergence and iteration-complexity of proximal bundle methods for solving nonsmooth problems with convex structure. More specifically, we develop a family of primal-dual proximal bundle methods for solving convex nonsmooth composite optimization problems and establish the iteration-complexity in terms of a primal-dual gap. We also propose a class of proximal bundle … Read more
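
For orientation, a classical proximal bundle step (a minimal sketch of the standard construction, not necessarily the exact subproblem analyzed in the paper) minimizes a cutting-plane model of the nonsmooth function plus a proximal term around the current prox-center $\hat{x}_k$:

$$x_{k+1} \in \operatorname*{argmin}_x \Big\{ \max_{j \in J_k}\big[f(x_j) + \langle s_j, x - x_j\rangle\big] + \tfrac{1}{2\lambda}\|x - \hat{x}_k\|^2 \Big\}, \qquad s_j \in \partial f(x_j),$$

where $J_k$ indexes the accumulated cuts and $\lambda > 0$ is the prox stepsize; $\hat{x}_k$ is updated only after sufficient decrease (a serious step), while a null step merely adds a new cut.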

Some Unified Theory for Variance Reduced Prox-Linear Methods

This work considers the nonconvex, nonsmooth problem of minimizing a composite objective of the form $f(g(x))+h(x)$ where the inner mapping $g$ is a smooth finite summation or expectation amenable to variance reduction. In such settings, prox-linear methods can enjoy variance-reduced speed-ups despite the presence of nonsmoothness. We provide a unified convergence theory applicable to a … Read more
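
For reference, the deterministic prox-linear template (the variance-reduced variants substitute estimators for the exact quantities) linearizes only the inner map $g$ at each iterate and solves

$$x_{k+1} \in \operatorname*{argmin}_x\; f\big(g(x_k) + \nabla g(x_k)(x - x_k)\big) + h(x) + \tfrac{1}{2t}\|x - x_k\|^2$$

for a stepsize $t > 0$; when $g$ is a finite sum or expectation, $g(x_k)$ and $\nabla g(x_k)$ are replaced by variance-reduced estimates, which is the regime the unified theory covers.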

A subgradient splitting algorithm for optimization on nonpositively curved metric spaces

Many of the primal ingredients of convex optimization extend naturally from Euclidean to Hadamard spaces — nonpositively curved metric spaces like Euclidean, Hilbert, and hyperbolic spaces, metric trees, and more general CAT(0) cubical complexes. Linear structure, however, and the duality theory it supports are absent. Nonetheless, we introduce a new type of subgradient for convex … Read more
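
For contrast with the linear case: on a Hadamard manifold the standard subgradient step moves along a geodesic, $x_{k+1} = \exp_{x_k}(-t_k\, g_k)$ with $g_k \in \partial f(x_k)$, but this relies on tangent-space (hence linear) structure that general Hadamard spaces such as metric trees and CAT(0) cubical complexes do not provide. That display is the classical manifold step for orientation only, not the paper's new subgradient construction.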

An inertial Riemannian gradient ADMM for nonsmooth manifold optimization

The Alternating Direction Method of Multipliers (ADMM) is widely recognized for its efficiency in solving separable optimization problems. However, its application to optimization on Riemannian manifolds remains a significant challenge. In this paper, we propose a novel inertial Riemannian gradient ADMM (iRG-ADMM) to solve Riemannian optimization problems with nonlinear constraints. Our key contributions are as … Read more
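
As a point of reference, the classical Euclidean ADMM for $\min_{x,z}\, f(x) + g(z)$ subject to $Ax + Bz = b$ alternates

$$x^{k+1} \in \operatorname*{argmin}_x L_\rho(x, z^k, \lambda^k), \quad z^{k+1} \in \operatorname*{argmin}_z L_\rho(x^{k+1}, z, \lambda^k), \quad \lambda^{k+1} = \lambda^k + \rho\,(Ax^{k+1} + Bz^{k+1} - b),$$

where $L_\rho$ denotes the augmented Lagrangian. This is only the flat template; the name suggests that iRG-ADMM handles the primal updates by Riemannian gradient steps with inertial extrapolation, with details in the paper.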

How Stringent is the Linear Independence Kink Qualification in Abs-Smooth Optimization?

Abs-smooth functions are given by compositions of smooth functions and the evaluation of the absolute value. The linear independence kink qualification (LIKQ) is a fundamental assumption in optimization problems governed by these abs-smooth functions, generalizing the well-known LICQ from smooth optimization. In particular, provided that LIKQ holds, it is possible to derive optimality conditions for … Read more
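
A toy example (ours, for illustration): $f(x_1, x_2) = |x_1| + |x_2|$ has switching functions $\sigma_1(x) = x_1$ and $\sigma_2(x) = x_2$, and at the kink point $(0,0)$ the active gradients $\nabla\sigma_1 = e_1$ and $\nabla\sigma_2 = e_2$ are linearly independent, so LIKQ holds there. By contrast, for $f(x_1, x_2) = |x_1| + |2x_1|$ both switching functions are active at the origin with gradients $e_1$ and $2e_1$, which are linearly dependent, so LIKQ fails.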

A proximal alternating direction method of multipliers with a proximal-perturbed Lagrangian function for nonconvex and nonsmooth structured optimization

Building on [J. Glob. Optim. 89 (2024) 899–926], we continue to focus on solving a nonconvex and nonsmooth structured optimization problem with linear and closed convex set constraints, whose objective function is the sum of a convex (possibly nonsmooth) function and a smooth (possibly nonconvex) function. Based on the traditional augmented Lagrangian construction, we … Read more
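
For concreteness, the traditional construction referenced here, say for $\min\{\theta(x) : Ax = b,\ x \in C\}$, is the augmented Lagrangian

$$L_\rho(x, \lambda) = \theta(x) + \langle \lambda,\, Ax - b\rangle + \tfrac{\rho}{2}\|Ax - b\|^2;$$

the paper's proximal-perturbed Lagrangian is a modification of this function, with the precise perturbation given in the paper.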

A proximal-perturbed Bregman ADMM for solving nonsmooth and nonconvex optimization problems

In this paper, we focus on a linearly constrained composite minimization problem whose objective function is possibly nonsmooth and nonconvex. Unlike the traditional construction of the augmented Lagrangian function, we provide a proximal-perturbed augmented Lagrangian and then develop a new Bregman Alternating Direction Method of Multipliers (ADMM). Under mild assumptions, we show that the novel augmented … Read more
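
Recall that a Bregman ADMM replaces the quadratic proximal term $\tfrac{1}{2}\|x - x^k\|^2$ in the subproblems by a Bregman distance

$$D_\phi(u, v) = \phi(u) - \phi(v) - \langle \nabla\phi(v),\, u - v\rangle$$

for a differentiable convex kernel $\phi$; taking $\phi = \tfrac{1}{2}\|\cdot\|^2$ recovers the Euclidean case. How this kernel interacts with the proximal-perturbed augmented Lagrangian is the subject of the paper.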

An inexact ADMM for separable nonconvex and nonsmooth optimization

An Inexact Alternating Direction Method of Multipliers (I-ADMM) with an expansion linesearch step was developed for solving a family of separable minimization problems subject to linear constraints, where the objective function is the sum of a smooth but possibly nonconvex function and a possibly nonsmooth nonconvex function. Global convergence and linear convergence rate of the … Read more
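
To make the ADMM template concrete, here is a minimal exact ADMM in Python for a convex model problem (lasso with the splitting $x - z = 0$). It shows only the generic x-, z-, and multiplier updates; the paper's inexact subproblem solves, expansion linesearch, and nonconvex analysis are not reproduced here:

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_lasso(A, b, mu, rho=1.0, iters=200):
        # Exact ADMM for: min 0.5*||A x - b||^2 + mu*||z||_1  s.t.  x - z = 0.
        n = A.shape[1]
        x, z, lam = np.zeros(n), np.zeros(n), np.zeros(n)
        M = A.T @ A + rho * np.eye(n)     # fixed x-update system matrix
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(M, Atb + rho * z - lam)   # x-update (ridge solve)
            z = soft_threshold(x + lam / rho, mu / rho)   # z-update (l1 prox)
            lam = lam + rho * (x - z)                     # multiplier update
        return x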

Global non-asymptotic super-linear convergence rates of regularized proximal quasi-Newton methods on non-smooth composite problems

In this paper, we propose two regularized proximal quasi-Newton methods with symmetric rank-1 update of the metric (SR1 quasi-Newton) to solve non-smooth convex additive composite problems. Both algorithms avoid line searches and trust-region strategies. For each of them, we prove a super-linear convergence rate that is independent of the initialization of the … Read more
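
The SR1 update in question is the rank-one secant correction

$$B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{\langle y_k - B_k s_k,\ s_k\rangle}, \qquad s_k = x_{k+1} - x_k,\quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),$$

used inside a proximal quasi-Newton step of the generic form $x_{k+1} \in \operatorname{argmin}_x\, \langle \nabla f(x_k), x - x_k\rangle + \tfrac{1}{2}\|x - x_k\|_{B_k}^2 + h(x)$ for a composite objective $f + h$ with $f$ smooth. This is our generic statement of the step; regularizing the metric $B_k$ appears to be the paper's device for dispensing with line searches and trust regions.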

Adaptive Algorithms for Robust Phase Retrieval

This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose two first-order algorithms with adaptive step sizes: a subgradient algorithm (AdaSubGrad) and an inexact proximal linear algorithm (AdaIPL). Our contribution lies in the novel design of adaptive step sizes based on quantiles of the absolute … Read more
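
For the flavor of such a method: robust phase retrieval minimizes $\Phi(x) = \frac{1}{m}\sum_{i=1}^m |\langle a_i, x\rangle^2 - b_i|$, and a quantile-driven subgradient iteration might look like the sketch below. The quantile step-size rule here is a hypothetical stand-in, not the actual AdaSubGrad rule from the paper:

    import numpy as np

    def quantile_subgrad_pr(A, b, x0, iters=500, q=0.5):
        # Subgradient method for Phi(x) = (1/m) * sum_i |(a_i^T x)^2 - b_i|.
        # Polyak-like step scaled by a quantile of |residuals| -- a hypothetical
        # stand-in for AdaSubGrad's quantile rule, not the paper's actual rule.
        x = x0.astype(float).copy()
        m = A.shape[0]
        for _ in range(iters):
            Ax = A @ x
            r = Ax ** 2 - b                            # measurement residuals
            g = (2.0 / m) * (A.T @ (np.sign(r) * Ax))  # a subgradient of Phi at x
            scale = np.quantile(np.abs(r), q)          # adaptive scale from quantile
            x = x - scale * g / (np.linalg.norm(g) ** 2 + 1e-12)
        return x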