Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. RPF-SFISTA has the major advantage that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of …
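
As a rough illustration of the restart-plus-line-search idea (not the authors' RPF-SFISTA, whose restart test and guarantees differ), here is a minimal Python sketch of a restarted FISTA loop that estimates the Lipschitz constant by backtracking; f, grad_f, and prox_g are assumed user-supplied:

    import numpy as np

    def restarted_fista(f, grad_f, prox_g, x0, n_restarts=10, inner_iters=200, L0=1.0):
        """Sketch: restarted FISTA with backtracking; no Lipschitz
        constant or strong convexity parameter is supplied."""
        x, L = x0.copy(), L0
        for _ in range(n_restarts):
            y, x_prev, t = x.copy(), x.copy(), 1.0
            for _ in range(inner_iters):
                g = grad_f(y)
                while True:  # backtracking line search on L
                    x_new = prox_g(y - g / L, 1.0 / L)
                    d = x_new - y
                    if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                        break
                    L *= 2.0
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x_prev)
                x_prev, t = x_new, t_new
            x = x_prev  # restart: reset momentum from the last iterate
        return x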

Accessible Theoretical Complexity of the Restarted Primal-Dual Hybrid Gradient Method for Linear Programs with Unique Optima

The restarted primal-dual hybrid gradient method (rPDHG) has recently emerged as an important tool for solving large-scale linear programs (LPs). For LPs with unique optima, we present an iteration bound of \(\widetilde{O}\left(\kappa\Phi\cdot\ln\left(\frac{\|w^*\|}{\varepsilon}\right)\right)\), where \(\varepsilon\) is the target tolerance, \(\kappa\) is the standard matrix condition number, \(\|w^*\|\) is the norm of the optimal solution, and \(\Phi\) …
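
For intuition, a minimal sketch of restarted PDHG on a standard-form LP, min c'x s.t. Ax = b, x >= 0, where the method periodically restarts from the running average of the iterates; the fixed restart period is an assumption for illustration (the paper's rPDHG uses a sharper restart rule):

    import numpy as np

    def restarted_pdhg(A, b, c, iters=5000, restart_every=500):
        """Sketch: PDHG with averaging restarts for an LP in standard form."""
        m, n = A.shape
        tau = sigma = 0.9 / np.linalg.norm(A, 2)  # tau*sigma*||A||^2 < 1
        x, y = np.zeros(n), np.zeros(m)
        x_avg, y_avg, k = x.copy(), y.copy(), 0
        for t in range(1, iters + 1):
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # primal step
            y = y + sigma * (b - A @ (2 * x_new - x))          # dual step
            x = x_new
            k += 1
            x_avg += (x - x_avg) / k
            y_avg += (y - y_avg) / k
            if t % restart_every == 0:  # restart from the average iterate
                x, y = x_avg.copy(), y_avg.copy()
                x_avg, y_avg, k = x.copy(), y.copy(), 0
        return x, y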

Dual Spectral Projected Gradient Method for Generalized Log-det Semidefinite Programming

Log-det semidefinite programming (SDP) problems are optimization problems that often arise from Gaussian graphical models. A log-det SDP problem with an \(\ell_1\)-norm term has been studied by many methods, and the dual spectral projected gradient (DSPG) method of Nakagaki et al. in 2020 is designed to efficiently solve the dual problem of the log-det SDP by …
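
The spectral projected gradient template underlying DSPG is short enough to sketch. Below is a generic Python loop with a Barzilai-Borwein step; grad and proj are assumed user-supplied (in DSPG the projection for the log-det SDP dual is computed via an eigendecomposition), and the safeguarding line search is omitted for brevity:

    import numpy as np

    def spg(grad, proj, x0, iters=500, alpha0=1.0, eps=1e-12):
        """Sketch: spectral projected gradient with a BB step length."""
        x = proj(x0)
        g, alpha = grad(x), alpha0
        for _ in range(iters):
            x_new = proj(x - alpha * g)          # projected gradient step
            g_new = grad(x_new)
            s, yv = x_new - x, g_new - g
            sy = s @ yv
            alpha = (s @ s) / sy if sy > eps else alpha0  # BB step
            x, g = x_new, g_new
        return x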

An inertial projective splitting method for the sum of two maximal monotone operators

We propose a projective splitting type method to solve the problem of finding a zero of the sum of two maximal monotone operators. Our method considers inertial and relaxation steps, and also allows inexact solutions of the proximal subproblems within a relative-error criterion. We study the asymptotic convergence of the method, as well as its iteration-complexity. …
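
To show just the inertial ingredient the abstract mentions, here is a minimal inertial forward-backward sketch for 0 in A(z) + B(z) with B single-valued; this is a stand-in for intuition only, since the paper's method is of projective splitting type (resolvents of both operators plus a separating-hyperplane projection, with relaxation and relative-error inexactness):

    def inertial_fb(prox_A, B, z0, step=0.5, alpha=0.3, iters=300):
        """Sketch: inertial extrapolation before the splitting step."""
        z_prev, z = z0.copy(), z0.copy()
        for _ in range(iters):
            w = z + alpha * (z - z_prev)        # inertial extrapolation
            z_prev = z
            z = prox_A(w - step * B(w), step)   # forward on B, backward on A
        return z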

Second-Order Contingent Derivatives: Computation and Application

It is known that second-order (Studniarski) contingent derivatives can be used to compute tangents to the solution set of a generalized equation when standard (first-order) regularity conditions are absent, but relaxed (second-order) regularity conditions are fulfilled. This fact, roughly speaking, is only relevant in practice as long as the computation of second-order contingent derivatives itself …

Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we propose a novel double-proximal augmented Lagrangian method (DP-ALM) for solving a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. This DP-ALM not only enjoys a flexible dual stepsize, but also contains a proximal subproblem with a relatively smaller proximal parameter. By a new prediction-correction reformulation for this …
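
The two ingredients the abstract highlights, a primal proximal term and a flexible dual stepsize, fit in a few lines. A minimal proximal ALM sketch for min f(x) s.t. Ax = b follows, where argmin_x is an assumed user-supplied subproblem solver (the paper's DP-ALM adds a second proximal term and a prediction-correction analysis on top of this template):

    def proximal_alm(argmin_x, A, b, x0, y0, rho=1.0, tau=1.0, s=1.0, iters=100):
        """Sketch: proximal ALM with dual stepsize s.

        argmin_x(x_k, y, rho, tau) is assumed to return the minimizer of
        f(x) + y'(Ax-b) + (rho/2)||Ax-b||^2 + (1/(2*tau))||x - x_k||^2.
        """
        x, y = x0.copy(), y0.copy()
        for _ in range(iters):
            x = argmin_x(x, y, rho, tau)   # proximal primal subproblem
            y = y + s * rho * (A @ x - b)  # dual ascent with stepsize s
        return x, y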

Universal subgradient and proximal bundle methods for convex and strongly convex hybrid composite optimization

This paper develops two parameter-free methods for solving convex and strongly convex hybrid composite optimization problems, namely, a composite subgradient type method and a proximal bundle type method. Both functional and stationary complexity bounds for the two methods are established in terms of the unknown strong convexity parameter. To the best of our knowledge, the …
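
As a baseline for the subgradient-type method, here is a plain projected subgradient loop in Python with a 1/sqrt(k) normalized stepsize; subgrad and proj are assumed user-supplied. The point of the paper's universal methods is that they attain the faster strongly convex rate without knowing the strong convexity parameter, which this basic sketch does not attempt:

    import numpy as np

    def subgradient_method(subgrad, proj, x0, iters=1000, c=1.0):
        """Sketch: projected subgradient method, normalized 1/sqrt(k) steps."""
        x = proj(x0)
        for k in range(1, iters + 1):
            g = subgrad(x)
            step = c / (np.sqrt(k) * max(np.linalg.norm(g), 1e-12))
            x = proj(x - step * g)
        return x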

Generalized Ellipsoids

We introduce a family of symmetric convex bodies called generalized ellipsoids of degree \(d\) (GE-\(d\)s), with ellipsoids corresponding to the case of \(d=0\). Generalized ellipsoids (GEs) retain many geometric, algebraic, and algorithmic properties of ellipsoids. We show that the conditions that the parameters of a GE must satisfy can be checked in strongly polynomial time, …

An Adaptive Proximal ADMM for Nonconvex Linearly Constrained Composite Programs

This paper develops an adaptive proximal alternating direction method of multipliers (ADMM) for solving linearly constrained, composite optimization problems under the assumption that the smooth component of the objective is weakly convex, while the non-smooth component is convex and block-separable. The proposed method is adaptive to all problem parameters, including smoothness and weak convexity constants, …
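
For reference, the classical two-block ADMM template that such methods refine, sketched for min f(x) + g(z) s.t. Ax + Bz = b; prox_f and prox_g are assumed user-supplied solvers of the augmented subproblems, and the fixed penalty rho is exactly what the paper's adaptive scheme replaces with online estimates:

    import numpy as np

    def admm(prox_f, prox_g, A, B, b, x0, z0, rho=1.0, iters=200):
        """Sketch: two-block ADMM with scaled dual variable u."""
        x, z = x0.copy(), z0.copy()
        u = np.zeros_like(b)
        for _ in range(iters):
            x = prox_f(z, u, rho)        # argmin_x f(x)+(rho/2)||Ax+Bz-b+u||^2
            z = prox_g(x, u, rho)        # argmin_z g(z)+(rho/2)||Ax+Bz-b+u||^2
            u = u + (A @ x + B @ z - b)  # dual update
        return x, z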

Black-box Optimization Algorithms for Regularized Least-squares Problems

We consider the problem of optimizing the sum of a smooth, nonconvex function for which derivatives are unavailable, and a convex, nonsmooth function with easy-to-evaluate proximal operator. Of particular focus is the case where the smooth part has a nonlinear least-squares structure. We adapt two existing approaches for derivative-free optimization of nonsmooth compositions of smooth …
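
A crude way to see how the least-squares structure is exploited without derivatives: estimate the residual Jacobian by forward differences and take a proximal-gradient step on 0.5*||r(x)||^2 + h(x). This sketch is illustrative only (the paper's methods use model-based machinery with stronger guarantees); r and prox_h are assumed user-supplied:

    import numpy as np

    def dfo_prox_step(r, prox_h, x0, step=1e-1, h=1e-6, iters=100):
        """Sketch: finite-difference Jacobian + proximal-gradient step."""
        x = x0.copy()
        n = x.size
        for _ in range(iters):
            r0 = r(x)
            J = np.empty((r0.size, n))
            for i in range(n):              # forward-difference Jacobian
                e = np.zeros(n)
                e[i] = h
                J[:, i] = (r(x + e) - r0) / h
            g = J.T @ r0                    # gradient of the smooth part
            x = prox_h(x - step * g, step)  # proximal-gradient step
        return x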