Second-Order Contingent Derivatives: Computation and Application

It is known that second-order (Studniarski) contingent derivatives can be used to compute tangents to the solution set of a generalized equation when standard (first-order) regularity conditions fail but relaxed (second-order) regularity conditions hold. Roughly speaking, this fact is of practical relevance only as long as the computation of second-order contingent derivatives itself … Read more
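
For orientation, a commonly used definition in this literature (our rendering; the paper's exact variant may differ) takes the second-order contingent derivative of a set-valued map \(F\colon X \rightrightarrows Y\) at \((\bar x,\bar y)\in\operatorname{gph} F\) in direction \(u\) to be

\[
D_S^2 F(\bar x,\bar y)(u) = \{\, v \in Y : \exists\, t_n \downarrow 0,\ (u_n, v_n) \to (u, v) \text{ with } \bar y + t_n^2 v_n \in F(\bar x + t_n u_n) \,\},
\]

i.e., the Painlevé–Kuratowski upper limit of the second-order difference quotients \((F(\bar x + t u') - \bar y)/t^2\).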

Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we propose a novel double-proximal augmented Lagrangian method (DP-ALM) for solving a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. This DP-ALM not only enjoys a flexible dual stepsize, but also contains a proximal subproblem with a relatively small proximal parameter. By a new prediction-correction reformulation for this … Read more
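
As a rough, generic illustration of this family of methods (not the authors' DP-ALM; the data, penalty beta, proximal weight tau, and dual stepsize s below are all illustrative assumptions), here is a linearized proximal augmented Lagrangian sketch in Python for \(\min_x \|x\|_1\) subject to \(Ax = b\):

    import numpy as np

    def soft_threshold(v, lam):
        # Proximal operator of lam * ||.||_1 (elementwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def linearized_proximal_alm(A, b, beta=1.0, tau=None, s=1.0, iters=500):
        # Generic sketch: minimize ||x||_1 subject to Ax = b.
        # Primal step: prox of the l1 term after linearizing the quadratic
        # penalty around the current iterate; dual step: multiplier update
        # with stepsize s.
        m, n = A.shape
        if tau is None:
            # Ensures the linearized penalty majorizes the true one.
            tau = beta * np.linalg.norm(A, 2) ** 2
        x, lam_dual = np.zeros(n), np.zeros(m)
        for _ in range(iters):
            grad = A.T @ (lam_dual + beta * (A @ x - b))
            x = soft_threshold(x - grad / tau, 1.0 / tau)
            lam_dual = lam_dual + s * beta * (A @ x - b)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true
    x = linearized_proximal_alm(A, b)
    print("feasibility residual:", np.linalg.norm(A @ x - b))

Here s is the knob analogous to the flexible dual stepsize the abstract mentions, and 1/tau plays the role of the primal proximal parameter.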

Universal subgradient and proximal bundle methods for convex and strongly convex hybrid composite optimization

This paper develops two parameter-free methods for solving convex and strongly convex hybrid composite optimization problems, namely, a composite subgradient-type method and a proximal bundle-type method. Both functional and stationary complexity bounds for the two methods are established in terms of the unknown strong convexity parameter. To the best of our knowledge, the … Read more
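
For contrast with such parameter-free schemes, the sketch below is the classical subgradient baseline with a \(c/\sqrt{k}\) stepsize, which likewise needs no strong convexity parameter but lacks the adaptive guarantees the paper targets; the toy objective and constants are illustrative assumptions:

    import numpy as np

    def subgradient_method(f_and_subgrad, x0, iters=2000, c=1.0):
        # Plain subgradient method with stepsize c / sqrt(k+1): needs no
        # knowledge of the strong convexity parameter, at the price of a
        # slower guaranteed rate than methods adaptive to it.
        x, best_x, best_f = x0.copy(), x0.copy(), np.inf
        for k in range(iters):
            f, g = f_and_subgrad(x)
            if f < best_f:
                best_f, best_x = f, x.copy()
            x = x - (c / np.sqrt(k + 1)) * g
        return best_x, best_f

    # Toy hybrid composite objective: smooth quadratic + nonsmooth l1 term.
    Q = np.diag(np.linspace(1.0, 10.0, 20))

    def f_and_subgrad(x):
        f = 0.5 * x @ Q @ x + np.linalg.norm(x - 1.0, 1)
        g = Q @ x + np.sign(x - 1.0)  # a valid subgradient a.e.
        return f, g

    x, fval = subgradient_method(f_and_subgrad, np.zeros(20))
    print("best objective found:", fval)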

Generalized Ellipsoids

We introduce a family of symmetric convex bodies called generalized ellipsoids of degree \(d\) (GE-\(d\)s), with ellipsoids corresponding to the case of \(d=0\). Generalized ellipsoids (GEs) retain many geometric, algebraic, and algorithmic properties of ellipsoids. We show that the conditions that the parameters of a GE must satisfy can be checked in strongly polynomial time, … Read more
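
To make the base case concrete: for \(d=0\), a GE is an ordinary ellipsoid, e.g. \(\{x : (x-c)^\top Q (x-c) \le 1\}\) with \(Q \succ 0\), and checking this parameter condition amounts to a positive-definiteness test. The GE-\(d\) certification in the paper is more involved; the Python sketch below covers only this classical case:

    import numpy as np

    def is_valid_ellipsoid(Q):
        # d = 0 case: the parameter matrix must be symmetric positive definite.
        # Cholesky succeeds iff Q is positive definite (up to numerical tolerance).
        if not np.allclose(Q, Q.T):
            return False
        try:
            np.linalg.cholesky(Q)
            return True
        except np.linalg.LinAlgError:
            return False

    print(is_valid_ellipsoid(np.array([[2.0, 0.5], [0.5, 1.0]])))  # True
    print(is_valid_ellipsoid(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False (indefinite)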

An Adaptive Proximal ADMM for Nonconvex Linearly Constrained Composite Programs

This paper develops an adaptive proximal alternating direction method of multipliers (ADMM) for solving linearly constrained, composite optimization problems under the assumption that the smooth component of the objective is weakly convex, while the non-smooth component is convex and block-separable. The proposed method is adaptive to all problem parameters, including smoothness and weak convexity constants, … Read more
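
The sketch below shows a far simpler classical adaptation heuristic in the same spirit: two-block ADMM with residual-balancing penalty updates (as in Boyd et al.'s ADMM survey) on the toy problem \(\min_{x}\ \tfrac12\|x - c\|^2 + \|z\|_1\) subject to \(x - z = 0\). It is not the paper's parameter-adaptive proximal ADMM, and all constants are illustrative:

    import numpy as np

    def soft_threshold(v, lam):
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def admm_residual_balancing(c, rho=1.0, iters=200, mu=10.0, incr=2.0):
        # min_x 0.5*||x - c||^2 + ||z||_1  s.t.  x - z = 0.
        # rho is rescaled whenever the primal and dual residuals drift apart
        # (classical residual balancing); the scaled dual u is rescaled with it.
        n = c.size
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        for _ in range(iters):
            x = (c + rho * (z - u)) / (1.0 + rho)
            z_old = z
            z = soft_threshold(x + u, 1.0 / rho)
            u = u + x - z
            r = np.linalg.norm(x - z)            # primal residual
            s = rho * np.linalg.norm(z - z_old)  # dual residual
            if r > mu * s:
                rho *= incr
                u /= incr
            elif s > mu * r:
                rho /= incr
                u *= incr
        return x, z

    x, z = admm_residual_balancing(np.array([3.0, -0.2, 0.5, -4.0]))
    print(x, z)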

Black-box Optimization Algorithms for Regularized Least-squares Problems

We consider the problem of optimizing the sum of a smooth, nonconvex function for which derivatives are unavailable, and a convex, nonsmooth function with an easy-to-evaluate proximal operator. Of particular focus is the case where the smooth part has a nonlinear least-squares structure. We adapt two existing approaches for derivative-free optimization of nonsmooth compositions of smooth … Read more
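
A minimal sketch of the overall template, under our own simplifying assumptions (forward-difference gradient estimates rather than the model-based approaches the paper adapts; illustrative test function and stepsizes):

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        # Forward-difference gradient estimate: n extra evaluations of f.
        fx, g = f(x), np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g

    def soft_threshold(v, lam):
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def df_proximal_gradient(f, x0, lam=0.1, step=0.05, iters=300):
        # Derivative-free proximal-gradient sketch for min f(x) + lam*||x||_1,
        # with f treated as a black box (only function values available).
        x = x0.copy()
        for _ in range(iters):
            g = fd_gradient(f, x)
            x = soft_threshold(x - step * g, step * lam)
        return x

    # Nonlinear least-squares style smooth part: f(x) = 0.5 * ||r(x)||^2.
    def f(x):
        r = np.array([x[0] ** 2 + x[1] - 1.0, x[0] - 0.3])
        return 0.5 * r @ r

    print(df_proximal_gradient(f, np.array([1.0, 1.0])))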

On the strength of Burer’s lifted convex relaxation to quadratic programming with ball constraints

We study quadratic programs with m ball constraints and the strength of a lifted convex relaxation for them recently proposed by Burer (2024). Burer shows that this relaxation is exact when m=2. For general m, he provides numerical evidence that this lifted relaxation is tighter than the Kronecker-product-based Reformulation Linearization Technique (RLT) inequalities … Read more
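
For context, the problem class is (in our notation)

\[
\min_{x \in \mathbb{R}^n}\ x^\top Q x + 2 c^\top x \quad \text{subject to} \quad \|x - a_i\| \le r_i, \quad i = 1, \dots, m,
\]

and the relaxations in question live in the standard lifted space where \(xx^\top\) is replaced by a matrix \(X \succeq xx^\top\); Burer's relaxation and the Kronecker RLT inequalities add different families of valid constraints on \((x, X)\), whose exact form we do not reproduce here.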

A combinatorial approach to Ramana’s exact dual for semidefinite programming

Thirty years ago, in a seminal paper, Ramana derived an exact dual for Semidefinite Programming (SDP). Ramana’s dual has the following remarkable features: i) it assumes feasibility of the primal, but it does not make any regularity assumptions, such as strict feasibility; ii) its optimal value is the same as the optimal value of the … Read more
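
To recall why this is remarkable, consider the standard SDP pair

\[
p^* = \inf\{\, \langle C, X \rangle : \langle A_i, X \rangle = b_i,\ i = 1, \dots, m,\ X \succeq 0 \,\},
\qquad
d^* = \sup\{\, b^\top y : C - \textstyle\sum_{i} y_i A_i \succeq 0 \,\}.
\]

Weak duality \(d^* \le p^*\) always holds, but without a regularity condition such as strict feasibility the two values may differ or the optimum may not be attained; Ramana’s dual is an explicitly constructed program of polynomial size that avoids this pathology.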

Concrete convergence rates for common fixed point problems under Karamata regularity

We introduce the notion of Karamata regular operators, a regularity notion suitable for obtaining concrete convergence rates for common fixed point problems. This provides a broad framework that includes, but goes beyond, Hölderian error bounds and Hölder regular operators. By concrete, we mean that the rates we obtain are explicitly … Read more
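
As a toy instance of the setting (with none of the paper's Karamata machinery), the Python sketch below runs alternating projections between the unit ball and a tangent line; because the intersection is tangential, convergence is sublinear, which is exactly the regime where regularity notions finer than linear regularity are needed to predict rates. The sets are illustrative assumptions:

    import numpy as np

    def project_ball(x, radius=1.0):
        # Euclidean projection onto the ball of given radius centered at 0.
        n = np.linalg.norm(x)
        return x if n <= radius else radius * x / n

    def project_line(x):
        # Euclidean projection onto the vertical line {(1, t) : t real},
        # which is tangent to the unit ball at (1, 0).
        return np.array([1.0, x[1]])

    # Alternating projections: each projector is firmly nonexpansive, and the
    # common fixed points of the composition are the intersection points.
    x = np.array([5.0, 5.0])
    target = np.array([1.0, 0.0])  # the unique intersection point
    for k in range(30):
        x = project_ball(project_line(x))
        print(k, np.linalg.norm(x - target))  # shrinks roughly like 1/sqrt(k)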

A progressive decoupling algorithm for minimizing the difference of convex and weakly convex functions over a linear subspace

Commonly, decomposition and splitting techniques for optimization problems depend strongly on convexity. Implementable splitting methods for nonconvex and nonsmooth optimization problems are scarce and often lack convergence guarantees. Among the few exceptions is the Progressive Decoupling Algorithm (PDA), which converges locally provided that convexity is elicitable. In this work, we furnish PDA with a descent … Read more
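
In symbols (our notation, inferred from the title), the problem class is

\[
\min_{x \in S} \ f(x) - g(x),
\]

with \(f\) convex, \(g\) weakly convex, and \(S\) a linear subspace; elicitable convexity means, roughly, that convexity can be restored locally by adding a sufficiently large quadratic penalty on the deviation from \(S\), which is what PDA exploits.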