A four-operator splitting algorithm for nonconvex and nonsmooth optimization

In this work, we address a class of nonconvex nonsmooth optimization problems where the objective function is the sum of two smooth functions (one of which is proximable) and two nonsmooth functions (one weakly convex and proximable, and the other continuous and weakly concave). We introduce a new splitting algorithm that extends the Davis-Yin …
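
Since the teaser cuts off at the Davis-Yin reference, a minimal sketch of the classical Davis-Yin three-operator splitting (the scheme the paper extends, not the authors' four-operator method) may help orient the reader. The toy choices of f, g, and h below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def davis_yin(prox_f, prox_g, grad_h, z0, gamma, iters=500):
    """Classical Davis-Yin splitting for min f(x) + g(x) + h(x),
    with f and g proximable and h smooth with Lipschitz gradient."""
    z = z0.copy()
    for _ in range(iters):
        x = prox_g(z, gamma)
        y = prox_f(2 * x - z - gamma * grad_h(x), gamma)
        z = z + (y - x)
    return x

# Illustrative instance: f = indicator of the box [0, 1]^n,
# g = lam * ||x||_1, h = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
prox_f = lambda v, t: np.clip(v, 0.0, 1.0)                             # box projection
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0)  # soft-threshold
grad_h = lambda x: A.T @ (A @ x - b)
print(davis_yin(prox_f, prox_g, grad_h, np.zeros(5), gamma=0.01))
```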

A Subgradient Projection Method with Outer Approximation for Solving Semidefinite Programming Problems

We explore the combination of subgradient projection with outer approximation to solve semidefinite programming problems. We compare several ways to construct outer approximations using the problem structure. The resulting approach enjoys the strengths of both subgradient projection and outer approximation methods. Preliminary computational results on the semidefinite programming relaxations of graph partitioning and max-cut show …
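
For background, here is a hedged sketch of two standard ingredients such methods combine: Euclidean projection onto the PSD cone (the projection step) and a linear cut generated from a negative-eigenvalue eigenvector (one common way to build outer approximations). The paper's cut constructions exploit more problem structure than this.

```python
import numpy as np

def psd_projection(X):
    """Project a symmetric matrix onto the PSD cone (eigenvalue clipping)."""
    w, V = np.linalg.eigh(X)
    return (V * np.maximum(w, 0)) @ V.T

def psd_cut(X):
    """Outer-approximation cut: if X has a negative eigenvalue, its
    eigenvector v yields the valid inequality <v v^T, Y> >= 0, which
    the current X violates since <v v^T, X> = lambda_min(X) < 0."""
    w, V = np.linalg.eigh(X)          # eigenvalues in ascending order
    if w[0] >= 0:
        return None                   # X already PSD, no cut needed
    v = V[:, 0]
    return np.outer(v, v)

X = np.array([[1.0, 2.0], [2.0, 1.0]])   # indefinite example
print(psd_projection(X))
print(psd_cut(X))
```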

Projection onto hyperbolicity cones and beyond: a dual Frank-Wolfe approach

We discuss the problem of projecting a point onto an arbitrary hyperbolicity cone from both theoretical and numerical perspectives. While hyperbolicity cones are furnished with a generalization of the notion of eigenvalues, obtaining closed-form expressions for the projection operator, as in the case of semidefinite matrices, is an elusive endeavour. To address that, we …
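
The paper develops a dual Frank-Wolfe method; as a generic illustration only, the sketch below runs plain (primal) Frank-Wolfe on the projection problem over a toy set, the probability simplex, where the linear minimization oracle is just a vertex pick. The oracle for hyperbolicity cones in the paper is far more involved.

```python
import numpy as np

def frank_wolfe_project(p, lmo, x0, iters=200):
    """Frank-Wolfe sketch for min_x ||x - p||^2 over a compact convex set
    accessed only through a linear minimization oracle lmo(g) = argmin_s <g, s>."""
    x = x0.copy()
    for k in range(iters):
        g = x - p                    # gradient of 0.5 * ||x - p||^2
        s = lmo(g)
        step = 2.0 / (k + 2)         # classical open-loop stepsize
        x = (1 - step) * x + step * s
    return x

# Toy oracle: the probability simplex, whose LMO returns a vertex.
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
print(frank_wolfe_project(np.array([0.6, 0.3, -0.2]), lmo, np.ones(3) / 3))
# approaches the true projection (0.65, 0.35, 0)
```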

Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization

This paper considers the problem of minimizing the sum of a smooth function and the Schatten-\(p\) norm of a matrix. Our contribution involves proposing accelerated iteratively reweighted nuclear norm methods designed for solving the nonconvex low-rank minimization problem. Two major novelties characterize our approach. Firstly, the proposed method possesses a rank identification property, enabling …
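
For intuition, here is a minimal sketch of one basic iteratively reweighted nuclear norm step, in the style of earlier IRNN methods; the paper's accelerated variant and its rank-identification mechanism are not reproduced. The weights come from linearizing \(\sigma^p\) at the current singular values.

```python
import numpy as np

def irnn_step(X, grad_f, L, lam, p=0.5, eps=1e-8):
    """One iteratively reweighted nuclear norm step for
    min f(X) + lam * sum_i sigma_i(X)^p  with 0 < p < 1 (illustrative only).
    L is a Lipschitz constant for grad_f."""
    U, s, Vt = np.linalg.svd(X - grad_f(X) / L, full_matrices=False)
    w = p * (s + eps) ** (p - 1)               # weights from linearizing sigma^p
    s_new = np.maximum(s - (lam / L) * w, 0)   # weighted singular value thresholding
    return (U * s_new) @ Vt
```

Note the design: large singular values receive small weights and are shrunk less, while small ones are aggressively thresholded to zero, which is what drives low-rank identification in methods of this family.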

Composite optimization models via proximal gradient method with increasing adaptive stepsizes

We first consider convex composite optimization models without a global Lipschitz condition imposed on the gradient of the differentiable term. We study the classical proximal gradient method with a new strategy of stepsize selection. The idea for constructing such a stepsize is motivated by the one in \cite{hoai} used for gradient …
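
The teaser does not reveal the stepsize rule, so the sketch below only shows the generic template: proximal gradient steps whose stepsize is refreshed from a local Lipschitz estimate. The secant-type quotient used here is an assumption for illustration, not the rule of the paper or of \cite{hoai}.

```python
import numpy as np

def prox_grad_adaptive(grad_f, prox_g, x0, gamma0=1.0, iters=500):
    """Proximal gradient x+ = prox_{gamma g}(x - gamma grad_f(x)) with a
    stepsize adapted from a local inverse-Lipschitz estimate (generic
    template; the paper's increasing stepsize rule differs)."""
    x_old, g_old, gamma = x0, grad_f(x0), gamma0
    x = prox_g(x_old - gamma * g_old, gamma)
    for _ in range(iters):
        g = grad_f(x)
        dx, dg = x - x_old, g - g_old
        if np.linalg.norm(dg) > 0:
            # secant estimate of 1/L from the last two iterates
            gamma = np.linalg.norm(dx) / np.linalg.norm(dg)
        x_old, g_old = x, g
        x = prox_g(x - gamma * g, gamma)
    return x
```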

Accelerated Fully First-Order Methods for Bilevel and Minimax Optimization

We present in this paper novel accelerated fully first-order methods for \emph{Bilevel Optimization} (BiO). Firstly, for BiO whose lower-level functions satisfy the typical strong convexity assumption, we propose the \emph{(Perturbed) Restarted Accelerated Fully First-order methods for Bilevel Approximation} (PRAFFBA) algorithm, which leverages \emph{fully} first-order oracles, whereas the algorithm for finding approximate first-order …
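
For readers new to fully first-order bilevel methods: the key trick is that a penalty value function has a hypergradient expressible with gradients only, no Hessian-vector products. The sketch below shows that generic estimator, in the spirit of penalty-based fully first-order methods; it is not the PRAFFBA algorithm, and all names and constants are illustrative.

```python
import numpy as np

def fully_first_order_hypergrad(gfx, gfy, ggx, ggy, x, y0, lam=10.0,
                                inner_iters=500, eta=0.01):
    """Gradient-only hypergradient estimate for min_x f(x, y*(x)) with
    y*(x) = argmin_y g(x, y), via the penalty value function
    L_lam(x) = min_y [f(x,y) + lam*g(x,y)] - lam * min_y g(x,y).
    By Danskin's theorem, grad L_lam uses partial gradients alone."""
    y_pen, y_low = y0.copy(), y0.copy()
    for _ in range(inner_iters):
        y_pen -= eta * (gfy(x, y_pen) + lam * ggy(x, y_pen))  # penalized inner solve
        y_low -= eta * ggy(x, y_low)                          # lower-level solve
    return gfx(x, y_pen) + lam * (ggx(x, y_pen) - ggx(x, y_low))

# Toy check: g(x,y) = ||y - 2x||^2 gives y*(x) = 2x, and f(x,y) = ||y - x||^2
# gives F(x) = ||x||^2, so the true hypergradient is 2x (bias is O(1/lam)).
gfx = lambda x, y: -2 * (y - x)
gfy = lambda x, y: 2 * (y - x)
ggx = lambda x, y: -4 * (y - 2 * x)
ggy = lambda x, y: 2 * (y - 2 * x)
print(fully_first_order_hypergrad(gfx, gfy, ggx, ggy, np.array([1.0]), np.zeros(1)))
```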

Recognizing weighted means in geodesic spaces

Geodesic metric spaces support a variety of averaging constructions for given finite sets. Computing such averages has generated extensive interest in diverse disciplines. Here we consider the inverse problem of recognizing computationally whether or not a given point is such an average, exactly or approximately. In nonpositively curved spaces, several averaging notions, including the usual …
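
In the Euclidean special case the recognition problem has a clean answer that may anchor intuition: a point p minimizes \(\sum_i w_i \|p - a_i\|^2\) for some nonnegative weights if and only if \(\sum_i w_i (p - a_i) = 0\), i.e., if and only if p lies in the convex hull of the points. A hedged sketch of that feasibility check via linear programming follows (Euclidean only; the geodesic setting of the paper is the hard part).

```python
import numpy as np
from scipy.optimize import linprog

def is_weighted_mean(p, A):
    """Euclidean special case: p is a weighted (Frechet) mean of the rows
    of A for some nonnegative weights iff p lies in their convex hull.
    Feasibility of  w >= 0, sum(w) = 1, A^T w = p  is checked by an LP."""
    n = A.shape[0]
    A_eq = np.vstack([A.T, np.ones(n)])
    b_eq = np.append(p, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.status == 0

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(is_weighted_mean(np.array([0.25, 0.25]), A))  # True
print(is_weighted_mean(np.array([1.0, 1.0]), A))    # False
```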

The Role of Level-Set Geometry on the Performance of PDHG for Conic Linear Optimization

We consider solving huge-scale instances of (convex) conic linear optimization problems, at the scale where matrix-factorization-free methods are attractive or necessary. The restarted primal-dual hybrid gradient method (rPDHG), with heuristic enhancements and GPU implementation, has been very successful in solving huge-scale linear programming (LP) problems; however, its application to more general conic convex …
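
For orientation, the base iteration of PDHG on a standard-form LP is short enough to state in full. The sketch below is the vanilla method only, with no restarts, enhancements, or GPU work, and an illustrative toy instance.

```python
import numpy as np

def pdhg_lp(c, A, b, tau, sigma, iters=5000):
    """Vanilla PDHG for the standard-form LP  min c^T x  s.t. Ax = b, x >= 0,
    i.e. the saddle problem  min_{x>=0} max_y  c^T x + y^T (b - Ax)."""
    x, y = np.zeros(len(c)), np.zeros(len(b))
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0)   # projected primal step
        y = y + sigma * (b - A @ (2 * x_new - x))        # extrapolated dual step
        x = x_new
    return x, y

# Tiny LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = pdhg_lp(c, A, b, tau=0.3, sigma=0.3)  # needs tau*sigma*||A||^2 < 1
print(x, y)
```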

Lipschitz minimization and the Goldstein modulus

Goldstein’s 1977 idealized iteration for minimizing a Lipschitz objective fixes a distance – the step size – and relies on a certain approximate subgradient. That “Goldstein subgradient” is the shortest convex combination of objective gradients at points within that distance of the current iterate. A recent implementable Goldstein-style algorithm allows a remarkable complexity analysis (Zhang …
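
The Goldstein subgradient itself is a min-norm problem over a convex hull, which is easy to sketch: given gradients sampled at points within the step-size distance, find the shortest vector in their convex hull. The Frank-Wolfe solver below is a generic stand-in for that quadratic subproblem; the implementable algorithms the abstract refers to use more careful sampling and subproblem solvers.

```python
import numpy as np

def min_norm_convex_combination(G, iters=500):
    """Approximate the shortest vector in the convex hull of the columns
    of G (the 'Goldstein subgradient' when the columns are gradients
    sampled within the trust distance), via Frank-Wolfe on the simplex."""
    n = G.shape[1]
    lam = np.ones(n) / n
    for k in range(iters):
        grad = G.T @ (G @ lam)       # gradient of 0.5 * ||G lam||^2
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0     # simplex LMO picks a vertex
        step = 2.0 / (k + 2)
        lam = (1 - step) * lam + step * s
    return G @ lam

# Gradients straddling the kink of f(x) = |x|: conv{-1, +1} contains 0.
print(min_norm_convex_combination(np.array([[-1.0, 1.0]])))  # approximately 0
```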

Convex optimization on CAT(0) cubical complexes

We consider geodesically convex optimization problems involving distances to a finite set of points A in a CAT(0) cubical complex. Examples include the minimum enclosing ball problem, the weighted mean and median problems, and the feasibility and projection problems for intersecting balls with centers in A. We propose a decomposition approach relying on standard Euclidean …
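
As a Euclidean warm-up to one of the listed problems: the minimum enclosing ball objective \(\max_i \|x - a_i\|\) is convex, and a subgradient at x is the unit vector pointing from the farthest point toward x. The sketch below is Euclidean only and does not use the paper's decomposition approach, where geodesic distances in the cubical complex replace Euclidean ones.

```python
import numpy as np

def min_enclosing_ball_center(A, iters=2000):
    """Subgradient method for the Euclidean minimum enclosing ball problem
    min_x max_i ||x - a_i||, with diminishing stepsizes 1/k."""
    x = A.mean(axis=0)
    for k in range(1, iters + 1):
        i = np.argmax(np.linalg.norm(A - x, axis=1))   # farthest point
        d = x - A[i]                                   # subgradient direction
        x -= (1.0 / k) * d / np.linalg.norm(d)
    return x

A = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
print(min_enclosing_ball_center(A))   # close to the optimal center (1, 0)
```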