A Parallel Line Search Subspace Correction Method for Composite Convex Optimization

In this paper, we investigate a parallel subspace correction framework for composite convex optimization. The variables are first divided into a few blocks based on certain rules. At each iteration, the algorithms solve a suitable subproblem on each block simultaneously, construct a search direction by combining their solutions on all blocks, and then identify a new …
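
The abstract is truncated, but the loop it outlines (parallel block subproblems, a combined direction, a line search) can be sketched for the lasso. The sketch below is my illustration, not the paper's algorithm: it uses a single proximal-gradient step as the inexact block solve and plain backtracking as the line search.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def F(A, b, lam, x):
    """Composite objective: smooth least squares plus nonsmooth l1 term."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def parallel_sc(A, b, lam, blocks, iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        xhat = x.copy()
        for blk in blocks:                         # independent: run in parallel
            xhat[blk] = soft(x[blk] - step * grad[blk], step * lam)
        d = xhat - x                               # combine all block solutions
        alpha, Fx = 1.0, F(A, b, lam, x)
        while F(A, b, lam, x + alpha * d) > Fx and alpha > 1e-10:
            alpha *= 0.5                           # backtracking line search
        x = x + alpha * d
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
blocks = [np.arange(0, 10), np.arange(10, 20)]     # two variable blocks
print(F(A, b, 0.1, parallel_sc(A, b, 0.1, blocks)))
```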

An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions

We propose a forward-backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. The sequence of iterates generated by the algorithm converges to a critical point of the objective function provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is …
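
A minimal sketch of an inertial forward-backward step of this flavor, assuming the update $x_{k+1} = \mathrm{prox}_{\alpha f}(x_k - \alpha \nabla g(x_k) + \beta(x_k - x_{k-1}))$; the nonsmooth $f = \lambda\|\cdot\|_1$, the smooth nonconvex $g$, and the step rule $\alpha < 2(1-\beta)/L$ are illustrative choices, not the paper's specification.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grad_g(x, c):
    """Gradient of the smooth nonconvex term g(x) = sum(log(1 + (x - c)^2))."""
    return 2.0 * (x - c) / (1.0 + (x - c) ** 2)

c = np.array([2.0, -1.0, 0.3])
L = 2.0                              # Lipschitz constant of grad_g
beta = 0.5                           # inertial weight (illustrative)
alpha = 0.9 * 2 * (1 - beta) / L     # step size with alpha < 2(1 - beta)/L
lam = 0.5                            # weight of the nonsmooth term lam*||.||_1

x_prev = x = np.zeros_like(c)
for _ in range(500):
    y = x - alpha * grad_g(x, c) + beta * (x - x_prev)   # forward + inertia
    x_prev, x = x, prox_l1(y, alpha * lam)               # backward (proximal)
print("critical point estimate:", x)
```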

Weak and Strong Superiorization: Between Feasibility-Seeking and Minimization

We review the superiorization methodology, which can be thought of, in some cases, as lying between feasibility-seeking and constrained minimization. It is not quite trying to solve the full-fledged constrained minimization problem; rather, the task is to find a feasible point that is superior (with respect to an objective function value) to one returned …
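
A toy sketch of the superiorization pattern: interleave summable, objective-reducing perturbations with a feasibility-seeking operator built from sequential projections. The constraint sets, objective, and step schedule below are my assumptions for illustration.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the halfspace {z : a @ z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def feasibility_op(x):
    """One sweep of sequential projections onto the constraint sets."""
    x = proj_halfspace(x, np.array([1.0, 1.0]), 4.0)
    x = proj_halfspace(x, np.array([-1.0, 2.0]), 2.0)
    return np.clip(x, 0.0, None)          # nonnegative orthant

phi = lambda x: x @ x                     # objective to superiorize
x = np.array([5.0, 5.0])
eta, gamma = 1.0, 0.9                     # summable steps: sum eta*gamma^k < inf
for k in range(200):
    g = 2 * x                             # gradient of phi
    if np.linalg.norm(g) > 1e-12:
        x = x - eta * gamma**k * g / np.linalg.norm(g)   # perturbation
    x = feasibility_op(x)                 # feasibility-seeking step
print("superiorized feasible point:", x, "phi =", phi(x))
```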

Douglas-Rachford splitting for nonconvex feasibility problems

We adapt the Douglas-Rachford (DR) splitting method to solve nonconvex feasibility problems by studying this method for a class of nonconvex optimization problems. While the convergence properties of the method for convex problems have been well studied, far less is known in the nonconvex setting. In this paper, for the direct adaptation of the method …
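
The direct adaptation is easy to state in code. Below is a sketch of DR splitting on a standard nonconvex test instance, seeking a point in the intersection of a sphere (nonconvex) and a line; the instance is illustrative and not taken from the paper.

```python
import numpy as np

def proj_sphere(z, r=1.0):
    """Project onto the (nonconvex) sphere {x : ||x|| = r}."""
    n = np.linalg.norm(z)
    return r * z / n if n > 0 else np.array([r, 0.0])

def proj_line(z, c=0.5):
    """Project onto the affine set {x : x[1] = c}."""
    return np.array([z[0], c])

z = np.array([2.0, 2.0])
for _ in range(100):
    y = proj_sphere(z)              # project onto set A
    w = proj_line(2 * y - z)        # project the reflection onto set B
    z = z + w - y                   # Douglas-Rachford update
x = proj_sphere(z)
print("approximate intersection point:", x)   # expect ||x|| = 1, x[1] = 0.5
```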

Randomized First-order Methods for Saddle Point Optimization

In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms can achieve ${\cal O}(1/N)$ rate of convergence by solving only one dual subproblem at each iteration. Our algorithms can also achieve ${\cal O}(1/N^2)$ rate of convergence if a …
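
One hedged sketch of the flavor of such a method: at each iteration only one randomly chosen dual block is updated, and the running sum $\sum_j K_j^\top y_j$ is maintained incrementally so the per-iteration cost stays proportional to one block. The model, step sizes, and update order are my assumptions and carry no claim to the paper's rates.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n = 5, 3, 8                       # m dual blocks of size d, primal dim n
K = [rng.standard_normal((d, n)) for _ in range(m)]
b = [rng.standard_normal(d) for _ in range(m)]

# min_x 0.5||x||^2 + sum_j max_{y_j in [-1,1]^d} <y_j, K_j x - b_j>
x = np.zeros(n)
y = [np.zeros(d) for _ in range(m)]
u = np.zeros(n)                         # running sum of K_j^T y_j
tau, sigma = 0.01, 0.1                  # hand-tuned step sizes
for t in range(20000):
    j = rng.integers(m)                 # solve only one dual block per iteration
    y_new = np.clip(y[j] + sigma * (K[j] @ x - b[j]), -1.0, 1.0)
    u += K[j].T @ (y_new - y[j])        # cheap incremental update
    y[j] = y_new
    x -= tau * (x + u)                  # primal gradient step
obj = 0.5 * x @ x + sum(np.abs(Kj @ x - bj).sum() for Kj, bj in zip(K, b))
print("primal objective:", obj)
```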

A Proximal Multiplier Method for Convex Separable Symmetric Cone Optimization

This work is devoted to the study of a proximal decomposition algorithm for solving convex symmetric cone optimization with separable structures. The algorithm considered is based on the decomposition method proposed by Chen and Teboulle (1994), and the proximal generalized distance defined by Auslender and Teboulle (2006). Under suitable assumptions, first a class of proximal …
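
A sketch of a Chen-Teboulle-style predictor-corrector proximal multiplier iteration on a toy separable problem over the nonnegative orthant (the simplest symmetric cone). I use the Euclidean proximal term rather than the generalized proximal distance of Auslender and Teboulle, so this illustrates only the predictor-corrector structure.

```python
import numpy as np

# min 0.5||x - p0||^2 + 0.5||z - q0||^2  s.t.  x + z = b,  x, z >= 0
p0, q0, b = np.array([3.0, -2.0]), np.array([1.0, 1.0]), np.array([2.0, 1.0])
rho = 0.5                                   # proximal/penalty parameter
x = z = np.zeros(2)
lam = np.zeros(2)                           # multiplier for x + z = b
for _ in range(500):
    p = lam + rho * (x + z - b)             # predictor multiplier
    # decoupled proximal steps; max(., 0) projects onto the cone R^n_+
    x_new = np.maximum((rho * (p0 - p) + x) / (rho + 1), 0.0)
    z_new = np.maximum((rho * (q0 - p) + z) / (rho + 1), 0.0)
    x, z = x_new, z_new
    lam = lam + rho * (x + z - b)           # corrector multiplier update
print("x =", x, "z =", z, "residual =", np.linalg.norm(x + z - b))
```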

Relative Entropy Relaxations for Signomial Optimization

Signomial programs (SPs) are optimization problems specified in terms of signomials, which are weighted sums of exponentials composed with linear functionals of a decision variable. SPs are nonconvex optimization problems in general, and families of NP-hard problems can be reduced to SPs. In this paper we describe a hierarchy of convex relaxations to obtain …
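
For a signomial whose non-constant terms have positive coefficients, a relative-entropy condition certifies nonnegativity: $f(x) = \sum_i c_i e^{\alpha_i^\top x} + c_0$ with $c_i > 0$ is nonnegative if some $\nu \ge 0$ satisfies $\sum_i \nu_i \alpha_i = 0$ and $\sum_i \nu_i \log(\nu_i/(e c_i)) \le c_0$. The cvxpy sketch below maximizes a lower bound $\gamma$ using this condition; the instance is my own, and an exponential-cone-capable solver (e.g. SCS or ECOS) is assumed.

```python
import numpy as np
import cvxpy as cp

# f(x) = exp(x1) + exp(x2) + exp(-x1 - x2) + c0 with c0 = -2
alpha = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
c = np.array([1.0, 1.0, 1.0])
c0 = -2.0

nu = cp.Variable(3, nonneg=True)
gamma = cp.Variable()
constraints = [
    alpha.T @ nu == 0,                                   # balance condition
    cp.sum(cp.rel_entr(nu, np.e * c)) <= c0 - gamma,     # relative entropy
]
cp.Problem(cp.Maximize(gamma), constraints).solve()
print("relative-entropy lower bound:", gamma.value)      # expect about 1.0
```

By the AM/GM inequality the exponential part is at least 3, so the true bound here is $\gamma = 1$, which the relaxation attains at $\nu = (1, 1, 1)$.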

A Gentle, Geometric Introduction to Copositive Optimization

This paper illustrates the fundamental connection between nonconvex quadratic optimization and copositive optimization, a connection that allows the reformulation of nonconvex quadratic problems as convex ones in a unified way. We intend the paper for readers new to the area, and hence the exposition is largely self-contained. We focus on examples having just a few variables …
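
A numerical taste of why the copositive cone is genuinely larger than the tractable cones inside it: the Horn matrix is copositive (x^T H x >= 0 for all x >= 0) yet not positive semidefinite. The sampling check below is only an illustration, not a certificate.

```python
import numpy as np

# Horn matrix: copositive, but neither PSD nor entrywise nonnegative.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=200_000)        # points on the simplex
vals = np.einsum('ij,jk,ik->i', X, H, X)           # x^T H x per sample
print("sampled min of x^T H x:", vals.min())       # close to 0, never negative
print("PSD?", bool(np.all(np.linalg.eigvalsh(H) >= 0)))   # False
```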

Block-wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-block Convex Programming

We consider the linearly constrained convex minimization model with a separable objective function that is the sum of m functions without coupled variables, and discuss how to design an efficient algorithm based on the fundamental technique of splitting the augmented Lagrangian method (ALM). Our focus is the specific big-data scenario where m is huge. A …
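
A sketch of the block-wise grouping idea on a toy model with m = 4 blocks: regroup the blocks into two groups and apply 2-block ADMM, solving each group subproblem exactly. The paper's Gaussian back substitution, which corrects an inner Gauss-Seidel sweep when the groups are further decomposed, is not reproduced here.

```python
import numpy as np

# min sum_i 0.5||x_i - c_i||^2  s.t.  sum_i x_i = b,  with m = 4 blocks
d, beta = 3, 1.0
rng = np.random.default_rng(2)
c = [rng.standard_normal(d) for _ in range(4)]
b = rng.standard_normal(d)
x = [np.zeros(d) for _ in range(4)]
lam = np.zeros(d)

def group_update(idx, other_sum):
    """Exactly minimize one group's augmented Lagrangian subproblem."""
    k = len(idx)
    S = np.tile(np.eye(d), (1, k))                  # sum operator on the group
    rhs = (np.concatenate([c[i] for i in idx])
           + beta * S.T @ (b - other_sum - lam / beta))
    y = np.linalg.solve(np.eye(k * d) + beta * S.T @ S, rhs)
    return [y[j * d:(j + 1) * d] for j in range(k)]

for _ in range(100):
    x[0], x[1] = group_update([0, 1], x[2] + x[3])  # group 1 update
    x[2], x[3] = group_update([2, 3], x[0] + x[1])  # group 2 update
    lam = lam + beta * (sum(x) - b)                 # multiplier update
print("constraint residual:", np.linalg.norm(sum(x) - b))
```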

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An …
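
A sketch of an SPDC-style iteration for ridge regression, where both the dual coordinate step and the primal proximal step have closed forms. The step-size and extrapolation formulas below follow my reading of typical choices for a 1-smooth loss and should be treated as assumptions, not the paper's prescription.

```python
import numpy as np

# Ridge regression as a saddle point:
#   min_x (1/n) sum_i phi_i(a_i^T x) + (lam/2)||x||^2, phi_i(z) = 0.5(z - b_i)^2
#   <=>  min_x max_y (1/n) sum_i (y_i a_i^T x - phi_i^*(y_i)) + (lam/2)||x||^2
rng = np.random.default_rng(3)
n, dim, lam = 50, 10, 0.1
A = rng.standard_normal((n, dim))
b = A @ rng.standard_normal(dim) + 0.1 * rng.standard_normal(n)

R = np.max(np.linalg.norm(A, axis=1))
tau = 1.0 / (2 * R) * np.sqrt(1.0 / (n * lam))   # primal step (assumed tuning)
sigma = 1.0 / (2 * R) * np.sqrt(n * lam)         # dual step (assumed tuning)
theta = 1.0 - 1.0 / (n + R * np.sqrt(n / lam))   # extrapolation weight

x = xbar = np.zeros(dim)
y = np.zeros(n)
u = np.zeros(dim)                                # u = (1/n) sum_i y_i a_i
for t in range(20 * n):
    i = rng.integers(n)
    # dual coordinate step: closed form for phi_i^*(y) = 0.5 y^2 + b_i y
    y_new = (y[i] + sigma * (A[i] @ xbar - b[i])) / (1.0 + sigma)
    dy = y_new - y[i]
    y[i] = y_new
    # primal step: prox of (lam/2)||.||^2 with variance-corrected gradient
    x_new = (x - tau * (u + dy * A[i])) / (1.0 + tau * lam)
    u += dy * A[i] / n
    xbar = x_new + theta * (x_new - x)
    x = x_new
print("ridge objective:",
      0.5 * np.mean((A @ x - b) ** 2) + 0.5 * lam * x @ x)
```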