A Parallel Inertial Proximal Optimization Method

The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximal monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm that includes inertia parameters, and we develop parallel versions to deal with the case of a sum of an arbitrary number of …
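For orientation, the plain (non-inertial) Douglas-Rachford iteration that the paper extends can be sketched on a toy composite problem. The splitting f(x) = ½‖x − b‖², g(x) = λ‖x‖₁, the step size γ, and the data below are illustrative choices, not the paper's setting:

```python
import numpy as np

# Toy composite problem: min_x 0.5*||x - b||^2 + lam*||x||_1.

def prox_f(v, gamma, b):
    # prox of f(x) = 0.5*||x - b||^2, i.e. argmin_x f(x) + ||x - v||^2/(2*gamma)
    return (v + gamma * b) / (1.0 + gamma)

def prox_g(v, gamma, lam):
    # prox of g(x) = lam*||x||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

def douglas_rachford(b, lam, gamma=1.0, iters=200):
    y = np.zeros_like(b)
    for _ in range(iters):
        x = prox_f(y, gamma, b)
        z = prox_g(2.0 * x - y, gamma, lam)
        y = y + z - x    # an inertial variant would add a momentum term beta*(y_k - y_{k-1}) here
    return prox_f(y, gamma, b)

b = np.array([3.0, 0.5, -2.0])
x_star = douglas_rachford(b, lam=1.0)
# the minimizer is the soft-thresholding of b at level 1, i.e. [2, 0, -1]
```

The fixed-point variable y is not itself a solution; the solution is recovered by one final application of prox_f.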

First order optimality conditions for mathematical programs with semidefinite cone complementarity constraints

In this paper we consider a mathematical program with semidefinite cone complementarity constraints (SDCMPCC). Such a problem is a matrix analogue of the mathematical program with (vector) complementarity constraints (MPCC) and includes MPCC as a special case. We derive explicit expressions for the strong, Mordukhovich, and Clarke (S-, M-, and C-) stationarity conditions and give constraint …

A contraction method with implementable proximal regularization for linearly constrained convex programming

The proximal point algorithm (PPA) is a classical method, and it is implicit in the sense that the resulting proximal subproblems may be as difficult as the original problem. In this paper, we show that with appropriate choices of the proximal parameters, applying PPA to linearly constrained convex programming can result in easy proximal subproblems. …
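To illustrate the generic (uncustomized) PPA iteration that this discussion starts from, here is a sketch on an unconstrained quadratic, a case where the subproblem happens to have a closed form; in general the subproblem is as hard as the original problem, which is the difficulty the paper's choice of proximal parameters is designed to avoid. The quadratic data and the parameter c are illustrative:

```python
import numpy as np

# Generic PPA: x_{k+1} = argmin_x f(x) + ||x - x_k||^2 / (2*c),
# here for f(x) = 0.5 * x^T Q x - q^T x with Q symmetric positive definite.

def ppa_quadratic(Q, q, c=1.0, iters=100):
    n = q.size
    x = np.zeros(n)
    M = Q + np.eye(n) / c              # normal-equations matrix of the subproblem
    for _ in range(iters):
        x = np.linalg.solve(M, q + x / c)   # closed-form proximal step
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([1.0, 1.0])
x = ppa_quadratic(Q, q)
# converges to the minimizer, i.e. the solution of Q x = q
```

Each step is a strictly better-conditioned problem than the original (the eigenvalues of M are those of Q shifted by 1/c), which is why PPA is attractive whenever the regularized subproblem is tractable.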

Efficient Block-coordinate Descent Algorithms for the Group Lasso

We present two algorithms to solve the Group Lasso problem [Yuan & Lin]. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem. We show that it exhibits excellent performance when the groups are of moderate size. For large …
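A minimal sketch of BCD for the Group Lasso, in the orthonormal-group setting of Yuan & Lin where each block subproblem has a closed-form solution (the paper's algorithms handle the general case; the data below are synthetic):

```python
import numpy as np

def group_lasso_bcd(X, y, groups, lam, iters=100):
    """Block coordinate descent for 0.5*||y - X b||^2 + lam * sum_g ||b_g||_2.
    Assumes each group's columns are orthonormal (X_g^T X_g = I), so the
    per-group subproblem reduces to a group-wise soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        for g in groups:                        # g is an index array for one block
            r = y - X @ b + X[:, g] @ b[g]      # partial residual excluding group g
            s = X[:, g].T @ r
            norm_s = np.linalg.norm(s)
            b[g] = max(0.0, 1.0 - lam / norm_s) * s if norm_s > 0 else 0.0
    return b

rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((20, 4)))   # orthonormal columns
y = X[:, :2] @ np.array([3.0, -2.0])                # signal lives in the first group
groups = [np.array([0, 1]), np.array([2, 3])]
b = group_lasso_bcd(X, y, groups, lam=0.5)
# the inactive group [2, 3] is driven exactly to zero
```

The group-wise shrinkage either zeroes an entire block or scales it, which is the mechanism behind the Group Lasso's structured sparsity.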

Finding approximately rank-one submatrices with the nuclear norm and l1 norm

We propose a convex optimization formulation with the nuclear norm and $\ell_1$-norm to find a large approximately rank-one submatrix of a given nonnegative matrix. We develop optimality conditions for the formulation and characterize the properties of the optimal solutions. We establish conditions under which the optimal solution of the convex formulation has a specific sparse …

On the Lasserre hierarchy of semidefinite programming relaxations of convex polynomial optimization problems

The Lasserre hierarchy of semidefinite programming approximations to convex polynomial optimization problems is known to converge finitely under some assumptions [J.B. Lasserre. Convexity in semialgebraic geometry and polynomial optimization. SIAM J. Optim. 19, 1995-2014, 2009]. We give a new proof of the finite convergence property that does not require the assumption that the Hessian of …
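For orientation, a generic member of the hierarchy can be written in the standard Putinar/sum-of-squares form (textbook notation, not a restatement of the paper's assumptions):

```latex
% Order-d SOS relaxation of  min_x { p(x) : g_i(x) >= 0, i = 1,...,m }
\rho_d \;=\; \sup_{\lambda,\,\sigma_0,\dots,\sigma_m}
  \Bigl\{\, \lambda \;:\; p - \lambda = \sigma_0 + \sum_{i=1}^{m} \sigma_i g_i,\;
  \sigma_i \text{ SOS},\; \deg \sigma_0 \le 2d,\; \deg(\sigma_i g_i) \le 2d \,\Bigr\}
```

Each relaxation is a semidefinite program, and finite convergence means that $\rho_d$ attains the true optimal value for some finite order $d$.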

Bundle-type methods uniformly optimal for smooth and nonsmooth convex optimization

The bundle-level method and certain of its variants are known to exhibit an optimal rate of convergence, i.e., ${\cal O}(1/\sqrt{t})$, as well as excellent practical performance for solving general non-smooth convex programming (CP) problems. However, this rate of convergence is significantly worse than the optimal one for solving smooth CP problems, i.e., ${\cal O}(1/t^2)$. In this paper, …

Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

The adaptive cubic regularization algorithms described in Cartis, Gould & Toint (2009, 2010) for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy …
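As background, each step of an adaptive cubic regularization method minimizes (possibly approximately) a cubic model of the objective around the current iterate; the standard form of the model is:

```latex
% Cubic model at iterate x_k, with g_k = \nabla f(x_k), B_k a symmetric
% approximation of \nabla^2 f(x_k), and \sigma_k > 0 the adaptively
% updated regularization weight
m_k(s) \;=\; f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s
        + \tfrac{\sigma_k}{3}\, \|s\|^3
```

The cubic term plays the role a trust-region radius plays elsewhere: $\sigma_k$ is increased or decreased depending on how well $m_k$ predicted the actual decrease in $f$.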

Convergence analysis of primal-dual algorithms for total variation image restoration

Recently, some attractive primal-dual algorithms have been proposed for solving saddle-point problems, with particular applications in the area of total variation (TV) image restoration. This paper focuses on the convergence analysis of existing primal-dual algorithms and shows that the involved parameters of those algorithms (including the step sizes) can be significantly enlarged if …
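A minimal sketch of a standard primal-dual scheme of this family (the Chambolle-Pock iteration, not necessarily the exact variant analyzed here) on a tiny 1D TV denoising problem; the signal, the regularization weight, and the step-size choice τσ‖D‖² < 1 are illustrative:

```python
import numpy as np

def tv_denoise_1d(f, lam, iters=2000):
    """Primal-dual iteration for min_u 0.5*||u - f||^2 + lam*||D u||_1,
    with D the forward-difference operator. Dualizing the TV term gives the
    saddle-point problem min_u max_{|y|<=lam} <D u, y> + 0.5*||u - f||^2."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    L = np.linalg.norm(D, 2)                # operator norm ||D||
    tau = sigma = 0.99 / L                  # ensures tau*sigma*||D||^2 < 1
    u = f.copy(); u_bar = f.copy(); y = np.zeros(n - 1)
    for _ in range(iters):
        y = np.clip(y + sigma * (D @ u_bar), -lam, lam)      # dual ascent + projection
        u_new = (u - tau * (D.T @ y) + tau * f) / (1 + tau)  # prox of 0.5*||u - f||^2
        u_bar = 2 * u_new - u                                # extrapolation step
        u = u_new
    return u

f = np.array([0.0, 0.0, 4.0, 4.0, 0.0, 0.0])
u = tv_denoise_1d(f, lam=1.0)
# the step signal is flattened toward [0.5, 0.5, 3, 3, 0.5, 0.5]
```

The condition τσ‖D‖² < 1 is exactly the kind of step-size restriction whose possible relaxation is the subject of the convergence analysis above.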