Finding approximately rank-one submatrices with the nuclear norm and $\ell_1$-norm

We propose a convex optimization formulation with the nuclear norm and $\ell_1$-norm to find a large approximately rank-one submatrix of a given nonnegative matrix. We develop optimality conditions for the formulation and characterize the properties of the optimal solutions. We establish conditions under which the optimal solution of the convex formulation has a specific sparse … Read more
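As a hypothetical illustration of how such a nuclear-norm/$\ell_1$ trade-off can be posed and solved numerically (the penalty $\theta$ and the residual term $\|A-X\|_1$ below are illustrative choices, not necessarily the authors' exact formulation):

```python
# Illustrative sketch only: a generic nuclear-norm + elementwise-l1 convex
# program over a nonnegative matrix A; not necessarily the paper's formulation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((20, 15)))   # nonnegative data matrix
theta = 0.5                                 # hypothetical trade-off parameter

X = cp.Variable(A.shape)
# The nuclear norm promotes a low-rank X; the l1 residual keeps X close to A
# except on a sparse set of entries, so X tends to capture an approximately
# rank-one block of A.
problem = cp.Problem(cp.Minimize(cp.normNuc(X) + theta * cp.sum(cp.abs(A - X))))
problem.solve()

print(np.linalg.svd(X.value, compute_uv=False)[:5])  # inspect leading singular values
```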

On the Lasserre hierarchy of semidefinite programming relaxations of convex polynomial optimization problems

The Lasserre hierarchy of semidefinite programming approximations to convex polynomial optimization problems is known to converge finitely under some assumptions. [J.B. Lasserre. Convexity in semialgebraic geometry and polynomial optimization. SIAM J. Optim. 19, 1995-2014, 2009.] We give a new proof of the finite convergence property that does not require the assumption that the Hessian of … Read more
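For orientation, the level-$r$ relaxation in this hierarchy for $\min\{f(x)\,:\,g_i(x)\ge 0,\ i=1,\dots,m\}$ can be written, in its sums-of-squares (dual) form, as
\[
\rho_r \;=\; \sup\Big\{\lambda \;:\; f-\lambda=\sigma_0+\sum_{i=1}^{m}\sigma_i\,g_i,\ \ \sigma_i \text{ sums of squares},\ \deg(\sigma_0)\le 2r,\ \deg(\sigma_i g_i)\le 2r\Big\},
\]
each level being a semidefinite program; finite convergence means that $\rho_r$ attains the true optimal value at some finite level $r$.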

Bundle-type methods uniformly optimal for smooth and nonsmooth convex optimization

The bundle-level method and certain of its variants are known to exhibit an optimal rate of convergence, i.e., ${\cal O}(1/\sqrt{t})$, as well as excellent practical performance for solving general nonsmooth convex programming (CP) problems. However, this rate of convergence is significantly worse than the optimal one for solving smooth CP problems, i.e., ${\cal O}(1/t^2)$. In this paper, … Read more
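For reference, the classical worst-case bounds separating the two problem classes after $t$ iterations of a first-order method are
\[
f(x_t)-f^{*}\;=\;{\cal O}\!\big(MD/\sqrt{t}\big)\ \ \text{(nonsmooth, $M$-Lipschitz objective)},
\qquad
f(x_t)-f^{*}\;=\;{\cal O}\!\big(LD^{2}/t^{2}\big)\ \ \text{($L$-smooth objective)},
\]
where $D$ bounds the distance from the starting point to an optimal solution; a uniformly optimal method matches both rates without knowing in advance which class the problem belongs to.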

Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

The adaptive cubic regularization algorithms described in Cartis, Gould & Toint (2009, 2010) for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy … Read more
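For orientation, each iteration of an adaptive cubic regularization method computes a trial step by approximately minimizing the cubically regularized model
\[
m_k(s)\;=\;f(x_k)+\nabla f(x_k)^{\top}s+\tfrac12\,s^{\top}B_k s+\tfrac{\sigma_k}{3}\,\|s\|^{3},
\]
where $B_k$ is an exact or approximate Hessian and $\sigma_k>0$ is a regularization weight adapted from iteration to iteration.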

Convergence analysis of primal-dual algorithms for total variation image restoration

Recently, some attractive primal-dual algorithms have been proposed for solving a saddle-point problem, with particular applications in the area of total variation (TV) image restoration. This paper focuses on the convergence analysis of existing primal-dual algorithms and shows that the parameters involved in those primal-dual algorithms (including the step sizes) can be significantly enlarged if … Read more
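The underlying saddle-point problem typically has the form
\[
\min_{x}\;\max_{y}\;\;\langle Kx,\,y\rangle+G(x)-F^{*}(y),
\]
where, in TV image restoration, $K$ is a discrete gradient operator, $G$ a data-fidelity term, and $F^{*}$ the convex conjugate of the norm inducing the TV penalty; standard analyses of the primal-dual iterations impose step-size conditions of the form $\tau\sigma\|K\|^{2}\le 1$, and it is requirements of this kind that can be relaxed.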

Symmetric tensor approximation hierarchies for the completely positive cone

In this paper we construct two approximation hierarchies for the completely positive cone based on symmetric tensors. We show that one hierarchy corresponds to dual cones of a known polyhedral approximation hierarchy for the copositive cone, and the other hierarchy corresponds to dual cones of a known semidefinite approximation hierarchy for the copositive cone. As … Read more
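Recall that the completely positive cone and the copositive cone are the mutually dual cones
\[
\mathcal{CP}_n=\operatorname{conv}\{xx^{\top}\,:\,x\in\mathbb{R}^{n}_{+}\},
\qquad
\mathcal{COP}_n=\{A\in\mathcal{S}^{n}\,:\,x^{\top}Ax\ge 0\ \text{for all}\ x\in\mathbb{R}^{n}_{+}\},
\]
which is why taking dual cones turns approximation hierarchies for the copositive cone into approximation hierarchies for the completely positive cone.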

The Inexact Spectral Bundle Method for Convex Quadratic Semidefinite Programming

We present an inexact spectral bundle method for solving convex quadratic semidefinite optimization problems. This method is a first-order method and hence requires much less computational cost per iteration than second-order approaches such as interior-point methods. In each iteration of our method, we solve an eigenvalue minimization problem inexactly, and solve a small convex quadratic semidefinite … Read more

Nonsmooth optimization over the (weakly or properly) Pareto set of a linear-quadratic multi-objective control problem: explicit optimality conditions

We present explicit optimality conditions for a nonsmooth functional defined over the (properly or weakly) Pareto set associated with a multiobjective linear-quadratic control problem. This problem is very difficult even in a finite-dimensional setting, i.e., when, instead of a control problem, we deal with a mathematical programming problem. Amongst different applications, our problem may … Read more

On the acceleration of the augmented Lagrangian method for linearly constrained optimization

The classical augmented Lagrangian method (ALM) plays a fundamental role in the algorithmic development of constrained optimization. In this paper, we mainly show that Nesterov’s influential acceleration techniques can be applied to accelerate ALM, thus yielding an accelerated ALM whose iteration-complexity is ${\cal O}(1/k^2)$ for linearly constrained convex programming. As a by-product, we also easily show that … Read more
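For reference, for $\min\{f(x)\,:\,Ax=b\}$ with penalty parameter $\beta>0$, the classical ALM iterates
\[
x^{k+1}\in\arg\min_{x}\Big\{f(x)+\langle\lambda^{k},\,Ax-b\rangle+\tfrac{\beta}{2}\|Ax-b\|^{2}\Big\},
\qquad
\lambda^{k+1}=\lambda^{k}+\beta\,(Ax^{k+1}-b);
\]
the accelerated variant discussed above applies Nesterov-type acceleration on top of this scheme.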

An Introduction to a Class of Matrix Cone Programming

In this paper, we define a class of linear conic programming problems (which we call matrix cone programming, or MCP) involving the epigraphs of five commonly used matrix norms and the well-studied symmetric cone. MCP has recently found many important applications, for example, in nuclear norm relaxations of affine rank minimization problems. In order to … Read more
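As a simple illustration of the cones involved, the epigraph of a matrix norm $\|\cdot\|$ is the convex cone $K_{\|\cdot\|}=\{(t,X)\,:\,\|X\|\le t\}$; for the nuclear norm, for instance, membership admits the standard semidefinite characterization
\[
\|X\|_{*}\le t
\quad\Longleftrightarrow\quad
\exists\,W_{1},W_{2}\ \text{symmetric}:\ \begin{pmatrix}W_{1}&X\\ X^{\top}&W_{2}\end{pmatrix}\succeq 0,
\quad \tfrac12\big(\operatorname{tr}W_{1}+\operatorname{tr}W_{2}\big)\le t .
\]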