Fast global convergence of gradient methods for high-dimensional statistical recovery

Many statistical $M$-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension $p$ to grow with (and possibly exceed) … Read more
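
As a concrete illustration of the composite gradient methods analyzed here, the sketch below runs a proximal gradient (ISTA) iteration on the Lasso, one instance of the loss-plus-regularizer $M$-estimators in question. It is a minimal sketch, not the paper's algorithm or analysis: the step size, iteration count, and synthetic data are assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def composite_gradient_lasso(X, y, lam, n_iter=500):
    """Composite gradient descent for (1/2n)||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n           # gradient of the smooth loss
        w = soft_threshold(w - grad / L, lam / L)  # gradient step, then prox
    return w

# illustrative high-dimensional data: sparse ground truth, p > n
rng = np.random.default_rng(0)
n, p, s = 100, 400, 10
X = rng.standard_normal((n, p))
w_star = np.zeros(p); w_star[:s] = 1.0
y = X @ w_star + 0.1 * rng.standard_normal(n)
w_hat = composite_gradient_lasso(X, y, lam=0.1)
```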

Stochastic optimization and sparse statistical recovery: An optimal algorithm for high dimensions

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an $\mathcal{O}(p/T)$ convergence rate for strongly convex objectives in $p$ dimensions, and an $\mathcal{O}(\sqrt{(s \log p)/T})$ convergence rate when … Read more
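
For contrast with the optimal algorithm the abstract refers to, here is a minimal baseline: proximal stochastic gradient descent with the classical $1/(\mu t)$ step size for a $\mu$-strongly convex expected loss plus an $\ell_1$ regularizer. The sampling model, the `grad_sample` oracle, and all parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_sgd(grad_sample, w0, mu, lam, steps):
    """Proximal SGD for a mu-strongly convex expected loss plus lam*||.||_1."""
    w = np.asarray(w0, dtype=float).copy()
    for t in range(1, steps + 1):
        eta = 1.0 / (mu * t)                   # classical strongly convex step size
        w = soft_threshold(w - eta * grad_sample(w), eta * lam)
    return w

# illustrative sparse least squares: one random row per stochastic gradient
rng = np.random.default_rng(0)
p, s = 200, 5
w_star = np.zeros(p); w_star[:s] = 1.0
def grad_sample(w):
    a = rng.standard_normal(p)                 # E[a a^T] = I, so mu = 1
    return a * (a @ (w - w_star) - 0.1 * rng.standard_normal())
w_hat = prox_sgd(grad_sample, np.zeros(p), mu=1.0, lam=0.05, steps=5000)
```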

Correlative Sparsity Structures and Semidefinite Relaxations for Concave Cost Transportation Problems with Change of Variables

We present a hierarchy of semidefinite programming (SDP) relaxations for solving the NP-hard concave cost transportation problem (CCTP) with $p$ suppliers and $q$ demanders. In particular, we study cases in which the cost function is quadratic or square-root concave. The key idea of our relaxation methods is in the change … Read more
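
To make the relaxation idea concrete, the sketch below builds an order-1 (Shor-level) SDP relaxation of a tiny concave-quadratic-cost transportation instance in cvxpy, adding RLT liftings of the equality constraints so the lifted matrix stays bounded. It is a generic first-level relaxation on made-up data, not the paper's hierarchy or its change of variables.

```python
import numpy as np
import cvxpy as cp

# tiny instance: 2 suppliers, 2 demanders, concave cost c^T x - x^T Q x
p, q = 2, 2
supply, demand = np.array([3.0, 2.0]), np.array([2.0, 3.0])
n = p * q
c = np.array([4.0, 3.0, 2.0, 5.0])
Q = np.eye(n)                                   # Q PSD makes the cost concave

# transportation constraints A x = b over the flattened flows x = vec(flows)
A = np.zeros((p + q, n))
b = np.concatenate([supply, demand])
for i in range(p):
    A[i, i * q:(i + 1) * q] = 1.0               # supply rows
for j in range(q):
    A[p + j, j::q] = 1.0                        # demand rows

x = cp.Variable(n)
X = cp.Variable((n, n), symmetric=True)         # lifting of x x^T
M = cp.bmat([[np.ones((1, 1)), cp.reshape(x, (1, n))],
             [cp.reshape(x, (n, 1)), X]])
constraints = [
    A @ x == b, x >= 0,
    A @ X == np.reshape(b, (-1, 1)) @ cp.reshape(x, (1, n)),  # RLT lifting
    X >= 0,                                     # products of nonnegative flows
    M >> 0,                                     # Shor moment-matrix constraint
]
prob = cp.Problem(cp.Minimize(c @ x - cp.trace(Q @ X)), constraints)
prob.solve()                                    # lower bound on the CCTP value
```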

Tightened L0 Relaxation Penalties for Classification

In optimization-based classification model selection, for example when using linear programming formulations, a standard approach is to penalize the L1 norm of some linear functional in order to select sparse models. Instead, we propose a novel integer linear program for sparse classifier selection, generalizing the minimum disagreement hyperplane problem whose complexity has been investigated in … Read more
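
In the same spirit, a generic big-M mixed-integer formulation of the minimum disagreement hyperplane problem with an exact L0 penalty fits in a few lines of PuLP. This is a plain big-M sketch on toy data, not the tightened relaxation penalties proposed here; the constants `M` and `C` are illustrative assumptions.

```python
import numpy as np
import pulp

# toy dataset with labels in {-1, +1}
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = [1, 1, -1, -1]
n, d = X.shape
M, C = 100.0, 0.1                       # big-M constant and sparsity weight

prob = pulp.LpProblem("sparse_min_disagreement", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", -1, 1) for j in range(d)]
b = pulp.LpVariable("b", -1, 1)
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]  # misclassified?
u = [pulp.LpVariable(f"u{j}", cat="Binary") for j in range(d)]  # w_j nonzero?

prob += pulp.lpSum(z) + C * pulp.lpSum(u)       # disagreements + exact L0 term
for i in range(n):                              # enforce margin unless z_i = 1
    prob += y[i] * (pulp.lpSum(float(X[i, j]) * w[j] for j in range(d)) + b) >= 1 - M * z[i]
for j in range(d):                              # u_j = 0 forces w_j = 0
    prob += w[j] <= u[j]
    prob += w[j] >= -u[j]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
```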

Rank-Sparsity Incoherence for Matrix Decomposition

Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is NP-hard in general. In this … Read more
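
A standard convex heuristic for this decomposition is principal component pursuit: minimize $\|L\|_* + \lambda\|S\|_1$ subject to $L + S = M$. The ADMM sketch below alternates singular value thresholding with entrywise soft thresholding; the parameter defaults follow common practice and are assumptions, not necessarily this paper's choices.

```python
import numpy as np

def shrink(Z, tau):
    """Entrywise soft thresholding: prox of tau * ||.||_1."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def svt(Z, tau):
    """Singular value thresholding: prox of tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(M, lam=None, n_iter=200):
    """ADMM for min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())        # common step-size heuristic
    S, Y = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (M - L - S)                # dual ascent on L + S = M
    return L, S
```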

Approximating Hessians in multilevel unconstrained optimization

We consider Hessian approximation schemes for large-scale multilevel unconstrained optimization problems, which typically exhibit sparsity and partial separability structure. This allows iterative quasi-Newton methods to solve them despite their size. Structured finite-difference methods and updating schemes based on the secant equation are presented and compared numerically inside the multilevel trust-region algorithm proposed by … Read more
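
As a bare-bones illustration of finite-difference Hessian approximation under known sparsity, the sketch below estimates only the entries of a given symmetric pattern from gradient differences. It omits the structured updating schemes and multilevel machinery compared in the paper; the `pattern` argument and the step `h` are assumptions for the example.

```python
import numpy as np

def fd_hessian_sparse(grad, x, pattern, h=1e-6):
    """Forward-difference Hessian estimate restricted to a known sparsity
    pattern (a set of (i, j) pairs assumed to contain both (i, j) and (j, i));
    only columns appearing in the pattern cost an extra gradient evaluation."""
    n = x.size
    H = np.zeros((n, n))
    g0 = grad(x)
    for j in sorted({j for _, j in pattern}):
        e = np.zeros(n); e[j] = h
        dg = (grad(x + e) - g0) / h             # estimate of Hessian column j
        for i, jj in pattern:
            if jj == j:
                H[i, j] = dg[i]
    return 0.5 * (H + H.T)                      # symmetrize the estimate
```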

Exploiting Sparsity in SDP Relaxation for Sensor Network Localization

A sensor network localization problem can be formulated as a quadratic optimization problem (QOP). For QOPs, Lasserre's semidefinite programming (SDP) relaxation with relaxation order 1 for general polynomial optimization problems (POPs) is known to be equivalent to the sparse SDP relaxation by Waki et al. with relaxation order 1, except the … Read more
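
For reference, the sketch below sets up the standard full (Biswas-Ye-style) SDP relaxation of a toy sensor network localization instance in cvxpy; the sparse relaxation discussed in the abstract refines this. The anchors, sensors, and exact squared distances are made-up data for the example.

```python
import numpy as np
import cvxpy as cp

# toy instance: 3 known anchors and 2 unknown sensors in the plane
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_x = np.array([[0.3, 0.4], [0.6, 0.2]])
m = true_x.shape[0]

def dist2(u, v):
    return float(np.sum((u - v) ** 2))

sa = [(i, k, dist2(true_x[i], anchors[k]))       # sensor-anchor measurements
      for i in range(m) for k in range(len(anchors))]
ss = [(0, 1, dist2(true_x[0], true_x[1]))]       # sensor-sensor measurement

Xv = cp.Variable((2, m))                         # sensor positions
Y = cp.Variable((m, m), symmetric=True)          # relaxation of X^T X
Z = cp.bmat([[np.eye(2), Xv], [Xv.T, Y]])
cons = [Z >> 0]
for i, k, d2 in sa:                              # ||x_i - a_k||^2 = d2
    a = anchors[k]
    cons.append(a @ a - 2 * a @ Xv[:, i] + Y[i, i] == d2)
for i, j, d2 in ss:                              # ||x_i - x_j||^2 = d2
    cons.append(Y[i, i] - 2 * Y[i, j] + Y[j, j] == d2)
cp.Problem(cp.Minimize(0), cons).solve()
print(Xv.value)                                  # recovered sensor positions
```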

A Numerical Algorithm for Block-Diagonal Decomposition of Matrix *-Algebras, Part I: Proposed Approach and Application to Semidefinite Programming

Motivated by recent interest in group symmetry in semidefinite programming, we propose a numerical method for finding a finest simultaneous block-diagonalization of a finite number of matrices, or equivalently, the irreducible decomposition of the matrix *-algebra they generate. The method is composed of numerical linear-algebraic computations such as eigenvalue computation, and it automatically makes full use of the … Read more
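
The flavor of such eigenvalue-based decompositions can be sketched as follows: eigendecompose a random symmetric combination of the given matrices and read a common block structure off the transformed nonzero pattern. This is a heuristic sketch of the idea, not the paper's algorithm; the tolerance and random seed are assumptions.

```python
import numpy as np

def common_block_structure(mats, tol=1e-8, seed=0):
    """Heuristic simultaneous block-diagonalization sketch: transform all
    matrices by the eigenvectors of a random symmetric combination, then
    group coordinates that still interact into common diagonal blocks."""
    rng = np.random.default_rng(seed)
    A = sum(rng.standard_normal() * (M + M.T) / 2 for M in mats)
    _, V = np.linalg.eigh(A)                    # candidate orthogonal transform
    n = A.shape[0]
    adj = np.zeros((n, n), dtype=bool)          # coordinates that interact
    for M in mats:
        adj |= np.abs(V.T @ M @ V) > tol
    adj |= adj.T
    blocks, seen = [], set()                    # connected components = blocks
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(np.nonzero(adj[v])[0].tolist())
        seen |= comp
        blocks.append(sorted(comp))
    return V, blocks
```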

Recognizing Underlying Sparsity in Optimization

Exploiting sparsity is essential to improve the efficiency of solving large optimization problems. We present a method for recognizing the underlying sparsity structure of a nonlinear partially separable problem, and show how the sparsity of the Hessian matrices of the problem’s functions can be improved by performing a nonsingular linear transformation in the space corresponding … Read more
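
A minimal numerical illustration of underlying sparsity: a quadratic whose Hessian is dense in the original variables but tridiagonal after a nonsingular linear change of variables. The quadratic model and the transformation `T` below are assumptions made to keep the example short.

```python
import numpy as np

# f(x) = g(T x), where g has a tridiagonal (sparse) Hessian H_g;
# in the original variables the Hessian of f is T^T H_g T, which is dense.
n = 5
rng = np.random.default_rng(1)
T = rng.standard_normal((n, n))                 # nonsingular with probability 1
H_g = (np.diag(2.0 * np.ones(n))
       + np.diag(-np.ones(n - 1), 1)
       + np.diag(-np.ones(n - 1), -1))          # sparse Hessian of g
H_f = T.T @ H_g @ T                             # dense Hessian of f
print(np.count_nonzero(H_f), "nonzeros before vs",
      np.count_nonzero(H_g), "after the transformation")
```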

Using Partial Separability of Functions in Generating Set Search Methods for Unconstrained Optimisation

Generating Set Search (GSS) methods, a class of derivative-free methods for unconstrained optimisation, are in general robust but converge slowly. It has been shown that the performance of these methods can be enhanced by utilising accumulated information about the objective function as well as a priori knowledge such as partial separability. This paper introduces a … Read more
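
For orientation, here is a baseline compass search, the simplest member of the GSS class, without the partial-separability enhancements this paper introduces. The step-size schedule and stopping tolerance are illustrative assumptions.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10_000):
    """Basic generating set (compass) search: poll the +/- coordinate
    directions, accept any improving point, halve the step otherwise."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    dirs = np.vstack([np.eye(x.size), -np.eye(x.size)])  # generating set
    for _ in range(max_iter):
        improved = False
        for d in dirs:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                          # contract after a failed poll
            if step < tol:
                break
    return x, fx

# usage on a separable quadratic
x_min, f_min = compass_search(lambda v: float(np.sum((v - 1.0) ** 2)), np.zeros(3))
```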