Polyhedral approximations of the semidefinite cone and their application

We develop techniques to construct a series of sparse polyhedral approximations of the semidefinite cone. Motivated by the semidefinite (SD) bases proposed by Tanaka and Yoshise (2018), we propose a simple expansion of SD bases that preserves the sparsity of the matrices composing them. We prove that the polyhedral approximation using our expanded …
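
For intuition, a classical sparse polyhedral inner approximation of the semidefinite cone is the cone of symmetric diagonally dominant matrices with nonnegative diagonal, which is generated by sparse rank-one matrices $e_ie_i^T$ and $(e_i \pm e_j)(e_i \pm e_j)^T$. The sketch below illustrates that construction only; it is not the authors' expanded SD bases.

```python
import numpy as np

def in_dd_cone(A, tol=1e-12):
    """Test membership in the cone of symmetric diagonally dominant
    matrices with nonnegative diagonal -- a sparse polyhedral inner
    approximation of the PSD cone (every such matrix is PSD)."""
    A = np.asarray(A, dtype=float)
    assert np.allclose(A, A.T), "expects a symmetric matrix"
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) >= off - tol))

def dd_generators(n):
    """Rank-one generators e_i e_i^T and (e_i +/- e_j)(e_i +/- e_j)^T,
    whose conic hull is exactly the diagonally dominant cone."""
    I = np.eye(n)
    gens = [np.outer(I[i], I[i]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            for s in (+1.0, -1.0):
                v = I[i] + s * I[j]
                gens.append(np.outer(v, v))
    return gens

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(in_dd_cone(A))          # True: A is diagonally dominant, hence PSD
print(len(dd_generators(3)))  # 9 sparse rank-one generators for n = 3
```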

A Framework for Solving Chance-Constrained Linear Matrix Inequality Programs

We propose a novel partial sample average approximation (PSAA) framework to solve the two main types of chance-constrained linear matrix inequality (CCLMI) problems: CCLMI with a random technology matrix and CCLMI with a random right-hand side. We develop a series of computationally tractable PSAA-based approximations for CCLMI problems, analyze their properties, and derive sufficient conditions ensuring convexity. …
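
As a toy illustration of the plain sample average approximation that PSAA refines, the sketch below estimates by Monte Carlo whether a chance constraint $P(A(x,\xi)\succeq 0)\ge 1-\varepsilon$ holds at a fixed $x$. The random LMI and the distribution of $\xi$ are hypothetical, chosen only to make the mechanics concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def lmi_holds(x, xi):
    """Hypothetical random LMI: A(x, xi) = diag(x) + xi * B must be PSD."""
    B = np.array([[0.0, 1.0], [1.0, 0.0]])
    A = np.diag(x) + xi * B
    return np.linalg.eigvalsh(A).min() >= 0.0

def saa_chance(x, eps=0.05, n_samples=10_000):
    """Sample-average estimate of P(A(x, xi) PSD) and the SAA test
    P_hat >= 1 - eps; PSAA would instead integrate part of xi exactly."""
    xi = rng.normal(0.0, 0.3, size=n_samples)  # random technology scalar
    p_hat = np.mean([lmi_holds(x, t) for t in xi])
    return p_hat, p_hat >= 1.0 - eps

p_hat, feasible = saa_chance(x=np.array([1.0, 1.0]))
print(f"estimated satisfaction probability: {p_hat:.3f}, feasible: {feasible}")
```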

Non-Stationary First-Order Primal-Dual Algorithms with Fast Convergence Rates

In this paper, we propose two novel non-stationary first-order primal-dual algorithms for solving nonsmooth composite convex optimization problems. Unlike existing primal-dual schemes, where the parameters are often fixed, our methods use pre-defined, dynamic parameter sequences. We prove that our first algorithm achieves an $\mathcal{O}(1/k)$ convergence rate on the primal-dual gap, and primal and …
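
For context, a minimal stationary primal-dual (Chambolle-Pock-style) iteration for $\min_x \tfrac12\|x-c\|^2 + \lambda\|Kx\|_1$ is sketched below with fixed steps $\tau,\sigma$; a non-stationary scheme in the sense of this abstract would replace them with pre-defined dynamic sequences $\tau_k,\sigma_k$. The problem data are hypothetical, and this is the standard method, not the authors' algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def pdhg(c, lam=0.5, iters=500):
    """Stationary PDHG for min_x 0.5*||x - c||^2 + lam*||K x||_1,
    with K the 1-D forward-difference operator."""
    n = c.size
    K = np.diff(np.eye(n), axis=0)   # (n-1) x n difference matrix
    tau = sigma = 0.25               # tau*sigma*||K||^2 <= 1 since ||K|| <= 2
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(n - 1)
    for _ in range(iters):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip
        y = np.clip(y + sigma * (K @ x_bar), -lam, lam)
        # primal step: prox of 0.5*||. - c||^2 has a closed form
        x_new = (x - tau * (K.T @ y) + tau * c) / (1.0 + tau)
        x_bar = 2.0 * x_new - x      # extrapolation with theta = 1
        x = x_new
    return x

c = np.concatenate([np.ones(10), 3.0 * np.ones(10)]) + 0.1 * rng.normal(size=20)
print(pdhg(c).round(2))              # roughly piecewise-constant output
```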

Stability Analysis for a Class of Sparse Optimization Problems

Sparse optimization problems arise in many areas of science and engineering, such as compressed sensing, image processing, and statistical and machine learning. The $\ell_{0}$-minimization problem is one such problem, typically used for signal recovery. The $\ell_{1}$-minimization method is one of the plausible approaches for solving $\ell_{0}$-minimization problems, and …
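
Concretely, the two problems contrasted here are the NP-hard sparsest-solution problem and its standard convex surrogate (basis pursuit):

$$\min_{x}\ \|x\|_{0}\ \ \text{subject to}\ \ Ax=b \qquad \longrightarrow \qquad \min_{x}\ \|x\|_{1}\ \ \text{subject to}\ \ Ax=b,$$

and a stability analysis asks how the optimal solutions of such problems behave under perturbations of the data $(A,b)$.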

Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives

In this paper we study $p$th-order methods for unconstrained minimization of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We propose tensor schemes with and without acceleration. For the schemes without acceleration, we establish iteration complexity bounds of $\mathcal{O}\left(\epsilon^{-1/(p+\nu-1)}\right)$ for reducing the functional residual below a given $\epsilon\in (0,1)$. Assuming that $\nu$ …
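
The step underlying such tensor schemes minimizes a regularized $p$th-order Taylor model; the form below is a hedged reconstruction in the style of Nesterov-type tensor methods (the exact regularization constant varies across the literature and may differ from the paper's):

$$x_{k+1} \in \operatorname*{arg\,min}_{y}\ \Big\{\, f(x_k) + \sum_{i=1}^{p} \tfrac{1}{i!}\, D^{i} f(x_k)[y-x_k]^{i} + \tfrac{H}{p!}\,\|y-x_k\|^{p+\nu} \Big\},$$

with $H$ chosen relative to the Hölder constant of the $p$th derivative.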

An analysis of noise folding for low-rank matrix recovery

Previous work on low-rank matrix recovery has concentrated on scenarios in which the matrix is noise-free and the measurements are corrupted by noise. In practical applications, however, the matrix itself is usually perturbed by random noise prior to measurement. This paper investigates this scenario and shows that, for most measurement schemes utilized in …
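
In symbols, noise folding refers to the standard model in which the matrix is perturbed before measurement: with a linear measurement map $\mathcal{A}$, pre-measurement noise $W$, and measurement noise $z$, the observations satisfy

$$y = \mathcal{A}(X + W) + z = \mathcal{A}(X) + \big(\mathcal{A}(W) + z\big),$$

so the pre-measurement noise is folded into an effective, typically amplified, measurement noise term $\mathcal{A}(W) + z$.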

A convex relaxation to compute the nearest structured rank deficient matrix

Given an affine space of matrices $L$ and a matrix $\theta \in L$, consider the problem of finding the rank-deficient matrix in $L$ closest to $\theta$ with respect to the Frobenius norm. This is a nonconvex problem with several applications in estimation. We introduce a novel semidefinite programming (SDP) relaxation, and we show …
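
The underlying nonconvex problem can be written with an explicit kernel vector, which exposes the bilinear source of nonconvexity (a generic sketch; the paper's exact SDP relaxation may differ):

$$\min_{M \in L,\ u}\ \|M-\theta\|_F^2 \quad \text{subject to} \quad Mu = 0,\ \ \|u\|_2 = 1.$$

The bilinear constraint $Mu = 0$ is what a semidefinite relaxation lifts, replacing products of the variables $(M, u)$ by entries of a positive semidefinite moment matrix.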

A linearly convergent stochastic recursive gradient method for convex optimization

The stochastic recursive gradient algorithm (SARAH) [8] has attracted much interest recently. It admits a simple recursive framework for updating stochastic gradient estimates. Motivated by this, in this paper we propose SARAH-I, a method incorporating importance sampling, and prove linear convergence of the sequence of distances between the iterates and the optimal set under …
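
For reference, the recursive estimator at the heart of SARAH is $v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}$. The sketch below runs one inner loop on a least-squares finite sum with uniform sampling; SARAH-I as described in the abstract would instead bias the index sampling by importance weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sarah_inner(A, b, w0, eta=0.05, m=200):
    """One SARAH inner loop for min_w (1/2n) * sum_i (a_i @ w - b_i)^2.
    v is the recursive gradient estimate; sampling is uniform here."""
    n = A.shape[0]
    w_prev = w0.copy()
    v = A.T @ (A @ w0 - b) / n          # full gradient at the anchor point
    w = w0 - eta * v
    for _ in range(m):
        i = rng.integers(n)
        grad_new = (A[i] @ w - b[i]) * A[i]
        grad_old = (A[i] @ w_prev - b[i]) * A[i]
        v = grad_new - grad_old + v     # recursive update, no stored table
        w_prev, w = w, w - eta * v
    return w

A = rng.normal(size=(100, 5)); w_star = rng.normal(size=5)
b = A @ w_star + 0.01 * rng.normal(size=100)
w = sarah_inner(A, b, w0=np.zeros(5))
print(np.linalg.norm(w - w_star))       # small: iterate approaches the optimum
```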

General risk measures for robust machine learning

A wide array of machine learning problems are formulated as the minimization of the expectation of a convex loss function over some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we …
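
Schematically, the robust counterpart replaces the expected loss under the estimated distribution with a worst case over an ambiguity set (or, more generally, with a risk measure of the loss):

$$\min_{\beta}\ \mathbb{E}_{\hat{P}}\big[\ell(\beta;\xi)\big] \qquad \longrightarrow \qquad \min_{\beta}\ \sup_{Q\in\mathcal{U}(\hat{P})}\ \mathbb{E}_{Q}\big[\ell(\beta;\xi)\big],$$

where $\hat{P}$ is the empirical distribution of the training set and $\mathcal{U}(\hat{P})$ is a neighborhood of it.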

Scalable Preconditioning of Block-Structured Linear Algebra Systems using ADMM

We study the solution of block-structured linear algebra systems arising in optimization using iterative techniques. These systems are the core computational bottleneck in many problems of interest, such as parameter estimation, optimal control, network optimization, and stochastic programming. Our approach uses a Krylov solver (GMRES) preconditioned with an alternating direction method of multipliers (ADMM) …
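
The mechanics are standard in SciPy: hand GMRES a `LinearOperator` whose action approximates a solve with the system matrix. The hedged sketch below uses exact solves with the decoupled diagonal blocks as a stand-in where the paper would apply ADMM sweeps that also handle the coupling between blocks; all matrices are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

rng = np.random.default_rng(0)
nb, m = 4, 50                      # 4 diagonal blocks of size 50

# Hypothetical block-structured matrix: dominant diagonal blocks, weak coupling.
diag_blocks = [sp.random(m, m, density=0.1, random_state=i) + 10 * sp.identity(m)
               for i in range(nb)]
K = sp.block_diag(diag_blocks, format="csr")
K = K + 0.1 * sp.random(nb * m, nb * m, density=0.01, random_state=99)
b = rng.normal(size=nb * m)

# Block preconditioner: exact solves with the decoupled diagonal blocks,
# applied through a LinearOperator so GMRES never needs K^{-1} explicitly.
lu = [splu(B.tocsc()) for B in diag_blocks]
def apply_prec(r):
    return np.concatenate([lu[i].solve(r[i * m:(i + 1) * m]) for i in range(nb)])
M = LinearOperator(K.shape, matvec=apply_prec)

x, info = gmres(K, b, M=M)
print("converged" if info == 0 else f"info = {info}",
      "residual:", np.linalg.norm(K @ x - b))
```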