Interior Point Methods for Computing Optimal Designs

In this paper we study interior point (IP) methods for solving optimal design problems. In particular, we propose a primal IP method for problems with general convex optimality criteria and establish its global convergence. In addition, we reformulate the problems with A-, D- and E-criteria as linear or log-determinant semidefinite programs (SDPs) and …
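
For concreteness, the D-criterion case leads to the classical log-determinant formulation: given design points $x_1,\dots,x_m\in\mathbb{R}^n$ (notation assumed here for illustration), one seeks weights $w$ solving
\[
\max_{w\in\mathbb{R}^m}\ \log\det\Big(\sum_{i=1}^m w_i x_i x_i^T\Big)\quad\mbox{s.t.}\quad \sum_{i=1}^m w_i=1,\ \ w\ge 0,
\]
which is exactly the kind of log-determinant SDP-representable problem referred to above.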

Penalty Decomposition Methods for Rank Minimization

In this paper we consider general rank minimization problems in which the rank appears in either the objective function or the constraints. We first show that a class of matrix optimization problems can be solved as lower-dimensional vector optimization problems. As a consequence, we establish that a class of rank minimization problems has closed-form solutions. Using this …
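
As a sketch of the penalty decomposition idea (the splitting and penalty parameter $\rho$ below are illustrative, not necessarily the paper's exact scheme), a rank-constrained problem $\min\{f(X):\ X\in\Omega,\ \mathrm{rank}(X)\le r\}$ can be treated via
\[
\min_{X\in\Omega,\ \mathrm{rank}(Y)\le r}\ f(X)+\frac{\rho}{2}\|X-Y\|_F^2,
\]
minimized alternately over $X$ and $Y$; the $Y$-subproblem is solved in closed form by keeping the $r$ largest singular values of $X$ (truncated SVD), which illustrates the kind of closed-form solution alluded to above.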

Penalty Decomposition Methods for $l_0$-Norm Minimization

In this paper we consider general $l_0$-norm minimization problems, that is, problems in which the $l_0$-norm appears in either the objective function or the constraints. In particular, we first reformulate the $l_0$-norm constrained problem as an equivalent rank minimization problem and then apply the penalty decomposition (PD) method proposed in [33] to solve the latter problem. By utilizing …
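
The vector analogue is instructive (again an illustrative splitting, not necessarily the exact scheme of [33]): for $\min\{f(x):\ \|x\|_0\le r\}$, a penalty decomposition reads
\[
\min_{x,\ \|y\|_0\le r}\ f(x)+\frac{\rho}{2}\|x-y\|_2^2,
\]
where the $y$-subproblem is solved in closed form by hard thresholding, i.e., keeping the $r$ entries of $x$ with largest magnitude and setting the rest to zero.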

An Augmented Lagrangian Approach for Sparse Principal Component Analysis

Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction, with numerous applications in science and engineering. However, standard PCA has the drawback that the principal components (PCs) are usually linear combinations of all the original variables, which makes the PCs difficult to interpret. To …
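
As a simplified illustration of the sparsity–interpretability trade-off (a single-component formulation with assumed notation, not the paper's exact multi-component model), one may augment PCA with an $l_1$ penalty:
\[
\max_{x\in\mathbb{R}^n}\ x^T\Sigma x-\rho\|x\|_1\quad\mbox{s.t.}\quad \|x\|_2\le 1,
\]
where $\Sigma$ is the sample covariance matrix and $\rho\ge 0$ trades explained variance against the number of nonzero loadings.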

Adaptive First-Order Methods for General Sparse Inverse Covariance Selection

In this paper, we consider estimating the sparse inverse covariance matrix of a Gaussian graphical model whose conditional independence structure is assumed to be partially known. As in [5], we formulate this as an $l_1$-norm penalized maximum likelihood estimation problem. Further, we propose an algorithmic framework and develop two first-order methods, namely an adaptive spectral projected gradient …
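
A formulation of this type (with assumed notation: sample covariance $S$, penalty weights $\rho_{ij}\ge 0$, and $\Omega$ the set of entry pairs known to be conditionally independent) is
\[
\max_{X\succ 0}\ \log\det X-\mathrm{tr}(SX)-\sum_{i,j}\rho_{ij}|X_{ij}|\quad\mbox{s.t.}\quad X_{ij}=0,\ \ (i,j)\in\Omega,
\]
where the partially known independence structure enters through the equality constraints.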

Gradient based method for cone programming with application to large-scale compressed sensing

In this paper, we study a gradient-based method for general cone programming (CP) problems. In particular, we first consider four natural primal-dual convex smooth minimization reformulations of these problems, and then discuss a variant of Nesterov’s smooth (VNS) method, recently proposed by Tseng [30], for solving these reformulations. The associated worst-case major arithmetic operation costs …
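
For orientation, the compressed sensing problem targeted here (standard basis pursuit notation assumed) is
\[
\min_u\ \|u\|_1\quad\mbox{s.t.}\quad Au=b,
\]
which is representable as a cone program, e.g., by splitting $u=u^+-u^-$ with $u^+,u^-\ge 0$.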

Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression

In this paper, we study convex optimization methods for computing the trace norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al. [17], conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and …
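
The trace norm regularized least squares problem referred to here takes the form (notation assumed: response matrix $Y$, design matrix $X$, coefficient matrix $B$)
\[
\min_B\ \tfrac12\|Y-XB\|_F^2+\lambda\|B\|_*,
\]
where $\|B\|_*$, the trace (nuclear) norm, is the sum of the singular values of $B$, and $\lambda>0$ promotes a low-rank estimate.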

Smooth Optimization Approach for Covariance Selection

In this paper we study a smooth optimization approach for solving a class of non-smooth {\it strongly} concave maximization problems. In particular, we apply Nesterov’s smooth optimization technique \cite{Nest83-1,Nest05-1} to their dual counterparts, which are smooth convex problems. It is shown that the resulting approach has ${\cal O}(1/\sqrt{\epsilon})$ iteration complexity for finding an $\epsilon$-optimal solution to …
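
The complexity claim can be read off from the standard bound for Nesterov’s method: for a smooth convex function $f$ with $L$-Lipschitz gradient (which is what the dual counterpart is assumed to be here), the iterates satisfy
\[
f(x_k)-f^*\ \le\ \frac{C\,L\,\|x_0-x^*\|^2}{k^2}
\]
for a universal constant $C$, so $k={\cal O}(\sqrt{L/\epsilon})$ iterations suffice for an $\epsilon$-optimal solution, matching the ${\cal O}(1/\sqrt{\epsilon})$ rate stated above.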

A New Cone Programming Approach for Robust Portfolio Selection

Robust portfolio selection problems have recently been studied by several researchers (e.g., see \cite{GoIy03,ErGoIy04,HaTu04,TuKo04}). These works consider “separable” uncertainty sets for the problem parameters (e.g., the mean and covariance of the random returns). Such uncertainty sets share two common drawbacks: i) the actual confidence level of the uncertainty set is unknown, and …
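
As an illustration of the “separable” structure discussed here (assumed notation, not the model of any particular cited paper), a robust mean-variance problem with separable uncertainty sets reads
\[
\max_{w\in\mathcal{W}}\ \min_{\mu\in U_\mu,\ \Sigma\in U_\Sigma}\ \mu^Tw-\lambda\,w^T\Sigma w,
\]
where the mean and covariance vary independently over $U_\mu$ and $U_\Sigma$; this product-set structure is what “separable” refers to above.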

Primal-dual first-order methods with ${\cal O}(1/\epsilon)$ iteration-complexity for cone programming

In this paper we consider the general cone programming problem and propose primal-dual convex (smooth and/or nonsmooth) minimization reformulations for it. We then discuss first-order methods suitable for solving these reformulations, namely Nesterov’s optimal method \cite{Nest83-1,Nest05-1}, Nesterov’s smooth approximation scheme \cite{Nest05-1}, and Nemirovski’s prox-method \cite{Nem05-1}, and propose a variant of Nesterov’s optimal method which has …
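
Concretely (standard primal-dual pair, notation assumed), the cone program and its dual are
\[
\min_x\ \{c^Tx:\ \mathcal{A}x=b,\ x\in\mathcal{K}\}
\qquad\mbox{and}\qquad
\max_{y,s}\ \{b^Ty:\ \mathcal{A}^*y+s=c,\ s\in\mathcal{K}^*\},
\]
and one natural primal-dual reformulation (assuming zero duality gap) is to minimize the gap $c^Tx-b^Ty$ over the joint feasible set, a convex minimization problem to which first-order methods of the type listed can be applied.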