SINCO – a greedy coordinate ascent method for sparse inverse covariance selection problem

In this paper, we consider the sparse inverse covariance selection problem, which is equivalent to recovering the structure of a Markov network over Gaussian variables. We introduce a simple but efficient greedy algorithm, called {\em SINCO}, for solving the Sparse INverse COvariance problem. Our approach is based on a coordinate ascent method that naturally preserves the sparsity … Read more
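For reference, a common way to state the sparse inverse covariance selection problem is the l1-penalized maximum-likelihood formulation below; this is the standard model, and the paper's exact objective and penalty may differ in its details.

\[
\max_{C \succ 0} \; \log\det C \;-\; \operatorname{tr}(A C) \;-\; \lambda \sum_{i,j} |C_{ij}|,
\]

where A is the empirical covariance matrix, C is the inverse covariance (precision) matrix whose zero pattern encodes the Markov network structure, and \lambda > 0 controls the amount of sparsity.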

Facial reduction algorithms for conic optimization problems

To obtain a primal-dual pair of conic programming problems having zero duality gap, two methods have been proposed: the facial reduction algorithm due to Borwein and Wolkowicz [1,2] and the conic expansion method due to Luo, Sturm, and Zhang [5]. We establish a clear relationship between them. Our results show that although the two methods … Read more
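For context, a generic primal-dual pair of conic programs and the duality gap in question can be written as follows (notation here is generic, not taken from the paper):

\[
\min_{x} \{\, \langle c, x\rangle : \mathcal{A}x = b,\; x \in K \,\}
\qquad\text{and}\qquad
\max_{y} \{\, \langle b, y\rangle : c - \mathcal{A}^{*}y \in K^{*} \,\},
\]

where K is a closed convex cone and K^{*} its dual cone; the duality gap is the difference between the two optimal values, and facial reduction restricts the primal problem to a suitable face of K so that this gap becomes zero.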

Composite Proximal Bundle Method

We consider minimization of nonsmooth functions which can be represented as the composition of a positively homogeneous convex function and a smooth mapping. This is a sufficiently rich class that includes max-functions, largest eigenvalue functions, and norm-1 regularized functions. The bundle method uses an oracle that is able to compute separately the function and subgradient … Read more
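A sketch of the composite structure described above, in generic notation (not necessarily the paper's):

\[
f(x) = h\bigl(c(x)\bigr), \qquad h \text{ positively homogeneous and convex}, \quad c \text{ smooth},
\]

so that h(y) = \max_i y_i yields max-functions, h = \lambda_{\max} yields largest eigenvalue functions, and the choice h(t, y) = t + \|y\|_1 with c(x) = (g(x), x) recovers norm-1 regularized functions g(x) + \|x\|_1.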

The mesh adaptive direct search algorithm for periodic variables

This work analyzes constrained black box optimization in which the functions defining the problem are periodic with respect to some or all the variables. We show that the natural strategy of mapping trial points into the interval defined by the period in the Mesh Adaptive Direct Search (MADS) framework can be easily done in practice, … Read more
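A minimal sketch of the wrapping strategy mentioned above, namely mapping trial points back into the interval defined by the period; the function and argument names here are ours, not those of a MADS implementation.

    import numpy as np

    def wrap_periodic(x, lower, period, is_periodic):
        # Map each periodic coordinate back into [lower[j], lower[j] + period[j]);
        # non-periodic coordinates are left unchanged.
        x = np.asarray(x, dtype=float).copy()
        for j in range(x.size):
            if is_periodic[j]:
                x[j] = lower[j] + (x[j] - lower[j]) % period[j]
        return x

    # Example: an angle with period 2*pi and one non-periodic variable.
    print(wrap_periodic([7.0, -3.2], lower=[0.0, 0.0],
                        period=[2 * np.pi, 1.0], is_periodic=[True, False]))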

A Note on the Behavior of the Randomized Kaczmarz Algorithm of Strohmer and Vershynin

In a recent paper by Strohmer and Vershynin (J. Fourier Anal. Appl. 15:262–278, 2009), a “randomized Kaczmarz algorithm” is proposed for solving consistent systems of linear equations $\{\langle a_i, x\rangle = b_i\}_{i=1}^{m}$. In that algorithm the next equation to be used in an iterative Kaczmarz process is selected with a probability proportional to $\|a_i\|^2$. … Read more
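The selection rule described in the abstract is easy to state in code; below is a small sketch of a randomized Kaczmarz iteration with rows sampled proportionally to $\|a_i\|^2$ (our own illustration, not code from the paper under discussion).

    import numpy as np

    def randomized_kaczmarz(A, b, iters=1000, rng=None):
        # Pick row i with probability proportional to ||a_i||^2, then project the
        # current iterate onto the hyperplane <a_i, x> = b_i.
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        x = np.zeros(n)
        row_norms_sq = np.sum(A * A, axis=1)
        probs = row_norms_sq / row_norms_sq.sum()
        for _ in range(iters):
            i = rng.choice(m, p=probs)
            x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
        return x

    # Example on a small consistent system.
    A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
    x_true = np.array([1.0, -2.0])
    print(randomized_kaczmarz(A, A @ x_true, iters=2000))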

An Implementable Proximal Point Algorithmic Framework for Nuclear Norm Minimization

The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second-order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and appears in many fields of engineering and science. In this paper, we study inexact … Read more
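In generic notation (not necessarily the paper's), a nuclear norm minimization problem of the kind described above reads:

\[
\min_{X \in \mathbb{R}^{m \times n}} \; \|X\|_{*}
\quad \text{s.t.} \quad \mathcal{A}(X) = b, \qquad \|\mathcal{B}(X) - d\|_{2} \le \delta,
\]

where \|X\|_{*} is the sum of the singular values of X and \mathcal{A}, \mathcal{B} are linear maps; the second-order cone constraint typically models noise in the measurements.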

An Augmented Lagrangian Approach for Sparse Principal Component Analysis

Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction with numerous applications in science and engineering. However, the standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, and it is thus often difficult to interpret the PCs. To … Read more
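One common way to make a principal component sparse, shown here only for illustration (the paper's augmented Lagrangian formulation may differ), is to add an l1 penalty to the variance maximization subproblem:

\[
\max_{x \in \mathbb{R}^{n}} \; x^{\top} \Sigma x \;-\; \rho \|x\|_{1}
\quad \text{s.t.} \quad \|x\|_{2} \le 1,
\]

where \Sigma is the sample covariance matrix and \rho > 0 trades explained variance against the number of nonzero loadings.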

The Farkas Lemma Revisited

The Farkas Lemma is extended to simultaneous linear operator and polyhedral sublinear operator inequalities by Boolean valued analysis.

Citation: Sobolev Institute of Mathematics, Novosibirsk, 630090 Russia
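For reference, the classical finite-dimensional Farkas Lemma that is being extended states:

\[
\exists\, x \ge 0 \ \text{with}\ Ax = b
\quad \Longleftrightarrow \quad
\text{there is no } y \ \text{with}\ A^{\top} y \le 0 \ \text{and}\ \langle b, y\rangle > 0 .
\]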

About Stationarity and Regularity in Variational Analysis

Stationarity and regularity concepts for the three classes of objects typical for variational analysis — real-valued functions, collections of sets, and multifunctions — are investigated. An attempt is made to present a classification scheme for such concepts and to show that properties introduced for objects from different classes can be treated in a similar way. … Read more

Trace Norm Regularization: Reformulations, Algorithms, and Multi-task Learning

We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond the reach of general-purpose SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal and dual reformulations of … Read more
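In generic notation (not necessarily the paper's), the trace norm regularized least squares problem and the singular value soft-thresholding step used by proximal gradient methods are:

\[
\min_{W} \; \tfrac{1}{2}\|XW - Y\|_{F}^{2} + \mu \|W\|_{*},
\qquad
\operatorname{prox}_{\mu\|\cdot\|_{*}}(Z) = U \max(\Sigma - \mu I, 0)\, V^{\top},
\]

where Z = U \Sigma V^{\top} is a singular value decomposition and \|W\|_{*} denotes the trace (nuclear) norm, the sum of the singular values of W.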