Classification with Guaranteed Probability of Error

We introduce a general-purpose learning machine that we call the “Guaranteed Error Machine”, or GEM, and two learning algorithms, a “real GEM algorithm” and an “ideal GEM algorithm”. The real GEM algorithm is for use in real applications, while the ideal GEM algorithm is introduced as a theoretical tool; however, these two algorithms have identical …

Simultaneously solving seven optimization problems in relative scale

In this paper we develop and analyze an efficient algorithm that solves seven related optimization problems simultaneously, in relative scale. Each iteration of our method is very cheap, with the main work spent on matrix-vector multiplication. We prove that if a certain sequence generated by the algorithm remains bounded, then the method must terminate in $O(1/\delta)$ …

Adaptive First-Order Methods for General Sparse Inverse Covariance Selection

In this paper, we consider estimating the sparse inverse covariance matrix of a Gaussian graphical model whose conditional independence structure is assumed to be partially known. As in [5], we formulate this as an $l_1$-norm penalized maximum likelihood estimation problem. Further, we propose an algorithmic framework and develop two first-order methods, namely, an adaptive spectral projected gradient …
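
For context, the $l_1$-norm penalized maximum likelihood formulation referred to here is, in its standard form (the symbols $S$ for the sample covariance and $\rho > 0$ for the penalty weight are illustrative, not notation from the truncated abstract):

$$
\hat{X} \;=\; \operatorname*{arg\,max}_{X \succ 0} \;\; \log\det X \;-\; \operatorname{tr}(SX) \;-\; \rho \sum_{i \neq j} |X_{ij}|,
$$

with the partially known conditional independence structure entering as equality constraints $X_{ij} = 0$ for the pairs $(i,j)$ known to be conditionally independent.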

On Verifiable Sufficient Conditions for Sparse Signal Recovery via L1 Minimization

We propose novel necessary and sufficient conditions for a sensing matrix to be “s-good”, that is, to allow exact L1-recovery of sparse signals with at most s nonzero entries when no measurement noise is present. We then express error bounds for imperfect L1-recovery (nonzero measurement noise, a nearly s-sparse signal, a near-optimal solution of the optimization problem yielding …
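
Spelled out with illustrative symbols, the s-goodness property described above reads: a sensing matrix $A$ is $s$-good if every signal $x$ with at most $s$ nonzero entries is the unique optimal solution of

$$
\min_{z}\; \|z\|_1 \quad \text{subject to} \quad Az = Ax .
$$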

Gradient based method for cone programming with application to large-scale compressed sensing

In this paper, we study a gradient-based method for general cone programming (CP) problems. In particular, we first consider four natural primal-dual convex smooth minimization reformulations for them, and then discuss a variant of Nesterov’s smooth (VNS) method recently proposed by Tseng [30] for solving these reformulations. The associated worst-case major arithmetic operation costs …
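
For reference, a general cone programming problem and its dual take the standard form (the symbols below are the usual ones, not notation from the truncated abstract):

$$
\min_{x}\; c^{T}x \;\;\text{s.t.}\;\; Ax = b,\; x \in K,
\qquad\qquad
\max_{y,\,s}\; b^{T}y \;\;\text{s.t.}\;\; A^{T}y + s = c,\; s \in K^{*},
$$

where $K$ is a closed convex cone and $K^{*}$ is its dual cone; the four primal-dual reformulations mentioned above are built from such a pair.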

An Analysis of Weighted Least Squares Method and Layered Least Squares Method with the Basis Block Lower Triangular Matrix Form

In this paper, we analyze the limiting behavior of the weighted least squares problem $\min_{x\in\Re^n}\sum_{i=1}^p\|D_i(A_ix-b_i)\|^2$, where each $D_i$ is a positive definite diagonal matrix. We consider the situation where the magnitudes of the weights differ drastically from block to block, so that $\max(D_1)\geq\min(D_1) \gg \max(D_2) \geq \min(D_2) \gg \max(D_3) \geq \min(D_3) \gg \ldots \gg \max(D_{p-1}) \geq \min(D_{p-1}) \gg \max(D_p)$. …
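
A minimal numerical sketch of this problem, with illustrative block sizes and weight scales (none of the data below comes from the paper): minimizing the weighted objective is the same as solving one ordinary least squares problem over the stacked rows $D_iA_i$ and stacked right-hand sides $D_ib_i$.

```python
import numpy as np

# A toy instance of  min_x  sum_i || D_i (A_i x - b_i) ||^2  with block
# weights of drastically different magnitudes; sizes and scales are
# illustrative only.
rng = np.random.default_rng(0)
n = 5                                      # number of unknowns
blocks = [(4, 1e6), (4, 1e3), (4, 1.0)]    # (rows, weight scale) per block

A_stack, b_stack = [], []
for m, scale in blocks:
    A_i = rng.standard_normal((m, n))
    b_i = rng.standard_normal(m)
    d_i = scale * (1.0 + rng.random(m))    # positive diagonal of D_i
    A_stack.append(d_i[:, None] * A_i)     # rows of D_i A_i
    b_stack.append(d_i * b_i)              # entries of D_i b_i

# Minimizing the weighted objective equals solving the stacked ordinary
# least squares problem || [D_i A_i] x - [D_i b_i] ||^2.
x, *_ = np.linalg.lstsq(np.vstack(A_stack), np.concatenate(b_stack), rcond=None)
print(x)
```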

Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression

In this paper, we study convex optimization methods for computing the trace norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al. [17], conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and …
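
The trace norm regularized least squares estimate mentioned here has the standard form (illustrative symbols: response matrix $Y$, design matrix $X$, coefficient matrix $B$, penalty $\lambda > 0$):

$$
\hat{B} \;=\; \operatorname*{arg\,min}_{B}\; \tfrac12\,\|XB - Y\|_F^{2} \;+\; \lambda\,\|B\|_{*},
$$

where the trace (nuclear) norm $\|B\|_{*}$, the sum of the singular values of $B$, encourages low-rank coefficient matrices and hence dimension reduction.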

LASSO-Patternsearch Algorithm with Application to Ophthalmology and Genomic Data

The LASSO-Patternsearch algorithm is proposed as a two-step method to identify clusters or patterns of multiple risk factors for outcomes of interest in demographic and genomic studies. The predictor variables are dichotomous or can be coded as dichotomous. Many diseases are suspected of having multiple interacting risk factors acting in concert, and it is of …
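
To give a rough flavor of the first step, here is a hedged sketch: an $l_1$-penalized logistic regression over “patterns” built as products of dichotomous predictors. The data generation, the pairwise-only patterns, and the scikit-learn call are illustrative assumptions, not the paper’s implementation.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: six dichotomous (0/1) risk factors and a binary outcome that
# depends on the joint pattern of factors 0 and 2.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 6))
y = ((X[:, 0] & X[:, 2]) | (rng.random(200) < 0.1)).astype(int)

# "Patterns": main effects plus all pairwise products of the predictors.
pairs = list(combinations(range(X.shape[1]), 2))
Z = np.column_stack([X] + [X[:, i] * X[:, j] for i, j in pairs])

# l_1-penalized logistic regression; nonzero coefficients flag the
# selected patterns (a second screening step would follow in the paper).
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(Z, y)
print(np.flatnonzero(model.coef_[0]))
```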

Constraint Orbital Branching

Orbital branching is a method for branching on variables in integer programming that reduces the likelihood of evaluating redundant, isomorphic nodes in the branch-and-bound procedure. In this work, the orbital branching methodology is extended so that the branching disjunction can be based on an arbitrary constraint. Many important families of integer programs are structured such …
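
For background, classical orbital branching picks a binary variable $x_i$ whose orbit under the problem’s symmetry group is $O$ and branches on the disjunction (a standard statement with illustrative notation; the present work replaces the variable with an arbitrary constraint):

$$
x_i = 1 \qquad\text{or}\qquad \sum_{j \in O} x_j = 0 ,
$$

so that all isomorphic choices within the orbit are handled by a single pair of child nodes.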

l_1 Trend Filtering

The problem of estimating underlying trends in time series data arises in a variety of disciplines. In this paper we propose a variation on Hodrick-Prescott (H-P) filtering, a widely used method for trend estimation. The proposed l_1 trend filtering method substitutes a sum of absolute values (i.e., an l_1-norm) for the sum of squares used in …
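
Since H-P filtering penalizes the sum of squares of second differences, the l_1 variant is straightforward to prototype; below is a minimal sketch using cvxpy, with toy data and an illustrative regularization weight lam (not values from the paper).

```python
import cvxpy as cp
import numpy as np

# Toy series: a random walk observed with noise; lam is illustrative.
rng = np.random.default_rng(0)
n = 200
y = np.cumsum(rng.standard_normal(n)) + 0.5 * rng.standard_normal(n)

# Second-order difference matrix D of shape (n-2, n).
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i : i + 3] = [1.0, -2.0, 1.0]

# minimize (1/2)||y - x||_2^2 + lam * ||D x||_1
x = cp.Variable(n)
lam = 50.0
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))).solve()
print(x.value[:5])
```

The l_1 penalty drives many second differences exactly to zero, so the estimated trend comes out piecewise linear with automatically chosen kink points, in contrast to the smooth estimate produced by the quadratic H-P penalty.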