High-Order Evaluation Complexity for Convexly-Constrained Optimization with Non-Lipschitzian Group Sparsity Terms

This paper studies high-order evaluation complexity for partially separable convexly-constrained optimization involving non-Lipschitzian group sparsity terms in a nonconvex objective function. We propose a partially separable adaptive regularization algorithm using a $p$-th order Taylor model and show that the algorithm can produce an $(\epsilon,\delta)$-approximate $q$-th-order stationary point in at most $O(\epsilon^{-(p+1)/(p-q+1)})$ evaluations of the objective …
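
As a concrete instance of the bound: with a second-order Taylor model ($p=2$) and first-order stationarity ($q=1$), the exponent is $(p+1)/(p-q+1)=3/2$, so the bound specializes to the familiar $O(\epsilon^{-3/2})$ evaluation complexity of adaptive cubic regularization.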

Recovery of a mixture of Gaussians by sum-of-norms clustering

Sum-of-norms clustering is a method for assigning $n$ points in $\mathbb{R}^d$ to $K$ clusters, $1\le K\le n$, using convex optimization. Recently, Panahi et al. proved that sum-of-norms clustering is guaranteed to recover a mixture of Gaussians under the restriction that the number of samples is not too large. The purpose of this note is to …
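
For reference, the convex program in question is typically stated as
\[
\min_{u_1,\ldots,u_n \in \mathbb{R}^d} \; \frac{1}{2}\sum_{i=1}^{n}\|u_i - x_i\|_2^2 \;+\; \lambda \sum_{i<j}\|u_i - u_j\|_2,
\]
where $x_1,\ldots,x_n$ are the data points, the weight $\lambda$ controls the number of clusters, and two points are assigned to the same cluster exactly when their optimal centroids $u_i$ coincide (a standard statement of sum-of-norms clustering, not specific to this note).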

Non-asymptotic Results for Langevin Monte Carlo: Coordinate-wise and Black-box Sampling

The Euler-Maruyama and Ozaki discretizations of a continuous-time diffusion process are popular techniques for sampling that use (up to) gradient and Hessian information of the density, respectively. The Euler-Maruyama discretization in particular has been used, under the name Langevin Monte Carlo (LMC), for sampling from strongly log-concave densities. In this work, we make …
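
To fix ideas, here is a minimal sketch of the Euler-Maruyama/LMC update for a target density $\pi(x)\propto e^{-U(x)}$; the step size, iteration count, and standard-Gaussian example target are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def lmc_sample(grad_U, x0, step, n_iters, rng):
    """Langevin Monte Carlo: Euler-Maruyama discretization of the
    Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dW_t."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Illustrative strongly log-concave target: standard Gaussian, U(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
chain = lmc_sample(grad_U=lambda x: x, x0=np.zeros(2), step=0.1,
                   n_iters=5000, rng=rng)
print(chain.mean(axis=0))  # close to the target mean (0, 0)
```

The Ozaki discretization additionally uses Hessian information of $U$ to build a locally linear approximation of the drift.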

Rank-one Convexification for Sparse Regression

Sparse regression models are increasingly prevalent due to their interpretability and superior out-of-sample performance. However, the exact model of sparse regression with an $\ell_0$ constraint restricting the support of the estimators is a challenging non-convex optimization problem. In this paper, we derive new strong convex relaxations for sparse regression. These relaxations are based …
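
The exact problem referred to above is the cardinality-constrained least-squares program
\[
\min_{\beta \in \mathbb{R}^p} \; \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k,
\]
where $\|\beta\|_0$ counts the nonzero entries of $\beta$ and $k$ is the sparsity budget.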

Fast Robust Methods for Singular State-Space Models

State-space models are used in a wide range of time series analysis applications. Kalman filtering and smoothing are workhorse algorithms in these settings. While classic algorithms assume Gaussian errors to simplify estimation, recent advances use a broad range of optimization formulations to allow outlier-robust estimation, as well as constraints to capture prior information. Here we …
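
For orientation, a minimal sketch of the classic Gaussian Kalman filter recursion that such robust formulations generalize (the model matrices and noise covariances are placeholders supplied by the user):

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Kalman filter for x_{t+1} = A x_t + w_t, y_t = C x_t + v_t,
    with process noise w_t ~ N(0, Q) and measurement noise v_t ~ N(0, R)."""
    x, P = x0, P0
    estimates = []
    for yt in y:
        # Predict the next state and its covariance.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the measurement yt.
        S = C @ P @ C.T + R             # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ (yt - C @ x)
        P = P - K @ C @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

Robust variants replace the quadratic (Gaussian) loss implicit in these updates with outlier-resistant losses and can add constraints on the state, at the cost of solving a small optimization problem per time step.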

Best Subset Selection via Cross-validation Criterion

This paper is concerned with the cross-validation criterion for best subset selection in a linear regression model. In contrast with the use of statistical criteria (e.g., Mallows’ $C_p$, AIC, BIC, and various information criteria), cross-validation requires only mild assumptions, namely that samples are identically distributed and that training and validation samples are independent. For this …
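
As a naive baseline that makes the criterion concrete (exhaustive enumeration over supports, a sketch only; the search is exponential in $p$, which is presumably what the paper's approach is designed to tame):

```python
import itertools
import numpy as np

def cv_best_subset(X, y, k, n_folds=5, seed=0):
    """Choose the size-k support minimizing K-fold cross-validation error
    of least squares restricted to that support."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), n_folds)
    best_support, best_cv = None, np.inf
    for support in itertools.combinations(range(p), k):
        cols = list(support)
        cv_err = 0.0
        for fold in folds:
            train = np.setdiff1d(np.arange(n), fold)
            beta, *_ = np.linalg.lstsq(X[np.ix_(train, cols)], y[train], rcond=None)
            resid = y[fold] - X[np.ix_(fold, cols)] @ beta
            cv_err += resid @ resid
        if cv_err < best_cv:
            best_support, best_cv = cols, cv_err
    return best_support, best_cv / n
```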

A Unified Framework for Sparse Relaxed Regularized Regression: SR3

Regularized regression problems are ubiquitous in statistical modeling, signal processing, and machine learning. Sparse regression in particular has been instrumental in scientific model discovery, including compressed sensing applications, variable selection, and high-dimensional analysis. We propose a broad framework for sparse relaxed regularized regression, called SR3. The key idea is to solve a relaxation of …
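
A sketch of the relaxation idea, assuming the usual SR3 template: introduce an auxiliary variable $w$ that carries the sparsifying regularizer $R$, and couple it to the regression variable $x$ through a quadratic penalty,
\[
\min_{x,\,w} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda R(w) + \tfrac{\kappa}{2}\|x - w\|_2^2,
\]
so that minimization over $x$ is smooth least squares while minimization over $w$ reduces to a proximal step on $R$.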

Strong mixed-integer programming formulations for trained neural networks

We present strong mixed-integer programming (MIP) formulations for high-dimensional piecewise linear functions that correspond to trained neural networks. These formulations can be used for a number of important tasks, such as verifying that an image classification network is robust to adversarial inputs, or solving decision problems where the objective function is a machine learning model. …
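
The basic building block in this setting is the standard big-$M$ model of a single ReLU $y=\max(0,\, w^\top x + b)$ with known pre-activation bounds $L \le w^\top x + b \le U$, $L < 0 < U$:
\[
y \ge w^\top x + b, \qquad y \le w^\top x + b - L(1-z), \qquad y \le Uz, \qquad y \ge 0, \qquad z \in \{0,1\},
\]
where the binary $z$ indicates whether the neuron is active. Stronger formulations of the kind studied here tighten the linear relaxations of such per-neuron models.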

Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

Logistic regression is one of the most popular methods in binary classification, wherein estimation of model parameters is carried out by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data is …
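
Concretely, with labels $y_i \in \{-1,+1\}$ and features $x_i \in \mathbb{R}^d$, the ML problem is
\[
\min_{w \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} \log\!\left(1 + e^{-y_i w^\top x_i}\right),
\]
whose infimum is zero but unattained when the data are linearly separable; this is the classical source of non-existence of the ML estimator.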

Learning a Mixture of Gaussians via Mixed Integer Optimization

We consider the problem of estimating the parameters of a multivariate Gaussian mixture model (GMM) given access to $n$ samples $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$ that are believed to have come from a mixture of multiple subpopulations. State-of-the-art algorithms used to recover these parameters use heuristics to either maximize the log-likelihood of the sample or try to …
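
For reference, the sample log-likelihood that such methods attempt to maximize, for a $K$-component GMM with weights $\pi_k$, means $\mu_k$, and covariances $\Sigma_k$, is
\[
\ell(\pi,\mu,\Sigma) = \sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x_i;\, \mu_k, \Sigma_k),
\]
a nonconcave function of the parameters, which is why local heuristics such as EM are the standard approach.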