Sharpness and well-conditioning of nonsmooth convex formulations in statistical signal recovery

We study a sample complexity vs. conditioning tradeoff in modern signal recovery problems where convex optimization formulations are built from sampled observations. We begin by introducing a set of condition numbers related to sharpness in \(\ell_p\) or Schatten-\(p\) norms (\(p\in[1,2]\)) based on nonsmooth reformulations of a class of convex optimization problems, including sparse recovery, low-rank …
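
For orientation, sharpness here refers to the standard sharp-growth condition, rendered below in generic notation (the paper's precise condition numbers refine this):

\[
f(x) \;-\; \min_{z \in \mathcal{X}^\star} f(z) \;\ge\; \gamma \,\operatorname{dist}_p\!\left(x, \mathcal{X}^\star\right) \quad \text{for all feasible } x,
\]

where \(\mathcal{X}^\star\) is the solution set, \(\operatorname{dist}_p\) is measured in the relevant \(\ell_p\) or Schatten-\(p\) norm, and \(\gamma > 0\) is the sharpness constant; a larger \(\gamma\) relative to the problem's Lipschitz constant corresponds to better conditioning.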

Provably Faster Gradient Descent via Long Steps

This work establishes provably faster convergence rates for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps potentially violating descent by analyzing the overall effect of many iterations at once rather than the typical one-iteration inductions used in most first-order method analyses. We …
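
As a rough illustration only (the paper's certified schedules come out of the computer-assisted analysis; the pattern values below are hypothetical), gradient descent with a repeating nonconstant stepsize pattern containing occasional long steps might look like this sketch:

```python
import numpy as np

def gd_long_steps(grad, x0, h_pattern, L, n_cycles):
    """Gradient descent with a repeating nonconstant stepsize pattern.

    The pattern mixes conservative steps with occasional long steps
    (> 2/L) that may violate monotone descent on individual iterations;
    only the aggregate effect over a full cycle is controlled.
    """
    x = x0
    for _ in range(n_cycles):
        for h in h_pattern:          # stepsizes expressed as multiples of 1/L
            x = x - (h / L) * grad(x)
    return x

# Illustrative pattern only (NOT the paper's certified schedule):
# mostly short steps with one long step per cycle.
pattern = [1.5, 1.5, 1.5, 4.0]

# Example: minimize the convex quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
x_final = gd_long_steps(lambda x: A @ x, np.array([1.0, 1.0]),
                        pattern, L=10.0, n_cycles=50)
```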

Optimization-based Learning for Dynamic Load Planning in Trucking Service Networks

Ojha, R., Chen, W., Zhang, H., Khir, R., Erera, A., & Van Hentenryck, P. (2023). Optimization-based Learning for Dynamic Load Planning in Trucking Service Networks.

Variational Theory and Algorithms for a Class of Asymptotically Approachable Nonconvex Problems

We investigate a class of composite nonconvex functions, where the outer function is the sum of univariate extended-real-valued convex functions and the inner function is the limit of difference-of-convex functions. A notable feature of this class is that the inner function may fail to be locally Lipschitz continuous. It covers a range of important yet …
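
In symbols, the class described above can be sketched as follows (our rendering of the abstract's wording):

\[
f(x) \;=\; \sum_{i=1}^{m} \varphi_i\bigl(c_i(x)\bigr),
\qquad
c_i(x) \;=\; \lim_{k\to\infty}\bigl(g_{i,k}(x) - h_{i,k}(x)\bigr),
\]

with each outer \(\varphi_i \colon \mathbb{R} \to \mathbb{R}\cup\{+\infty\}\) convex and each \(g_{i,k}, h_{i,k}\) convex, so every inner \(c_i\) is a limit of difference-of-convex functions but need not itself be locally Lipschitz continuous.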

Diagonal Partitioning Strategy Using Bisection of Rectangles and a Novel Sampling Scheme

In this paper we consider a global optimization problem, where the objective function is assumed to be Lipschitz-continuous with an unknown Lipschitz constant. Building on the recently introduced BIRECT (BIsection of RECTangles) algorithm, we propose a new diagonal partitioning and sampling scheme. Our framework, called BIRECT-V (where V stands for vertices), combines bisection with sampling …
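
To fix ideas, here is a minimal sketch of the bisection step underlying such diagonal partitioning schemes (illustrative only; BIRECT-V's actual vertex sampling rule is as described in the paper):

```python
import numpy as np

def bisect_rectangle(lo, hi):
    """Bisect a hyperrectangle [lo, hi] along its longest side.

    Returns the two child rectangles. In diagonal schemes such as
    BIRECT, the objective is sampled at two points per rectangle
    (e.g. on its diagonal, or at vertices in BIRECT-V) rather than
    at the center point as in DIRECT.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    j = int(np.argmax(hi - lo))          # longest coordinate direction
    mid = 0.5 * (lo[j] + hi[j])
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[j], lo_right[j] = mid, mid
    return (lo, hi_left), (lo_right, hi)

# Example: one bisection of the rectangle [0,1] x [0,2] splits
# along the second (longer) coordinate.
left, right = bisect_rectangle([0.0, 0.0], [1.0, 2.0])
```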

A novel UCB-based batch strategy for Bayesian optimization

The optimization of expensive black-box functions appears in many situations. Bayesian optimization methods have been successfully applied to solve these problems using well-known single-point acquisition functions. Nowadays, developments in technology allow us to evaluate some of these expensive functions in parallel. Therefore, there is a need for batch infill criteria …
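
For intuition, here is a minimal GP-UCB sketch with a naive greedy batch rule (hypothetical helper names; the paper's batch criterion is more refined than this placeholder):

```python
import numpy as np

def ucb(mu, sigma, beta=2.0):
    """Upper confidence bound acquisition: exploit the posterior mean,
    explore where the posterior standard deviation is large."""
    return mu + beta * sigma

def greedy_batch(mu, sigma, k, beta=2.0):
    """Pick a batch of k candidate indices by UCB score.

    Naive placeholder rule: practical batch strategies re-weight the
    uncertainty after each pick so the batch spreads out instead of
    clustering around a single optimum of the acquisition.
    """
    scores = ucb(mu, sigma, beta)
    return np.argsort(scores)[-k:][::-1]

# Example with hypothetical GP posterior values on 6 candidate points.
mu = np.array([0.1, 0.4, 0.3, 0.9, 0.2, 0.5])
sigma = np.array([0.5, 0.1, 0.4, 0.05, 0.6, 0.2])
batch = greedy_batch(mu, sigma, k=3)
```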

Shattering Inequalities for Learning Optimal Decision Trees

Recently, mixed-integer programming (MIP) techniques have been applied to learn optimal decision trees. Empirical research has shown that optimal trees typically have better out-of-sample performance than heuristic approaches such as CART. However, the underlying MIP formulations often suffer from weak linear programming (LP) relaxations. Many existing MIP approaches employ big-M constraints to ensure observations are …
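
A typical big-M routing constraint of the kind such formulations rely on looks like the following (schematic notation, not the paper's exact model):

\[
a_t^{\top} x_i \;\le\; b_t + M\,(1 - z_{it}),
\]

where \(z_{it}=1\) enforces the branching rule \((a_t, b_t)\) of tree node \(t\) on observation \(x_i\); for large \(M\) the constraint becomes nearly vacuous in the LP relaxation, which is the weakness that stronger valid inequalities target.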

Recycling Valid Inequalities for Robust Combinatorial Optimization with Budget Uncertainty

Robust combinatorial optimization with budget uncertainty is one of the most popular approaches for integrating uncertainty into optimization problems. The existence of a compact reformulation for (mixed-integer) linear programs and positive complexity results give the impression that these problems are relatively easy to solve. However, the practical performance of the reformulation is quite poor when …
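
For reference, the compact reformulation alluded to is the classical Bertsimas–Sim dualization in standard notation: a budgeted constraint

\[
\sum_{j} a_j x_j \;+\; \max_{S \subseteq [n],\, |S| \le \Gamma} \sum_{j \in S} \hat{a}_j x_j \;\le\; b
\]

becomes, after dualizing the inner maximization (for nonnegative or binary \(x_j\)),

\[
\sum_{j} a_j x_j + \Gamma \pi + \sum_{j} \rho_j \;\le\; b,
\qquad
\pi + \rho_j \;\ge\; \hat{a}_j x_j,
\qquad
\pi \ge 0,\; \rho_j \ge 0.
\]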

Constraint qualifications and strong global convergence properties of an augmented Lagrangian method on Riemannian manifolds

In recent years, augmented Lagrangian methods have been successfully applied to several classes of non-convex optimization problems, inspiring new developments in both theory and practice. In this paper we bring most of these recent developments from nonlinear programming to the context of optimization on Riemannian manifolds, including equality and inequality constraints. Much research has …
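
For concreteness, the classical (Euclidean) Powell–Hestenes–Rockafellar augmented Lagrangian such methods iterate on reads, in standard notation (on a manifold, \(x\) ranges over \(\mathcal{M}\) and the subproblems are solved by Riemannian solvers):

\[
L_\rho(x,\lambda,\mu) \;=\; f(x)
\;+\; \frac{\rho}{2}\sum_{i=1}^{p}\Bigl(h_i(x)+\frac{\lambda_i}{\rho}\Bigr)^{2}
\;+\; \frac{\rho}{2}\sum_{j=1}^{q}\max\Bigl\{0,\; g_j(x)+\frac{\mu_j}{\rho}\Bigr\}^{2},
\]

for equality constraints \(h_i(x)=0\) and inequality constraints \(g_j(x)\le 0\), with penalty parameter \(\rho > 0\) and multiplier estimates \(\lambda, \mu\).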

Regularized methods via cubic subspace minimization for nonconvex optimization

The main computational cost per iteration of adaptive cubic regularization methods for solving large-scale nonconvex problems is the computation of the step \(s_k\), which requires an approximate minimizer of the cubic model. We propose a new approach in which this minimizer is sought in a low dimensional subspace that, in contrast to classical approaches, is …
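
Concretely, the step \(s_k\) in adaptive cubic regularization approximately minimizes the standard cubic model (common notation; the paper restricts this minimization to a low-dimensional subspace):

\[
m_k(s) \;=\; f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\,\|s\|^{3},
\qquad s \in \mathcal{S}_k \subseteq \mathbb{R}^n,
\]

where \(B_k \approx \nabla^2 f(x_k)\), \(\sigma_k > 0\) is the adaptive regularization weight, and \(\mathcal{S}_k\) is the chosen subspace.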