Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

In this work, we study the performance of the sub-gradient method (SubGM) on a natural nonconvex and nonsmooth formulation of low-rank matrix recovery with $\ell_1$-loss, where the goal is to recover a low-rank matrix from a limited number of measurements, a subset of which may be grossly corrupted with noise. We study a scenario where the …
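As a hedged illustration of what such a formulation typically looks like (the measurement operator, factorization rank, and step-size rule below are assumptions for illustration, not necessarily the paper's exact setup), one common choice is
\[
\min_{U \in \mathbb{R}^{n \times r}} \; f(U) = \frac{1}{m}\sum_{i=1}^{m} \bigl| \langle A_i, U U^\top \rangle - y_i \bigr|,
\qquad
U_{k+1} = U_k - \eta_k D_k, \quad D_k \in \partial f(U_k),
\]
where the $\ell_1$-loss keeps the objective well behaved when a subset of the measurements $y_i$ is grossly corrupted, and SubGM simply follows a subgradient $D_k$ with a diminishing step size $\eta_k$; over-parameterization corresponds to choosing $r$ larger than the true rank.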

Rank-one Boolean tensor factorization and the multilinear polytope

We consider the NP-hard problem of approximating a tensor with binary entries by a rank-one tensor, referred to as the rank-one Boolean tensor factorization problem. We formulate this problem, in an extended space of variables, as the problem of minimizing a linear function over a highly structured multilinear set. Leveraging our prior results regarding the …
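A minimal sketch of such an extended formulation, assuming an order-$d$ binary tensor $a \in \{0,1\}^{n_1 \times \cdots \times n_d}$ (the exact linearization used in the paper may differ):
\[
\min_{x, z} \;\; \sum_{i} a_i + \sum_{i} (1 - 2 a_i)\, z_i
\quad \text{s.t.} \quad
z_{i_1 \cdots i_d} = \prod_{k=1}^{d} x^{k}_{i_k}, \qquad x^{k} \in \{0,1\}^{n_k},
\]
which counts the entries where the rank-one tensor $x^1 \otimes \cdots \otimes x^d$ disagrees with $a$; the objective is linear in the lifted variables $z$, and the feasible set of $(x,z)$ is the multilinear set whose polyhedral structure (the multilinear polytope) drives the analysis.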

New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that allows one to utilize stochastic methods for problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, …
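As a hedged sketch of the kind of penalty reformulation meant here (the specific penalty function and weight below are illustrative assumptions), a problem $\min_x f(x)$ s.t. $a_i^\top x \le b_i$, $i = 1,\dots,m$, with strongly convex $f$ can be replaced by
\[
\min_{x} \; f(x) + \rho \sum_{i=1}^{m} \max\{0,\, a_i^\top x - b_i\},
\]
whose objective is a finite sum, so a stochastic (sub)gradient method can sample one constraint term and/or one component of $f$ per iteration; the bounds mentioned above quantify how far the penalized minimizer can be from the constrained one as a function of the penalty weight $\rho$.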

Ensemble Methods for Robust Support Vector Machines using Integer Programming

In this work we study binary classification problems in which the training data is subject to uncertainty, i.e., the precise data points are not known. To address this issue, the field of robust machine learning aims to develop models that are robust against small perturbations in the training data. We …
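For context, a standard building block in robust classification (not necessarily the ensemble/integer-programming model of this paper) is the worst-case hinge loss under norm-bounded perturbations of the inputs, which admits the closed form
\[
\max_{\|\delta\| \le \varepsilon} \; \bigl(1 - y_i (w^\top (x_i + \delta) + b)\bigr)_+ \;=\; \bigl(1 - y_i (w^\top x_i + b) + \varepsilon \|w\|_* \bigr)_+,
\]
where $\|\cdot\|_*$ denotes the dual norm; summing this over the training points gives a robust SVM whose degree of regularization is driven by the perturbation radius $\varepsilon$.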

A solver for multiobjective mixed-integer convex and nonconvex optimization

This paper proposes a general framework for solving multiobjective nonconvex optimization problems, i.e., optimization problems in which multiple objective functions have to be optimized simultaneously. The nonconvexity may come from the objective or constraint functions, or from integrality conditions on some of the variables. In particular, multiobjective mixed-integer convex and nonconvex optimization problems are …
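A hedged statement of the problem class treated (with generic symbols assumed for illustration):
\[
\min_{x \in X} \; \bigl(f_1(x), \dots, f_p(x)\bigr)
\quad \text{s.t.} \quad g_j(x) \le 0, \; j = 1,\dots,m, \qquad x_i \in \mathbb{Z} \;\text{ for } i \in I,
\]
where the minimum is understood in the sense of computing (an approximation of) the set of efficient/Pareto-optimal points, and nonconvexity may enter through any of the $f_k$, the $g_j$, or the integrality conditions indexed by $I$.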

Solving Two-Trust-Region Subproblems using Semidefinite Optimization with Eigenvector Branching

Semidefinite programming (SDP) relaxations typically utilize the constraint that $X - xx'$ is positive semidefinite to obtain a convex relaxation of the condition $X = xx'$, where $x$ is an $n$-vector. In this paper we consider a new hyperplane branching method for SDP based on using an eigenvector of $X - xx'$. This branching technique is related to previous work of Saxena, …
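As a hedged sketch of the branching idea (the particular threshold choice below is an assumption): the lifted relaxation enforces $X \succeq xx'$, e.g. via the Schur-complement constraint
\[
\begin{pmatrix} 1 & x' \\ x & X \end{pmatrix} \succeq 0,
\]
and if the relaxation returns $(x^*, X^*)$ with $X^* \ne x^*(x^*)'$, one can take an eigenvector $v$ of $X^* - x^*(x^*)'$ associated with a positive eigenvalue and branch on the hyperplane disjunction $v'x \le v'x^*$ versus $v'x \ge v'x^*$, tightening the relaxation on each child node.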

Portfolio optimization in the presence of estimation errors on the expected asset returns

It is well known that the classical Markowitz model for portfolio optimization is extremely sensitive to estimation errors on the expected asset returns. Robust optimization mitigates this issue. We focus on ellipsoidal uncertainty sets around the point estimates of the expected asset returns. We investigate the performance of diagonal estimation-error matrices in the description of …
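For orientation, a standard robust counterpart of this type (sketched here with assumed symbols rather than the paper's exact model): if the expected returns are only known to lie in the ellipsoid $U = \{\mu : (\mu - \hat{\mu})^\top \Omega^{-1} (\mu - \hat{\mu}) \le \kappa^2\}$, then the worst-case expected return of a portfolio $w$ is
\[
\min_{\mu \in U} \; \mu^\top w \;=\; \hat{\mu}^\top w - \kappa \sqrt{w^\top \Omega w},
\]
and the question of diagonal estimation-error matrices amounts to restricting $\Omega$ to a diagonal matrix built from per-asset estimation-error variances.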

An inexact ADMM with proximal-indefinite term and larger stepsize

In this paper, an inexact Alternating Direction Method of Multipliers (ADMM) is proposed for solving two-block separable convex optimization problems subject to linear equality constraints. The first subproblem is solved inexactly under a relative error criterion, while the other subproblem, called the regularization problem, is solved inexactly by introducing an indefinite proximal term. Meanwhile, the …
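A minimal sketch of this kind of scheme (penalty parameter, proximal matrix, and error criterion are assumed generic here): for $\min_{x,y} f(x) + g(y)$ s.t. $Ax + By = b$, the iteration takes the form
\[
\begin{aligned}
x^{k+1} &\approx \arg\min_x \; f(x) + \tfrac{\beta}{2}\bigl\|Ax + By^k - b - \lambda^k/\beta\bigr\|^2 && \text{(inexact, relative error criterion)},\\
y^{k+1} &\approx \arg\min_y \; g(y) + \tfrac{\beta}{2}\bigl\|Ax^{k+1} + By - b - \lambda^k/\beta\bigr\|^2 + \tfrac{1}{2}\|y - y^k\|_D^2 && \text{(proximal matrix } D \text{ possibly indefinite)},\\
\lambda^{k+1} &= \lambda^k - s\,\beta\,\bigl(Ax^{k+1} + By^{k+1} - b\bigr),
\end{aligned}
\]
where allowing $D$ to be indefinite weakens the usual positive-semidefiniteness requirement on the proximal term, and the dual step size $s$ can be taken larger than in the standard scheme.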

Theoretical Insights and a New Class of Valid Inequalities for the Temporal Bin Packing Problem with Fire-Ups

The temporal bin packing problem with fire-ups (TBPP-FU) is a two-dimensional packing problem in which one geometric dimension is replaced by a time horizon. The given items (jobs) are characterized by a resource consumption that occurs exclusively during an activity interval, and they have to be placed on servers so that the capacity constraint is respected …
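For orientation, a hedged sketch of the kind of ILP variables commonly used for this problem (the exact model and the valid inequalities studied in the paper are not reproduced here): with assignment variables $x_{jk} \in \{0,1\}$ (job $j$ on server $k$), activity variables $z_{kt} \in \{0,1\}$ (server $k$ switched on in period $t$), and fire-up variables $w_{kt} \ge z_{kt} - z_{k,t-1}$, one minimizes a weighted sum of the number of servers used and the number of fire-ups, subject to
\[
\sum_{j \,:\, t \in [s_j, e_j)} c_j \, x_{jk} \;\le\; C \, z_{kt} \qquad \text{for every server } k \text{ and period } t,
\]
so that the capacity $C$ is respected at every point of each job's activity interval $[s_j, e_j)$.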

Efficiently Escaping Saddle Points in Bilevel Optimization

Bilevel optimization is one of the fundamental problems in machine learning and optimization. Recent theoretical developments in bilevel optimization focus on finding the first-order stationary points for nonconvex-strongly-convex cases. In this paper, we analyze algorithms that can escape saddle points in nonconvex-strongly-convex bilevel optimization. Specifically, we show that the perturbed approximate implicit differentiation (AID) with …
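For reference, the hypergradient that implicit differentiation approximates in the nonconvex-strongly-convex setting is
\[
\nabla \Phi(x) \;=\; \nabla_x f\bigl(x, y^*(x)\bigr) \;-\; \nabla^2_{xy} g\bigl(x, y^*(x)\bigr)\,\bigl[\nabla^2_{yy} g\bigl(x, y^*(x)\bigr)\bigr]^{-1} \nabla_y f\bigl(x, y^*(x)\bigr),
\]
where $\Phi(x) = f(x, y^*(x))$ and $y^*(x) = \arg\min_y g(x, y)$ with $g$ strongly convex in $y$; AID-type methods replace $y^*(x)$ and the inverse-Hessian-vector product by inner-loop approximations, and adding occasional random perturbations to the iterates is the standard device for escaping strict saddle points of $\Phi$ (the specific perturbation scheme analyzed in the paper is not reproduced here).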