A harmonic framework for stepsize selection in gradient methods

We study the use of inverse harmonic Rayleigh quotients with target for stepsize selection in gradient methods for nonlinear unconstrained optimization problems. This not only provides an elegant and flexible framework for parametrizing and reinterpreting existing stepsize schemes, but also inspires new flexible and tunable families of steplengths. In particular, we analyze …
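
As a point of reference for the schemes such a framework reinterprets, here is a minimal sketch of gradient descent with the classical Barzilai-Borwein stepsizes, which can be read as inverse (harmonic) Rayleigh quotients of an average Hessian along the step. The quadratic test problem and the name `bb_gradient_descent` are illustrative assumptions, not the paper's code.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3, variant="BB2"):
    """Gradient descent with Barzilai-Borwein stepsizes.

    BB1 = s's / s'y is the inverse Rayleigh quotient of the average
    Hessian along s (since y ~ B s); BB2 = s'y / y'y is its harmonic
    counterpart.
    """
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if variant == "BB1" else (s @ y) / (y @ y)
        x, g = x_new, g_new
    return x

# Toy strongly convex quadratic: f(x) = 0.5 x'Ax - b'x
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
print(bb_gradient_descent(lambda x: A @ x - b, np.zeros(3)))
```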

Global convergence and acceleration of projection methods for feasibility problems involving union convex sets

We prove global convergence of classical projection algorithms for feasibility problems involving union convex sets, i.e., sets expressible as the union of a finite number of closed convex sets. We present a unified strategy for analyzing global convergence by studying fixed-point iterations of a set-valued operator that is the union of …
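
To illustrate the objects involved, here is a minimal sketch assuming a one-dimensional feasibility problem between a union of two intervals and a single interval; the projection onto the union set is computed by projecting onto each convex piece and keeping the nearest candidate. Alternating projections is one of the classical algorithms covered, but this instance is purely illustrative.

```python
def proj_interval(x, lo, hi):
    return min(max(x, lo), hi)

def proj_union(x, intervals):
    """Project onto a union of closed intervals: project onto each
    convex piece and keep the nearest of the candidate points."""
    cands = [proj_interval(x, lo, hi) for lo, hi in intervals]
    return min(cands, key=lambda p: abs(p - x))

def alternating_projections(x, intervals, other, n_iter=50):
    """Classical alternating projections between a union convex set
    and a single convex set (here another interval)."""
    for _ in range(n_iter):
        x = proj_union(x, intervals)
        x = proj_interval(x, *other)
    return x

# Union set: [0,1] u [3,4]; convex set: [2.5, 5]; intersection is [3,4]
print(alternating_projections(6.0, [(0, 1), (3, 4)], (2.5, 5)))
```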

An Exact Algorithm for the Two-echelon Vehicle Routing Problem with Drones

This paper studies a new variant of the vehicle routing problem with drones, namely the two-echelon vehicle routing problem with drones, in which multiple vehicles and drones work collaboratively to serve customers. Drones can perform multiple back-and-forth trips when their paired vehicle stops at a customer node, forming a two-echelon network. Several practical constraints, such as …
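
To make the back-and-forth structure concrete, here is a toy timing computation under the illustrative assumption that a single drone serves its assigned customers one sortie at a time from the stopped vehicle, which waits until the last sortie returns. The function `stop_duration` and all parameters are hypothetical and do not reflect the paper's exact model.

```python
def stop_duration(stop, customers, drone_speed=2.0, service=0.5):
    """Toy cost of a first-echelon vehicle stop: the paired drone
    flies one back-and-forth sortie per assigned customer, so the
    vehicle waits for the sum of round-trip flight and service times."""
    def dist(a, b):
        return ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5
    return sum(2 * dist(stop, c) / drone_speed + service for c in customers)

print(stop_duration((0, 0), [(1, 1), (2, 0)]))
```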

Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

In this work, we study the performance of the sub-gradient method (SubGM) on a natural nonconvex and nonsmooth formulation of low-rank matrix recovery with $\ell_1$-loss, where the goal is to recover a low-rank matrix from a limited number of measurements, a subset of which may be grossly corrupted by noise. We study a scenario where the …
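
The following is a minimal sketch of SubGM on this type of formulation, assuming Gaussian measurement matrices, a geometrically decaying stepsize, and an over-parameterized factor with small random initialization; the problem sizes and schedule are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 200
U_true = rng.normal(size=(n, r))
M = U_true @ U_true.T                          # ground-truth low-rank matrix
A = rng.normal(size=(m, n, n))                 # Gaussian measurement matrices
b = np.einsum('mij,ij->m', A, M)
b[:m // 10] += rng.normal(scale=50, size=m // 10)  # grossly corrupted subset

# Over-parameterized factor with small initialization
U = 1e-3 * rng.normal(size=(n, 2 * r))
for k in range(500):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - b
    # Subgradient of (1/m) * sum_i |<A_i, UU'> - b_i| w.r.t. U
    G = np.einsum('m,mij->ij', np.sign(resid), A + A.transpose(0, 2, 1)) @ U / m
    U -= 0.1 * 0.98**k * G                     # geometrically decaying stepsize
print(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```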

New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that makes it possible to apply stochastic methods to problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, …
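
A minimal sketch of the idea, assuming a hinge penalty on each linear constraint sampled one at a time; the quadratic objective, penalty weight, and $O(1/k)$ stepsize are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 1000
c = 2.0 * np.ones(n)               # f(x) = 0.5 ||x - c||^2, strongly convex
A = rng.normal(size=(m, n))
b = rng.uniform(0.5, 1.0, size=m)

rho, x = 50.0, np.zeros(n)
for k in range(1, 50001):
    i = rng.integers(m)            # sample a single linear constraint
    # Unbiased subgradient of f(x) + rho * (1/m) sum_i max(0, a_i'x - b_i)
    g = x - c
    if A[i] @ x > b[i]:
        g = g + rho * A[i]
    x -= g / k                     # O(1/k) stepsize for strong convexity
print(f"max constraint violation: {np.max(A @ x - b):.3f}")
```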

Rank-one Boolean tensor factorization and the multilinear polytope

We consider the NP-hard problem of approximating a tensor with binary entries by a rank-one tensor, referred to as the rank-one Boolean tensor factorization problem. We formulate this problem, in an extended space of variables, as the problem of minimizing a linear function over a highly structured multilinear set. Leveraging our prior results regarding the …
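
For intuition, here is a brute-force sketch of the underlying combinatorial problem on a tiny instance: find binary vectors whose outer product disagrees with the data tensor in as few entries as possible. This enumeration is only for illustration; it is not the paper's polyhedral approach.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
T = rng.integers(0, 2, size=(4, 4, 4))     # binary data tensor

def best_rank_one(T):
    """Brute-force rank-one Boolean tensor factorization of a small
    binary tensor: minimize the number of entries where T differs
    from the outer product of binary vectors u, v, w."""
    n1, n2, n3 = T.shape
    best, best_err = None, T.size + 1
    for u in itertools.product([0, 1], repeat=n1):
        for v in itertools.product([0, 1], repeat=n2):
            for w in itertools.product([0, 1], repeat=n3):
                R = np.einsum('i,j,k->ijk', u, v, w)
                err = int(np.sum(R != T))
                if err < best_err:
                    best, best_err = (u, v, w), err
    return best, best_err

(u, v, w), err = best_rank_one(T)
print(err, "mismatched entries out of", T.size)
```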

Ensemble Methods for Robust Support Vector Machines using Integer Programming

In this work we study binary classification problems where we assume that our training data is subject to uncertainty, i.e., the precise data points are not known. To tackle this issue, the field of robust machine learning aims to develop models that are robust against small perturbations in the training data. We …
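
To illustrate the robustness notion (not the paper's integer-programming ensemble model), here is the standard worst-case hinge loss under $\ell_\infty$-bounded perturbations of each training point: the adversary can shrink every margin by $\epsilon \lVert w \rVert_1$, the dual norm of $\ell_\infty$.

```python
import numpy as np

def robust_hinge(w, b, X, y, eps):
    """Worst-case hinge loss when each training point may be perturbed
    within an l_inf ball of radius eps: the adversary shrinks every
    margin by eps * ||w||_1 (the dual norm of l_inf)."""
    margins = y * (X @ w + b) - eps * np.linalg.norm(w, 1)
    return np.maximum(0.0, 1.0 - margins).mean()

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=100))
w, b = np.array([1.0, 0, 0, 0, 0]), 0.0
print(robust_hinge(w, b, X, y, eps=0.0), robust_hinge(w, b, X, y, eps=0.1))
```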

A solver for multiobjective mixed-integer convex and nonconvex optimization

This paper proposes a general framework for solving multiobjective nonconvex optimization problems, i.e., optimization problems in which multiple objective functions have to be optimized simultaneously. The nonconvexity may stem from the objective or constraint functions, or from integrality conditions on some of the variables. In particular, multiobjective mixed-integer convex and nonconvex optimization problems are …
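
As a small illustration of the central solution concept, here is a sketch of a Pareto-dominance filter over a finite set of objective vectors; any multiobjective solver ultimately reports such nondominated points. The instance is illustrative and unrelated to the paper's algorithm.

```python
import numpy as np

def nondominated(F):
    """Filter a finite set of objective vectors (rows of F, to be
    minimized) down to the nondominated ones: points for which no
    other point is at least as good in every objective and strictly
    better in at least one."""
    keep = []
    for i, f in enumerate(F):
        dominated = any(
            np.all(g <= f) and np.any(g < f)
            for j, g in enumerate(F) if j != i
        )
        if not dominated:
            keep.append(i)
    return F[keep]

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(nondominated(F))   # [3, 3] is dominated by [2, 2]
```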

Solving Two-Trust-Region Subproblems using Semidefinite Optimization with Eigenvector Branching

Semidefinite programming (SDP) relaxations typically utilize the constraint that $X - xx'$ is positive semidefinite to obtain a convex relaxation of the condition $X = xx'$, where $x$ is an $n$-vector. In this paper we consider a new hyperplane branching method for SDP based on using an eigenvector of $X - xx'$. This branching technique is related to previous work of Saxena, …
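
A hedged sketch of the generic idea: at a relaxation solution, take the eigenvector $v$ of $X - xx'$ with the largest eigenvalue, so that the gap $v'Xv - (v'x)^2$ is maximal, and split the feasible region with a hyperplane in direction $v$. The threshold rule below is an illustrative placeholder, not the paper's exact branching rule.

```python
import numpy as np

def branching_direction(X, x):
    """Given a relaxation solution with X - xx' PSD, return the
    eigenvector of X - xx' with the largest eigenvalue: along v the
    gap v'Xv - (v'x)^2 is maximal, so X = xx' fails most strongly."""
    w, V = np.linalg.eigh(X - np.outer(x, x))
    v = V[:, -1]                       # eigenvector of largest eigenvalue
    gap = v @ X @ v - (v @ x) ** 2     # equals the largest eigenvalue
    return v, gap

# Toy relaxation point: X strictly "above" xx'
x = np.array([1.0, 0.5])
X = np.outer(x, x) + np.array([[0.3, 0.1], [0.1, 0.2]])
v, gap = branching_direction(X, x)
# Illustrative split: children with v'x <= theta and v'x >= theta
theta = v @ x
print(v, gap, theta)
```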

Portfolio optimization in the presence of estimation errors on the expected asset returns

It is well known that the classical Markowitz model for portfolio optimization is extremely sensitive to estimation errors in the expected asset returns. Robust optimization mitigates this issue. We focus on ellipsoidal uncertainty sets around the point estimates of the expected asset returns. We investigate the performance of diagonal estimation-error matrices in the description of …
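
The worst-case expected return under such an ellipsoidal set admits a closed form, which the following sketch evaluates for a diagonal estimation-error matrix; the numbers are illustrative, and the formula $\hat\mu'x - \kappa\sqrt{x'\Theta x}$ is the classical robust counterpart, not a result specific to this paper.

```python
import numpy as np

def worst_case_return(x, mu_hat, Theta, kappa):
    """Worst-case expected portfolio return over the ellipsoid
    (mu - mu_hat)' Theta^{-1} (mu - mu_hat) <= kappa^2, which equals
    mu_hat'x - kappa * sqrt(x' Theta x)."""
    return mu_hat @ x - kappa * np.sqrt(x @ Theta @ x)

mu_hat = np.array([0.08, 0.05, 0.03])          # point estimates of returns
sigma_err = np.array([0.02, 0.01, 0.005])      # per-asset estimation error
Theta = np.diag(sigma_err**2)                  # diagonal estimation-error matrix
x = np.array([0.5, 0.3, 0.2])                  # portfolio weights
print(worst_case_return(x, mu_hat, Theta, kappa=1.0))
```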