Subdifferentials and SNC property of scalarization functionals with uniform level sets and applications

This paper deals with necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets by using a well-known nonlinear scalarization functional with uniform level sets (called Gerstewitz’ functional in the literature). The primary objective of this work is to establish revised formulas for basic and singular subdifferentials of …
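For reference, the Gerstewitz (Tammer) functional is usually defined in the literature, for a nonempty closed set C and a direction k satisfying C + [0, ∞)k ⊆ C, by

    φ_{C,k}(y) := inf { t ∈ ℝ : y ∈ t k − C },

whose sublevel sets { y : φ_{C,k}(y) ≤ t } = t k − C are translates of a single set along k, which is what "uniform level sets" refers to.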

Inexact restoration with subsampled trust-region methods for finite-sum minimization

Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first- and second-order methods in which the objective function, gradient and Hessian are approximated by randomly sampling components of the sum have received considerable attention. We propose a new trust-region method which employs suitable approximations of the objective function, gradient and Hessian …
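As a minimal sketch of the subsampling idea such methods rely on (the toy least-squares finite sum and uniform sampling below are illustrative assumptions, not the paper's inexact-restoration scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite sum: f(x) = (1/N) * sum_i 0.5 * (a_i^T x - b_i)^2
N, d = 1000, 5
A = rng.standard_normal((N, d))
b = rng.standard_normal(N)

def subsampled_model(x, sample_size):
    """Estimate f, grad f and Hess f at x from a random subset of components."""
    S = rng.choice(N, size=sample_size, replace=False)
    r = A[S] @ x - b[S]                     # residuals of the sampled components
    f_S = 0.5 * np.mean(r ** 2)             # subsampled objective value
    g_S = A[S].T @ r / sample_size          # subsampled gradient
    H_S = A[S].T @ A[S] / sample_size       # subsampled Hessian
    return f_S, g_S, H_S

f_S, g_S, H_S = subsampled_model(np.zeros(d), sample_size=100)
```

A trust-region step would then be computed from a quadratic model built with g_S and H_S instead of the full sums.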

Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample

We present two sampled quasi-Newton methods: sampled LBFGS and sampled LSR1. In contrast to the classical variants of these methods, which sequentially build (inverse) Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate to produce these approximations. As a result, the approximations constructed make use of more reliable (recent …
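A minimal sketch of that sampling step (the toy objective, Gaussian perturbations and sampling radius are illustrative assumptions, not the authors' implementation): curvature pairs are formed from points drawn around the current iterate rather than from past iterates.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # Gradient of a toy smooth objective f(x) = 0.5 * ||x||^2 + sum(cos(x)).
    return x - np.sin(x)

def sampled_curvature_pairs(x, num_pairs, radius):
    """Build (s, y) pairs from random points near x instead of past iterates."""
    g = grad_f(x)
    S, Y = [], []
    for _ in range(num_pairs):
        x_i = x + radius * rng.standard_normal(x.shape)   # random point near x
        S.append(x_i - x)                                  # step s_i
        Y.append(grad_f(x_i) - g)                          # gradient difference y_i
    return np.array(S), np.array(Y)

S, Y = sampled_curvature_pairs(np.ones(10), num_pairs=5, radius=1e-2)
# S and Y can then be fed to a standard L-BFGS or SR1 update formula.
```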

A study of rank-one sets with linear side constraints and application to the pooling problem

We study sets defined as the intersection of a rank-1 constraint with different choices of linear side constraints. We identify conditions on the linear side constraints under which the convex hull of the rank-1 set is polyhedral or second-order cone representable. In all these cases, we also show that a linear objective can be …
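One concrete instance of this flavor (the particular side constraints below are an illustrative assumption, not the paper's general setting) is the set

    S = { (x, y, X) ∈ ℝ^n × ℝ^m × ℝ^{n×m} : X = x yᵀ, 0 ≤ x ≤ u, 0 ≤ y ≤ v },

a bilinear, rank-one matrix constraint intersected with bounds on the factors, a pattern that arises, for example, in relaxations of the pooling problem.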

Active-set Newton methods and partial smoothness

Diverse optimization algorithms correctly identify, in finite time, intrinsic constraints that must be active at optimality. Analogous behavior extends beyond optimization to systems involving partly smooth operators, and in particular to variational inequalities over partly smooth sets. As in classical nonlinear programming, such active-set structure underlies the design of accelerated local algorithms of Newton type. …
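A small, self-contained illustration of finite-time active-set identification in a familiar special case (a lasso problem solved by proximal gradient; this is an illustrative assumption, not the partly smooth operators or sets studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Lasso: min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L = ||A||_2^2

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(500):
    # Proximal gradient step; after finitely many iterations the set of
    # nonzero coordinates (the "active set") typically stops changing.
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

print("identified support:", np.nonzero(x)[0])
```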

Chance-Constrained Bin Packing Problem with an Application to Operating Room Planning

We study the chance-constrained bin packing problem, with an application to hospital operating room planning. The bin packing problem allocates items whose random sizes follow a discrete distribution to a set of bins with limited capacity, while minimizing the total cost; the bin capacity constraints are required to hold with a given probability. We investigate a …
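In a generic formulation of this type (the notation is illustrative), binary variables x_{ij} assign item i to bin j, item sizes ξ_i are random, and each bin of capacity c_j must respect its capacity with high probability:

    P( Σ_i ξ_i x_{ij} ≤ c_j ) ≥ 1 − ε   for every bin j,

so a bin may be overloaded only with probability at most ε.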

Scheduling jobs with a V-shaped time-dependent processing time

In the field of time-dependent scheduling, a job’s processing time is specified by a function of its start time. While monotonic processing-time functions are well known in the literature, this paper introduces non-monotonic functions with a convex, piecewise-linear V-shape similar to the absolute value function. They attain their minimum at an ideal start time, which is …
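A processing time of this kind can be written, for a job j with ideal start time τ_j (the symbols here are illustrative), as

    p_j(t) = α_j + a_j · max(τ_j − t, 0) + b_j · max(t − τ_j, 0),   with a_j, b_j ≥ 0,

which is convex and piecewise linear in the start time t and attains its minimum α_j exactly at t = τ_j.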

An extragradient method for solving variational inequalities without monotonicity

In this paper we devise a new extragradient projection method that does not require generalized monotonicity and assumes only that the so-called dual variational inequality has a solution in order to ensure global convergence. In particular, it applies to quasimonotone variational inequalities having a nontrivial solution.
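For a variational inequality VI(F, C), i.e. find x* ∈ C such that ⟨F(x*), y − x*⟩ ≥ 0 for all y ∈ C, the classical extragradient iteration that such projection methods build on (due to Korpelevich) uses a step size λ > 0 and the projection P_C onto C:

    y^k = P_C( x^k − λ F(x^k) ),    x^{k+1} = P_C( x^k − λ F(y^k) );

the contribution here is a variant whose global convergence does not rest on (generalized) monotonicity of F.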

Learning to Project in Multi-Objective Binary Linear Programming

In this paper, we investigate the possibility of improving the performance of multi-objective optimization solution approaches using machine learning techniques. Specifically, we focus on multi-objective binary linear programs and employ the KSA, one of the most effective recently developed criterion-space search algorithms. This algorithm computes all nondominated points of …
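For reference, with objectives z_1, …, z_p to be minimized, a feasible solution x is efficient, and its image z(x) is a nondominated point of the criterion space, if there is no feasible x' with

    z_i(x') ≤ z_i(x) for all i   and   z_j(x') < z_j(x) for some j;

criterion-space search algorithms such as the KSA enumerate exactly these nondominated points.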

Rank-one Convexification for Sparse Regression

Sparse regression models are increasingly prevalent due to their ease of interpretability and superior out-of-sample performance. However, the exact sparse regression model, with an L0 constraint restricting the support of the estimator, is a challenging non-convex optimization problem. In this paper, we derive new strong convex relaxations for sparse regression. These relaxations are based …
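The exact problem in question is usually stated (with design matrix X, response y, and sparsity budget k; the notation is illustrative) as

    min_β ||y − Xβ||_2^2   subject to   ||β||_0 ≤ k,

where ||β||_0 counts the nonzero entries of β, and it is this combinatorial constraint that the convex relaxations replace.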