Quadratic convergence to the optimal solution of second-order conic optimization without strict complementarity

Under primal and dual nondegeneracy conditions, we establish the quadratic convergence of Newton’s method to the unique optimal solution of second-order conic optimization. Only a few approaches have been proposed to remedy the failure of strict complementarity, most of them based on nonsmooth analysis of the optimality conditions. Our local convergence result depends on the optimal partition … Read more
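As a generic illustration of the quadratic convergence the abstract refers to (a toy scalar Newton iteration on f(x) = x² − 2, not the paper's conic setting), each Newton step roughly squares the error:

```python
# Toy sketch of quadratic convergence: Newton's method on f(x) = x**2 - 2,
# whose root is sqrt(2). This is illustrative only, not the paper's method.
def newton(f, df, x, iters):
    errs = []
    for _ in range(iters):
        errs.append(abs(x - 2 ** 0.5))  # error before this iteration
        x = x - f(x) / df(x)            # Newton update
    return x, errs

x, errs = newton(lambda x: x * x - 2, lambda x: 2 * x, 2.0, 6)
# Each recorded error is roughly the square of the previous one.
```

Quadratic convergence means the number of correct digits roughly doubles per iteration, which is why six steps already reach machine precision here.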

Algorithms and Software for the Golf Director Problem

The golf director problem introduced in Pavlikov et al. (2014) is a sports management problem that aims to find an allocation of golf players into fair teams for certain golf club competitions. The motivation for the problem is that club golf competitions are recreational events where the golf director wants to form teams that are … Read more
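To make the fair-team idea concrete, here is a hypothetical toy instance (the handicap data and the two-team, equal-size setup are assumptions, not from the paper): split eight players into two teams of four so that the total handicaps are as balanced as possible, by brute-force enumeration.

```python
# Hypothetical fair-team toy: partition 8 players (by handicap) into two
# teams of 4 minimizing the gap between team handicap totals.
from itertools import combinations

handicaps = [4, 7, 9, 12, 15, 18, 21, 24]   # illustrative data
total = sum(handicaps)
best = min(
    (abs(total - 2 * sum(team)), team)      # gap between the two team totals
    for team in combinations(handicaps, 4)
)
# best[0] is the smallest achievable gap; best[1] is one optimal team
```

The actual golf director problem is far richer (team formats, fairness criteria, competition rules), which is why the paper develops dedicated algorithms rather than enumeration.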

Regularization via Mass Transportation

The goal of regression and classification methods in supervised learning is to minimize the empirical risk, that is, the expectation of some loss function quantifying the prediction error under the empirical distribution. When facing scarce training data, overfitting is typically mitigated by adding regularization terms to the objective that penalize hypothesis complexity. In this paper … Read more
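A minimal sketch of the regularized empirical risk minimization the abstract describes, using a 1-D least-squares loss with an L2 penalty (ridge regression) in place of the paper's mass-transportation regularizer; the data and penalty weight are illustrative:

```python
# Regularized ERM sketch: minimize sum_i (w*x_i - y_i)**2 + lam * w**2
# over a scalar weight w. The closed-form minimizer shows how the penalty
# shrinks the fitted weight; lam and the data are illustrative.
def ridge_1d(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w0 = ridge_1d(xs, ys, 0.0)    # unregularized fit: w = 2.0
w1 = ridge_1d(xs, ys, 14.0)   # penalized fit, shrunk toward zero
```

The penalty trades a little bias for lower variance on scarce data, which is the overfitting mitigation the abstract refers to.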

Trust-Region Algorithms for Training Responses: Machine Learning Methods Using Indefinite Hessian Approximations

Machine learning (ML) problems are often posed as highly nonlinear and nonconvex unconstrained optimization problems. Methods for solving ML problems based on stochastic gradient descent are easily scaled for very large problems but may involve fine-tuning many hyper-parameters. Quasi-Newton approaches based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) update typically do not require manually tuning hyper-parameters but … Read more
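To illustrate why trust regions cope with indefinite Hessian approximations (a 1-D caricature, not the paper's algorithm): the step minimizes a local quadratic model but is truncated to a trust radius, and negative model curvature simply pushes the step to the boundary instead of breaking the method.

```python
# 1-D trust-region step sketch for model m(s) = g*s + 0.5*h*s**2, |s| <= radius.
# Illustrative only; the paper works with limited-memory quasi-Newton models.
def tr_step(g, h, radius):
    if h > 0:
        s = -g / h                            # unconstrained model minimizer
        return max(-radius, min(radius, s))   # clip to the trust region
    # indefinite/negative curvature: descend to the trust-region boundary
    return -radius if g > 0 else radius

s1 = tr_step(g=2.0, h=4.0, radius=1.0)   # interior step: -0.5
s2 = tr_step(g=2.0, h=-1.0, radius=1.0)  # boundary step: -1.0
```

A line-search method would have to safeguard against the nonconvex model; the trust-region constraint handles it natively, which is the appeal for nonconvex ML objectives.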

Tighter McCormick Relaxations through Subgradient Propagation

Tight convex and concave relaxations are of high importance in the field of deterministic global optimization. We present a heuristic to tighten relaxations obtained by the McCormick technique. We use the McCormick subgradient propagation (Mitsos et al., SIAM J. Optim., 2009) to construct simple affine under- and overestimators of each factor of the original factorable … Read more
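For readers new to McCormick relaxations, a sketch of the standard convex underestimator of a bilinear term x·y on a box (this is the classical envelope, not the tightened relaxation the paper constructs):

```python
# Standard McCormick convex underestimator of x*y on [xL, xU] x [yL, yU]:
# the pointwise max of the two supporting affine functions. Illustrative only.
def mccormick_under(x, y, xL, xU, yL, yU):
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

# Never exceeds x*y anywhere on the box, e.g. at (0.5, 0.5) on [0,1] x [0,1]:
v = mccormick_under(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
```

The paper's heuristic uses subgradient information to build affine bounds on each factor of a factorable function, tightening exactly this kind of relaxation.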

BASBL: Branch-And-Sandwich BiLevel solver. II. Implementation and computational study with the BASBLib test set

We describe BASBL, our implementation of the deterministic global optimization algorithm Branch-and-Sandwich for nonconvex/nonlinear bilevel problems, within the open-source MINOTAUR framework. The solver incorporates the original Branch-and-Sandwich algorithm and modifications proposed in the first part of this work. We also introduce BASBLib, an extensive online library of bilevel benchmark problems collected from the literature and … Read more

Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs

We introduce Sieve-SDP, a simple algorithm to preprocess semidefinite programs (SDPs). Sieve-SDP belongs to the class of facial reduction algorithms. It inspects the constraints of the problem, deletes redundant rows and columns, and reduces the size of the variable matrix. It often detects infeasibility. It does not rely on any optimization solver: the only subroutine … Read more
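A toy sketch of the facial-reduction logic behind such preprocessing (a diagonal special case, chosen for illustration rather than taken from the paper): a constraint Σⱼ dⱼ·X[j][j] = 0 with all dⱼ ≥ 0 and X positive semidefinite forces X[j][j] = 0 wherever dⱼ > 0, so those rows and columns of X can be deleted.

```python
# Diagonal facial-reduction toy: given nonnegative coefficients d of a
# constraint sum_j d[j] * X[j][j] == 0 on a PSD matrix X, every index with
# d[j] > 0 must have a zero diagonal (hence zero row/column) and is removed.
def sieve_diagonal(d):
    return [j for j, dj in enumerate(d) if dj == 0]  # surviving indices

keep = sieve_diagonal([0.0, 2.0, 0.0, 5.0])  # indices 1 and 3 are eliminated
```

Shrinking the variable matrix this way needs no optimization solver, only inspection of the constraint data, which matches the abstract's description.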

Best subset selection of factors affecting influenza spread using bi-objective optimization

A typical approach for computing an optimal strategy for non-pharmaceutical interventions during an influenza outbreak is based on statistical ANOVA. In this study, for the first time, we propose to use bi-objective mixed integer linear programming. Our approach employs an existing agent-based simulation model and statistical design of experiments presented in Martinez and Das (2014) … Read more
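The essence of the bi-objective view can be sketched with a Pareto filter over candidate intervention strategies (the data points below are purely illustrative, not the paper's simulation outputs): keep only the trade-offs not dominated in both objectives.

```python
# Pareto filter sketch: each point is (cost, negative benefit), both to be
# minimized; keep the nondominated trade-offs. Data is illustrative only.
def pareto(points):
    kept = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            kept.append(p)
    return kept

front = pareto([(1, 5), (2, 3), (3, 4), (4, 1)])  # (3, 4) loses to (2, 3)
```

A bi-objective MILP produces this frontier exactly rather than by enumeration, letting decision makers inspect the full cost/effectiveness trade-off instead of a single ANOVA-selected setting.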

BASBL: Branch-And-Sandwich BiLevel solver I. Theoretical advances and algorithmic improvements

In this paper, we consider the global solution of bilevel programs involving nonconvex functions. We present algorithmic improvements and extensions to the recently proposed deterministic Branch-and-Sandwich algorithm (Kleniati and Adjiman, J. Glob. Opt. 60, 425–458, 2014), based on theoretical results and heuristics. Choices in the way each step of the Branch-and-Sandwich algorithm is tackled, … Read more

Convergence rates of accelerated proximal gradient algorithms under independent noise

We consider an accelerated proximal gradient algorithm for composite optimization with “independent errors” (errors bearing little relation to historical information) for solving linear inverse problems. We present a new inexact version of the FISTA algorithm that accounts for both deterministic and stochastic noise. We prove convergence rates for the algorithm and connect it with the current existing … Read more
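For orientation, a minimal noiseless FISTA sketch on a 1-D lasso-type problem (exact proximal step, no error model; the problem data are illustrative and this is not the paper's inexact variant): minimize ½(x − b)² + λ|x|, whose solution is the soft-threshold of b.

```python
# Minimal FISTA sketch: smooth part 0.5*(x - b)**2, nonsmooth part lam*|x|.
# The prox of lam*|.| is soft-thresholding. Illustrative, noiseless version.
def soft(z, t):
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def fista(b, lam, step, iters):
    x, y, t = 0.0, 0.0, 1.0
    for _ in range(iters):
        x_new = soft(y - step * (y - b), step * lam)  # proximal gradient step
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)     # momentum extrapolation
        x, t = x_new, t_new
    return x

x = fista(b=3.0, lam=1.0, step=1.0, iters=50)  # soft-threshold of 3.0 is 2.0
```

The paper's analysis asks how the O(1/k²) rate of this scheme degrades when the gradient or proximal step is computed with independent deterministic or stochastic errors.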