Improved dynamic programming and approximation results for the knapsack problem with setups

We consider the 0-1 Knapsack Problem with Setups (KPS). Items are grouped into families, and packing any item of a family induces both a setup cost and a setup resource consumption. We introduce a new dynamic programming algorithm which performs much better than a previous dynamic program and turns out to … Read more
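To make the model concrete, here is a minimal, hypothetical Python sketch of a textbook-style dynamic program for KPS (not the paper's algorithm): each family is either left closed or opened by paying its setup weight and setup cost, after which an ordinary 0-1 knapsack pass runs over its items. All names and data layouts are illustrative assumptions.

def kps_dp(families, capacity):
    # families: list of (setup_cost, setup_weight, items), where items
    # is a list of (profit, weight) pairs; all data are illustrative.
    best = [0] * (capacity + 1)  # best[c]: max profit within capacity c
    for setup_cost, setup_weight, items in families:
        # "opened" state: pay the setup weight and cost up front ...
        opened = [float("-inf")] * (capacity + 1)
        for c in range(setup_weight, capacity + 1):
            opened[c] = best[c - setup_weight] - setup_cost
        # ... then run a standard 0-1 knapsack over the family's items.
        for profit, weight in items:
            for c in range(capacity, weight - 1, -1):
                if opened[c - weight] > float("-inf"):
                    opened[c] = max(opened[c], opened[c - weight] + profit)
        # Keep the better of "family opened" and "family closed".
        best = [max(b, o) for b, o in zip(best, opened)]
    return max(best)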

Invariance conditions for nonlinear dynamical systems

Recently, Horváth, Song, and Terlaky [\emph{A novel unified approach to invariance condition of dynamical system}, submitted to Applied Mathematics and Computation] proposed a novel unified approach to study invariance conditions, i.e., sufficient and necessary conditions under which certain convex sets are invariant sets for linear dynamical systems. In this paper, using an analogous methodology, we … Read more
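As a reminder of what such an invariance condition looks like in the linear, polyhedral case (a classical result; the paper's contribution concerns the nonlinear setting), a polyhedron $\mathcal{P} = \{x : Gx \le b\}$ is an invariant set for the discrete-time system $x_{k+1} = A x_k$ if and only if there exists an entrywise nonnegative matrix $H$ with
\[
HG = GA \qquad \text{and} \qquad Hb \le b.
\]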

Steplength thresholds for invariance preserving of discretization methods of dynamical systems on a polyhedron

We consider steplength thresholds under which three types of discretization methods preserve the invariance of a polyhedron. For Taylor-approximation-type discretization methods we prove that a valid steplength threshold can be obtained by finding the first positive zeros of a finite number of polynomial functions. Further, a simple and efficient algorithm is proposed to numerically … Read more
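A minimal sketch of the generic computation this suggests, assuming the polynomials are already available as coefficient lists (the names and tolerances are hypothetical, and this is not the paper's algorithm):

import numpy as np

def steplength_threshold(polys, tol=1e-12):
    # polys: one coefficient array per polynomial p_i(h), highest
    # degree first.  Returns the smallest first positive real zero
    # over all polynomials, a valid steplength threshold in the
    # sense described above.
    threshold = np.inf
    for coeffs in polys:
        roots = np.roots(coeffs)
        positive = [r.real for r in roots
                    if abs(r.imag) < tol and r.real > tol]
        if positive:
            threshold = min(threshold, min(positive))
    return threshold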

Fréchet inequalities via convex optimization

Quantifying the risk carried by an aggregate position $S_d := \sum_{i=1}^d X_i$ comprising many risk factors $X_i$ is fundamental to both insurance and financial risk management. Fréchet inequalities quantify the worst-case risk carried by the aggregate position given distributional information concerning its composing factors, but without assuming independence. This marginal factor modeling of the aggregate position in … Read more
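For orientation, the classical two-factor Fréchet–Hoeffding inequalities bound the joint distribution of $(X_1, X_2)$ using only the marginals $F_1, F_2$:
\[
\max\{F_1(x_1) + F_2(x_2) - 1,\, 0\} \;\le\; \mathbb{P}(X_1 \le x_1,\, X_2 \le x_2) \;\le\; \min\{F_1(x_1),\, F_2(x_2)\}.
\]
Bounds of this flavor on the risk of $S_d$ itself are the kind of quantities the paper's convex-optimization formulations address.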

Parallel stochastic line search methods with feedback for minimizing finite sums

We consider unconstrained minimization of a finite sum of $N$ continuously differentiable, not necessarily convex, cost functions. Several gradient-like (and, more generally, line search) methods, where the full gradient (the sum of the $N$ component costs’ gradients) at each iteration $k$ is replaced with an inexpensive approximation based on a sub-sample $\mathcal{N}_k$ of the component costs’ gradients, … Read more
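A minimal sketch of one such step under simple assumptions (fixed subsample, Armijo backtracking on the subsampled objective; the paper's methods additionally adapt the sample size through feedback, which is not modeled here, and all names are hypothetical):

import numpy as np

def subsampled_armijo_step(x, fis, gis, batch, alpha0=1.0, c=1e-4,
                           rho=0.5, max_backtracks=30):
    # fis[i], gis[i]: the i-th component cost and its gradient.
    # The full gradient is replaced by the sum over `batch`.
    f_B = lambda z: sum(fis[i](z) for i in batch)
    g = sum(gis[i](x) for i in batch)      # inexpensive approximation
    alpha, fx, gg = alpha0, f_B(x), float(np.dot(g, g))
    for _ in range(max_backtracks):        # Armijo backtracking
        if f_B(x - alpha * g) <= fx - c * alpha * gg:
            break
        alpha *= rho
    return x - alpha * g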

A Simplified Form of Block-Iterative Operator Splitting, and an Asynchronous Algorithm Resembling the Multi-Block ADMM

This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computing model inspired by modern HPC environments, which consist of interconnected nodes each having … Read more
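For reference, the synchronous multi-block ADMM that the updates resemble, applied to $\min \sum_{i=1}^m f_i(x_i)$ subject to $\sum_{i=1}^m A_i x_i = b$ (standard scaled-dual form; the paper's scheme differs in its block-iterative, asynchronous structure):
\[
\begin{aligned}
x_i^{k+1} &= \operatorname*{argmin}_{x_i}\; f_i(x_i) + \frac{\rho}{2}\Big\|\sum_{j<i} A_j x_j^{k+1} + A_i x_i + \sum_{j>i} A_j x_j^{k} - b + u^k\Big\|^2, \quad i = 1,\dots,m, \\
u^{k+1} &= u^k + \sum_{i=1}^m A_i x_i^{k+1} - b.
\end{aligned}
\]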

Doubly Nonnegative Relaxations for Quadratic and Polynomial Optimization Problems with Binary and Box Constraints

We propose a doubly nonnegative (DNN) relaxation for polynomial optimization problems (POPs) with binary and box constraints, extending the 2016 work of Kim, Kojima, and Toh from quadratic optimization problems (QOPs) to POPs. The dense and sparse DNN relaxations are reduced to a simple conic optimization problem (COP) to … Read more
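Here "doubly nonnegative" refers to the standard cone
\[
\mathbb{DNN}^n = \{ X \in \mathbb{S}^n : X \succeq 0 \ \text{and} \ X \ge 0 \ \text{entrywise} \},
\]
a tractable outer approximation of the completely positive cone over which these relaxations optimize.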

Complete Description of Matching Polytopes with One Linearized Quadratic Term for Bipartite Graphs

We consider, for complete bipartite graphs, the convex hulls of characteristic vectors of matchings, extended by a binary number indicating whether the matching contains two specific edges. This polytope is associated with the quadratic matching problem with a single linearized quadratic term. We provide a complete irredundant inequality description, which settles a conjecture by Klein … Read more
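The single linearized term is the usual one: for binary variables $x_e, x_f$ and an indicator $y$ of "both edges in the matching", the product $y = x_e x_f$ is modeled exactly by the McCormick-type inequalities
\[
y \le x_e, \qquad y \le x_f, \qquad y \ge x_e + x_f - 1, \qquad y \ge 0,
\]
and the paper's contribution is the complete description of the convex hull of the feasible $(x, y)$ vectors.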

A derivative-free trust-region augmented Lagrangian algorithm

We present a new derivative-free trust-region (DFTR) algorithm that solves general nonlinear constrained problems by means of an augmented Lagrangian method. No derivatives are used, either for the objective function or for the constraints. The augmented Lagrangian method, known as an effective tool to solve equality- and inequality-constrained optimization problems with derivatives, is … Read more
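For the equality-constrained case $\min f(x)$ s.t. $c(x) = 0$, the augmented Lagrangian in question has the standard form
\[
L_A(x, \lambda; \mu) = f(x) + \lambda^\top c(x) + \frac{\mu}{2}\,\|c(x)\|^2,
\]
which an outer loop approximately minimizes in $x$ (here by a derivative-free trust-region method) before updating the multipliers, e.g. $\lambda \leftarrow \lambda + \mu\, c(x)$, and the penalty parameter $\mu$.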

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ($\mathcal{SFO}$). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case … Read more
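A compact, illustrative sketch of the ingredients (a stochastic L-BFGS-style loop in which the oracle sfo(x) returns a noisy gradient; the curvature-pair safeguard and all parameter names are assumptions, and the paper's framework and guarantees are considerably more refined):

import numpy as np

def stochastic_lbfgs(sfo, x, n_iters=100, m=10, lr=0.1, eps=1e-8):
    S, Y = [], []                      # stored curvature pairs (s, y)
    g = sfo(x)                         # noisy gradient from the oracle
    for _ in range(n_iters):
        # Two-loop recursion: q approximates H @ g for the current
        # L-BFGS inverse-Hessian approximation H.
        q, alphas = g.copy(), []
        for s, y in reversed(list(zip(S, Y))):
            a = np.dot(s, q) / np.dot(s, y)
            alphas.append(a)
            q -= a * y
        if S:                          # standard initial scaling
            q *= np.dot(S[-1], Y[-1]) / np.dot(Y[-1], Y[-1])
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):
            b = np.dot(y, q) / np.dot(s, y)
            q += (a - b) * s
        x_new = x - lr * q
        g_new = sfo(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > eps * np.dot(s, s):   # keep only useful pairs
            S.append(s); Y.append(y)
            if len(S) > m:
                S.pop(0); Y.pop(0)
        x, g = x_new, g_new
    return x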