A Parallel Inertial Proximal Optimization Method

The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximal monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm including inertia parameters and develop parallel versions to deal with the case of a sum of an arbitrary number of …
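
For readers who want the mechanics, here is a minimal sketch of a Douglas-Rachford iteration with an inertial extrapolation step, applied to the toy problem min ‖x‖₁ + ½‖x − b‖². The step size `gamma`, inertia weight `alpha`, and the choice of prox maps are illustrative assumptions, not the scheme analyzed in the paper.

```python
import numpy as np

def prox_l1(v, gamma):
    # Proximal map of gamma * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def prox_quad(v, gamma, b):
    # Proximal map of gamma * 0.5 * ||. - b||^2.
    return (v + gamma * b) / (1.0 + gamma)

def inertial_douglas_rachford(b, gamma=1.0, alpha=0.3, iters=200):
    y = np.zeros_like(b)
    y_prev = y.copy()
    for _ in range(iters):
        w = y + alpha * (y - y_prev)        # inertial extrapolation
        x = prox_quad(w, gamma, b)          # resolvent of the first operator
        z = prox_l1(2.0 * x - w, gamma)     # resolvent of the second operator
        y_prev, y = y, w + z - x            # Douglas-Rachford update
    return prox_quad(y, gamma, b)

b = np.array([2.0, -0.3, 0.8])
print(inertial_douglas_rachford(b))  # ~ [1., 0., 0.], the soft-threshold of b
```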

A biased random-key genetic algorithm for the Steiner triple covering problem

We present a biased random-key genetic algorithm (BRKGA) for finding small covers of computationally difficult set covering problems that arise in computing the 1-width of incidence matrices of Steiner triple systems. Using a parallel implementation of the BRKGA, we compute improved covers for the two largest instances in a standard set of test problems used …
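
As a rough sketch of the BRKGA ingredients — random keys in [0, 1], a problem-specific decoder, elite preservation, mutants, and biased crossover — the following fragment assumes a greedy set-covering decoder. All names and parameter values (`elite_frac`, `rho`, and so on) are hypothetical illustrations, not the authors' implementation.

```python
import random

def decode_cover(keys, cover_sets, n_rows):
    # Greedy decoder: scan columns in increasing key order and add a
    # column only if it covers a still-uncovered row.
    order = sorted(range(len(keys)), key=lambda j: keys[j])
    uncovered, chosen = set(range(n_rows)), []
    for j in order:
        if uncovered & cover_sets[j]:
            chosen.append(j)
            uncovered -= cover_sets[j]
        if not uncovered:
            break
    return chosen if not uncovered else None  # None = infeasible decode

def brkga_step(population, fitness, elite_frac=0.2, mutant_frac=0.1, rho=0.7):
    # One BRKGA generation: keep the elites, inject fresh random mutants,
    # and fill the rest by biased crossover (each key copied from the
    # elite parent with probability rho).
    pop = sorted(population, key=fitness)
    n, dim = len(pop), len(pop[0])
    n_elite, n_mut = int(elite_frac * n), int(mutant_frac * n)
    nxt = pop[:n_elite]
    nxt += [[random.random() for _ in range(dim)] for _ in range(n_mut)]
    while len(nxt) < n:
        e = random.choice(pop[:n_elite])       # elite parent
        o = random.choice(pop[n_elite:])       # non-elite parent
        nxt.append([e[k] if random.random() < rho else o[k]
                    for k in range(dim)])
    return nxt
```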

On the parallel solution of dense saddle-point linear systems arising in stochastic programming

We present a novel approach for solving dense saddle-point linear systems in a distributed-memory environment. This work is motivated by an application in stochastic optimization problems with recourse, but the proposed approach can be used for a large family of dense saddle-point systems, in particular those arising in convex programming. Although stochastic optimization problems have …
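
To fix notation, a saddle-point system has the block form [[Q, Aᵀ], [A, 0]], and a standard serial approach eliminates the (1,1) block through a Schur complement. The dense NumPy sketch below shows that elimination on random data; the paper's distributed-memory method is considerably more involved.

```python
import numpy as np

def solve_saddle_point(Q, A, f, g):
    # Solve [[Q, A.T], [A, 0]] [x; y] = [f; g] via the Schur
    # complement S = A Q^{-1} A^T of the (1,1) block.
    Qinv_f = np.linalg.solve(Q, f)
    Qinv_At = np.linalg.solve(Q, A.T)
    S = A @ Qinv_At
    y = np.linalg.solve(S, A @ Qinv_f - g)
    x = Qinv_f - Qinv_At @ y
    return x, y

rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)      # symmetric positive definite block
A = rng.standard_normal((m, n))  # full row rank constraint block
x, y = solve_saddle_point(Q, A, rng.standard_normal(n), rng.standard_normal(m))
```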

A preconditioning technique for Schur complement systems arising in stochastic optimization

Deterministic sample average approximations of stochastic programming problems with recourse are suitable for a scenario-based, treelike parallelization with interior-point methods and a Schur complement mechanism. However, the direct linear solves involving the Schur complement matrix are expensive, and adversely affect the scalability of this approach. In this paper we propose a stochastic preconditioner to address …
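
One way to picture a stochastic preconditioner for a Schur complement C = Σᵢ Cᵢ assembled over many scenarios is to build M from a scaled subsample of the scenario contributions and apply it inside preconditioned CG. The sketch below illustrates that idea on synthetic data; the subsample-and-scale construction is an assumption made for illustration, not necessarily the preconditioner proposed in the paper.

```python
import numpy as np

def pcg(C, b, M_inv, tol=1e-10, maxit=500):
    # Preconditioned conjugate gradients with preconditioner M^{-1}.
    x = np.zeros_like(b)
    r = b - C @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(maxit):
        Cp = C @ p
        alpha = (r @ z) / (p @ Cp)
        x += alpha * p
        r_new = r - alpha * Cp
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(1)
n, N, k = 20, 200, 20
pieces = []
for _ in range(N):
    B = rng.standard_normal((n, n)) / np.sqrt(n)
    pieces.append(B @ B.T + 0.05 * np.eye(n))   # one scenario's contribution
C = sum(pieces)                                  # full Schur complement
M = (N / k) * sum(pieces[:k])                    # scaled-subsample preconditioner
x = pcg(C, rng.standard_normal(n), np.linalg.inv(M))
```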

Exploiting run time distributions to compare sequential and parallel stochastic local search algorithms

Run time distributions or time-to-target plots are very useful tools to characterize the running times of stochastic algorithms for combinatorial optimization. We further explore run time distributions and describe a new tool to compare two algorithms based on stochastic local search. For the case where the running times of both algorithms fit exponential distributions, we …
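
When both fitted distributions are exponential, the comparison has a closed form: for independent X₁ ~ Exp(λ₁) and X₂ ~ Exp(λ₂), P(X₁ < X₂) = λ₁/(λ₁ + λ₂). A quick sanity check with arbitrarily chosen mean times-to-target:

```python
import random

def prob_first_is_faster(mean1, mean2):
    # With lam = 1/mean, P(X1 < X2) = lam1 / (lam1 + lam2),
    # which simplifies to mean2 / (mean1 + mean2).
    return mean2 / (mean1 + mean2)

def estimate_by_simulation(mean1, mean2, trials=100_000):
    # Monte Carlo estimate of the same probability.
    wins = sum(random.expovariate(1 / mean1) < random.expovariate(1 / mean2)
               for _ in range(trials))
    return wins / trials

print(prob_first_is_faster(10.0, 30.0))    # 0.75 in closed form
print(estimate_by_simulation(10.0, 30.0))  # ~0.75 by simulation
```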

Python Optimization Modeling Objects (Pyomo)

We describe Pyomo, an open source tool for modeling optimization applications in Python. Pyomo can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, but Pyomo's modeling objects are …
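
A minimal concrete Pyomo model, to make the modeling style tangible; the choice of 'glpk' as the solver is an assumption — any LP solver installed locally can be handed to SolverFactory.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize, SolverFactory)

# A small concrete LP: min x + 2y  s.t.  3x + 4y >= 1, x, y >= 0.
model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
model.y = Var(domain=NonNegativeReals)
model.obj = Objective(expr=model.x + 2 * model.y, sense=minimize)
model.c1 = Constraint(expr=3 * model.x + 4 * model.y >= 1)

# 'glpk' is just one choice of installed solver.
SolverFactory('glpk').solve(model)
print(model.x.value, model.y.value)
```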

On String-Averaging for Sparse Problems and On the Split Common Fixed Point Problem

We review the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. We present our recent definition of sparseness of a family of operators and discuss a string-averaging algorithmic scheme that favorably handles the common fixed point problem …
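
A generic string-averaging step applies the operators sequentially along each string and then forms a convex combination of the string endpoints. The toy sketch below uses hyperplane projections, which belong to the directed-operator class; the particular strings and weights are arbitrary choices for illustration.

```python
import numpy as np

def string_averaging_step(x, operators, strings, weights):
    # Run the operators along each string, then take a convex
    # combination of the resulting endpoints.
    endpoints = []
    for s in strings:
        y = x
        for i in s:
            y = operators[i](y)
        endpoints.append(y)
    return sum(w * e for w, e in zip(weights, endpoints))

def proj_hyperplane(a, b):
    # Metric projection onto the hyperplane {x : a . x = b}.
    a = np.asarray(a, dtype=float)
    return lambda x: x - (a @ x - b) / (a @ a) * a

ops = [proj_hyperplane([1.0, 0.0], 1.0), proj_hyperplane([0.0, 1.0], 2.0)]
x = np.zeros(2)
for _ in range(20):
    x = string_averaging_step(x, ops, strings=[(0, 1), (1, 0)],
                              weights=[0.5, 0.5])
print(x)  # reaches the common point (1, 2)
```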

Asset-Liability Management Modelling with Risk Control by Stochastic Dominance

An Asset-Liability Management model with a novel strategy for controlling risk of underfunding is presented in this paper. The basic model involves multiperiod decisions (portfolio rebalancing) and deals with the usual uncertainty of investment returns and future liabilities. Therefore it is well-suited to a stochastic programming approach. A stochastic dominance concept is applied to measure …
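
For concreteness, one standard way to express second-order stochastic dominance of a portfolio return R over a benchmark Y is through expected-shortfall inequalities (the paper's exact dominance variant may differ):

```latex
\[
R \succeq_{(2)} Y
\quad\Longleftrightarrow\quad
\mathbb{E}\!\left[(t - R)_+\right] \;\le\; \mathbb{E}\!\left[(t - Y)_+\right]
\quad \text{for all } t \in \mathbb{R}.
\]
```

When Y is discrete with realizations y₁, …, y_S, it suffices to impose the inequality at t = y_s, which turns the dominance requirement into finitely many linear constraints in a scenario-based model.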

Hybrid MPI/OpenMP parallel support vector machine training

Support Vector Machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally challenging. A parallel implementation of Support Vector Machine training has been developed, using a combination of MPI and OpenMP. Using an interior point method for the optimization and a reformulation that avoids the …
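
The dense quadratic problem in question is the standard SVM dual: with kernel K, labels yᵢ ∈ {−1, +1}, and penalty parameter C,

```latex
\[
\min_{\alpha \in \mathbb{R}^n}\; \tfrac{1}{2}\,\alpha^{\top} Q\,\alpha - e^{\top}\alpha
\quad \text{s.t.} \quad y^{\top}\alpha = 0,\; 0 \le \alpha_i \le C,
\qquad Q_{ij} = y_i\, y_j\, K(x_i, x_j).
\]
```

Q has an entry for every pair of training points, which is what makes large-scale training memory- and compute-intensive and motivates parallel decomposition.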

Efficient high-precision dense matrix algebra on parallel architectures for nonlinear discrete optimization

We provide a proof point for the idea that matrix-based algorithms for discrete optimization problems, mainly conceived for proving theoretical efficiency, can be easily and efficiently implemented on massively parallel architectures by exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision dense linear algebra. We have successfully implemented our algorithm on the Blue Gene/L …
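
As a small single-node illustration of the arithmetic involved (not the authors' Blue Gene/L implementation), Python's mpmath can run dense linear algebra at, say, 100 significant digits, where classically ill-conditioned systems remain solvable:

```python
from mpmath import mp, matrix

# Work at 100 significant digits; systems that break double precision
# remain solvable at this accuracy.
mp.dps = 100

# A 10x10 Hilbert matrix, the classic ill-conditioned example.
n = 10
H = matrix(n, n)
for i in range(n):
    for j in range(n):
        H[i, j] = mp.mpf(1) / (i + j + 1)

b = matrix([mp.mpf(1)] * n)
x = mp.lu_solve(H, b)       # LU solve in extended precision
print(mp.norm(H * x - b))   # residual far below anything double precision can reach
```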