A Parallel Bundle Method for Asynchronous Subspace Optimization in Lagrangian Relaxation

An algorithmic approach is proposed for exploiting parallelization possibilities in large scale optimization models of the following generic type. Objects change their state over time subject to a limited availability of common resources. These are modeled by linear coupling constraints and result in few objects competing for the same resource at each point in time. …
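
As a generic illustration of the structure described above (not the paper's exact model): if object j has feasible set X_j and cost vector c_j, and the linear coupling constraints read \sum_j A_j x_j \le b, then dualizing the coupling with multipliers \lambda \ge 0 gives the Lagrangian dual

    \max_{\lambda \ge 0} \; \Big[ \sum_{j} \min_{x_j \in X_j} \big( c_j^\top x_j + \lambda^\top A_j x_j \big) - \lambda^\top b \Big],

in which the inner minimizations decouple over the objects and can be carried out in parallel while a bundle method maximizes over \lambda.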

Distributed Basis Pursuit

We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize …
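
For reference, Basis Pursuit as described above is the convex program

    \min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad Ax = b,

with A \in \mathbb{R}^{m \times n} and m < n; splitting x = x^+ - x^- with x^+, x^- \ge 0 turns it into a linear program with objective \mathbf{1}^\top (x^+ + x^-). The distributed splitting used by the proposed algorithm is not shown here.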

Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design

In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable (potential bar) to be updated in the next iteration. Both serial methods enjoy an O(n/k) iteration complexity …
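
The following is a minimal sketch of randomized coordinate descent on a smooth convex quadratic, meant only to illustrate the per-iteration work of such methods; it is not the authors' truss topology code, and the function and parameter names are assumptions.

    # Generic randomized coordinate descent sketch (illustrative only).
    # Minimizes f(x) = 0.5*x'Qx - b'x, updating one randomly chosen
    # coordinate per iteration with step 1/L_i, where L_i = Q[i, i].
    import numpy as np

    def randomized_cd(Q, b, iters=5000, seed=0):
        rng = np.random.default_rng(seed)
        n = b.size
        x = np.zeros(n)
        grad = Q @ x - b          # gradient, maintained incrementally
        L = np.diag(Q).copy()     # coordinate-wise Lipschitz constants
        for _ in range(iters):
            i = rng.integers(n)             # uniform random coordinate
            delta = -grad[i] / L[i]         # exact minimizer along coordinate i
            x[i] += delta
            grad += delta * Q[:, i]         # O(n) gradient update
        return x

    # Usage on a small random positive definite instance
    rng = np.random.default_rng(1)
    M = rng.standard_normal((20, 20))
    Q = M.T @ M + np.eye(20)
    b = rng.standard_normal(20)
    x = randomized_cd(Q, b)
    print(np.linalg.norm(Q @ x - b))        # residual should be small

A greedy rule would replace the uniform random choice of i by the coordinate with the largest (suitably scaled) gradient magnitude.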

Scalable Stochastic Optimization of Complex Energy Systems

We present a scalable approach and implementation for solving stochastic programming problems, with application to the optimization of complex energy systems under uncertainty. Stochastic programming is used to make decisions in the present while incorporating a model of uncertainty about future events (scenarios). These problems present serious computational difficulties as the number of scenarios becomes …
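
The scenario structure alluded to above is commonly formalized as a two-stage stochastic program (generic notation, assumed here for illustration):

    \min_{x \in X} \; c^\top x + \sum_{s=1}^{S} p_s Q_s(x), \qquad
    Q_s(x) = \min_{y_s \ge 0} \{ q_s^\top y_s : W_s y_s = h_s - T_s x \},

where scenario s occurs with probability p_s; the number of scenarios S drives both the problem size and the parallelism that a scalable implementation can exploit.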

A Parallel Inertial Proximal Optimization Method

The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximal monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm including inertia parameters and develop parallel versions to deal with the case of a sum of an arbitrary number of …
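
For context, with maximally monotone operators A and B and resolvents J_{\gamma A} = (I + \gamma A)^{-1} and J_{\gamma B} = (I + \gamma B)^{-1}, the classical Douglas-Rachford iteration for 0 \in Ax + Bx can be written as

    y_k = J_{\gamma A}(x_k), \qquad
    x_{k+1} = x_k + \lambda_k \big( J_{\gamma B}(2 y_k - x_k) - y_k \big), \qquad \lambda_k \in (0, 2);

the inertia parameters and parallel versions mentioned above extend this basic recursion.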

A biased random-key genetic algorithm for the Steiner triple covering problem

We present a biased random-key genetic algorithm (BRKGA) for finding small covers of computationally difficult set covering problems that arise in computing the 1-width of incidence matrices of Steiner triple systems. Using a parallel implementation of the BRKGA, we compute improved covers for the two largest instances in a standard set of test problems used …
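
A minimal sketch of the biased crossover that gives a BRKGA its name is shown below: each offspring gene is inherited from the elite parent with a fixed probability rho > 0.5, which biases the search toward elite solutions. This is illustrative only; the decoder that maps random keys to set covers is problem specific and not shown, and the parameter name rho is an assumption.

    # Minimal BRKGA-style biased crossover sketch (illustrative only).
    # Chromosomes are vectors of random keys in [0, 1]; a problem-specific
    # decoder (not shown) would map the keys to a cover and evaluate it.
    import numpy as np

    def biased_crossover(elite, non_elite, rho=0.7, rng=None):
        """Each gene is taken from the elite parent with probability rho."""
        rng = rng or np.random.default_rng()
        take_elite = rng.random(elite.size) < rho
        return np.where(take_elite, elite, non_elite)

    # Usage: combine an elite and a non-elite chromosome of 10 random keys
    rng = np.random.default_rng(0)
    child = biased_crossover(rng.random(10), rng.random(10), rho=0.7, rng=rng)
    print(child)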

On the parallel solution of dense saddle-point linear systems arising in stochastic programming

We present a novel approach for solving dense saddle-point linear systems in a distributed-memory environment. This work is motivated by an application in stochastic optimization problems with recourse, but the proposed approach can be used for a large family of dense saddle-point systems, in particular those arising in convex programming. Although stochastic optimization problems have …
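
The saddle-point systems in question have the generic symmetric indefinite block form

    \begin{pmatrix} Q & A^\top \\ A & 0 \end{pmatrix}
    \begin{pmatrix} x \\ y \end{pmatrix}
    =
    \begin{pmatrix} r_x \\ r_y \end{pmatrix},

with Q symmetric positive semidefinite, as arises for instance in the KKT systems of convex programs; the dense case is what the distributed-memory approach above addresses.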

A preconditioning technique for Schur complement systems arising in stochastic optimization

Deterministic sample average approximations of stochastic programming problems with recourse are suitable for a scenario-based, treelike parallelization with interior-point methods and a Schur complement mechanism. However, the direct linear solves involving the Schur complement matrix are expensive, and adversely affect the scalability of this approach. In this paper we propose a stochastic preconditioner to address …
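
In generic notation (assumed here for illustration), the per-scenario elimination behind this approach yields a first-stage Schur complement of the form

    S = C - \sum_{s=1}^{S} B_s^\top A_s^{-1} B_s,

where A_s is the KKT block of scenario s and B_s its coupling to the first stage; S is typically dense, and the direct solves with it are the expensive step that the proposed stochastic preconditioner targets.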

Exploiting run time distributions to compare sequential and parallel stochastic local search algorithms

Run time distributions or time-to-target plots are very useful tools to characterize the running times of stochastic algorithms for combinatorial optimization. We further explore run time distributions and describe a new tool to compare two algorithms based on stochastic local search. For the case where the running times of both algorithms fit exponential distributions, we …
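
As a small worked example of the exponential case mentioned above: if the times-to-target of two algorithms are independent with X_1 \sim \mathrm{Exp}(\lambda_1) and X_2 \sim \mathrm{Exp}(\lambda_2), then

    P(X_1 \le X_2) = \frac{\lambda_1}{\lambda_1 + \lambda_2},

so the probability that the first algorithm reaches the target solution value before the second has a closed form once the two run time distributions have been fitted.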

Python Optimization Modeling Objects (Pyomo)

We describe Pyomo, an open source tool for modeling optimization applications in Python. Pyomo can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, but Pyomo’s modeling objects are …
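
A minimal Pyomo sketch is shown below: a two-variable linear program defined as a concrete model and handed to a standard solver. It assumes the GLPK solver is installed locally; any solver accepted by SolverFactory can be substituted.

    # Minimal Pyomo example: a two-variable LP built as a concrete model.
    # Assumes the 'glpk' solver is installed; substitute any available solver.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, maximize, SolverFactory, value)

    model = ConcreteModel()
    model.x = Var(within=NonNegativeReals)
    model.y = Var(within=NonNegativeReals)
    model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
    model.capacity = Constraint(expr=model.x + model.y <= 4)
    model.limit = Constraint(expr=model.x <= 3)

    SolverFactory('glpk').solve(model)
    print(value(model.x), value(model.y), value(model.profit))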