Fast implementation for semidefinite programs with positive matrix completion

Solving semidefinite programs (SDPs) quickly is the key to managing various mathematical optimization problems in practical time. The matrix-completion primal-dual interior-point method (MC-PDIPM) exploits the structural sparsity of the input SDP by factorizing the variable matrices, thereby reducing the computation time. In this paper, we propose a new factorization based on the … Read more

PEBBL: An Object-Oriented Framework for Scalable Parallel Branch and Bound

PEBBL is a C++ class library implementing the underlying operations needed to support a wide variety of branch-and-bound algorithms in a message-passing parallel computing environment. Deriving application-specific classes from PEBBL, one may create parallel branch-and-bound applications through a process focused on the unique aspects of the application, while relying on PEBBL for generic aspects of … Read more
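To make the derive-and-override pattern concrete, here is a minimal serial branch-and-bound skeleton in Python. It is only an illustration of the idea described above, not PEBBL's actual C++ interface: the class and method names (Subproblem, bound, branch, incumbent_value, KnapsackNode) are hypothetical, and the parallel, message-passing machinery that PEBBL provides is omitted entirely.

    # Minimal serial branch-and-bound skeleton (illustrative only; PEBBL's real
    # C++ interface differs).  An application derives a subproblem class and
    # overrides the bounding/branching hooks; the generic search loop is reused.
    import heapq
    from abc import ABC, abstractmethod

    class Subproblem(ABC):
        @abstractmethod
        def bound(self): ...            # optimistic bound (here: upper bound, maximizing)
        @abstractmethod
        def branch(self): ...           # yield child subproblems
        @abstractmethod
        def incumbent_value(self): ...  # feasible value for this node, or None

    def branch_and_bound(root):
        best_val, best_node = float("-inf"), None
        heap = [(-root.bound(), 0, root)]          # best-first on the bound
        counter = 1
        while heap:
            neg_bound, _, node = heapq.heappop(heap)
            if -neg_bound <= best_val:
                continue                            # pruned by the incumbent
            val = node.incumbent_value()
            if val is not None and val > best_val:
                best_val, best_node = val, node
            for child in node.branch():
                if child.bound() > best_val:
                    heapq.heappush(heap, (-child.bound(), counter, child))
                    counter += 1
        return best_val, best_node

    # The "application-specific" part: a toy 0-1 knapsack node.
    class KnapsackNode(Subproblem):
        def __init__(self, values, weights, capacity, fixed=()):
            self.v, self.w, self.c, self.fixed = values, weights, capacity, tuple(fixed)
        def _used(self):
            return sum(wi for wi, xi in zip(self.w, self.fixed) if xi)
        def incumbent_value(self):
            if len(self.fixed) < len(self.v) or self._used() > self.c:
                return None
            return sum(vi for vi, xi in zip(self.v, self.fixed) if xi)
        def bound(self):
            if self._used() > self.c:
                return float("-inf")
            fixed_val = sum(vi for vi, xi in zip(self.v, self.fixed) if xi)
            return fixed_val + sum(self.v[len(self.fixed):])   # crude optimistic bound
        def branch(self):
            if len(self.fixed) < len(self.v):
                for xi in (0, 1):
                    yield KnapsackNode(self.v, self.w, self.c, self.fixed + (xi,))

    if __name__ == "__main__":
        root = KnapsackNode(values=[6, 5, 4], weights=[3, 2, 2], capacity=4)
        print(branch_and_bound(root)[0])   # optimal value: 9 (items 2 and 3)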

Smooth minimization of nonsmooth functions with parallel coordinate descent methods

We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth convex function and a separable convex function. The problem class includes as a special case L1-regularized L1 regression and the minimization of the exponential loss (“AdaBoost problem”). We assume the input data defining the loss function is contained … Read more
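The following sketch illustrates the smoothing idea in a simple serial setting, not the authors' parallel method: the nonsmooth loss ||Ax - b||_1 is replaced by a Huber-type smooth approximation with parameter mu, and randomized coordinate descent with per-coordinate soft-thresholding steps handles the L1 regularizer. All data, parameter values, and function names are illustrative.

    # Serial sketch only: Huber smoothing of the L1 loss plus randomized
    # coordinate descent with prox (soft-threshold) steps for the L1 regularizer.
    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * max(abs(z) - t, 0.0)

    def smoothed_l1_regression(A, b, lam, mu=0.1, iters=20000, seed=0):
        m, n = A.shape
        rng = np.random.default_rng(seed)
        x = np.zeros(n)
        r = A @ x - b                                # residual, kept up to date
        L = (A ** 2).sum(axis=0) / mu                # coordinate Lipschitz constants
        for _ in range(iters):
            j = rng.integers(n)                      # pick a random coordinate
            g = A[:, j] @ np.clip(r / mu, -1.0, 1.0) # gradient of the smoothed loss
            new_xj = soft_threshold(x[j] - g / L[j], lam / L[j])
            r += (new_xj - x[j]) * A[:, j]
            x[j] = new_xj
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((100, 20))
        x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
        b = A @ x_true + 0.01 * rng.standard_normal(100)
        print(np.round(smoothed_l1_regression(A, b, lam=0.1)[:5], 2))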

A scenario decomposition algorithm for 0-1 stochastic programs

We propose a scenario decomposition algorithm for stochastic 0-1 programs. The algorithm recovers an optimal solution by iteratively exploring and cutting off candidate solutions obtained from solving scenario subproblems. The scheme is applicable to quite general problem structures and can be implemented in a distributed framework. Illustrative computational results on standard two-stage stochastic integer programming and … Read more
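A brute-force toy version of the scheme just described might look as follows: scenario subproblems are solved independently, the resulting candidate solutions are evaluated in the full objective to update the incumbent, and the candidates are then cut off before the next round. Real implementations would solve the subproblems with a MILP solver and distribute them across processes; the data and the small knapsack-style feasible set here are purely illustrative.

    # Toy scenario decomposition for a 0-1 stochastic program with a shared
    # deterministic feasible set {x binary : weights @ x <= capacity}.
    import itertools
    import numpy as np

    def scenario_decomposition(costs, probs, weights, capacity, tol=1e-9):
        # costs: (S, n) per-scenario cost vectors; objective is sum_s probs[s] * costs[s] @ x.
        S, n = costs.shape
        feasible = [x for x in itertools.product((0, 1), repeat=n)
                    if np.dot(weights, x) <= capacity]
        def full_obj(x):
            return float(probs @ (costs @ np.asarray(x)))
        excluded, ub, best = set(), np.inf, None
        while True:
            remaining = [x for x in feasible if x not in excluded]
            if not remaining:
                return best, ub                    # everything has been evaluated
            candidates, lb = set(), 0.0
            for s in range(S):                     # scenario subproblems (brute force here)
                x_s = min(remaining, key=lambda x: np.dot(costs[s], x))
                lb += probs[s] * np.dot(costs[s], x_s)
                candidates.add(x_s)
            if lb >= ub - tol:
                return best, ub                    # lower bound meets the incumbent: optimal
            for x in candidates:                   # evaluate candidates in the full objective
                val = full_obj(x)
                if val < ub:
                    ub, best = val, x
            excluded |= candidates                 # cut off the evaluated candidates

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        costs = rng.normal(size=(4, 6))            # 4 scenarios, 6 binary variables
        probs = np.full(4, 0.25)
        weights = rng.integers(1, 4, size=6)
        best, val = scenario_decomposition(costs, probs, weights, capacity=6)
        print(best, round(val, 3))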

A Parallel Bundle Framework for Asynchronous Subspace Optimisation of Nonsmooth Convex Functions

An algorithmic framework is presented for optimising general convex functions by non-synchronised parallel processes. Each process greedily picks a suitable adaptive subset of coordinates and runs a bundle method on the corresponding restricted problem, stopping whenever a descent step is encountered or the predicted decrease is sufficiently reduced. No prior knowledge of the dependencies between … Read more
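A serial toy simulation of the subspace idea is sketched below, with a few plain subgradient steps standing in for the bundle method and a simple function-value comparison standing in for the descent test; the asynchronous processes and the adaptive choice of coordinate subsets are not reproduced. The nonsmooth test function and all parameters are illustrative.

    # Serial toy: pick a random coordinate subset, take restricted subgradient
    # steps (a stand-in for a bundle method), accept the trial point on descent.
    import numpy as np

    def f(x, A, b, lam):
        return np.abs(A @ x - b).sum() + lam * np.abs(x).sum()

    def subgrad(x, A, b, lam):
        return A.T @ np.sign(A @ x - b) + lam * np.sign(x)

    def subspace_descent(A, b, lam=0.1, block=5, outer=200, inner=10, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        x = np.zeros(n)
        fx = f(x, A, b, lam)
        for _ in range(outer):
            J = rng.choice(n, size=block, replace=False)   # coordinate subset
            y = x.copy()
            fy_best, y_best = fx, x
            for k in range(1, inner + 1):
                g = subgrad(y, A, b, lam)
                y[J] -= (0.01 / k) * g[J]                  # restricted subgradient step
                fy = f(y, A, b, lam)
                if fy < fy_best:
                    fy_best, y_best = fy, y.copy()
                    break                                  # stop at the first descent step
            if fy_best < fx:                               # accept the trial point only on descent
                x, fx = y_best, fy_best
        return x, fx

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 20))
        b = A @ rng.standard_normal(20)
        x, fx = subspace_descent(A, b)
        print(round(fx, 3))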

Using diversification, communication and parallelism to solve mixed-integer linear programs

The performance variability of modern mixed-integer programming solvers, and possible ways of exploiting this phenomenon, present an interesting opportunity in the development of algorithms for solving mixed-integer linear programs (MILPs). We propose a framework that uses multiple branch-and-bound trees to solve a MILP in parallel while allowing the trees to share information. We present computational results on … Read more

Trace-Penalty Minimization for Large-scale Eigenspace Computation

The Rayleigh-Ritz (RR) procedure, including orthogonalization, constitutes a major bottleneck in computing relatively high-dimensional eigenspaces of large sparse matrices. Although the operations involved in RR steps can be parallelized to a certain level, their parallel scalability, which is limited by some inherent sequential steps, is lower than that of dense matrix-matrix multiplications. The primary motivation of this … Read more
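As a rough illustration of the trace-penalty idea, under the assumption that the objective is f(X) = 1/2 tr(X^T A X) + (mu/4)||X^T X - I||_F^2 minimized by plain gradient descent followed by a single Rayleigh-Ritz step, here is a dense Python sketch. The penalty parameter, step-size rule, and stopping test are illustrative choices, not the authors' tuned scheme, and no attempt is made to exploit sparsity or parallelism.

    # Gradient-descent sketch of trace-penalty minimization for the k smallest
    # eigenpairs of a symmetric matrix A, with one Rayleigh-Ritz step at the end.
    import numpy as np

    def trace_penalty_eigs(A, k, mu=None, tol=1e-6, max_iter=5000, seed=0):
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        X = np.linalg.qr(rng.standard_normal((n, k)))[0]      # orthonormal start
        if mu is None:
            mu = np.linalg.norm(A, 2) + 1.0                    # safely above lambda_k
        def f(X):
            G = X.T @ X - np.eye(k)
            return 0.5 * np.trace(X.T @ A @ X) + 0.25 * mu * np.linalg.norm(G, "fro") ** 2
        step = 1.0 / (np.linalg.norm(A, 2) + 3.0 * mu)
        for _ in range(max_iter):
            grad = A @ X + mu * X @ (X.T @ X - np.eye(k))
            if np.linalg.norm(grad) <= tol:
                break
            # Backtracking (Armijo) line search keeps the step safe.
            t, fX, g2 = step, f(X), np.linalg.norm(grad) ** 2
            while f(X - t * grad) > fX - 0.5 * t * g2:
                t *= 0.5
            X = X - t * grad
        Q = np.linalg.qr(X)[0]                  # Rayleigh-Ritz on the computed subspace
        w, V = np.linalg.eigh(Q.T @ A @ Q)
        return w, Q @ V

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        B = rng.standard_normal((200, 200))
        A = (B + B.T) / 2
        w, V = trace_penalty_eigs(A, k=5)
        print(np.round(w, 3))                                  # approximate smallest eigenvalues
        print(np.round(np.sort(np.linalg.eigvalsh(A))[:5], 3)) # reference values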

Embedded Online Optimization for Model Predictive Control at Megahertz Rates

Faster, cheaper, and more power-efficient optimization solvers than those currently offered by general-purpose solutions are required to extend the use of model predictive control (MPC) to resource-constrained embedded platforms. We propose several custom computational architectures for different first-order optimization methods that can handle linear-quadratic MPC problems with input, input-rate, and soft state constraints. We … Read more
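The kind of first-order solver such architectures target can be sketched in a few lines of plain Python: a condensed linear-quadratic MPC problem with box input constraints, solved by a projected fast gradient method. Input-rate and soft state constraints are omitted here, and the system matrices, weights, and horizon are illustrative.

    # Condensed input-constrained MPC solved by Nesterov's projected fast gradient method.
    import numpy as np

    def condense(A, B, Q, R, N):
        # Stack the dynamics so the cost becomes 0.5 u'Hu + (F x0)'u over u = (u_0, ..., u_{N-1}).
        n, m = B.shape
        Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
        Gamma = np.zeros((N * n, N * m))
        for i in range(N):
            for j in range(i + 1):
                Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
        Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
        H = Gamma.T @ Qbar @ Gamma + Rbar
        F = Gamma.T @ Qbar @ Phi
        return H, F

    def fast_gradient_box(H, q, lo, hi, iters=200):
        # Minimize 0.5 u'Hu + q'u subject to lo <= u <= hi.
        eigs = np.linalg.eigvalsh(H)
        L, mu = eigs[-1], eigs[0]
        beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
        u = y = np.clip(np.zeros_like(q), lo, hi)
        for _ in range(iters):
            u_next = np.clip(y - (H @ y + q) / L, lo, hi)   # gradient step + projection
            y = u_next + beta * (u_next - u)
            u = u_next
        return u

    if __name__ == "__main__":
        A = np.array([[1.0, 0.1], [0.0, 1.0]])              # double integrator
        B = np.array([[0.005], [0.1]])
        Q, R, N = np.eye(2), 0.1 * np.eye(1), 20
        H, F = condense(A, B, Q, R, N)
        u = fast_gradient_box(H, F @ np.array([1.0, 0.0]), lo=-0.5, hi=0.5)
        print(np.round(u[:5], 3))                           # first few planned inputs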

Parallel Coordinate Descent Methods for Big Data Optimization

In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured by the number of iterations needed … Read more
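A serial simulation of one such parallel update rule is sketched below for an L1-regularized least-squares instance: at each iteration tau coordinates are sampled uniformly at random and updated simultaneously, with the step lengths damped by beta = 1 + (omega - 1)(tau - 1)/(n - 1), where omega is the degree of partial separability (here taken as the largest row support of A). The data and parameters are illustrative, and true parallel execution is not simulated.

    # Serial simulation of parallel (block) coordinate descent on a lasso-type problem.
    import numpy as np

    def parallel_cd_lasso(A, b, lam, tau=4, iters=3000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        L = (A ** 2).sum(axis=0)                     # coordinate Lipschitz constants
        omega = max(int((A[i] != 0).sum()) for i in range(m))
        beta = 1.0 + (omega - 1) * (tau - 1) / max(n - 1, 1)
        x = np.zeros(n)
        r = A @ x - b                                # residual, kept up to date
        for _ in range(iters):
            S = rng.choice(n, size=tau, replace=False)
            g = A[:, S].T @ r                        # coordinate gradients at the old x
            step = 1.0 / (beta * L[S])
            z = x[S] - step * g
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
            r += A[:, S] @ (x_new - x[S])            # apply all tau updates at once
            x[S] = x_new
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 50)) * (rng.random((200, 50)) < 0.1)  # sparse rows
        x_true = np.zeros(50); x_true[:5] = rng.standard_normal(5)
        b = A @ x_true + 0.01 * rng.standard_normal(200)
        print(np.round(parallel_cd_lasso(A, b, lam=0.05)[:8], 2))           # estimated coefficients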

An Inexact Perturbed Path-Following Method for Lagrangian Decomposition in Large-Scale Separable Convex Optimization

This paper studies an inexact perturbed path-following algorithm in the framework of Lagrangian dual decomposition for solving large-scale separable convex programming problems. Unlike the exact versions considered in the literature, we propose to solve the primal subproblems inexactly, up to a given accuracy. This leads to inexactness in the gradient vector and the Hessian … Read more
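To separate the inexact-subproblem idea from the path-following machinery, here is a plain dual-decomposition sketch in Python: a coupling constraint is dualized, each (quadratic) subproblem is solved only approximately by a few gradient steps, and the dual variable is updated with the resulting inexact residual. This is a generic dual gradient scheme, not the authors' perturbed path-following algorithm, and all data are illustrative.

    # Dual decomposition of min sum_i 0.5||x_i - c_i||^2 subject to sum_i x_i = b,
    # with each subproblem solved only approximately.
    import numpy as np

    def solve_subproblem_inexact(c_i, y, inner_steps=3):
        # Approximately minimize 0.5||x - c_i||^2 + y^T x via a few gradient steps.
        x = np.zeros_like(c_i)
        for _ in range(inner_steps):
            x -= 0.9 * (x - c_i + y)           # subproblem gradient step (step < 1/L = 1)
        return x

    def dual_decomposition(C, b, outer=300):
        N, n = C.shape
        y = np.zeros(n)
        for _ in range(outer):
            X = np.array([solve_subproblem_inexact(C[i], y) for i in range(N)])
            residual = X.sum(axis=0) - b       # inexact gradient of the dual function
            y += (1.0 / N) * residual          # dual ascent step
        return X, y

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        C = rng.standard_normal((5, 4))        # 5 subsystems, 4 coupled resources
        b = np.ones(4)
        X, y = dual_decomposition(C, b)
        print(np.round(X.sum(axis=0) - b, 4))  # coupling residual, should be near zero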