A note on using performance and data profiles for training algorithms

It is shown how the performance and data profile benchmarking tools can be used to improve the performance of algorithms. An illustration for the BFO derivative-free optimizer suggests that the obtained gains are potentially significant. Citation: ACM Transactions on Mathematical Software, 45:2 (2019), Article 20.
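Performance profiles in the Dolan–Moré sense can be computed directly from a solver-by-problem table of run times. A minimal sketch (the solvers, times, and thresholds below are hypothetical):

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profiles.

    times : (n_problems, n_solvers) array of run times (np.inf = failure)
    taus  : iterable of performance-ratio thresholds (tau >= 1)
    Returns an (n_solvers, len(taus)) array whose entry [s, j] is the
    fraction of problems solver s solves within taus[j] times the run
    time of the best solver on that problem.
    """
    times = np.asarray(times, dtype=float)
    ratios = times / times.min(axis=1, keepdims=True)  # ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(times.shape[1])])

# two hypothetical solvers on three problems; inf marks a failed run
times = [[1.0, 2.0],
         [3.0, 3.0],
         [np.inf, 5.0]]
rho = performance_profile(times, taus=[1.0, 2.0])
# rho[1, 1] == 1.0: solver 2 solves every problem within twice the best time
```

Plotting each row of `rho` against `taus` gives the familiar profile curves used to compare solvers at a glance.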

Algorithms for the One-Dimensional Two-Stage Cutting Stock Problem

In this paper, we consider the two-stage extension of the one-dimensional cutting stock problem, which arises when technical requirements prevent large stock rolls from being cut directly to the demanded widths of finished rolls. Therefore, the demands for finished rolls are fulfilled through two subsequent cutting processes, in which the rolls produced in the former are …

Block Coordinate Descent Almost Surely Converges to a Stationary Point Satisfying the Second-order Necessary Condition

Given a non-convex twice continuously differentiable cost function with Lipschitz continuous gradient, we prove that the block coordinate gradient descent, block mirror descent, and proximal block coordinate descent methods all converge to stationary points satisfying the second-order necessary condition, almost surely with random initialization. All our results are ascribed to the center-stable manifold theorem …
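To illustrate the setting, a randomly initialized block coordinate gradient descent sweep can be sketched on a toy nonconvex function. The function, step size, and block partition below are hypothetical; the origin is a strict saddle, which random initialization avoids almost surely:

```python
import numpy as np

# Hypothetical test function: f(x, y) = (x^2 - 1)^2 + (y^2 - 1)^2.
# Its gradient is Lipschitz on bounded sets, and the origin is a strict
# saddle; the second-order stationary points are (+-1, +-1).

def grad_block(z, i):
    # gradient of f with respect to block i (each block is one coordinate)
    return 4.0 * z[i] * (z[i] ** 2 - 1.0)

rng = np.random.default_rng(0)
z = 0.5 * rng.standard_normal(2)      # random initialization
step = 0.05                            # fixed step size, small enough locally

for _ in range(500):
    for i in range(2):                 # cyclic sweep over the two blocks
        z[i] -= step * grad_block(z, i)

# the iterates settle near one of the second-order stationary points (+-1, +-1)
```

With random starts the iterates avoid the saddle at the origin and land at a point where both coordinates have magnitude one, consistent with the almost-sure avoidance result described above.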

Variational inequality formulation for the games with random payoffs

We consider an n-player non-cooperative game with random payoffs and continuous strategy set for each player. The random payoffs of each player are defined using a finite dimensional random vector. We formulate this problem as a chance-constrained game by defining the payoff function of each player using a chance constraint. We first consider the case …

Optimization of Stochastic Problems with Probability Functions via Differential Evolution

Chance-constrained programming, quantile/Value-at-Risk (VaR) optimization, and integral quantile/Conditional Value-at-Risk (CVaR) optimization problems, collectively Stochastic Programming Problems with Probability Functions (SPP-PF), are among the most widely studied optimization problems of recent years. As a rule, a real-life SPP-PF is a nonsmooth, nonconvex optimization problem with a complex objective-function geometry. Moreover, often it cannot …
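To make the nonsmoothness concrete, a bare-bones differential evolution (DE/rand/1) loop can be applied to a Monte Carlo estimate of a probability function, which is piecewise constant in the decision variable and hence has no useful gradient. The probability model and all parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.standard_normal(2000)           # fixed Monte Carlo scenarios

def neg_prob(x):
    # sample-average estimate of -P(|x - xi| <= 1): piecewise constant in x,
    # hence nonsmooth, so derivative-free methods such as DE are natural
    return -np.mean(np.abs(x - xi) <= 1.0)

# bare-bones DE/rand/1 on a one-dimensional decision variable
pop = rng.uniform(-3.0, 3.0, size=20)    # initial population
F = 0.7                                  # differential weight
for _ in range(60):                      # generations
    for i in range(pop.size):
        a, b, c = rng.choice(np.delete(np.arange(pop.size), i), 3, replace=False)
        trial = pop[a] + F * (pop[b] - pop[c])   # mutation; crossover is trivial in 1-D
        if neg_prob(trial) <= neg_prob(pop[i]):  # greedy selection
            pop[i] = trial

best = pop[np.argmin([neg_prob(x) for x in pop])]
# best tends to land near 0, where P(|xi| <= 1) is maximal (about 0.68)
```

Greedy selection never worsens an individual, so the population's best probability estimate is monotone nondecreasing even though the objective has no gradient information to offer.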

Convergent Prediction-Correction-based ADMM for multi-block separable convex programming

The direct extension of the classic alternating direction method of multipliers (ADMMe) to the multi-block separable convex optimization problem is not necessarily convergent, though it often performs very well in practice. In order to preserve the numerical advantages of ADMMe and obtain convergence, many modified ADMM variants have been proposed by correcting the output of ADMMe or …

Global Optimisation of Multi-Plant Manganese Alloy Production

This paper studies the problem of multi-plant manganese alloy production. The problem consists of finding the optimal furnace feed of ores, fluxes, coke, and slag that yields output products meeting customer specifications, and of optimally deciding the volume, composition, and allocation of the slag. To solve the problem, a nonlinear pooling problem formulation is …

Approximations to Stochastic Dynamic Programs via Information Relaxation Duality

In the analysis of complex stochastic dynamic programs, we often seek strong theoretical guarantees on the suboptimality of heuristic policies. One technique for obtaining performance bounds is perfect information analysis: this approach provides bounds on the performance of an optimal policy by considering a decision maker who has access to the outcomes of all future …

On the Convergence Rate of the Halpern-Iteration

In this work, we give a tight estimate of the convergence rate of the Halpern iteration for approximating a fixed point of a nonexpansive mapping in a Hilbert space. Specifically, we prove that the norm of the residuals is upper bounded by the distance of the initial iterate to the closest fixed point divided by …
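The iteration in question anchors each step back to the starting point: x_{k+1} = λ x_0 + (1 − λ) T(x_k) with a vanishing anchoring coefficient. A minimal sketch on a hypothetical nonexpansive map (a plane rotation, whose only fixed point is the origin); the angle, starting point, and iteration count are illustrative choices:

```python
import numpy as np

theta = 0.9                               # rotation angle (hypothetical)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    return R @ x                          # rotations are isometries, hence nonexpansive

x0 = np.array([3.0, -1.0])
x = x0.copy()
K = 1000
for k in range(K):
    lam = 1.0 / (k + 2)                   # classical anchoring coefficient
    x = lam * x0 + (1.0 - lam) * T(x)     # Halpern step

residual = np.linalg.norm(x - T(x))       # fixed-point residual, decays like O(1/K)
```

After K = 1000 steps the residual is on the order of ||x0||/K, in line with the O(1/k) residual rate the abstract refers to, whereas plainly iterating T on a rotation would never shrink the residual at all.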

Accelerating block coordinate descent methods with identification strategies

This work is about active set identification strategies aimed at accelerating block-coordinate descent methods (BCDM) applied to large-scale problems. We start by devising an identification function tailored for bound-constrained composite minimization together with an associated version of the BCDM, called Active BCDM, that is also globally convergent. The identification function gives rise to an efficient …