An Adaptive Unified Differential Evolution Algorithm for Global Optimization

In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies used in conventional differential evolution algorithms, this algorithm employs a single equation that unifies several strategies into one expression. It has the virtue of mathematical simplicity and also gives users the flexibility for broader …
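As a rough illustration of the unifying idea, a single mutation equation with tunable weights can recover classic DE strategies as special cases. The functional form and parameter names below are assumptions for illustration, not the paper's exact expression:

```python
import numpy as np

def unified_mutation(pop, i, best, F1, F2, F3, rng):
    """Illustrative unified DE mutation (assumed form):

        v = x_i + F1*(x_best - x_i) + F2*(x_r1 - x_i) + F3*(x_r2 - x_r3)

    F1 = 0, F2 = 1  ->  v = x_r1 + F3*(x_r2 - x_r3)   (DE/rand/1)
    F2 = 0          ->  DE/current-to-best/1
    """
    idx = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    r1, r2, r3 = pop[idx[0]], pop[idx[1]], pop[idx[2]]
    x = pop[i]
    return x + F1 * (best - x) + F2 * (r1 - x) + F3 * (r2 - r3)
```

Adapting the weights online then moves the search between strategies without switching code paths.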

Block stochastic gradient iteration for convex and nonconvex optimization

The stochastic gradient (SG) method can minimize an objective function composed of a large number of differentiable functions, or solve a stochastic optimization problem, to a moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, handles problems with multiple blocks of variables by updating them one at a time; when the blocks …
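A minimal sketch of what one cycle of such a block stochastic gradient iteration can look like; the block partition, sampling scheme, and diminishing step size below are illustrative assumptions, not the paper's exact update:

```python
import numpy as np

def block_sg_cycle(x, blocks, sample_grad, t, a=1.0):
    """One cycle of an (assumed) block stochastic gradient iteration:
    visit the blocks one at a time, each time drawing a fresh stochastic
    gradient at the current iterate and updating only that block."""
    alpha = a / np.sqrt(t + 1)       # diminishing step size (assumption)
    for b in blocks:                 # blocks: list of index arrays
        g = sample_grad(x)           # stochastic gradient oracle
        x[b] -= alpha * g[b]         # update the current block only
    return x
```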

Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. …
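For intuition, here is a bare-bones, non-accelerated SDCA loop for ridge regression, where the coordinate-wise dual maximization has a closed form; the paper's contribution wraps a proximal variant of such inner updates in an accelerating outer iteration. This sketch is an illustration, not the paper's algorithm:

```python
import numpy as np

def sdca_ridge(X, y, lam, epochs=20, seed=0):
    """Plain SDCA for ridge regression: the squared loss admits a
    closed-form coordinate-wise dual update (illustrative only)."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                       # w = X.T @ alpha / (lam * n)
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi = X[i]
            delta = (y[i] - xi @ w - alpha[i]) / (1.0 + xi @ xi / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * xi
    return w
```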

A Penalized Quadratic Convex Reformulation Method for Random Quadratic Unconstrained Binary Optimization

The Quadratic Convex Reformulation (QCR) method is used to solve quadratic unconstrained binary optimization problems. In this method, a semidefinite relaxation is used to reformulate the problem as a convex binary quadratic program, which is then solved with mixed-integer quadratic programming solvers. We extend this method to random quadratic unconstrained binary optimization problems. We develop a …
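The convexification hinges on the identity x_i^2 = x_i for binary x, which lets the diagonal of Q be perturbed without changing the objective on {0,1}^n. QCR chooses the perturbation from the semidefinite relaxation's dual; the sketch below uses a cruder uniform eigenvalue shift just to show the mechanics:

```python
import numpy as np

def convexify(Q, c):
    """Shift the diagonal of Q until it is PSD, compensating in the
    linear term; exact on binary x since x_i**2 == x_i.  QCR instead
    derives a per-coordinate shift from the SDP dual (tighter bound)."""
    u = min(np.linalg.eigvalsh(Q).min(), 0.0)   # most negative eigenvalue
    Q_cvx = Q - u * np.eye(len(Q))              # Q - u*I is PSD
    c_cvx = c + u * np.ones(len(Q))             # x^T (u*I) x = u * sum(x) on binaries
    return Q_cvx, c_cvx
```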

Biased and unbiased random-key genetic algorithms: An experimental analysis

We study the runtime performance of three types of random-key genetic algorithms: the unbiased algorithm of Bean (1994); the biased algorithm of Gonçalves and Resende (2011); and a greedy version of Bean’s algorithm on 12 instances from four types of covering problems: general-cost set covering, Steiner triple covering, general-cost set K-covering, and unit-cost covering by …
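For reference, the mechanics these variants share: a chromosome is a vector of keys in [0,1) decoded by sorting, and the biased variant mates an elite with a non-elite parent, inheriting each key from the elite parent with probability rho > 1/2 (rho = 1/2 recovers the unbiased case). A minimal sketch with assumed parameter names:

```python
import numpy as np

def decode(keys):
    """Random-key decoding: the permutation that sorts the keys."""
    return np.argsort(keys)

def biased_crossover(elite, other, rho=0.7, rng=None):
    """Parameterized uniform crossover: each gene is taken from the
    elite parent with probability rho."""
    rng = rng or np.random.default_rng()
    take_elite = rng.random(len(elite)) < rho
    return np.where(take_elite, elite, other)
```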

Stochastic Optimization over a Pareto Set Associated with a Stochastic Multi-Objective Optimization Problem

We deal with the problem of minimizing the expectation of a real-valued random function over the weakly Pareto or Pareto set associated with a Stochastic Multi-Objective Optimization Problem (SMOP) whose objectives are expectations of random functions. Assuming that the closed forms of these expectations are difficult to obtain, we apply the Sample Average Approximation …
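In symbols, writing the objectives as expectations f_k(x) = E[F_k(x, ξ)] and drawing an i.i.d. sample ξ^1, …, ξ^N, the SAA approach replaces each expectation by its sample average; a schematic rendering, not the paper's exact notation:

```latex
\min_{x}\; \hat f_0^N(x) := \frac{1}{N}\sum_{j=1}^{N} F_0(x,\xi^j)
\quad \text{s.t.} \quad x \in \widehat{\mathcal{P}}_N ,
```

where \(\widehat{\mathcal{P}}_N\) is the (weakly) Pareto set of the sampled objectives \(\hat f_k^N(x) = \frac{1}{N}\sum_{j} F_k(x,\xi^j)\), \(k = 1, \dots, p\).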

Differential Evolution methods based on local searches

In this paper we analyze the behavior of a quite standard Differential Evolution (DE) algorithm applied to the objective function transformed by means of local searches. First, some surprising results are presented concerning the application of this method to standard test functions. We then introduce an application to disk- and sphere-packing problems, two …
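Concretely, the transformation replaces f by f̃(x) = f(LS(x)), the value reached by a local search started at x, so every point in a basin of attraction maps to that basin's minimum. A hedged sketch; the use of SciPy's L-BFGS-B as the local search is an assumption:

```python
from scipy.optimize import minimize

def transformed_objective(f, bounds):
    """Return f_tilde(x) = f(LS(x)): the value found by a local search
    started at x, flattening each basin of attraction to its minimum."""
    def f_tilde(x):
        res = minimize(f, x, method="L-BFGS-B", bounds=bounds)
        return res.fun
    return f_tilde
```

The global DE loop then operates on `f_tilde` exactly as it would on `f`.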

Simulation Optimization for the Stochastic Economic Lot Scheduling Problem with Sequence-Dependent Setup Times

We consider the stochastic economic lot scheduling problem (SELSP) with lost sales and random demand, where switching between products is subject to sequence-dependent setup times. We propose a solution based on simulation optimization, using an iterative two-step procedure that combines global policy search with local search heuristics for the traveling salesman sequencing subproblem. To optimize …
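An assumed skeleton of such an iterative two-step loop, with the policy-search and sequencing heuristics left as user-supplied callbacks (all names here are hypothetical, not the paper's interface):

```python
def optimize_selsp(simulate, policy_search, tsp_search, policy, seq, n_iters=50):
    """Hypothetical two-step loop: alternate a global search over the
    lot-sizing policy with a local (TSP-style) search over the product
    sequence, scoring every candidate by simulation."""
    for _ in range(n_iters):
        policy = policy_search(policy, lambda p: simulate(p, seq))
        seq = tsp_search(seq, lambda s: simulate(policy, s))
    return policy, seq, simulate(policy, seq)
```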

A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. Numerical …
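The memory mechanism can be sketched as follows: keep the most recently seen gradient of each f_i and step along the average of the stored gradients, refreshing one entry per iteration. This is the well-known SAG-style update; the constant step size here is an assumption:

```python
import numpy as np

def sag(grad_i, n, d, x0, alpha, iters, seed=0):
    """SAG-style iteration: y[i] stores the last gradient of f_i seen;
    each step refreshes one y[i] and moves along the average."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    y = np.zeros((n, d))       # gradient memory
    g_sum = np.zeros(d)        # running sum of stored gradients
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(i, x)       # gradient of f_i at the current x
        g_sum += g - y[i]
        y[i] = g
        x -= (alpha / n) * g_sum
    return x
```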

A Probability-Driven Search Algorithm for Solving Multi-Objective Optimization Problems

This paper proposes a new probabilistic algorithm for solving multi-objective optimization problems, the Probability-Driven Search Algorithm. The algorithm uses probabilities to control the search process for Pareto optimal solutions. In particular, we use an absorbing Markov chain to establish the convergence of the algorithm. We test this approach by implementing the algorithm on some benchmark …
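The absorbing-chain argument rests on a standard fact, sketched here in general form rather than the paper's specific construction: if the Pareto optimal solutions are modeled as absorbing states, the chain is absorbed with probability 1, since with the transient states ordered first,

```latex
P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix},
\qquad
P^{k} = \begin{pmatrix} Q^{k} & (I + Q + \cdots + Q^{k-1})R \\ 0 & I \end{pmatrix},
\qquad
Q^{k} \to 0 \ \text{as } k \to \infty,
```

so the probability of remaining among the transient (non-absorbing) states vanishes as k grows.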