Stochastic algorithms with geometric step decay converge linearly on sharp functions

Stochastic (sub)gradient methods require careful tuning of the step-size schedule to perform well in practice. Classical tuning strategies decay the step size polynomially and lead to optimal sublinear rates on (strongly) convex problems. An alternative schedule, popular in nonconvex optimization, is geometric step decay, which halves the step size after every few epochs. In … Read more
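
For concreteness, the schedule is easy to state in code. Below is a minimal Python sketch of stochastic subgradient descent with geometric step decay; all names and parameter values (`grad`, `lr0`, `decay`, `epoch_len`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sgd_geometric_decay(grad, x0, lr0=1.0, decay=0.5, epoch_len=100, n_epochs=20):
    """Stochastic (sub)gradient descent with geometric step decay:
    the step size is multiplied by `decay` (halved, for decay=0.5)
    after every epoch of `epoch_len` iterations.  `grad(x)` is assumed
    to return a stochastic (sub)gradient of the objective at x."""
    x, lr = np.asarray(x0, dtype=float), lr0
    for _ in range(n_epochs):
        for _ in range(epoch_len):
            x = x - lr * grad(x)
        lr *= decay  # geometric decay kicks in once per epoch
    return x

# Toy run on the sharp function f(x) = ||x||_1, whose subgradient is sign(x).
rng = np.random.default_rng(0)
x_hat = sgd_geometric_decay(lambda x: np.sign(x) + 0.1 * rng.standard_normal(x.shape),
                            x0=np.ones(5))
```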

The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM

We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among … Read more

The Asynchronous PALM Algorithm for Nonsmooth Nonconvex Problems

We introduce the Asynchronous PALM algorithm, a new extension of the Proximal Alternating Linearized Minimization (PALM) algorithm for solving nonconvex, nonsmooth optimization problems. As in PALM, each step of the Asynchronous PALM algorithm updates a single block of coordinates; unlike PALM, however, it eliminates the need for sequential updates … Read more
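
To make the block-coordinate structure concrete, here is a hedged Python sketch of the sequential PALM iteration on a model problem min_{x,y} f(x) + g(y) + H(x,y) with f = g = ||.||_1; the asynchronous variant removes the requirement that the block updates happen in this fixed order. The names (`prox_l1`, `grad_H_x`, `cx`, ...) are assumptions for illustration, not the paper's code.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def palm(grad_H_x, grad_H_y, x, y, cx, cy, n_iters=500):
    """Sequential PALM sketch for min f(x) + g(y) + H(x, y) with
    f = g = ||.||_1.  Each iteration takes one proximal-linearized
    step per block; cx, cy play the role of Lipschitz constants of
    the partial gradients of H."""
    for _ in range(n_iters):
        x = prox_l1(x - grad_H_x(x, y) / cx, 1.0 / cx)  # update block x
        y = prox_l1(y - grad_H_y(x, y) / cy, 1.0 / cy)  # update block y, using new x
    return x, y

# Example coupling: H(x, y) = 0.5 * ||x - y||^2, so grad_H_x = x - y and
# grad_H_y = y - x, with cx = cy = 1.
# x, y = palm(lambda x, y: x - y, lambda x, y: y - x, np.ones(5), -np.ones(5), 1.0, 1.0)
```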

SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm

We introduce the Stochastic Monotone Aggregated Root-Finding (SMART) algorithm, a new randomized operator-splitting scheme for finding roots of finite sums of operators. The algorithm is similar to the growing class of incremental aggregated gradient algorithms, which minimize finite sums of functions; the difference is that we replace gradients of functions with black boxes called operators, which … Read more
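
To illustrate what replacing gradients with operator black boxes looks like, here is a simplified, SAGA-flavored sketch of an incremental aggregated iteration for finding a root of (1/n) * sum_i S_i(x). It conveys the flavor of the scheme but is not the exact SMART iteration, and all names (`ops`, `step`, ...) are assumptions.

```python
import numpy as np

def incremental_aggregated_root(ops, x0, step=0.1, n_iters=5000, seed=0):
    """Seek x with (1/n) * sum_i S_i(x) = 0, where each S_i is a
    black-box operator.  A table of past operator evaluations is kept,
    and each iteration refreshes one stale entry, SAGA-style."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = len(ops)
    table = np.array([S(x) for S in ops])        # stored operator evaluations
    avg = table.mean(axis=0)
    for _ in range(n_iters):
        i = rng.integers(n)                      # sample one operator
        new_i = ops[i](x)
        x = x - step * (new_i - table[i] + avg)  # aggregated update
        avg = avg + (new_i - table[i]) / n       # refresh the running average
        table[i] = new_i
    return x

# Toy example: monotone affine operators S_i(x) = x - b_i, whose average
# vanishes at x = mean(b_i).
# bs = [np.array([1.0]), np.array([3.0])]
# incremental_aggregated_root([lambda x, b=b: x - b for b in bs], np.zeros(1))
```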

A Python/C library for bound-constrained global optimization with continuous GRASP

This paper describes libcgrpp, a GNU-style dynamic shared Python/C library of the continuous greedy randomized adaptive search procedure (C-GRASP) for bound-constrained global optimization. C-GRASP is an extension of the GRASP metaheuristic (Feo and Resende, 1989). After a brief introduction to C-GRASP, we show how to download, install, configure, and use the library through an … Read more

Solving systems of nonlinear equations with continuous GRASP

A method for finding all roots of a system of nonlinear equations is described. Our method makes use of C-GRASP, a recently proposed continuous global optimization heuristic. Given a nonlinear system, we solve a corresponding adaptively modified global optimization problem multiple times, each time using C-GRASP, with areas of repulsion around roots that have already … Read more
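
The repulsion idea can be made concrete as a merit function: minimize ||F(x)||^2, but inflate the objective inside a ball around each root already found, so the next C-GRASP run is driven toward new roots. The sketch below is an illustration of that idea only; the `radius` and `penalty` parameters are assumptions, not the paper's construction.

```python
import numpy as np

def merit_with_repulsion(F, roots_found, radius=0.1, penalty=1e6):
    """Return phi(x) = ||F(x)||^2, inflated by `penalty` whenever x lies
    inside an area of repulsion of the given radius around a known root."""
    def phi(x):
        val = float(np.sum(np.asarray(F(x)) ** 2))
        for r in roots_found:
            if np.linalg.norm(x - r) < radius:
                val += penalty  # repel the search from roots already found
        return val
    return phi

# Usage: unit circle intersected with the line x0 = x1.
# F = lambda x: [x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]]
# phi = merit_with_repulsion(F, roots_found=[np.array([0.7071, 0.7071])])
# Minimizing phi now favors the second root near (-0.7071, -0.7071).
```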

Speeding up continuous GRASP

Continuous GRASP (C-GRASP) is a stochastic local search metaheuristic for finding cost-efficient solutions to continuous global optimization problems subject to box constraints (Hirsch et al., 2006). Like a greedy randomized adaptive search procedure (GRASP), a C-GRASP is a multi-start procedure where a starting solution for local improvement is constructed in a greedy randomized fashion. In … Read more

Global optimization by continuous GRASP

We introduce a novel global optimization method called Continuous GRASP (C-GRASP) which extends Feo and Resende’s greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is simple to implement, is widely applicable, and does not make use of derivative information, thus … Read more
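
As a rough illustration of this multi-start, derivative-free structure, here is a minimal Python skeleton. It is not the published C-GRASP procedure (which interleaves a greedy randomized construction phase with a local phase on an adaptive grid); it is a sketch under those simplifying assumptions, with illustrative names throughout.

```python
import numpy as np

def multistart_search(f, lower, upper, n_starts=20, n_local=200, step=0.05, seed=0):
    """Multi-start skeleton: draw a random start in the box, then run a
    derivative-free local-improvement phase that keeps only descents."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x = rng.uniform(lower, upper)            # randomized starting solution
        fx = f(x)
        for _ in range(n_local):                 # local improvement, no derivatives
            cand = np.clip(x + step * (upper - lower) * rng.standard_normal(x.shape),
                           lower, upper)
            fc = f(cand)
            if fc < fx:                          # accept only improving moves
                x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: minimize a shifted sphere over the box [-5, 5]^3.
# x, fx = multistart_search(lambda z: np.sum((z - 1.0) ** 2), [-5.0]*3, [5.0]*3)
```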