Efficient Subgradient Methods for General Convex Optimization

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated, which converges to within a user-specified error of optimality. Feasibility is maintained with a line search at each iteration, avoiding the need for orthogonal projections onto the feasible region … Read more
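
A rough, hedged sketch of this idea in Python (not the paper's algorithm; f_subgrad, g, e0 and the step rule are placeholders I am assuming): take a subgradient step, then line-search along the segment back toward the known strictly feasible point until the constraint g(x) <= 0 holds again.

    import numpy as np

    def feasible_subgradient_step(x, e0, f_subgrad, g, step, shrink=0.5, max_backtracks=50):
        """One subgradient step followed by a line search toward the strictly
        feasible point e0, so the new iterate stays feasible (g <= 0).
        Illustrative sketch only."""
        s = f_subgrad(x)                              # any subgradient of the objective at x
        y = x - step * s / max(np.linalg.norm(s), 1e-12)
        t = 1.0
        for _ in range(max_backtracks):
            candidate = e0 + t * (y - e0)             # point on the segment [e0, y]
            if g(candidate) <= 0:                     # feasible again: accept it
                return candidate
            t *= shrink                               # otherwise pull back toward e0
        return e0                                     # fall back to the strictly feasible point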

The implicit convex feasibility problem and its application to adaptive image denoising

The implicit convex feasibility problem asks for a point in the intersection of a finite family of convex sets, some of which are not explicitly determined but may vary. We develop simultaneous and sequential projection methods capable of handling such problems and demonstrate their applicability to image denoising in a specific medical imaging situation. … Read more
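
For context, a plain sequential projection (POCS-type) sweep for an explicitly given family of convex sets looks roughly like the sketch below; the implicit and simultaneous variants studied in the paper generalize this template. The projector callables and the example sets are my own assumptions.

    import numpy as np

    def sequential_projections(x0, projectors, max_iter=1000, tol=1e-8):
        """Cyclically project onto each convex set via its projector.
        Sequential (POCS-type) sketch, not the paper's implicit-set method."""
        x = x0
        for _ in range(max_iter):
            x_prev = x
            for P in projectors:                      # one sweep over all sets
                x = P(x)
            if np.linalg.norm(x - x_prev) < tol:      # stop when a full sweep barely moves x
                break
        return x

    # Example: intersection of a box and a halfspace in R^2 (hypothetical sets)
    proj_box = lambda z: np.clip(z, -1.0, 1.0)
    a, b = np.array([1.0, 1.0]), 1.0
    proj_halfspace = lambda z: z - (max(a @ z - b, 0.0) / (a @ a)) * a
    x_star = sequential_projections(np.array([3.0, -4.0]), [proj_box, proj_halfspace])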

Optimization Methods for Large-Scale Machine Learning

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of … Read more
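
At the heart of that discussion is the empirical risk minimization problem and the stochastic gradient iteration that samples one term of the risk per step. A minimal sketch follows (the logistic loss and constant step size are my own choices for illustration, not the paper's prescription).

    import numpy as np

    def sgd_logistic(X, y, lr=0.1, epochs=10, seed=0):
        """Plain stochastic gradient descent on the logistic empirical risk
        (1/n) * sum_i log(1 + exp(-y_i * w^T x_i)), with labels y_i in {-1, +1}.
        Minimal sketch with a constant step size."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):              # one randomly ordered example per step
                margin = y[i] * (X[i] @ w)
                grad = -y[i] * X[i] / (1.0 + np.exp(margin))   # gradient of a single loss term
                w -= lr * grad
        return w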

Application of Facial Reduction to $H_\infty$ State Feedback Control Problem

One often encounters numerical difficulties in solving linear matrix inequality (LMI) problems obtained from $H_\infty$ control problems. We discuss the reason from the viewpoint of optimization and provide necessary and sufficient conditions for the LMI problem and its dual not to be strongly feasible. Moreover, we interpret these conditions in terms of the control system. In this analysis, … Read more

Order-based error for managing ensembles of surrogates in derivative-free optimization

We investigate surrogate-assisted strategies for derivative-free optimization using the mesh adaptive direct search (MADS) blackbox optimization algorithm. In particular, we build an ensemble of surrogate models to be used within the search step of MADS, and examine different methods for selecting the best model for the problem at hand. To do so, we introduce … Read more
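
As a hedged illustration of what an order-based criterion can look like (my own simplified definition, not necessarily the one introduced in the paper): score each surrogate by the fraction of already-evaluated point pairs it ranks in the wrong order, and keep the surrogate with the fewest inversions.

    from itertools import combinations

    def order_error(pred, truth):
        """Fraction of point pairs whose ordering under the surrogate's predictions
        disagrees with their ordering under the true blackbox values."""
        pairs = list(combinations(range(len(truth)), 2))
        bad = sum(1 for i, j in pairs
                  if (pred[i] - pred[j]) * (truth[i] - truth[j]) < 0)
        return bad / max(len(pairs), 1)

    def select_surrogate(surrogates, X, y):
        """Pick the ensemble member with the smallest order-based error on the
        cached evaluations (X, y); surrogates are callables x -> predicted value."""
        return min(surrogates, key=lambda s: order_error([s(x) for x in X], y))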

TMAC: A Toolbox of Modern Async-Parallel, Coordinate, Splitting, and Stochastic Methods

TMAC is a toolbox written in C++11 that implements algorithms based on a set of modern methods for large-scale optimization. It covers a variety of optimization problems, which can be smooth or nonsmooth, convex or nonconvex, and constrained or unconstrained. The algorithms implemented in TMAC, such as the coordinate update … Read more

A Multi-step Inertial Forward–Backward Splitting Method for Non-convex Optimization

In this paper, we propose a multi-step inertial Forward–Backward splitting algorithm for minimizing the sum of two possibly nonconvex functions, one of which is proper and lower semi-continuous while the other is differentiable with a Lipschitz continuous gradient. We first prove global convergence of the scheme with the help of the Kurdyka-Łojasiewicz property. Then, when the … Read more
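
A minimal sketch of a two-step inertial forward-backward iteration, assuming the user supplies grad_f (the Lipschitz-gradient part) and prox_g(v, step) (the proximal map of the nonsmooth part); the inertial coefficients below are placeholders, not the values analysed in the paper.

    def multi_step_inertial_fbs(x0, grad_f, prox_g, step, a=(0.3, 0.1), b=(0.3, 0.1), n_iter=500):
        """Forward-backward splitting with inertia built from the last two iterates
        (a 2-step inertial scheme). Illustrative sketch only."""
        xk, xk1, xk2 = x0, x0, x0                     # x_k, x_{k-1}, x_{k-2}
        for _ in range(n_iter):
            # inertial extrapolation points for the backward and forward parts
            ya = xk + a[0] * (xk - xk1) + a[1] * (xk1 - xk2)
            yb = xk + b[0] * (xk - xk1) + b[1] * (xk1 - xk2)
            x_new = prox_g(ya - step * grad_f(yb), step)
            xk2, xk1, xk = xk1, xk, x_new
        return xk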

Local Convergence Properties of Douglas–Rachford and ADMM

The Douglas–Rachford (DR) method and the alternating direction method of multipliers (ADMM) are two proximal splitting algorithms designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local linear convergence behaviour of DR/ADMM when the involved functions are moreover … Read more
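
For reference, the textbook DR iteration for minimizing f + g, written directly in terms of the two proximity operators (a sketch of the standard scheme, not of the local-rate analysis in the paper; prox_f and prox_g are assumed to take a point and the parameter gamma):

    def douglas_rachford(z0, prox_f, prox_g, gamma, n_iter=500):
        """Standard Douglas-Rachford splitting:
            x_k     = prox_{gamma f}(z_k)
            y_k     = prox_{gamma g}(2 x_k - z_k)
            z_{k+1} = z_k + y_k - x_k
        In the convex case the x_k converge to a minimizer of f + g."""
        z = z0
        x = z0
        for _ in range(n_iter):
            x = prox_f(z, gamma)
            y = prox_g(2 * x - z, gamma)
            z = z + (y - x)
        return x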

The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM

We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among … Read more
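
To fix ideas, a single serial PALM-style update is sketched below: pick a coordinate block and take a prox-linear step in that block with a step size tied to an estimate of the block Lipschitz constant. SAPALM's asynchronous and stochastic-noise bookkeeping is deliberately left out, and all callables here are assumptions, not the paper's interface.

    import random

    def palm_block_step(blocks, grad_block, prox_block, block_lipschitz):
        """One prox-linear update of a randomly chosen coordinate block.
        Serial sketch of a PALM-type step; the asynchronous machinery of
        SAPALM is not modelled here."""
        i = random.randrange(len(blocks))             # choose a block uniformly at random
        step = 1.0 / block_lipschitz(i, blocks)       # step from the block Lipschitz estimate
        g_i = grad_block(i, blocks)                   # (possibly stochastic) partial gradient
        blocks[i] = prox_block(i, blocks[i] - step * g_i, step)
        return blocks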

Barzilai-Borwein Step Size for Stochastic Gradient Descent

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size or to tune a fixed step … Read more
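
To convey the flavour of a Barzilai-Borwein step in this setting (a rough sketch under my own simplifications, not the algorithm as specified in the paper), the step can be recomputed from differences of iterates and of averaged stochastic gradients:

    import numpy as np

    def bb_step_size(x_prev, x_curr, g_prev, g_curr, fallback=1e-3):
        """Barzilai-Borwein (BB1) step <s, s> / <s, y> with s = x_curr - x_prev and
        y = g_curr - g_prev. In the stochastic setting x and g would typically be
        per-epoch averages; illustrative sketch only."""
        s = x_curr - x_prev
        y = g_curr - g_prev
        sy = float(s @ y)
        if sy <= 1e-12:                               # guard against non-positive curvature
            return fallback
        return float(s @ s) / sy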