Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server

This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate and give explicit expressions for step-size choices …
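
At its core, such a scheme maintains the most recently computed gradient of each loss component and applies the proximal operator of the regularizer to a step along the aggregate of those gradients. Below is a minimal serial sketch of this proximal incremental aggregated gradient pattern, assuming user-supplied oracles grad_i and prox_r (names are illustrative; the paper's contribution is the asynchronous parameter-server execution and its analysis, which the sketch omits):

import numpy as np

def piag(grad_i, prox_r, x0, n, step, iters=1000):
    """Proximal incremental aggregated gradient sketch (serial).

    grad_i(i, x): gradient of the i-th loss component at x.
    prox_r(v, t): proximal operator of the regularizer with parameter t.
    """
    x = x0.copy()
    g = np.array([grad_i(i, x) for i in range(n)])  # latest gradient per component
    agg = g.sum(axis=0)
    for k in range(iters):
        i = k % n                        # workers would pick i asynchronously
        gi = grad_i(i, x)
        agg += gi - g[i]                 # refresh the aggregate cheaply
        g[i] = gi
        x = prox_r(x - (step / n) * agg, step)
    return x

With prox_r set to soft-thresholding this covers l1-regularized losses, and a projection onto a convex set plays the same role for constrained problems.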

R-Linear Convergence of Limited Memory Steepest Descent

The limited memory steepest descent method (LMSD) proposed by Fletcher is an extension of the Barzilai-Borwein “two-point step size” strategy for steepest descent methods for solving unconstrained optimization problems. It is known that the Barzilai-Borwein strategy yields a method with an R-linear rate of convergence when it is employed to minimize a strongly convex quadratic. …
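
For reference, the Barzilai-Borwein step size is computed from the most recent step and gradient difference; a minimal sketch of the two-point strategy (BB1 variant), which LMSD generalizes by retaining a short history of past gradients:

import numpy as np

def bb_gradient(grad, x0, iters=200, alpha0=1e-4):
    """Steepest descent with the Barzilai-Borwein two-point step size (BB1)."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g      # last step and gradient difference
        alpha = (s @ s) / (s @ y)        # BB1 step; s @ y > 0 for strongly convex quadratics
        x, g = x_new, g_new
    return x

On a strongly convex quadratic these iterates converge R-linearly, which is the property the paper establishes for LMSD.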

Optimized choice of parameters in interior-point methods for linear programming

In this work, we propose a predictor-corrector interior point method for linear programming in a primal-dual context, where the next iterate is chosen by the minimization of a polynomial merit function of three variables: the first is the steplength, the second defines the central path and the third models the weight of a corrector direction. …
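
The parameter selection can be pictured as a small three-dimensional minimization carried out at each iteration. A hedged sketch, with a made-up polynomial standing in for the paper's merit function (choose_parameters and the grid ranges are illustrative, not the paper's actual procedure):

import itertools
import numpy as np

def choose_parameters(merit, alphas, sigmas, omegas):
    """Return the (steplength, centering, corrector weight) triple that
    minimizes the given merit function over a grid."""
    return min(itertools.product(alphas, sigmas, omegas),
               key=lambda p: merit(*p))

# Illustrative use with a made-up polynomial merit:
merit = lambda a, s, w: (1 - a)**2 + a * s**2 + a * (w - 0.5)**2
best = choose_parameters(merit,
                         np.linspace(0.1, 1.0, 10),
                         np.linspace(0.0, 1.0, 11),
                         np.linspace(0.0, 1.0, 11))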

Can linear superiorization be useful for linear optimization problems?

Linear superiorization considers linear programming problems, but instead of attempting to solve them with linear optimization methods, it employs perturbation-resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does linear superiorization provide a feasible point whose linear …
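
A common instantiation uses sequential halfspace projections as the perturbation-resilient feasibility-seeking algorithm, interleaved with summable steps against the objective gradient. A minimal sketch under those assumptions (the function name and parameters are illustrative):

import numpy as np

def linsup(A, b, c, x0, sweeps=500, beta0=1.0, rho=0.99):
    """Linear superiorization sketch for minimizing c @ x subject to A @ x <= b.

    Sequential halfspace projections do the feasibility seeking; the
    perturbations step against c with summable (geometric) step sizes."""
    x = x0.astype(float)
    d = -c / np.linalg.norm(c)
    beta = beta0
    for _ in range(sweeps):
        x = x + beta * d                     # objective-reducing perturbation
        beta *= rho                          # summability preserves resilience
        for a_i, b_i in zip(A, b):           # one projection sweep
            viol = a_i @ x - b_i
            if viol > 0:
                x = x - (viol / (a_i @ a_i)) * a_i
    return x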

Linear superiorization for infeasible linear programming

Linear superiorization (abbreviated: LinSup) considers linear programming (LP) problems wherein the constraints as well as the objective function are linear. It allows to steer the iterates of a feasibility-seeking iterative process toward feasible points that have lower (not necessarily minimal) values of the objective function than points that would have been reached by the same … Read more
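
When the LP is infeasible, the projection sweeps cannot drive the constraint violation to zero, so it helps to monitor a proximity measure alongside the objective; a small fragment in the notation of the linsup sketch above (again illustrative):

import numpy as np

def proximity(A, b, x):
    """Sum of squared halfspace violations for A @ x <= b; stays bounded
    away from zero along the iterates when the LP is infeasible."""
    r = np.maximum(A @ x - b, 0.0)
    return r @ r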

An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. In particular, the proposed method enables …
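
The acceptance mechanism can be sketched as a two-dimensional filter over pairs of augmented Lagrangian value and constraint violation: a trial step is accepted if it sufficiently improves one of the two measures against every stored pair. A minimal sketch, with the margin gamma and the envelope test chosen for illustration (the paper's exact acceptance conditions may differ):

def acceptable(filter_set, phi, theta, gamma=1e-3):
    """Filter test on pairs (phi, theta) = (augmented Lagrangian value,
    constraint violation): a trial point is acceptable if it sufficiently
    improves one of the two measures against every stored pair."""
    return all(phi <= p - gamma * t or theta <= (1 - gamma) * t
               for (p, t) in filter_set)

def update_filter(filter_set, phi, theta):
    """Insert an accepted pair and discard the pairs it dominates."""
    kept = [(p, t) for (p, t) in filter_set if p < phi or t < theta]
    return kept + [(phi, theta)]

Storing only scalar pairs keeps the memory footprint small, which matches the embedded-platform motivation.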

Block BFGS Methods

We introduce a quasi-Newton method with block updates called Block BFGS. We show that this method, performed with inexact Armijo-Wolfe line searches, converges globally and superlinearly under the same convexity assumptions as BFGS. We also show that Block BFGS is globally convergent to a stationary point when applied to non-convex functions with bounded Hessian, and …
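
The block update replaces the classical rank-two BFGS correction with a correction built from a matrix S of recent steps and a matrix Y of the corresponding gradient differences. A sketch of one such update of the Hessian approximation, assuming S.T @ Y is positive definite (q = 1 recovers the classical BFGS formula):

import numpy as np

def block_bfgs_update(B, S, Y):
    """One Block BFGS update of the Hessian approximation B.

    S: n-by-q matrix of recent steps; Y: n-by-q matrix of the matching
    gradient differences."""
    BS = B @ S
    return (B
            - BS @ np.linalg.solve(S.T @ BS, BS.T)   # remove curvature along S
            + Y @ np.linalg.solve(S.T @ Y, Y.T))     # install observed curvature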

A New First-order Algorithmic Framework for Optimization Problems with Orthogonality Constraints

In this paper, we consider a class of optimization problems with orthogonality constraints, whose feasible region is the Stiefel manifold. Our new framework combines a function value reduction step with a correction step. Unlike existing approaches, the function value reduction step of our algorithmic framework searches along the standard Euclidean …
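
The two-step pattern can be illustrated with a plain Euclidean gradient step followed by a correction that restores orthogonality; here the correction is a QR-based orthonormalization, chosen for brevity rather than taken from the paper:

import numpy as np

def reduce_then_correct(grad, X, step=1e-2):
    """One iteration of the reduction/correction pattern on the Stiefel
    manifold {X : X.T @ X = I}."""
    Y = X - step * grad(X)       # function value reduction in Euclidean space
    Q, _ = np.linalg.qr(Y)       # correction: restore X.T @ X = I
    return Q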

Exact and Inexact Subsampled Newton Methods for Optimization

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling. We first consider Newton-like methods that employ these approximations and discuss how to coordinate the accuracy in the gradient and Hessian to yield a superlinear rate of convergence in expectation. The second part of …
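
The exact variant of such a method draws independent subsamples for the gradient and the Hessian and takes a Newton step on the subsampled model. A minimal sketch with per-sample oracles grad_i and hess_i (names and subsample handling are illustrative); an inexact variant would solve the Newton system only approximately, e.g. with conjugate gradient iterations:

import numpy as np

def subsampled_newton(grad_i, hess_i, x0, n, s_g, s_h, iters=50, rng=None):
    """Exact subsampled Newton sketch; s_g and s_h are the gradient and
    Hessian subsample sizes whose coordination governs the convergence rate."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    for _ in range(iters):
        Sg = rng.choice(n, size=s_g, replace=False)
        Sh = rng.choice(n, size=s_h, replace=False)
        g = sum(grad_i(i, x) for i in Sg) / s_g
        H = sum(hess_i(i, x) for i in Sh) / s_h
        x = x - np.linalg.solve(H, g)    # inexact variants solve this approximately
    return x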

A recursive semi-smooth Newton method for linear complementarity problems

A primal feasible active set method is presented for finding the unique solution of a Linear Complementarity Problem (LCP) with a P-matrix, which extends the globally convergent active set method for strictly convex quadratic problems with simple bounds proposed by [P. Hungerlaender and F. Rendl. A feasible active set method for strictly convex problems with …
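
For the reformulation F(x) = min(x, Mx + q), a semi-smooth Newton iteration picks, in each component, either an identity row or a row of M for the generalized Jacobian, which can be read as an active set choice. A minimal sketch under that reformulation (the paper's recursive method adds safeguards that this omits):

import numpy as np

def lcp_semismooth_newton(M, q, iters=50, tol=1e-10):
    """Semi-smooth Newton sketch for the LCP: find x >= 0 with
    M @ x + q >= 0 and x @ (M @ x + q) = 0, via F(x) = min(x, M @ x + q) = 0
    (the solution is unique when M is a P-matrix)."""
    n = len(q)
    x = np.zeros(n)
    I = np.eye(n)
    for _ in range(iters):
        w = M @ x + q
        F = np.minimum(x, w)
        if np.linalg.norm(F) < tol:
            break
        # generalized Jacobian: identity rows where x attains the min,
        # rows of M where w does -- an active set choice
        J = np.where((x <= w)[:, None], I, M)
        x = x - np.linalg.solve(J, F)
    return x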