A Parallel Line Search Subspace Correction Method for Composite Convex Optimization

In this paper, we investigate a parallel subspace correction framework for composite convex optimization. The variables are first divided into a few blocks based on certain rules. At each iteration, the algorithms solve a suitable subproblem on each block simultaneously, construct a search direction by combining the solutions from all blocks, and then identify a new …
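As a rough illustration of the pattern this abstract describes, the sketch below applies exact per-block minimization to a least-squares objective, sums the block updates into a single search direction, and takes an Armijo backtracking step. The block partition, subproblem solver, and step rule are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def parallel_block_linesearch(A, b, blocks, iters=50, beta=0.5, c=1e-4):
    """Schematic parallel subspace correction for min 0.5*||Ax - b||^2.
    Each block subproblem is solved independently (in parallel in principle),
    the block updates are summed into one search direction, and an Armijo
    backtracking line search picks the step length."""
    x = np.zeros(A.shape[1])
    f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    for _ in range(iters):
        r = A @ x - b
        d = np.zeros_like(x)
        for blk in blocks:                       # independent block subproblems
            # exact minimizer over block blk with the other variables frozen
            d[blk] = np.linalg.lstsq(A[:, blk], -r, rcond=None)[0]
        g = A.T @ r                              # gradient of f at x
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + c * t * (g @ d):  # Armijo backtracking
            t *= beta
        x = x + t * d
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 10)), rng.standard_normal(40)
x = parallel_block_linesearch(A, b, [np.arange(0, 5), np.arange(5, 10)])
```

Since each block update exactly minimizes the objective over its own block, the combined direction is a descent direction, so the backtracking loop terminates.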

Boosting the Feasibility Pump

The Feasibility Pump (FP) has proved to be an effective method for finding feasible solutions to mixed integer programming problems. FP iterates between a rounding procedure and a projection procedure, which together produce a sequence of points alternating between LP-feasible but fractional solutions and integer solutions that are infeasible for the LP relaxation. The process attempts to …
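The basic FP loop is easy to sketch for a 0-1 program: round the current LP point, and if the rounding is not LP-feasible, project back onto the relaxation in the l1 norm, which is a linear program for binary targets. The version below, built on scipy.optimize.linprog, omits the perturbation and anti-cycling safeguards of the full method.

```python
import numpy as np
from scipy.optimize import linprog

def feasibility_pump(c, A_ub, b_ub, max_iter=100, tol=1e-6):
    """Basic FP for min c'x s.t. A_ub x <= b_ub, x in {0,1}^n: alternate
    rounding with an l1-norm projection onto the LP relaxation."""
    n = len(c)
    bounds = [(0, 1)] * n
    x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x  # LP-feasible start
    for _ in range(max_iter):
        x_int = np.round(x)                  # integer, possibly LP-infeasible
        if np.max(np.abs(x - x_int)) <= tol:
            return x_int                     # integer and LP-feasible: done
        # Projection: min sum_j |x_j - x_int_j| over the LP relaxation.
        # For binary targets the l1 distance is linear in x.
        obj = np.where(x_int == 0, 1.0, -1.0)
        x = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    return None                              # gave up (cycling not handled)

A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([-1.0, -1.0])
print(feasibility_pump(c, A, b))
```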

Line search methods with variable sample size for unconstrained optimization

Minimization of an unconstrained objective function given in the form of a mathematical expectation is considered. The Sample Average Approximation (SAA) method transforms the expectation objective function into a real-valued deterministic function using a large sample and thus deals with deterministic function minimization. The main drawback of this approach is its cost. A large sample of the random variable …
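A minimal sketch of the variable-sample-size idea: run an Armijo line search on the current SAA objective and enlarge the sample as the iterate approaches stationarity. The doubling schedule and the toy loss F below are assumptions standing in for the adaptive rules the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)

def F(x, xi):                 # stochastic loss; E_xi F(x, xi) is the true objective
    return 0.5 * (x - xi) ** 2

def grad_F(x, xi):
    return x - xi

def saa_linesearch(x0, n0=10, n_max=10_000, iters=60, c=1e-4, beta=0.5):
    """Gradient descent with an Armijo line search on an SAA objective whose
    sample grows over the run. The doubling schedule is a simple stand-in
    for the adaptive sample-size rules analyzed in the paper."""
    x, n = x0, n0
    pool = rng.standard_normal(n_max)          # pre-drawn sample pool
    for _ in range(iters):
        s = pool[:n]
        g = np.mean(grad_F(x, s))              # SAA gradient
        fx = np.mean(F(x, s))                  # SAA objective value
        t = 1.0
        while np.mean(F(x - t * g, s)) > fx - c * t * g * g:
            t *= beta                          # Armijo backtracking
        x = x - t * g
        # crude rule: once nearly stationary on the current sample,
        # double it so sampling error no longer dominates
        if abs(g) < 1.0 / np.sqrt(n) and n < n_max:
            n = min(2 * n, n_max)
    return x, n

x_star, n_final = saa_linesearch(x0=5.0)
print(x_star, n_final)        # x_star approaches the sample mean (about 0)
```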

Fast First-Order Methods for Composite Convex Optimization with Backtracking

We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA \cite{Beck-Teboulle-2009} and FALM \cite{Goldfarb-Ma-Scheinberg-2010} algorithms while preserving their worst-case iteration complexities of $O(\sqrt{L(f)/\epsilon})$. …
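For concreteness, here is a minimal FISTA with backtracking for the lasso objective, in which the Lipschitz estimate is also allowed to shrink at the start of each iteration so the prox parameter can grow again; the exact increase and backtracking rules of the cited algorithms differ in detail.

```python
import numpy as np

def fista_backtracking(A, b, lam, iters=200, L0=1.0, eta=2.0, shrink=0.9):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 with backtracking on the
    Lipschitz estimate L. Multiplying L by `shrink` each iteration lets the
    prox parameter 1/L grow again, in the spirit of full backtracking."""
    n = A.shape[1]
    x = y = np.zeros(n)
    t, L = 1.0, L0
    f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    soft = lambda z, a: np.sign(z) * np.maximum(np.abs(z) - a, 0.0)
    for _ in range(iters):
        L *= shrink                            # optimistic: try a larger step
        g = A.T @ (A @ y - b)                  # gradient of the smooth part at y
        while True:
            x_new = soft(y - g / L, lam / L)   # prox-gradient step from y
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break                          # quadratic upper model holds
            L *= eta                           # backtrack: decrease the step
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```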

A Note on the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations

This paper describes an implementation of an interior-point algorithm for large-scale nonlinear optimization. It is based on the algorithm proposed by Curtis et al. (SIAM J Sci Comput 32:3447–3475, 2010), a method that possesses global convergence guarantees to first-order stationary points with the novel feature that inexact search direction calculations are allowed in order to …
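The inexact-step idea can be illustrated on a toy problem: a log-barrier method for a bound-constrained convex quadratic in which each Newton system is solved only approximately by truncated CG. This sketch is not the algorithm of Curtis et al.; the problem class, tolerances, and update rules are all simplifications.

```python
import numpy as np
from scipy.sparse.linalg import cg

def barrier_inexact(Q, c, mu=1.0, outer=8, inner=20, cg_iters=5):
    """Toy log-barrier method for min 0.5 x'Qx + c'x s.t. x >= 0.
    Each Newton system is solved only approximately by truncated CG,
    illustrating inexact step computations inside an interior-point loop."""
    x = np.ones(len(c))
    for _ in range(outer):
        for _ in range(inner):
            g = Q @ x + c - mu / x               # gradient of the barrier function
            H = Q + np.diag(mu / x**2)           # its Hessian (positive definite)
            d, _ = cg(H, -g, maxiter=cg_iters)   # truncated CG: inexact Newton step
            neg = d < 0                          # fraction-to-the-boundary rule
            t = 1.0 if not neg.any() else min(1.0, 0.99 * np.min(-x[neg] / d[neg]))
            x = x + t * d
            if np.linalg.norm(g) < mu:           # crude inner stopping test
                break
        mu *= 0.2                                # tighten the barrier
    return x

Q = np.diag([2.0, 1.0]); c = np.array([-1.0, 1.0])
print(barrier_inexact(Q, c))                     # approaches (0.5, 0)
```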

A Dwindling Filter Line Search Method for Unconstrained Optimization

In this paper, we propose a new second-order line search method with a dwindling multidimensional filter for solving large-scale unconstrained optimization problems. Usually, the multidimensional filter is constructed with a fixed envelope, which imposes a strict condition on the gradient vectors. A dwindling multidimensional filter technique, which is a modification and improvement of the original multidimensional filter, …
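To convey the distinction, the toy acceptance test below compares a fixed envelope with one that scales with the step length, so the filter condition relaxes as steps shrink. The envelope function and constants are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def filter_acceptable(g_trial, filter_pts, step_norm, gamma=1e-3, dwindling=True):
    """Toy multidimensional-filter acceptance test on gradient components.
    With a fixed envelope the margin gamma*||g_k|| is constant; a dwindling
    envelope scales it by the step length, relaxing the test as steps shrink."""
    env = gamma * min(1.0, step_norm) if dwindling else gamma
    for g_k in filter_pts:
        margin = env * np.linalg.norm(g_k)
        # the trial point must improve some gradient component of each entry
        if not np.any(np.abs(g_trial) <= np.abs(g_k) - margin):
            return False
    return True
```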

Nonsmooth Optimization via BFGS

We investigate the BFGS algorithm with an inexact line search when applied to nonsmooth functions, not necessarily convex. We define a suitable line search and show that it generates a sequence of nested intervals containing points satisfying the Armijo and weak Wolfe conditions, assuming only absolute continuity. We also prove that the line search terminates …
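A line search of this kind can be sketched by bracketing and bisection: expand the step until the Armijo test fails or the directional derivative becomes favorable, then bisect the resulting nested intervals. The constants and iteration cap below are conventional choices, not taken from the paper.

```python
import numpy as np

def weak_wolfe_linesearch(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bracketing/bisection search for a step satisfying the Armijo and weak
    Wolfe conditions: expand while no upper bound exists, otherwise bisect
    the nested interval [lo, hi] known to contain acceptable points."""
    f0, g0 = f(x), grad(x) @ d        # g0 < 0: d must be a descent direction
    lo, hi, t = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0:      # Armijo fails: step too long
            hi = t
        elif grad(x + t * d) @ d < c2 * g0:      # weak Wolfe fails: too short
            lo = t
        else:
            return t                             # both conditions hold
        t = 2.0 * t if hi == np.inf else 0.5 * (lo + hi)
    return t                                     # fallback after max_iter

f = lambda z: np.abs(z[0]) + z[1] ** 2           # simple nonsmooth test function
grad = lambda z: np.array([np.sign(z[0]), 2.0 * z[1]])
x = np.array([1.0, 1.0])
print(weak_wolfe_linesearch(f, grad, x, -grad(x)))
```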

Accelerated line-search and trust-region methods

In numerical optimization, line-search and trust-region methods are two important classes of descent schemes, with well-understood global convergence properties. Here we consider “accelerated” versions of these methods, where the conventional iterate is allowed to be replaced by any point that produces at least as much decrease in the cost function as a fixed fraction of …
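The acceptance rule described can be captured in a few lines: any candidate point achieving at least a fixed fraction of the conventional step's decrease may replace it. The fraction and the candidate generator below are left to the caller and are not the paper's specific choices.

```python
def accept_accelerated(f, x, x_conv, x_acc, frac=0.5):
    """Replace the conventional iterate x_conv (from a line search or a
    trust-region step) by an 'accelerated' candidate x_acc whenever x_acc
    achieves at least a fixed fraction of the conventional decrease, which
    is the property that preserves global convergence."""
    base = f(x) - f(x_conv)                 # decrease of the conventional step
    if f(x) - f(x_acc) >= frac * base:
        return x_acc                        # accelerated point accepted
    return x_conv                           # fall back to the safe step
```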

Optimization of univariate functions on bounded intervals by interpolation and semidefinite programming

We consider the problem of minimizing a univariate, real-valued function $f$ on an interval $[a,b]$. When $f$ is a polynomial, we review how this problem may be reformulated as a semidefinite programming (SDP) problem, and review how to extract all global minimizers from the solution of the SDP problem. For general $f$, we approximate the …
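One concrete instance of the interpolate-then-minimize route for general $f$ is a Chebyshev interpolant whose global minimum on $[a,b]$ is found by checking the real roots of its derivative together with the endpoints. This root-finding step stands in for the SDP reformulation the paper reviews.

```python
import numpy as np

def minimize_on_interval(f, a, b, degree=12):
    """Approximate f by a Chebyshev interpolant on [a, b], then minimize the
    polynomial globally by comparing its values at the real critical points
    of the interpolant and at the two endpoints."""
    p = np.polynomial.chebyshev.Chebyshev.interpolate(f, degree, domain=[a, b])
    crit = [r.real for r in p.deriv().roots()
            if abs(r.imag) < 1e-10 and a <= r.real <= b]
    candidates = np.array([a, b] + crit)
    x_star = candidates[np.argmin(p(candidates))]
    return x_star, f(x_star)

x_star, f_star = minimize_on_interval(lambda t: np.sin(t) + 0.1 * t * t, -5, 5)
print(x_star, f_star)
```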

When LP is not a good idea – using structure in polyhedral optimization problems

It has been known for almost 50 years that the discrete $\ell_1$ approximation problem can be solved effectively by linear programming. However, improved algorithms involve a step which can be interpreted as a line search, and which is not part of the standard LP solution procedures. $\ell_1$ provides the simplest example of a class of …
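The classical LP formulation behind the first sentence is worth writing out: minimize the sum of residual bounds $t$ subject to $-t \le Ax - b \le t$. The sketch below uses scipy.optimize.linprog and does not include the specialized line-search step the abstract points to.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Discrete l1 approximation min ||Ax - b||_1 as an LP:
    minimize sum(t) subject to -t <= Ax - b <= t."""
    m, n = A.shape
    # variables: [x (n entries), t (m entries)]; objective: sum of t
    cost = np.concatenate([np.zeros(n), np.ones(m)])
    # Ax - b <= t  becomes  [A, -I][x; t] <= b
    # b - Ax <= t  becomes  [-A, -I][x; t] <= -b
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    return res.x[:n]

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(30)
print(l1_fit(A, b))
```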