A lower bound on the iterative complexity of the Harker and Pang globalization technique of the Newton-min algorithm for solving the linear complementarity problem

The plain Newton-min algorithm for solving the linear complementarity problem (LCP) 0 ≤ x ⊥ (Mx+q) ≥ 0 can be viewed as an instance of the plain semismooth Newton method on the equational version min(x,Mx+q) = 0 of the problem. This algorithm converges for any q when M is an M-matrix, but not when it …
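The equational form above makes the plain iteration easy to state: at each step, split the indices according to which argument attains the componentwise min, then solve the resulting linear system. Below is a minimal Python sketch of that plain (un-globalized) iteration; the function name, stopping rule, and tolerances are our own, and no Harker-Pang line search is included.

```python
import numpy as np

def newton_min(M, q, x0, max_iter=50, tol=1e-10):
    """Plain Newton-min iteration for the LCP 0 <= x ⊥ (Mx+q) >= 0,
    applied to the equational form min(x, Mx+q) = 0 (componentwise).
    Illustrative sketch: no globalization, so it may cycle when M is
    not an M-matrix."""
    n = len(q)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        w = M @ x + q
        if np.linalg.norm(np.minimum(x, w), np.inf) <= tol:
            return x, k
        A = x <= w                                # indices where min(...) picks x_i
        J = np.where(A[:, None], np.eye(n), M)    # rows: e_i on A, M_i off A
        rhs = np.where(A, 0.0, -q)                # enforce x_i = 0 on A, (Mx+q)_i = 0 off A
        x = np.linalg.solve(J, rhs)
    return x, max_iter
```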

Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization

There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a …

A Parallel Line Search Subspace Correction Method for Composite Convex Optimization

In this paper, we investigate a parallel subspace correction framework for composite convex optimization. The variables are first divided into a few blocks based on certain rules. At each iteration, the algorithms solve a suitable subproblem on each block simultaneously, construct a search direction by combining their solutions on all blocks, then identify a new …
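As a rough illustration of the framework just described (block splitting, simultaneous subproblems, a combined direction, then a step), here is a hedged Python sketch; `solve_block` and `step_size` are hypothetical user-supplied callables, not the paper's actual subproblem or step rule.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_subspace_step(x, blocks, solve_block, step_size):
    """One iteration of a generic parallel subspace correction scheme.
    `blocks` is a list of index arrays partitioning the variables;
    `solve_block(x, idx)` returns the subproblem solution on block idx;
    `step_size(x, d)` returns a step length along d (e.g. a line search)."""
    d = np.zeros_like(x)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda idx: (idx, solve_block(x, idx)), blocks))
    for idx, x_block in results:
        d[idx] = x_block - x[idx]   # block-wise correction, combined into one direction
    return x + step_size(x, d) * d
```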

Boosting the Feasibility Pump

The Feasibility Pump (FP) has proved to be an effective method for finding feasible solutions to mixed integer programming problems. FP iterates between a rounding procedure and a projection procedure, which together provide a sequence of points alternating between LP-feasible but fractional solutions and integer-valued but LP-infeasible solutions. The process attempts to …
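The rounding/projection alternation can be written as a short loop. The sketch below assumes a user-supplied oracle `project_lp` that returns the LP-relaxation point closest (in L1 distance) to its argument, which is the usual FP projection step; the anti-cycling perturbations of the full method are omitted.

```python
import numpy as np

def feasibility_pump(project_lp, x_lp, int_idx, max_iter=100, tol=1e-6):
    """Basic Feasibility Pump loop (sketch). `x_lp` is an LP-feasible
    starting point and `int_idx` indexes the integer variables."""
    x = np.asarray(x_lp, dtype=float)
    for k in range(max_iter):
        x_round = x.copy()
        x_round[int_idx] = np.round(x[int_idx])            # rounding step
        if np.max(np.abs(x[int_idx] - x_round[int_idx])) <= tol:
            return x_round, k      # integer components already (near-)integral
        x = project_lp(x_round)                            # projection step
    return None, max_iter          # no integer-feasible point found
```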

Line search methods with variable sample size for unconstrained optimization

Minimization of an unconstrained objective function given in the form of a mathematical expectation is considered. The Sample Average Approximation (SAA) method transforms the expectation objective function into a real-valued deterministic function using a large sample, and thus deals with deterministic function minimization. The main drawback of this approach is its cost. A large sample of the random variable …
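For concreteness, the SAA transformation mentioned above replaces the expectation E[F(x, ξ)] by an average over a drawn sample, after which any deterministic minimizer applies. A minimal Python sketch, with `F` assumed to be the integrand:

```python
import numpy as np

def saa_objective(F, sample):
    """Sample Average Approximation: turn the expectation objective
    E[F(x, xi)] into the deterministic function
    f_hat(x) = (1/N) * sum_i F(x, xi_i) over a fixed sample."""
    def f_hat(x):
        return np.mean([F(x, xi) for xi in sample])
    return f_hat
```

The cost issue the abstract points to is visible here: every evaluation of `f_hat` touches the whole sample, which is what motivates varying the sample size across line-search iterations.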

Fast First-Order Methods for Composite Convex Optimization with Backtracking

We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA (Beck and Teboulle, 2009) and FALM (Goldfarb, Ma, and Scheinberg, 2010) algorithms while preserving their worst-case iteration complexities of $O(\sqrt{L(f)/\epsilon})$. …
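To illustrate, here is a hedged Python sketch of FISTA with a backtracking estimate L of the Lipschitz constant; in the spirit of the full backtracking strategy above, L is also allowed to decrease between iterations (so the prox parameter 1/L can increase). The `prox_g(y, t)` signature and the shrink factor are assumptions, not the paper's exact rule.

```python
import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0,
                       shrink=0.9, max_iter=500):
    """FISTA sketch for minimizing f(x) + g(x) with f smooth.
    `prox_g(y, t)` is the proximal map of g with parameter t."""
    x, y, t, L = x0, x0.copy(), 1.0, L0
    for _ in range(max_iter):
        L *= shrink                    # tentatively enlarge the prox parameter 1/L
        while True:                    # backtracking on the quadratic upper model of f
            z = prox_g(y - grad_f(y) / L, 1.0 / L)
            d = z - y
            if f(z) <= f(y) + grad_f(y) @ d + 0.5 * L * (d @ d):
                break
            L *= eta                   # model too optimistic: increase L
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z + ((t - 1.0) / t_next) * (z - x)   # Nesterov momentum step
        x, t = z, t_next
    return x
```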

A Note on the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations

This paper describes an implementation of an interior-point algorithm for large-scale nonlinear optimization. It is based on the algorithm proposed by Curtis et al. (SIAM J Sci Comput 32:3447–3475, 2010), a method that possesses global convergence guarantees to first-order stationary points with the novel feature that inexact search direction calculations are allowed in order to …

A Dwindling Filter Line Search Method for Unconstrained Optimization

In this paper, we propose a new dwindling multidimensional filter second-order line search method for solving large-scale unconstrained optimization problems. Usually, the multidimensional filter is constructed with a fixed envelope, which imposes a strict condition on the gradient vectors. A dwindling multidimensional filter technique, which is a modification and improvement of the original multidimensional filter, …

Nonsmooth Optimization via BFGS

We investigate the BFGS algorithm with an inexact line search when applied to nonsmooth functions, not necessarily convex. We define a suitable line search and show that it generates a sequence of nested intervals containing points satisfying the Armijo and weak Wolfe conditions, assuming only absolute continuity. We also prove that the line search terminates …
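The nested-interval construction described above can be sketched as a bracketing search: expand the trial step until the Armijo condition fails, then bisect. This is an illustrative sketch consistent with that description, not the authors' exact procedure; `phi(t)` and `dphi(t)` denote f(x + t d) and its directional derivative.

```python
def weak_wolfe_search(phi, dphi, c1=1e-4, c2=0.9, max_steps=50):
    """Bracketing/bisection search for a step t satisfying the Armijo
    and weak Wolfe conditions; requires dphi(0) < 0 (descent direction)."""
    a, b, t = 0.0, float('inf'), 1.0
    phi0, dphi0 = phi(0.0), dphi(0.0)
    for _ in range(max_steps):
        if phi(t) > phi0 + c1 * t * dphi0:   # Armijo fails: t too large
            b = t
        elif dphi(t) < c2 * dphi0:           # weak Wolfe fails: t too small
            a = t
        else:
            return t                          # both conditions hold
        t = 2.0 * t if b == float('inf') else 0.5 * (a + b)  # expand or bisect [a, b]
    return t                                  # best bracketed step after max_steps
```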

Accelerated line-search and trust-region methods

In numerical optimization, line-search and trust-region methods are two important classes of descent schemes, with well-understood global convergence properties. Here we consider “accelerated” versions of these methods, where the conventional iterate is allowed to be replaced by any point that produces at least as much decrease in the cost function as a fixed fraction of …
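Reading the acceptance rule as "the candidate must achieve at least a fixed fraction of the decrease produced by the conventional iterate" (our completion of the truncated sentence above, so treat it as an assumption), one iteration can be sketched as follows; `accelerator` is a hypothetical user-supplied heuristic.

```python
def accelerated_step(f, x, conventional_step, accelerator, c=0.5):
    """One 'accelerated' iteration (sketch). `conventional_step` is the
    underlying line-search or trust-region update; `accelerator` proposes
    an alternative candidate point."""
    x_conv = conventional_step(x)
    decrease = f(x) - f(x_conv)              # decrease of the conventional iterate
    x_cand = accelerator(x)
    if f(x) - f(x_cand) >= c * decrease:     # candidate matches a fraction c of it
        return x_cand                        # accept the accelerated point
    return x_conv                            # otherwise keep the conventional iterate
```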