Sample Size Selection in Optimization Methods for Machine Learning

This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance … Read more
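
As a rough illustration of the kind of variance-based test described above, the Python sketch below increases the sample size when the estimated sampling error of the gradient is large relative to the gradient norm. The function name, the threshold theta, and the resizing rule are illustrative choices of ours, not the paper's exact criterion.

```python
import numpy as np

def sample_size_test(grad_samples, theta=0.5):
    """Rough sketch of a variance-based sample-size test (illustrative only).

    grad_samples : array of shape (|S|, d) of per-example gradients on the
                   current sample S at the current iterate.
    theta        : tolerance balancing sampling error against gradient norm.

    Returns (keep_current_size, suggested_size).
    """
    S, _ = grad_samples.shape
    g = grad_samples.mean(axis=0)             # sampled gradient estimate
    var = grad_samples.var(axis=0, ddof=1)    # componentwise sample variance
    sampling_err = var.sum() / S              # estimate of the gradient's sampling error
    if sampling_err <= (theta ** 2) * np.dot(g, g):
        return True, S
    # Increase |S| just enough that the inequality would hold again.
    suggested = int(np.ceil(var.sum() / ((theta ** 2) * np.dot(g, g))))
    return False, max(suggested, S + 1)
```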

An Implementation of an Algorithm for Nonlinear Programming Based on Piecewise Linear Models

This is a progress report on an implementation of the active-set method for nonlinear programming proposed in [6] that employs piecewise linear models in the active-set prediction phase. The motivation for this work is to develop an algorithm that is capable of solving large-scale problems, including those with a large reduced space. Unlike SQP methods, … Read more

On the Use of Stochastic Hessian Information in Unconstrained Optimization

This paper describes how to incorporate stochastic curvature information in a Newton-CG method and in a limited memory quasi-Newton method for large-scale optimization. The motivation for this work stems from statistical learning and stochastic optimization applications in which the objective function is the sum of a very large number of loss terms, and … Read more
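
The sketch below conveys the general idea of using curvature from a small subsample: a truncated conjugate gradient iteration in which only the Hessian-vector products are subsampled, while the gradient comes from the full (or a much larger) sample. The function names and the truncation rule are placeholders, not the paper's implementation.

```python
import numpy as np

def newton_cg_step(grad_full, hvp_subsample, max_cg_iters=10, tol=1e-2):
    """Sketch of a Newton-CG step with stochastic curvature (illustrative).

    grad_full     : gradient computed on the full (or large) sample.
    hvp_subsample : callable v -> H_S v, a Hessian-vector product evaluated
                    on a small subsample S; curvature enters only here,
                    so subsampling it keeps the step computation cheap.
    """
    # Conjugate gradient applied to H_S d = -grad_full, truncated early.
    d = np.zeros_like(grad_full)
    r = -grad_full.copy()              # residual of H_S d = -g at d = 0
    p = r.copy()
    for _ in range(max_cg_iters):
        Hp = hvp_subsample(p)
        pHp = np.dot(p, Hp)
        if pHp <= 0:                   # negative curvature: stop with current d
            break
        alpha = np.dot(r, r) / pHp
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(grad_full):
            break
        beta = np.dot(r_new, r_new) / np.dot(r, r)
        p = r_new + beta * p
        r = r_new
    return d if np.any(d) else -grad_full   # fall back to steepest descent
```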

A Line Search Exact Penalty Method Using Steering Rules

Line search algorithms for nonlinear programming must include safeguards to enjoy global convergence properties. This paper describes an exact penalization approach that extends the class of problems that can be solved with line search SQP methods. In the new algorithm, the penalty parameter is adjusted at every iteration to ensure sufficient progress in linear feasibility … Read more
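
For context, a line search exact penalty method works with a penalty function such as phi(x; nu) = f(x) + nu * ||c(x)||_1 and backtracks on it. The minimal sketch below shows that mechanism only; the steering of nu itself is omitted, and the parameter values are illustrative.

```python
import numpy as np

def l1_penalty(f, c, x, nu):
    """Exact l1 penalty function phi(x; nu) = f(x) + nu * ||c(x)||_1."""
    return f(x) + nu * np.linalg.norm(c(x), 1)

def backtracking_on_penalty(f, c, x, d, nu, directional_deriv,
                            alpha0=1.0, rho=0.5, eta=1e-4, max_iter=30):
    """Illustrative Armijo backtracking on the l1 penalty function.

    directional_deriv should be a (negative) model of the change in the
    penalty function along d, e.g. g^T d - nu * ||c(x)||_1 for a step that
    satisfies the linearized constraints.
    """
    phi0 = l1_penalty(f, c, x, nu)
    alpha = alpha0
    for _ in range(max_iter):
        if l1_penalty(f, c, x + alpha * d, nu) <= phi0 + eta * alpha * directional_deriv:
            return alpha
        alpha *= rho
    return alpha
```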

Infeasibility Detection and SQP Methods for Nonlinear Optimization

This paper addresses the need for nonlinear programming algorithms that provide fast local convergence guarantees regardless of whether a problem is feasible or infeasible. We present a sequential quadratic programming method derived from an exact penalty approach that adjusts the penalty parameter automatically, when appropriate, to emphasize feasibility over optimality. The superlinear convergence of such … Read more
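
One ingredient in detecting infeasibility is recognizing a stationary point of the constraint violation that is not feasible. The sketch below shows a simple test of that flavor; the thresholds and the test itself are illustrative and not taken from the paper.

```python
import numpy as np

def looks_locally_infeasible(c_val, J, feas_tol=1e-6, stat_tol=1e-6):
    """Illustrative test for flagging an infeasible stationary point.

    The problem is flagged when the constraint violation ||c(x)|| is not
    small, yet x is approximately stationary for the feasibility problem
    min ||c(x)||_2^2, i.e. the gradient J(x)^T c(x) is nearly zero.
    Thresholds are placeholders, not the paper's actual tests.
    """
    viol = np.linalg.norm(c_val)
    feas_grad = np.linalg.norm(J.T @ c_val)
    return viol > feas_tol and feas_grad <= stat_tol * max(1.0, viol)
```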

An Active-Set Algorithm for Nonlinear Programming Using Parametric Linear Programming

This paper describes an active-set algorithm for nonlinear programming that solves a parametric linear programming subproblem at each iteration to generate an estimate of the active set. A step is then computed by solving an equality constrained quadratic program based on this active-set estimate. This approach represents an extension of the standard sequential linear-quadratic programming … Read more
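
The second (equality constrained QP) phase mentioned above reduces to a KKT system once the active set has been predicted. The sketch below shows only that phase; the parametric LP phase that would supply J_active and c_active is assumed to exist and is not shown.

```python
import numpy as np

def eqp_step(g, H, J_active, c_active):
    """Equality-constrained QP step on a predicted active set (illustrative).

    Solves   min_d  g^T d + 0.5 d^T H d   s.t.  J_active d + c_active = 0
    via its KKT system.  In an SLQP-style framework, J_active and c_active
    collect the constraints flagged active by the LP phase.
    """
    n = g.size
    m = c_active.size
    # Assemble and solve the KKT system
    #   [ H   J^T ] [ d      ]   [ -g ]
    #   [ J   0   ] [ lambda ] = [ -c ]
    K = np.block([[H, J_active.T],
                  [J_active, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c_active])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # step d and multiplier estimate
```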

An Inexact Newton Method for Nonconvex Equality Constrained Optimization

We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the … Read more
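
One common device for coping with nonconvexity in methods of this type is to modify the Hessian block until the quadratic model is convex. The sketch below applies such a modification to the full Hessian for simplicity; the paper's actual mechanism is tied to its inexact step computation and is more refined.

```python
import numpy as np

def convexify(H, sigma0=1e-4, growth=10.0, max_tries=20):
    """Illustrative convexification of the Hessian block.

    Adds sigma * I to H, increasing sigma until the modified matrix is
    positive definite (checked here with a Cholesky attempt).  Returns the
    modified matrix and the shift that was used.
    """
    try:
        np.linalg.cholesky(H)
        return H, 0.0
    except np.linalg.LinAlgError:
        pass
    sigma = sigma0
    I = np.eye(H.shape[0])
    for _ in range(max_tries):
        try:
            np.linalg.cholesky(H + sigma * I)
            return H + sigma * I, sigma
        except np.linalg.LinAlgError:
            sigma *= growth
    return H + sigma * I, sigma
```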

An Inexact SQP Method for Equality Constrained Optimization

We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved … Read more
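
The basic flavor of inexactness can be conveyed by a relative-residual acceptance test on the SQP step equations, as in the sketch below. The actual conditions in the paper also involve a model of a penalty function, which is omitted here; the constant kappa is a placeholder.

```python
import numpy as np

def acceptable_inexact_step(K, rhs, z, kappa=0.1):
    """Illustrative relative-residual test for an inexact SQP step.

    K   : the (primal-dual) iteration matrix of the SQP step equations.
    rhs : its right-hand side, built from the gradient and constraints.
    z   : an approximate solution returned by an iterative linear solver.

    The trial step passes when the linear-system residual has been reduced
    to a fraction kappa of the right-hand side norm.
    """
    residual = rhs - K @ z
    return np.linalg.norm(residual) <= kappa * np.linalg.norm(rhs)
```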

Knitro: An Integrated Package for Nonlinear Optimization

This paper describes Knitro 5.0, a C package for nonlinear optimization that combines complementary algorithmic approaches to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, … Read more

Steering Exact Penalty Methods for Optimization

This paper reviews, extends and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be … Read more
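
A steering rule of the kind described compares the improvement in linear feasibility produced with the current penalty parameter against the best achievable improvement, and increases the parameter when too little of that improvement is recovered. The sketch below is a schematic version with placeholder constants, not the rules analyzed in the paper.

```python
def steer_penalty_parameter(lin_feas_penalty_step, lin_feas_best,
                            lin_feas_current, nu, eps=0.1, factor=10.0):
    """Illustrative steering update for the penalty parameter nu.

    lin_feas_current      : ||c(x)||_1 at the current iterate.
    lin_feas_best         : linearized infeasibility after the best
                            achievable (pure feasibility) step.
    lin_feas_penalty_step : linearized infeasibility after the step
                            produced with the current nu.
    """
    achievable = lin_feas_current - lin_feas_best
    achieved = lin_feas_current - lin_feas_penalty_step
    # Increase nu when the penalty step recovers too small a fraction of
    # the achievable improvement in linear feasibility.
    if achievable > 0 and achieved < eps * achievable:
        return nu * factor
    return nu
```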