Constrained Optimization in the Presence of Noise

The problem of interest is the minimization of a nonlinear function subject to nonlinear equality constraints using a sequential quadratic programming (SQP) method. The minimization must be performed while observing only noisy evaluations of the objective and constraint functions. In order to obtain stability, the classical SQP method is modified by relaxing the standard Armijo …
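
The abstract is cut off before the relaxed condition, but a common way to make an Armijo-type line search tolerant of noise is to add a slack term proportional to the noise level in the merit-function evaluations. As an illustration of that general technique only, and not necessarily the paper's exact rule, such a relaxed condition can be written as

$$ \phi(x_k + \alpha_k d_k) \le \phi(x_k) + c_1 \alpha_k D\phi(x_k; d_k) + 2\epsilon_\phi, $$

where $\phi$ is a merit function, $D\phi(x_k; d_k)$ is its directional derivative along the SQP step $d_k$, $c_1 \in (0,1)$, and $\epsilon_\phi$ bounds the error in the evaluations of $\phi$.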

Constrained and Composite Optimization via Adaptive Sampling Methods

The motivation for this paper stems from the desire to develop an adaptive sampling method for solving constrained optimization problems in which the objective function is stochastic and the constraints are deterministic. The method proposed in this paper is a proximal gradient method that can also be applied to the composite optimization problem min f(x) …
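
The composite template assumed here for illustration is $\min_x f(x) + h(x)$, with $f$ smooth (here stochastic) and $h$ convex but possibly nonsmooth. A generic proximal gradient iteration for this template, stated as background rather than as a description of the paper's adaptive-sampling rules, is

$$ x_{k+1} = \operatorname{prox}_{\alpha_k h}\big(x_k - \alpha_k \nabla f(x_k)\big), \qquad \operatorname{prox}_{\alpha h}(z) = \arg\min_u \tfrac{1}{2\alpha}\|u - z\|^2 + h(u), $$

with the gradient $\nabla f(x_k)$ replaced in the stochastic setting by a sample average whose size the method adjusts adaptively.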

A Noise-Tolerant Quasi-Newton Method for Unconstrained Optimization

This paper describes an extension of the BFGS and L-BFGS methods for the minimization of a nonlinear function subject to errors. This work is motivated by applications that contain computational noise, employ low-precision arithmetic, or are subject to statistical noise. The classical BFGS and L-BFGS methods can fail in such circumstances because the updating procedure …
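
One failure mode of standard BFGS updating in the presence of noise is that the curvature condition $y_k^\top s_k > 0$ can be violated by errors in the gradients. The sketch below is a minimal, illustrative safeguard (skipping the update when the measured curvature is not clearly positive relative to an assumed gradient-noise level eps_g); it is not claimed to be the specific mechanism developed in the paper.

    import numpy as np

    def noise_aware_bfgs_update(H, s, y, eps_g=1e-3, c=1e-8):
        """Inverse BFGS update H <- (I - rho*s*y')H(I - rho*y*s') + rho*s*s',
        skipped when the curvature y's is not clearly positive relative to the
        gradient-noise level eps_g (illustrative safeguard only)."""
        sy = float(y @ s)
        # Require curvature to exceed a noise-dependent threshold before updating.
        if sy <= c * float(s @ s) + 2.0 * eps_g * np.linalg.norm(s):
            return H  # skip the update and keep the current approximation
        rho = 1.0 / sy
        I = np.eye(len(s))
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)

The threshold combines the usual sufficient-curvature test with an extra term accounting for an error of size eps_g in each gradient evaluation.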

Analysis of the BFGS Method with Errors

The classical convergence analysis of quasi-Newton methods assumes that the function and gradients employed at each iteration are exact. In this paper, we consider the case when there are (bounded) errors in both computations and establish conditions under which a slight modification of the BFGS algorithm with an Armijo-Wolfe line search converges to a neighborhood …
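
For reference, the Armijo-Wolfe conditions mentioned above are, in their standard form,

$$ f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^\top d_k, \qquad \nabla f(x_k + \alpha_k d_k)^\top d_k \ge c_2 \nabla f(x_k)^\top d_k, $$

with $0 < c_1 < c_2 < 1$; in the setting considered here they are imposed on the computed, error-contaminated function and gradient values rather than on the exact ones.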

Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods

This paper presents a finite difference quasi-Newton method for the minimization of noisy functions. The method takes advantage of the scalability and power of BFGS updating, and employs an adaptive procedure for choosing the differencing interval h based on the noise estimation techniques of Hamming (2012) and Moré and Wild (2011). This noise estimation procedure …
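
The adaptivity here concerns the classical bias-variance tradeoff in finite differencing: for a function contaminated by noise of size $\epsilon_f$, the forward-difference truncation error grows with $h$ while the noise error grows like $\epsilon_f/h$, and the two are roughly balanced by $h \approx 2\sqrt{\epsilon_f/\mu}$, where $\mu$ estimates the size of the second derivative. The sketch below illustrates this rule of thumb only; the noise-estimation step and the full procedure in the paper are more elaborate.

    import numpy as np

    def fd_gradient(f, x, eps_f, mu=1.0):
        """Forward-difference gradient of a noisy function f at x.

        eps_f : estimate of the noise level (standard deviation) in f
        mu    : rough estimate of the size of the second derivative
        The interval h ~ 2*sqrt(eps_f/mu) balances truncation error (~ mu*h/2)
        against noise amplification (~ 2*eps_f/h)."""
        h = 2.0 * np.sqrt(eps_f / mu)
        fx = f(x)
        g = np.zeros_like(x, dtype=float)
        for i in range(len(x)):
            xh = x.copy()
            xh[i] += h
            g[i] = (f(xh) - fx) / h
        return g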

Adaptive Sampling Strategies for Stochastic Optimization

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations. Unlike other variance reduction techniques that either require additional storage or the regular computation of full gradients, the proposed method reduces variance by increasing the sample size as needed. The decision to increase …
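
The decision rule is cut off above; a common form of it is a norm (variance) test, in which the current sample is judged adequate when the estimated variance of the averaged gradient is small relative to its squared norm, and the sample size is increased otherwise. The sketch below illustrates that generic test under these assumptions and is not presented as the paper's exact criterion.

    import numpy as np

    def sample_is_adequate(per_example_grads, theta=0.9):
        """Norm test: accept the current sample when the estimated variance of the
        averaged gradient is at most theta^2 times its squared norm."""
        n = per_example_grads.shape[0]
        g = per_example_grads.mean(axis=0)                    # sampled gradient
        var_of_mean = per_example_grads.var(axis=0, ddof=1).sum() / n
        return var_of_mean <= theta**2 * float(g @ g)

If the test fails, the sample size is enlarged, for instance in proportion to the observed violation, before the next gradient step.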

Exact and Inexact Subsampled Newton Methods for Optimization

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling. We first consider Newton-like methods that employ these approximations and discuss how to coordinate the accuracy in the gradient and Hessian to yield a superlinear rate of convergence in expectation. The second part of …
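
In inexact variants of this approach, the Newton system built from the subsampled Hessian is solved only approximately, typically by the conjugate gradient method using Hessian-vector products. The sketch below shows such a generic inexact step; the function name and stopping rule are illustrative assumptions, not the paper's exact specification.

    import numpy as np

    def subsampled_newton_step(grad, hess_vec, dim, cg_iters=10, tol=1e-4):
        """Approximately solve H_S d = -grad by conjugate gradients, where
        hess_vec(v) returns the product of the subsampled Hessian H_S with v."""
        d = np.zeros(dim)
        r = -grad.astype(float)          # residual of H_S d = -grad at d = 0
        p = r.copy()
        rs = float(r @ r)
        for _ in range(cg_iters):
            Hp = hess_vec(p)
            alpha = rs / float(p @ Hp)
            d += alpha * p
            r -= alpha * Hp
            rs_new = float(r @ r)
            if np.sqrt(rs_new) <= tol * np.linalg.norm(grad):
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return d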

A Stochastic Quasi-Newton Method for Large-Scale Optimization

The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust …
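
A distinctive feature of methods in this class is how the curvature pairs are formed: rather than differencing noisy stochastic gradients, the correction vector can be obtained from a Hessian-vector product evaluated on a separate, larger subsample at regular intervals, for example

$$ s_t = \bar{x}_t - \bar{x}_{t-1}, \qquad y_t = \nabla^2 F_{S_H}(\bar{x}_t)\, s_t, $$

where $\bar{x}_t$ denotes an average of recent iterates and $S_H$ is the subsample used for the Hessian-vector product. This is a sketch of the general construction; the averaging, sampling frequency, and limited-memory updating details are specified in the paper.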

An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization

We study a Newton-like method for the minimization of an objective function $\phi$ that is the sum of a smooth convex function and an $\ell_1$ regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model $q_k$ of the objective …
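
The piecewise quadratic model $q_k$ referred to above has the standard form (writing $f$ for the smooth part of $\phi$, $\lambda$ for the regularization weight, and $B_k$ for an approximation to the Hessian of $f$; this notation is assumed here for illustration)

$$ q_k(x) = f(x_k) + \nabla f(x_k)^\top (x - x_k) + \tfrac{1}{2} (x - x_k)^\top B_k (x - x_k) + \lambda \|x\|_1, $$

and the method is termed inexact because $q_k$ is minimized only approximately, for example by a fixed number of coordinate descent or iterative shrinkage steps.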

A Family of Second-Order Methods for Convex L1-Regularized Optimization

This paper is concerned with the minimization of an objective that is the sum of a convex function $f$ and an $\ell_1$ regularization term. Our interest is in methods that incorporate second-order information about the function $f$ to accelerate convergence. We describe a semi-smooth Newton framework that can be used to generate a variety of …
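
A standard way to bring semi-smooth Newton machinery to bear on this problem, sketched here as one common formulation rather than the particular system used to generate the family of methods in the paper, is to solve the nonsmooth fixed-point equation built from the soft-thresholding (proximal) operator,

$$ F(x) = x - \operatorname{prox}_{\tau\lambda\|\cdot\|_1}\big(x - \tau \nabla f(x)\big) = 0, \qquad \big[\operatorname{prox}_{\mu\|\cdot\|_1}(z)\big]_i = \operatorname{sign}(z_i)\,\max(|z_i| - \mu,\, 0), $$

to which a Newton-type iteration is applied using elements of the generalized Jacobian of $F$.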