A Sequential Quadratic Optimization Algorithm with Rapid Infeasibility Detection

We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed …

Variational Properties of Value Functions

Regularization plays a key role in a variety of optimization formulations of inverse problems. A recurring question in regularization approaches is the selection of regularization parameters, and its effect on the solution and on the optimal value of the optimization problem. The sensitivity of the value function to the regularization parameter can be linked directly …
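The sensitivity of the value function to the regularization parameter can be sketched in one dimension. In the hypothetical scalar problem v(lam) = min_x 0.5*(x - b)^2 + lam*|x| (an illustrative example, not taken from the paper), the minimizer is given by soft-thresholding, and the envelope theorem predicts that dv/dlam equals |x*(lam)|:

```python
import numpy as np

def value_function(lam, b=2.0):
    # v(lam) = min_x 0.5*(x - b)^2 + lam*|x|; the minimizer is the
    # soft-thresholding of b at level lam.
    x = np.sign(b) * max(abs(b) - lam, 0.0)
    return 0.5 * (x - b) ** 2 + lam * abs(x), x

lam = 0.5
v, x_star = value_function(lam)

# Envelope theorem: dv/dlam = |x*(lam)|; check by central finite difference.
h = 1e-6
fd = (value_function(lam + h)[0] - value_function(lam - h)[0]) / (2 * h)
print(fd, abs(x_star))
```

Here the value function is smooth in lam away from lam = |b|, and the finite-difference slope matches |x*(lam)| to high accuracy.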

Gradient consistency for integral-convolution smoothing functions

Chen and Mangasarian (1995) developed smoothing approximations to the plus function built on integral-convolution with density functions. X. Chen (2012) recently picked up this idea, constructing a large class of smoothing functions for nonsmooth minimization through composition with smooth mappings. In this paper, we generalize this idea by substituting the plus function for an …
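As a concrete illustration (a standard example of the construction, not code from the paper): convolving the plus function with the sigmoid density d(t) = e^{-t}/(1 + e^{-t})^2 at scale mu yields the classical "neural network" smoothing function mu*log(1 + e^{x/mu}), which approximates max(x, 0) uniformly to within mu*log 2:

```python
import numpy as np

def plus(x):
    return np.maximum(x, 0.0)

def smooth_plus(x, mu):
    # mu * log(1 + exp(x/mu)), written in an overflow-safe form:
    # the integral-convolution of the plus function with the sigmoid density.
    return np.maximum(x, 0.0) + mu * np.log1p(np.exp(-np.abs(x) / mu))

x = np.linspace(-5.0, 5.0, 1001)
for mu in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(smooth_plus(x, mu) - plus(x)))
    print(mu, gap)  # uniform error mu * log 2, attained at x = 0
```

The uniform error shrinks linearly in mu, which is the basic mechanism these smoothing schemes exploit.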

Epi-convergent Smoothing with Applications to Convex Composite Functions

Smoothing methods have become part of the standard tool set for the study and solution of nondifferentiable and constrained optimization problems as well as a range of other variational and equilibrium problems. In this note we synthesize and extend recent results due to Beck and Teboulle on infimal convolution smoothing for convex functions with those …
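A canonical instance of infimal convolution smoothing (a standard textbook example, not code from the paper) is the Moreau envelope: inf-convolving f(y) = |y| with a quadratic produces the Huber function, and the closed form can be checked against a brute-force minimization over a grid:

```python
import numpy as np

def moreau_abs(x, mu):
    # Closed form of the Moreau envelope of f(y) = |y|:
    # the Huber function with threshold mu.
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

def moreau_numeric(x, mu, grid):
    # e_mu f(x) = min_y |y| + (x - y)^2 / (2*mu), minimized over a grid.
    return np.min(np.abs(grid) + (x - grid) ** 2 / (2 * mu))

grid = np.linspace(-10.0, 10.0, 200001)
mu = 0.5
for x in (-3.0, -0.2, 0.0, 0.2, 3.0):
    print(x, moreau_abs(x, mu), moreau_numeric(x, mu, grid))
```

The envelope is everywhere differentiable even though |y| is not, which is the epi-convergent smoothing effect in miniature.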

Sparse/Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory

Piecewise linear quadratic (PLQ) penalties play a crucial role in many applications, including machine learning, robust statistical inference, sparsity promotion, and inverse problems such as Kalman smoothing. Well known examples of PLQ penalties include the $\ell_2$, Huber, $\ell_1$ and Vapnik losses. This paper builds on a dual representation for PLQ penalties known from convex analysis. …
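The dual representation can be illustrated on the Huber penalty (a sketch of the standard convex-analysis formula, not the paper's code): the Huber loss with threshold kappa is the supremum of u*r - u^2/2 over |u| <= kappa:

```python
import numpy as np

def huber(r, kappa):
    # Piecewise definition: quadratic near zero, linear in the tails.
    return np.where(np.abs(r) <= kappa,
                    0.5 * r**2,
                    kappa * np.abs(r) - 0.5 * kappa**2)

def huber_dual(r, kappa, n=100001):
    # Dual (PLQ) representation: rho(r) = sup_{|u| <= kappa} (u*r - u^2/2),
    # evaluated here by maximizing over a fine grid of u.
    u = np.linspace(-kappa, kappa, n)
    return np.max(u * r - 0.5 * u**2)

for r in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(r, huber(r, 1.0), huber_dual(r, 1.0))
```

The bounded dual variable |u| <= kappa is exactly what caps the influence of large residuals, which is why Huber-type PLQ penalties are robust.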

Robust and Trend-following Student’s t Kalman Smoothers

Two nonlinear Kalman smoothers are proposed using the Student’s t distribution. The first, which we call the T-Robust smoother, finds the maximum a posteriori (MAP) solution for Gaussian process noise and Student’s t observation noise. It is extremely robust against outliers, outperforming the recently proposed L1-Laplace smoother in extreme situations with data containing 20% or …
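The robustness to outliers comes from the bounded, redescending influence function of the Student's t negative log-density, in contrast to the Gaussian, whose influence grows linearly with the residual. A minimal sketch of this distributional fact (not the smoother itself):

```python
import numpy as np

def gauss_nll_grad(r):
    # Gaussian negative log-likelihood gradient: unbounded influence.
    return r

def student_t_nll_grad(r, nu=4.0):
    # d/dr of (nu+1)/2 * log(1 + r^2/nu): bounded, redescending influence.
    return (nu + 1.0) * r / (nu + r**2)

r = np.linspace(0.0, 100.0, 1001)
print(np.max(gauss_nll_grad(r)))      # grows with the residual
print(np.max(student_t_nll_grad(r)))  # peaks at r = sqrt(nu), then decays
```

For nu = 4 the influence peaks at (nu+1)/(2*sqrt(nu)) = 1.25 and then decays, so arbitrarily large residuals exert only a vanishing pull on the MAP estimate.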

ASTRAL: An Active Set $\ell_\infty$-Trust-Region Algorithm for Box Constrained Optimization

An algorithm for solving large-scale nonlinear optimization problems with simple bounds is described. The algorithm is an $\ell_\infty$-norm trust-region method that uses both active set identification techniques and limited memory BFGS updating for the Hessian approximation. The trust-region subproblems are solved using primal-dual interior point techniques that exploit the structure of the limited …
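One structural fact behind $\ell_\infty$-norm trust regions for box constrained problems (a geometric sketch, not the paper's code; the helper name trust_region_box is hypothetical) is that the $\ell_\infty$ ball intersected with the feasible box is again a box, so the subproblem keeps simple bound constraints:

```python
import numpy as np

def trust_region_box(xk, lower, upper, delta):
    # {x : ||x - xk||_inf <= delta} intersected with [lower, upper]
    # is itself a box, with componentwise clipped bounds.
    return np.maximum(lower, xk - delta), np.minimum(upper, xk + delta)

xk = np.array([0.0, 2.0, 3.8])
l, u = trust_region_box(xk,
                        lower=np.array([-1.0, 0.0, 0.0]),
                        upper=np.array([1.0, 3.0, 4.0]),
                        delta=1.5)
print(l, u)
```

This is what lets the trust-region subproblem be posed with bound constraints only, so specialized primal-dual interior point solvers apply directly.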

The Speed of Shor’s R-Algorithm

Shor’s r-algorithm is an iterative method for unconstrained optimization, designed for minimizing nonsmooth functions, on which its reported success has been considerable. Although some limited convergence results are known, nothing seems to be known about the algorithm’s rate of convergence, even in the smooth case. We study how the method behaves on convex quadratics, proving …

Analysis of a Belgian Chocolate Stabilization Problem

We give a detailed numerical and theoretical analysis of a stabilization problem posed by V. Blondel in 1994. Our approach illustrates the effectiveness of a new gradient sampling algorithm for finding local optimizers of nonsmooth, nonconvex optimization problems arising in control, as well as the power of nonsmooth analysis for understanding variational problems involving polynomial …

Variational Analysis of Functions of the Roots of Polynomials

The Gauss-Lucas Theorem on the roots of polynomials nicely simplifies calculating the subderivative and regular subdifferential of the abscissa mapping on polynomials (the maximum of the real parts of the roots). This paper extends this approach to more general functions of the roots. By combining the Gauss-Lucas methodology with an analysis of the splitting behavior …
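The monotonicity the Gauss-Lucas Theorem provides can be checked numerically (an illustrative sketch with an arbitrary sample polynomial, not the paper's code): the roots of p' lie in the convex hull of the roots of p, so the abscissa cannot increase under differentiation:

```python
import numpy as np

# Gauss-Lucas: the roots of p' lie in the convex hull of the roots of p,
# hence the abscissa (max real part of the roots) of p' is at most that of p.
p = np.poly1d([1.0, -2.0, 3.0, -4.0, 5.0])  # arbitrary sample polynomial
abscissa_p = np.max(p.r.real)
abscissa_dp = np.max(p.deriv().r.real)
print(abscissa_dp, "<=", abscissa_p)
```

Since the max real part over a convex hull is attained at a vertex, this inequality holds for every polynomial, which is what makes the abscissa mapping tractable for variational analysis.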