A filter method with unified step computation for nonlinear optimization

We present a filter linesearch method for solving general nonlinear and nonconvex optimization problems. The method is of the filter variety, but uses a robust (always feasible) subproblem based on an exact penalty function to compute a search direction. This contrasts with traditional filter methods, which use a (separate) restoration phase designed to reduce infeasibility until … Read more
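
The snippet does not spell out the filter mechanism itself, so the following is a minimal sketch of a standard filter acceptance test of the kind such methods build on, not the authors' exact rule; the margins `gamma_theta` and `gamma_f` and the entry list are illustrative assumptions.

```python
def acceptable(theta_trial, f_trial, filter_entries,
               gamma_theta=1e-5, gamma_f=1e-5):
    """Return True if (theta_trial, f_trial) is not dominated by the filter.

    theta_* is the constraint violation, f_* the objective value; a filter
    entry (theta_j, f_j) dominates a trial point that improves neither
    measure by a sufficient margin.
    """
    for theta_j, f_j in filter_entries:
        if (theta_trial >= (1 - gamma_theta) * theta_j and
                f_trial >= f_j - gamma_f * theta_j):
            return False
    return True

# Example: a two-entry filter.
filt = [(1.0, 5.0), (0.1, 7.0)]
print(acceptable(0.05, 6.0, filt))  # True: more feasible than both entries
print(acceptable(1.0, 5.1, filt))   # False: dominated by the entry (1.0, 5.0)
```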

Incremental Accelerated Gradient Methods for SVM Classification: Study of the Constrained Approach

We investigate constrained first-order techniques for training Support Vector Machines (SVM) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas from incremental gradient techniques, gradient acceleration, and successive simple calculations of Lagrange multipliers. Both primal and dual formulations are studied and compared. Experiments show that the … Read more
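
As a rough illustration of the ingredients the abstract names (incremental gradients plus acceleration), here is a hedged sketch that applies Nesterov-style momentum to one training example at a time on the primal squared-hinge SVM objective; the step size, momentum schedule, and choice of loss are generic assumptions, not the authors' method.

```python
import numpy as np

def train_svm_incremental(X, y, lam=1e-3, lr=0.1, epochs=20):
    """Incremental (one-sample) gradient steps with Nesterov-style momentum
    on (lam/2)*||w||^2 + mean_i max(0, 1 - y_i * <w, x_i>)^2."""
    n, d = X.shape
    w = np.zeros(d)
    w_prev = np.zeros(d)
    for t in range(1, epochs + 1):
        beta = (t - 1) / (t + 2)                 # momentum weight
        for i in np.random.permutation(n):       # incremental pass over data
            v = w + beta * (w - w_prev)          # look-ahead point
            margin = 1.0 - y[i] * (X[i] @ v)
            grad = lam * v
            if margin > 0:                       # squared hinge is smooth
                grad -= 2.0 * margin * y[i] * X[i]
            w_prev, w = w, v - lr * grad
    return w

# Toy usage on two shifted Gaussian clouds.
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=200)
X = rng.normal(size=(200, 2)) + y[:, None]       # class-dependent shift
w = train_svm_incremental(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```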

Mixed-Integer Linear Methods for Layout-Optimization of Screening Systems in Recovered Paper Production

The industrial treatment of waste paper, carried out to regain valuable fibers from which recovered paper can be produced, involves several preparation steps. One important step is the separation of stickies, which are normally attached to the paper. If not properly separated, remaining stickies reduce the quality of the recovered paper or even disrupt … Read more

Scalable Nonlinear Programming Via Exact Differentiable Penalty Functions and Trust-Region Newton Methods

We present an approach for nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. In contrast to existing algorithmic approaches to NLP, this approach provides all the features required for scalability: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and … Read more
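
The snippet does not say which exact differentiable penalty is used; a classical example of such a function, due to Fletcher, replaces the multipliers with a least-squares estimate so that the penalty is smooth in $x$ alone (for equality constraints $c(x) = 0$ with Jacobian $J(x)$):

```latex
\phi_{\rho}(x) = f(x) - c(x)^{\top} \lambda(x) + \frac{\rho}{2}\,\lVert c(x)\rVert^{2},
\qquad
\lambda(x) \in \operatorname*{arg\,min}_{\lambda}\ \bigl\lVert \nabla f(x) - J(x)^{\top}\lambda \bigr\rVert^{2}.
```

Under standard regularity assumptions and for $\rho$ sufficiently large, unconstrained minimizers of $\phi_{\rho}$ coincide with solutions of the original NLP, which is what makes direct minimization by trust-region Newton techniques plausible.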

Optimality conditions for nonlinear programming problems on Riemannian manifolds

In recent years, many traditional optimization methods have been successfully generalized to minimize objective functions on manifolds. In this paper, we first extend the traditional constrained optimization problem to a nonlinear programming problem posed on a general Riemannian manifold $\mathcal{M}$, and discuss the first-order and second-order optimality conditions. By exploiting the differential geometry … Read more
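
For orientation, the first-order (KKT) conditions take the familiar form, with Euclidean gradients replaced by Riemannian ones living in the tangent space; the notation below is generic, assuming equality constraints $h_i$ and inequality constraints $g_j$ smooth on $\mathcal{M}$:

```latex
\operatorname{grad} f(x^{*}) + \sum_{i} \lambda_{i} \operatorname{grad} h_{i}(x^{*})
  + \sum_{j} \mu_{j} \operatorname{grad} g_{j}(x^{*}) = 0 \in T_{x^{*}}\mathcal{M},
\qquad
\mu_{j} \ge 0, \quad \mu_{j}\, g_{j}(x^{*}) = 0, \quad g_{j}(x^{*}) \le 0, \quad h_{i}(x^{*}) = 0,
```

where $\operatorname{grad}$ denotes the Riemannian gradient and $T_{x^{*}}\mathcal{M}$ the tangent space at $x^{*}$.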

A Globally Convergent Stabilized SQP Method

Sequential quadratic programming (SQP) methods are a popular class of methods for nonlinearly constrained optimization. They are particularly effective for solving a sequence of related problems, such as those arising in mixed-integer nonlinear programming and the optimization of functions subject to differential equation constraints. Recently, there has been considerable interest in the formulation of stabilized … Read more
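
As background (not necessarily this paper's exact formulation), a common stabilized SQP subproblem in the literature replaces the usual QP with a min-max problem that penalizes multiplier movement away from the current estimate $y_k$:

```latex
\min_{p}\ \max_{q}\ \; g_k^{\top} p + \tfrac{1}{2}\, p^{\top} H_k\, p
  + q^{\top}\!\left( c_k + J_k p \right) - \frac{\mu_k}{2}\,\lVert q - y_k \rVert^{2},
```

where $g_k$ and $H_k$ are the objective gradient and an approximation of the Lagrangian Hessian, $c_k$ and $J_k$ the constraint values and Jacobian, and $\mu_k > 0$ the stabilization parameter; the extra quadratic term keeps the subproblem well posed even under degenerate constraints.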

Interior-Point Methods for Nonconvex Nonlinear Programming: Primal-Dual Methods and Cubic Regularization

In this paper, we present a primal-dual interior-point method for solving nonlinear programming problems. It employs a Levenberg-Marquardt (LM) perturbation to the Karush-Kuhn-Tucker (KKT) matrix to handle indefinite Hessians and a line search to obtain sufficient descent at each iteration. We show that the LM perturbation is equivalent to replacing the Newton step by a … Read more
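
The equivalence the abstract alludes to is easiest to see in the unconstrained case: a step $d$ solving the LM-perturbed Newton system also makes the gradient of a cubic model vanish whenever the shift is tied to the step length (a generic illustration, with $\sigma > 0$ assumed):

```latex
(H + \delta I)\, d = -g, \qquad \delta = \sigma \lVert d \rVert
\;\;\Longrightarrow\;\;
\nabla m(d) = g + H d + \sigma \lVert d \rVert\, d = 0
\quad \text{for} \quad
m(d) = g^{\top} d + \tfrac{1}{2}\, d^{\top} H d + \tfrac{\sigma}{3}\, \lVert d \rVert^{3}.
```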

Interior-Point Methods for Nonconvex Nonlinear Programming: Cubic Regularization

In this paper, we present a barrier method for solving nonlinear programming problems. It employs a Levenberg-Marquardt perturbation to the Karush-Kuhn-Tucker (KKT) matrix to handle indefinite Hessians and a line search to obtain sufficient descent at each iteration. We show that the Levenberg-Marquardt perturbation is equivalent to replacing the Newton step by a cubic regularization … Read more

Regularized Sequential Quadratic Programming

We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed … Read more
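
The merit function is only named here; one primal-dual generalized augmented Lagrangian from the literature (a plausible candidate for what is meant, though the truncated abstract does not confirm it) is, for equality constraints $c(x) = 0$, multiplier estimate $y_e$, and penalty parameter $\mu > 0$:

```latex
M_{\mu}(x, y \,;\, y_e) = f(x) - c(x)^{\top} y_e
  + \frac{1}{2\mu}\,\lVert c(x)\rVert^{2}
  + \frac{1}{2\mu}\,\bigl\lVert c(x) + \mu\,(y - y_e) \bigr\rVert^{2},
```

which is minimized jointly in the primal variables $x$ and dual variables $y$, so a single line search can improve both at once.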

Higher-Order Confidence Intervals for Stochastic Programming using Bootstrapping

We study the problem of constructing confidence intervals for the optimal value of a stochastic programming problem by using bootstrapping. Bootstrapping is a resampling method used in the statistical inference of unknown parameters for which only a small number of samples can be obtained. One such parameter is the optimal value of a stochastic optimization … Read more
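
The higher-order intervals are the paper's contribution; the resampling skeleton they refine is the basic percentile bootstrap, sketched below on a stand-in newsvendor problem (the model, grid, and sample sizes are illustrative assumptions, not the paper's test setup):

```python
import numpy as np

def saa_value(demand, price=5.0, cost=3.0):
    """Optimal value of a sample average approximation (SAA) of a
    newsvendor problem, maximized over a coarse grid of order quantities."""
    orders = np.linspace(0, demand.max(), 200)
    profit = price * np.minimum(orders[:, None], demand) - cost * orders[:, None]
    return profit.mean(axis=1).max()

rng = np.random.default_rng(0)
sample = rng.exponential(scale=10.0, size=50)    # small demand sample
point_estimate = saa_value(sample)

# Percentile bootstrap: resample with replacement, re-solve, take quantiles.
boot_values = np.array([
    saa_value(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.quantile(boot_values, [0.025, 0.975])
print(f"estimate {point_estimate:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```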