A Remarkable Property of the Dynamic Optimization Extremals

A continuous-time dynamic optimization problem asks what the optimal magnitude of a choice variable is at each point of time in a given interval. To tackle such problems, three major approaches are available: dynamic programming; the calculus of variations; and the powerful optimal control approach. At the core of optimal control theory … Read more
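For orientation, a generic continuous-time optimal control problem (a standard textbook form in my own notation, not necessarily the paper's exact setting) reads
\[
\max_{u(\cdot)} \int_{t_0}^{t_1} F\bigl(t, x(t), u(t)\bigr)\,dt
\quad\text{s.t.}\quad \dot{x}(t) = f\bigl(t, x(t), u(t)\bigr), \quad x(t_0) = x_0,
\]
where $u$ is the control (choice) variable and $x$ the state; the extremals are the trajectories satisfying the first-order Pontryagin conditions for this problem.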

Constrained Nonlinear Programming for Volatility Estimation with GARCH Models

The paper proposes a constrained Nonlinear Programming methodology for volatility estimation with GARCH models. These models are usually developed and solved as unconstrained optimization problems, whereas they in fact belong to the class of constrained, nonlinear, nonconvex problems. Computational results on FTSE 100 and S&P 500 indices with up to 1500 data points are given and contrasted to … Read more
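For reference, the GARCH(1,1) case shows why the estimation problem is naturally constrained (a standard formulation in my notation; the paper may treat more general GARCH(p,q) specifications):
\[
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\qquad
\min_{\omega,\alpha,\beta}\ \sum_t \Bigl( \log\sigma_t^2 + \varepsilon_t^2/\sigma_t^2 \Bigr)
\quad\text{s.t.}\quad \omega > 0,\ \alpha \ge 0,\ \beta \ge 0,\ \alpha + \beta < 1,
\]
where $\varepsilon_t$ are the demeaned returns and the objective is the Gaussian negative log-likelihood up to constants. The inequality constraints enforce positivity of the conditional variance and covariance stationarity; dropping them is what yields the usual unconstrained formulation.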

On the convergence of the central path in semidefinite optimization

The central path in linear optimization always converges to the analytic center of the optimal set. This result was extended to semidefinite programming by Goldfarb and Scheinberg (SIAM J. Optim. 8: 871-886, 1998). In this paper we show that this latter result is not correct in the absence of strict complementarity. We provide a counterexample, … Read more
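Recall (standard notation, not specific to the cited paper) that for a primal-dual pair of semidefinite programs with data $C$, $b$ and linear map $\mathcal{A}$, the central path is the set of solutions $(X(\mu), y(\mu), S(\mu))$, $\mu > 0$, of
\[
\mathcal{A}(X) = b, \qquad \mathcal{A}^{*}(y) + S = C, \qquad XS = \mu I, \qquad X \succ 0,\ S \succ 0,
\]
strict complementarity meaning that some optimal pair satisfies $X^{*} + S^{*} \succ 0$, and the analytic center of the optimal set being, loosely, the optimal solution that maximizes the determinant over that set.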

Componentwise fast convergence in the solution of full-rank systems of nonlinear equations

The asymptotic convergence of parameterized variants of Newton’s method for the solution of nonlinear systems of equations is considered. The original system is perturbed by a term involving the variables and a scalar parameter which is driven to zero as the iteration proceeds. The exact local solutions to the perturbed systems then form a differentiable … Read more
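Schematically (my notation, not necessarily the paper's), one has a full-rank root-finding problem $F(x) = 0$ whose equations are perturbed to
\[
F(x) + \mu\,p(x) = 0
\]
for a scalar parameter $\mu > 0$ driven to zero; by the implicit function theorem, the exact local solutions $x(\mu)$ then trace a differentiable path converging to a solution $x^{*}$ of the original system as $\mu \downarrow 0$.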

A Computational Study of a Gradient-Based Log-Barrier Algorithm for a Class of Large-Scale SDPs

The authors of this paper recently introduced a transformation \cite{BuMoZh99-1} that converts a class of semidefinite programs (SDPs) into nonlinear optimization problems free of matrix-valued constraints and variables. This transformation enables the application of nonlinear optimization techniques to the solution of certain SDPs that are too large for conventional interior-point methods to handle efficiently. Based … Read more
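To convey the flavour (a generic sketch, not the precise transformation of \cite{BuMoZh99-1}), a log-barrier subproblem for a standard-form SDP is
\[
\min_{X}\ \langle C, X \rangle - \mu \log\det X
\quad\text{s.t.}\quad \mathcal{A}(X) = b, \qquad X \succ 0,
\]
and the matrix constraint can be eliminated by a change of variables such as $X = RR^{T}$, leaving a problem in ordinary vector variables, free of the conic constraint, to which gradient-based nonlinear optimization methods can be applied.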

Properties of the Log-Barrier Function on Degenerate Nonlinear Programs

We examine the sequence of local minimizers of the log-barrier function for a nonlinear program near a solution at which the second-order sufficient conditions and the Mangasarian-Fromovitz constraint qualification are satisfied, but the active constraint gradients are not necessarily linearly independent. When a strict complementarity condition is satisfied, we show uniqueness of the local minimizer of … Read more
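For a program $\min f(x)$ subject to $c_i(x) \ge 0$, $i = 1, \dots, m$, the log-barrier function in question is
\[
P(x; \mu) = f(x) - \mu \sum_{i=1}^{m} \log c_i(x), \qquad \mu > 0,
\]
and the analysis concerns the behaviour of its local minimizers $x(\mu)$ as $\mu \downarrow 0$ near a solution where the active constraint gradients may be linearly dependent, so LICQ can fail while the Mangasarian-Fromovitz condition still holds.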

Automatic Differentiation Tools in Optimization Software

We discuss the role of automatic differentiation tools in optimization software. We emphasize issues that are important to large-scale optimization and that have proved useful in the installation of nonlinear solvers in the NEOS Server. Our discussion centers on the computation of the gradient and Hessian matrix for partially separable functions and shows that the … Read more
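A function is partially separable (standard definition) when it is a sum of element functions, each depending on only a few of the variables,
\[
f(x) = \sum_{i=1}^{m} f_i(U_i x), \qquad
\nabla f(x) = \sum_{i=1}^{m} U_i^{T} \nabla f_i(U_i x), \qquad
\nabla^{2} f(x) = \sum_{i=1}^{m} U_i^{T} \nabla^{2} f_i(U_i x)\, U_i,
\]
where each $U_i$ selects a small subset of the coordinates; automatic differentiation applied element by element therefore assembles the gradient and the (typically sparse) Hessian from small dense element derivatives.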

New Results on Quadratic Minimization

In this paper we present several new results on minimizing an indefinite quadratic function under quadratic/linear constraints. The emphasis is placed on the case where the constraints are two quadratic inequalities. This formulation is known as {\em the extended trust region subproblem}\/ and the computational complexity of this problem is still unknown. We consider several … Read more
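In the two-quadratic-inequality case the problem takes the generic form
\[
\min_{x \in \mathbb{R}^{n}}\ x^{T} Q x + 2 q^{T} x
\quad\text{s.t.}\quad
x^{T} A_1 x + 2 a_1^{T} x \le b_1, \qquad
x^{T} A_2 x + 2 a_2^{T} x \le b_2,
\]
with $Q$ possibly indefinite; the classical trust-region subproblem corresponds to a single ball constraint $\|x\|^{2} \le \Delta^{2}$, and the extended version adds one further constraint.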

Examples of ill-behaved central paths in convex optimization

This paper presents some examples of ill-behaved central paths in convex optimization. Some contain infinitely many fixed-length central segments; others manifest oscillations with infinite variation. These central paths can be encountered even for infinitely differentiable data. Citation: Rapport de recherche 4179, INRIA, France, 2001.

On the Convergence of Newton Iterations to Non-Stationary Points

We study conditions under which line search Newton methods for nonlinear systems of equations and optimization fail due to the presence of singular non-stationary points. These points are not solutions of the problem and are characterized by singularity of the Jacobian or Hessian matrices. It is shown that, for systems of nonlinear equations, … Read more
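For nonlinear equations $F(x) = 0$, line search methods typically measure progress with the merit function
\[
\phi(x) = \tfrac{1}{2}\,\|F(x)\|^{2}, \qquad \nabla \phi(x) = J(x)^{T} F(x),
\]
so a point $\bar{x}$ is non-stationary when $\nabla\phi(\bar{x}) \ne 0$; the failure mode studied here is convergence of the iterates to such a point with $J(\bar{x})$ singular, roughly because the Newton directions deteriorate near the singularity and the line search then accepts ever smaller steps.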