Sample Average Approximation for Stochastic Dominance Constrained Programs

In this paper we study optimization problems with second-order stochastic dominance constraints. This class of problems has been receiving increasing attention in the literature as it allows for the modeling of optimization problems where a risk-averse decision maker wants to ensure that the solution produced by the model dominates certain benchmarks. Here we deal with …
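
For reference, a standard way of writing a second-order stochastic dominance constraint (a common formulation, not necessarily the exact model treated in the paper) requires the expected shortfall of the decision-dependent outcome G(x,\xi) to be no larger than that of the benchmark Y at every target level, and the sample average approximation (SAA) replaces the expectations by averages over samples \xi^1,\dots,\xi^N and Y^1,\dots,Y^N:

  \mathbb{E}\big[(\eta - G(x,\xi))_+\big] \;\le\; \mathbb{E}\big[(\eta - Y)_+\big] \quad \text{for all } \eta \in \mathbb{R},
  \qquad
  \frac{1}{N}\sum_{j=1}^{N}\big(\eta - G(x,\xi^j)\big)_+ \;\le\; \frac{1}{N}\sum_{j=1}^{N}\big(\eta - Y^j\big)_+ .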

A Combined Class of Self-Scaling and Modified Quasi-Newton Methods

Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain ‘better’ curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the …
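
To make the flavor of such updates concrete, here is a minimal sketch of a self-scaling BFGS update of a Hessian approximation with a simple modification of the gradient difference to preserve positive curvature; the scaling factor and safeguard shown are generic textbook choices (Oren-Luenberger scaling), not the specific combined class proposed in the paper, and the function name and tolerances are illustrative.

  import numpy as np

  def self_scaling_bfgs_update(B, s, y, curv_tol=1e-8):
      # One self-scaling BFGS update of the Hessian approximation B, with a
      # simple modification of y to keep the curvature condition s'y > 0.
      sy = float(s @ y)
      ss = float(s @ s)
      if sy < curv_tol * ss:
          # Modified gradient difference: shift y along s (illustrative safeguard).
          y = y + (curv_tol + max(0.0, -sy / ss)) * s
          sy = float(s @ y)
      Bs = B @ s
      sBs = float(s @ Bs)
      tau = sy / sBs  # Oren-Luenberger self-scaling factor
      return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / sy

  # usage (illustrative): B = self_scaling_bfgs_update(np.eye(n), s=x_new - x, y=g_new - g)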

On the global convergence of interior-point nonlinear programming algorithms

Carathéodory’s lemma states that if we have a linear combination of vectors in R^n, we can rewrite this combination using a linearly independent subset. This result has been successfully applied in nonlinear optimization in many contexts. In this work we present a new version of this celebrated theorem, in which we obtain new bounds for …
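
For reference, one standard (conic) form of the lemma reads: if v = \sum_{i=1}^{m} \lambda_i a_i with a_i \in \mathbb{R}^n and \lambda_i \ge 0, then there exists an index set I \subseteq \{1,\dots,m\} such that the vectors \{a_i\}_{i \in I} are linearly independent and v = \sum_{i \in I} \mu_i a_i with \mu_i \ge 0.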

On sequential optimality conditions for smooth constrained optimization

Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between different conditions and counter-examples will be shown. Algorithmic consequences will be discussed.
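
As a point of reference (a standard formulation of one of these conditions, with illustrative notation), the Approximate-KKT (AKKT) condition for minimizing f(x) subject to h(x) = 0 and g(x) \le 0 holds at a feasible point x^* if there exist sequences x^k \to x^*, \mu^k \in \mathbb{R}^p and \lambda^k \in \mathbb{R}^m_+ such that

  \nabla f(x^k) + \sum_{j=1}^{p} \mu_j^k \nabla h_j(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla g_i(x^k) \;\to\; 0,
  \qquad
  \min\{-g_i(x^k),\, \lambda_i^k\} \;\to\; 0 \quad \text{for all } i.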

Real-Time Optimization as a Generalized Equation

We establish results for the problem of tracking a time-dependent manifold arising in online nonlinear programming by casting it as a generalized equation. We demonstrate that if points along a solution manifold are consistently strongly regular, it is possible to track the manifold approximately by solving a linear complementarity problem (LCP) at each time step. …
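
In generic terms (standard notation, not necessarily the exact formulation of the paper), the parametric problem can be written as the generalized equation

  0 \;\in\; F(z, t) + N_C(z),

where N_C denotes the normal cone to a closed convex set C. Strong regularity in Robinson's sense yields a locally Lipschitz, single-valued solution map, and a tracking step at time t_{k+1} solves the partial linearization

  0 \;\in\; F(z_k, t_{k+1}) + \nabla_z F(z_k, t_{k+1})(z - z_k) + N_C(z),

which, for the polyhedral sets arising from KKT systems, is a linear complementarity problem.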

Starting-Point Strategies for an Infeasible Potential Reduction Method

We present two strategies for choosing a “hot” starting point in the context of an infeasible Potential Reduction (PR) method for convex Quadratic Programming. The basic idea of both strategies is to select a preliminary point and to suitably scale it in order to obtain a starting point such that its nonnegative entries are sufficiently bounded …

An inexact parallel splitting augmented Lagrangian method for large system of linear equations

Parallel iterative methods are powerful tools for solving large systems of linear equations (LQs). Most existing parallel computing results concentrate on sparse systems or other particular structures, and are mostly based on implementing classical relaxation methods, such as the Gauss-Seidel, SOR, and AOR methods, efficiently in parallel on multiprocessor systems. In this …
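
For context on the classical relaxation baseline mentioned above (and only that; this is not the inexact parallel splitting augmented Lagrangian method of the paper), here is a minimal block-Jacobi sketch in which each block update depends only on the previous iterate, so the blocks can be assigned to different processors. Convergence requires suitable assumptions on A (e.g., block diagonal dominance); all names and tolerances are illustrative.

  import numpy as np

  def block_jacobi(A, b, blocks, x0=None, tol=1e-8, max_iter=500):
      # Block-Jacobi iteration for A x = b; within a sweep every block solve
      # uses only the previous iterate x, hence the blocks are parallelizable.
      x = np.zeros(b.size) if x0 is None else np.asarray(x0, dtype=float).copy()
      for _ in range(max_iter):
          x_new = x.copy()
          for idx in blocks:
              # Residual of block idx computed from the previous iterate.
              r = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
              x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
          if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
              return x_new
          x = x_new
      return x

  # usage (illustrative): x = block_jacobi(A, b, blocks=[np.arange(0, 50), np.arange(50, 100)])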

Row by row methods for semidefinite programming

We present a row-by-row (RBR) method for solving the semidefinite programming (SDP) problem, based on solving a sequence of problems obtained by restricting the n-dimensional positive semidefinite constraint on the matrix X. By fixing any (n-1)-dimensional principal submatrix of X and using its (generalized) Schur complement, the positive semidefinite constraint is reduced to a simple second-order …
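
The reduction referred to above rests on a standard Schur-complement fact (shown here with an illustrative partitioning): fix an (n-1)-dimensional principal submatrix B \succ 0 of X and let the remaining row/column be (y, \xi). Then

  X \;=\; \begin{pmatrix} B & y \\ y^{\top} & \xi \end{pmatrix} \succeq 0
  \quad\Longleftrightarrow\quad
  \xi - y^{\top} B^{-1} y \;\ge\; 0,

and, writing B = L L^{\top}, the latter reads \|L^{-1} y\|^2 \le \xi, a second-order cone representable constraint in the free variables (y, \xi).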

Switching stepsize strategies for PDIP

In this chapter we present a primal-dual interior point algorithm for solving constrained nonlinear programming problems. Switching rules are implemented that aim at exploiting the merits and avoiding the drawbacks of three different merit functions. The penalty parameter is determined using an adaptive penalty strategy that ensures a descent property for the merit function. The …
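
As a generic illustration of the kind of descent requirement such a penalty update enforces (standard textbook material, not the specific switching strategy of the chapter), consider the \ell_1 merit function

  \phi_{\rho}(x) \;=\; f(x) + \rho\,\|c(x)\|_1 .

If the search direction d satisfies the linearized constraints c(x) + \nabla c(x)^{\top} d = 0, then the directional derivative is

  D\phi_{\rho}(x; d) \;=\; \nabla f(x)^{\top} d - \rho\,\|c(x)\|_1,

so any \rho > \nabla f(x)^{\top} d / \|c(x)\|_1 (when c(x) \neq 0) makes d a descent direction for the merit function.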

TRESNEI, a Matlab trust-region solver for systems of nonlinear equalities and inequalities

The Matlab implementation of a trust-region Gauss-Newton method for bound-constrained nonlinear least-squares problems is presented. The solver, called TRESNEI, is adequate for zero- and small-residual problems and handles the solution of nonlinear systems of equalities and inequalities. The structure and the usage of the solver are described, and an extensive numerical comparison with functions from …
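
To illustrate the least-squares reformulation behind such a solver (a rough sketch only; this is not TRESNEI's interface, bound constraints are omitted, and a fixed Levenberg-Marquardt damping mu stands in for the trust-region machinery), inequalities C(x) <= 0 can be folded into the residual as max(C(x), 0) and a damped Gauss-Newton step taken on the resulting nonlinear least-squares problem.

  import numpy as np

  def gauss_newton_eq_ineq(F, C, JF, JC, x0, mu=1e-3, tol=1e-10, max_iter=50):
      # Damped Gauss-Newton on r(x) = [F(x); max(C(x), 0)] for the system
      # F(x) = 0, C(x) <= 0. All callables return numpy arrays.
      x = np.asarray(x0, dtype=float)
      for _ in range(max_iter):
          c = C(x)
          r = np.concatenate([F(x), np.maximum(c, 0.0)])
          # Rows of the inequality Jacobian are switched off where C_i(x) <= 0.
          J = np.vstack([JF(x), JC(x) * (c > 0.0)[:, None]])
          g = J.T @ r
          if np.linalg.norm(g) <= tol:
              break
          # Levenberg-Marquardt-damped normal equations in place of a trust region.
          x = x + np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
      return x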