A survey of the S-lemma

In this survey we review the many faces of the S-lemma, a result about the correctness of the S-procedure. The basic idea of this widely used method came from control theory, but it has important consequences in quadratic and semidefinite optimization, convex geometry, and linear algebra as well. These were active research areas, but as … Read more
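
For reference, the classical (Yakubovich) form of the S-lemma can be stated as follows; this is the standard textbook statement, quoted here for orientation rather than taken from the survey itself. If $f,g:\mathbb{R}^n\to\mathbb{R}$ are quadratic functions and $g(\bar x)>0$ for some $\bar x$, then

$$g(x)\ge 0 \;\Rightarrow\; f(x)\ge 0 \qquad\Longleftrightarrow\qquad \exists\,\lambda\ge 0:\;\; f(x)-\lambda g(x)\ge 0 \ \text{ for all } x\in\mathbb{R}^n.$$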

Newton-KKT Interior-Point Methods for Indefinite Quadratic Programming

Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the 'primal' variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) … Read more
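
For orientation only, here is the generic Newton-KKT system for an inequality-constrained QP $\min\,\tfrac12 x^THx+c^Tx$ subject to $Ax\ge b$; this is the textbook primal-dual setup, not the specific algorithms of the paper. With slacks $s=Ax-b$, $\Lambda=\mathrm{diag}(\lambda)$, $S=\mathrm{diag}(s)$ and a centering parameter $\mu\ge 0$, a Newton step on the perturbed KKT conditions solves

$$\begin{pmatrix} H & -A^{T}\\ \Lambda A & S \end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta\lambda\end{pmatrix} = -\begin{pmatrix} Hx + c - A^{T}\lambda\\ \Lambda s - \mu e\end{pmatrix}.$$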

A shifted Steihaug-Toint method for computing a trust-region step

Trust-region methods are very convenient in connection with the Newton method for unconstrained optimization. The Moré-Sorensen direct method and the Steihaug-Toint iterative method are most commonly used for solving trust-region subproblems. We propose a method which combines both of these approaches. Using the small-size Lanczos matrix, we apply the Moré-Sorensen method to a small-size trust-region … Read more
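
For orientation, a minimal Python sketch of the plain Steihaug-Toint truncated-CG step that the abstract refers to; the shifted variant proposed in the paper and the Moré-Sorensen component are not reproduced here, and all names are illustrative.

    import numpy as np

    def steihaug_toint(H, g, delta, tol=1e-8, max_iter=None):
        """Approximately minimize 0.5*p'Hp + g'p subject to ||p|| <= delta
        by truncated conjugate gradients (plain Steihaug-Toint)."""
        n = g.size
        max_iter = 10 * n if max_iter is None else max_iter
        p = np.zeros(n)
        r = g.copy()          # model gradient at p
        d = -r                # first direction: steepest descent
        if np.linalg.norm(r) < tol:
            return p
        for _ in range(max_iter):
            Hd = H @ d
            dHd = d @ Hd
            if dHd <= 0.0:
                # negative curvature: follow d to the trust-region boundary
                return _to_boundary(p, d, delta)
            alpha = (r @ r) / dHd
            p_next = p + alpha * d
            if np.linalg.norm(p_next) >= delta:
                # the CG step leaves the trust region: stop on the boundary
                return _to_boundary(p, d, delta)
            r_next = r + alpha * Hd
            if np.linalg.norm(r_next) < tol:
                return p_next
            beta = (r_next @ r_next) / (r @ r)
            d = -r_next + beta * d
            p, r = p_next, r_next
        return p

    def _to_boundary(p, d, delta):
        # positive root of ||p + tau*d||^2 = delta^2
        a, b, c = d @ d, 2.0 * (p @ d), p @ p - delta ** 2
        tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return p + tau * d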

Nonlinear-Programming Reformulation of the Order-Value Optimization Problem

Order-value optimization (OVO) is a generalization of the minimax problem motivated by decision-making problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium constraints is deduced. The relation between OVO and this nonlinear-programming reformulation is studied. Particular attention is given to … Read more
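
To make the objective concrete, here is a small Python sketch of the order-value function under its usual definition, namely the p-th smallest of the values f_1(x), ..., f_m(x), so that p = m recovers the minimax objective. This illustrates the problem class only, not the reformulation studied in the paper; the names are illustrative.

    import numpy as np

    def order_value(functions, x, p):
        """Order-value objective: the p-th smallest of f_1(x), ..., f_m(x)
        (p is 1-based); p = len(functions) gives the minimax objective."""
        values = np.array([f(x) for f in functions])
        return np.sort(values)[p - 1]

    # tiny usage example with three affine functions of a scalar x
    fs = [lambda x: x - 1.0, lambda x: 2.0 * x, lambda x: -x + 3.0]
    print(order_value(fs, 0.5, p=2))   # prints 1.0, the median of the three values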

Augmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification

Two Augmented Lagrangian algorithms for solving KKT systems are introduced. The algorithms differ in the way in which penalty parameters are updated. Possibly infeasible accumulation points are characterized. It is proved that feasible limit points that satisfy the Constant Positive Linear Dependence constraint qualification are KKT solutions. Boundedness of the penalty parameters is proved under … Read more
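
As background, the classical PHR (Powell-Hestenes-Rockafellar) augmented Lagrangian for constraints $h(x)=0$ and $g(x)\le 0$ is written here in its standard form; the two algorithms of the paper differ in how the penalty parameter $\rho$ is increased, a detail not reproduced here:

$$L_\rho(x,\lambda,\mu) = f(x) + \frac{\rho}{2}\sum_i\Big(h_i(x)+\frac{\lambda_i}{\rho}\Big)^{2} + \frac{\rho}{2}\sum_j\max\Big\{0,\;g_j(x)+\frac{\mu_j}{\rho}\Big\}^{2},$$

with the usual safeguarded updates $\lambda_i \leftarrow \lambda_i + \rho\,h_i(x)$ and $\mu_j \leftarrow \max\{0,\;\mu_j + \rho\,g_j(x)\}$ after each approximate minimization of $L_\rho$.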

Sensitivity Analysis in Convex Quadratic Optimization: Invariant Support Set Interval

In sensitivity analysis one wants to know how the problem and the optimal solutions change under variations of the input data. We consider the case where the variation occurs in the right-hand side of the constraints and/or in the linear term of the objective function. We are interested in finding the range of the … Read more

Performance of CONDOR, a Parallel, Constrained extension of Powell’s UOBYQA algorithm. Experimental results and comparison with the DFO algorithm.

This paper presents an algorithmic extension of Powell’s UOBYQA algorithm (“Unconstrained Optimization BY Quadratic Approximation”). We start by summarizing the original algorithm of Powell and by presenting it in a more comprehensible form. Thereafter, we report comparative numerical results between UOBYQA, DFO and a parallel, constrained extension of UOBYQA that will be called in the … Read more

Sensitivity of trust-region algorithms to their parameters

In this paper, we examine the sensitivity of trust-region algorithms to the parameters related to the step acceptance and to the update of the trust region. We show, in the context of unconstrained programming, that the numerical efficiency of these algorithms can easily be improved by choosing appropriate parameters. Recommended ranges of values for these parameters are … Read more
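
As a reminder of which parameters are meant, here is a minimal Python sketch of the standard acceptance test and radius update driven by the usual constants $0<\eta_1\le\eta_2<1$, a shrinking factor $\gamma_1\in(0,1)$ and an expansion factor $\gamma_2>1$; the numerical values below are common defaults, not the recommendations of the paper.

    def update_trust_region(rho, delta, step_norm,
                            eta1=0.01, eta2=0.9, gamma1=0.5, gamma2=2.0):
        """Standard trust-region step acceptance and radius update;
        rho is the ratio of actual to predicted model reduction."""
        accept = rho >= eta1                         # accept the step?
        if rho >= eta2:
            delta = max(delta, gamma2 * step_norm)   # very successful: enlarge
        elif rho < eta1:
            delta = gamma1 * step_norm               # unsuccessful: shrink
        return accept, delta                         # merely successful: keep radius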

Recursive Trust-Region Methods for Multilevel Nonlinear Optimization (Part I): Global Convergence and Complexity

A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, such as those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to … Read more

A New Self-Concordant Barrier for the Hypercube

In this paper we introduce a new barrier function $\sum\limits_{i=1}^n(2x_i-1)[\ln{x_i}-\ln(1-x_i)]$ to solve the following optimization problem: $\min\,\, f(x)$ subject to: $Ax=b;\;\;0\leq x\leq e$. We show that this function is a $(3/2)n$-self-concordant barrier on the hypercube $[0,1]^n$. We prove that the central path is well defined and that under an additional assumption on the objective function, … Read more
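
As a quick numerical illustration, the barrier and its gradient can be evaluated on the open hypercube as follows (Python sketch; the gradient formula is derived by hand from the expression above and should be double-checked):

    import numpy as np

    def barrier(x):
        """Hypercube barrier  sum_i (2*x_i - 1) * (ln x_i - ln(1 - x_i)),
        defined for 0 < x_i < 1; it vanishes at the center x = e/2."""
        return np.sum((2.0 * x - 1.0) * (np.log(x) - np.log(1.0 - x)))

    def barrier_grad(x):
        # hand-derived: 2*(ln x_i - ln(1-x_i)) + (2*x_i - 1)*(1/x_i + 1/(1-x_i))
        return (2.0 * (np.log(x) - np.log(1.0 - x))
                + (2.0 * x - 1.0) * (1.0 / x + 1.0 / (1.0 - x)))

    x = np.array([0.25, 0.5, 0.75])
    print(barrier(x), barrier_grad(x))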