Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models

Evaluation complexity for convexly constrained optimization is considered, and it is first shown that the complexity bound of $O(\epsilon^{-3/2})$ proved by Cartis, Gould and Toint (IMA J. Numer. Anal. 32(4), 2012, pp. 1662-1695) for computing an $\epsilon$-approximate first-order critical point can be obtained under significantly weaker assumptions. Moreover, the result is generalized to the case where high-order derivatives are …
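
For context, a standard way to make the notion of an $\epsilon$-approximate first-order critical point precise in the convexly constrained setting (a sketch of the usual criticality measure, not necessarily the exact definition adopted in the paper) is
\[
\chi_f(x) \;:=\; \Big|\, \min_{x+d \in \mathcal{F},\ \|d\| \le 1} \nabla f(x)^T d \,\Big| \;\le\; \epsilon,
\]
where $\mathcal{F}$ denotes the convex feasible set; $\chi_f(x)$ vanishes exactly at first-order critical points.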

Polynomial SDP Cuts for Optimal Power Flow

The use of convex relaxations has lately gained considerable interest in Power Systems. These relaxations play a major role in providing quality guarantees for non-convex optimization problems. For the Optimal Power Flow (OPF) problem, the semidefinite programming (SDP) relaxation is known to produce tight lower bounds. Unfortunately, SDP solvers still suffer from a lack …
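
As a brief illustration of the relaxation in question (a generic sketch with placeholder symbols, not the specific cuts proposed in the paper): with $W$ standing in for the rank-one matrix $vv^{H}$ of complex bus voltages, generation cost, injection limits and voltage-magnitude limits are all linear in $W$, and the SDP relaxation keeps
\[
\min_{W \succeq 0}\ \sum_{g} c_g \operatorname{tr}(Y_g W)
\quad\text{s.t.}\quad
\underline{p}_i \le \operatorname{tr}(Y_i W) \le \overline{p}_i,\qquad
\underline{v}_i^{\,2} \le W_{ii} \le \overline{v}_i^{\,2},
\]
while dropping the nonconvex condition $\operatorname{rank}(W)=1$; here the Hermitian matrices $Y_g, Y_i$ are built from the network admittance data and serve only as illustrative placeholders.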

On the convergence rate of grid search for polynomial optimization over the simplex

We consider the approximate minimization of a given polynomial on the standard simplex, obtained by taking the minimum value over all rational grid points with given denominator $r \in \mathbb{N}$. It was shown in [De Klerk, E., Laurent, M., Sun, Z.: An error analysis for polynomial optimization over the simplex based on the multivariate hypergeometric …
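
To make the grid concrete, here is a minimal sketch of the grid-search bound (the function names and the test polynomial are hypothetical, not code from the paper). The grid consists of all points $x = k/r$ with $k \in \mathbb{N}^n$ and $\sum_i k_i = r$, and the approximation is the minimum of $f$ over these points.

    def simplex_grid(n, r):
        """Yield all points k/r with k in N^n and sum(k) = r (the rational grid of the standard simplex)."""
        def compositions(total, parts):
            # enumerate all ways to write `total` as an ordered sum of `parts` nonnegative integers
            if parts == 1:
                yield (total,)
                return
            for first in range(total + 1):
                for rest in compositions(total - first, parts - 1):
                    yield (first,) + rest
        for k in compositions(r, n):
            yield tuple(ki / r for ki in k)

    def grid_search_min(f, n, r):
        """Minimum of f over the rational grid points of the standard simplex with denominator r."""
        return min(f(x) for x in simplex_grid(n, r))

    # Example: a hypothetical test polynomial f(x) = x1*x2 + x2*x3 on the 3-simplex with r = 10.
    f = lambda x: x[0] * x[1] + x[1] * x[2]
    print(grid_search_min(f, n=3, r=10))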

Algorithms for the power-$p$ Steiner tree problem in the Euclidean plane

We study the problem of constructing minimum power-$p$ Euclidean $k$-Steiner trees in the plane. The problem is to find a tree of minimum cost spanning a set of given terminals where, as opposed to the minimum spanning tree problem, at most $k$ additional nodes (Steiner points) may be introduced anywhere in the plane. The cost …
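
For orientation, the cost in power-$p$ tree problems is usually taken to be the sum of the $p$-th powers of the Euclidean edge lengths (stated here as the common convention, since the paper's exact definition is truncated above):
\[
c(T) \;=\; \sum_{uv \in E(T)} \|u - v\|_2^{\,p},
\]
so that $p = 1$ recovers the classical Euclidean Steiner tree objective.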

Sequential equality-constrained optimization for nonlinear programming

A new method is proposed for solving optimization problems with equality constraints and bounds on the variables. In the spirit of Sequential Quadratic Programming and Sequential Linearly-Constrained Programming, the new method approximately solves, at each iteration, an equality-constrained optimization problem. The bound constraints are handled in outer iterations by means of an Augmented Lagrangian scheme. …
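
One standard (Powell-Hestenes-Rockafellar type) way to realize this outer structure, shown only as a hedged sketch and not as the paper's precise formulation: with multiplier estimates $\mu, \nu \ge 0$ and penalty parameter $\rho > 0$, each outer iteration approximately solves
\[
\min_x\ f(x) + \frac{\rho}{2}\sum_i \Big[\max\Big(0,\ \ell_i - x_i + \tfrac{\mu_i}{\rho}\Big)^2 + \max\Big(0,\ x_i - u_i + \tfrac{\nu_i}{\rho}\Big)^2\Big]
\quad\text{s.t.}\quad h(x) = 0,
\]
followed by the usual multiplier and penalty-parameter updates.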

A basis-free null space method for solving generalized saddle point problems

Using an augmented Lagrangian matrix approach, we analytically solve in this paper a broad class of linear systems that includes symmetric and nonsymmetric problems in saddle point form. To this end, some mild assumptions are made and a preconditioner is specially designed to improve the sensitivity of the systems before their solutions are computed. …
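
For reference, the generalized saddle point systems referred to here have the familiar block $2 \times 2$ form (standard notation, not necessarily the paper's):
\[
\begin{pmatrix} A & B_1^{T} \\ B_2 & -C \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]
with $B_1 = B_2$ and $C = 0$ recovering the classical symmetric case; a null space method eliminates the first block of unknowns by working in the kernel of $B_2$, which the basis-free variant does without forming an explicit null space basis.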

The solution of Euclidean norm trust region SQP subproblems via second order cone programs: an overview and elementary introduction

It is well known that convex SQP subproblems with a Euclidean norm trust region constraint can be reduced to second order cone programs for which the theory of Euclidean Jordan algebras leads to efficient interior-point algorithms. Here, a brief and self-contained outline of the principles of such an implementation is given. All identities relevant for the …
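
As a minimal illustration of the reduction (a generic reformulation under the assumption $H \succeq 0$, not the paper's full treatment): the trust region subproblem $\min_d\ g^{T} d + \tfrac12 d^{T} H d$ subject to $\|d\|_2 \le \Delta$, with $H = L L^{T}$, is equivalent to the second order cone program
\[
\min_{d,\,t}\ g^{T} d + t
\quad\text{s.t.}\quad
\|d\|_2 \le \Delta,
\qquad
\left\| \begin{pmatrix} L^{T} d \\ t - \tfrac12 \end{pmatrix} \right\|_2 \le t + \tfrac12,
\]
since the second cone constraint is equivalent to $\tfrac12 d^{T} H d \le t$; any affine constraints of the SQP subproblem (linearized equalities and inequalities) carry over unchanged.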

A polynomially solvable case of the pooling problem

Answering a question of Haugland, we show that the pooling problem with one pool and a bounded number of inputs can be solved in polynomial time by solving a polynomial number of linear programs of polynomial size. We also give an overview of known complexity results and remaining open problems to further characterize the border …

A derivative-free approach to constrained multiobjective nonsmooth optimization

In this work, we consider multiobjective optimization problems with both bound constraints on the variables and general nonlinear constraints, where objective and constraint function values can only be obtained by querying a black box. We define a linesearch-based solution method, and we show that it converges to a set of Pareto stationary points. To this …
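
Since Pareto (non)dominance is the basic comparison underlying such methods, a small self-contained illustration may help (function names and data are hypothetical; this is not the paper's algorithm):

    def dominates(fa, fb):
        """True if objective vector fa Pareto-dominates fb: componentwise <= and strictly < in some component."""
        return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

    def nondominated_filter(points):
        """Keep only points whose objective vectors are not dominated by any other point in the list."""
        return [p for i, p in enumerate(points)
                if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

    # Example with two objectives: (2.5, 2.5) is dominated by (2.0, 2.0) and is filtered out.
    print(nondominated_filter([(1.0, 3.0), (2.0, 2.0), (2.5, 2.5), (3.0, 1.0)]))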

Bounded perturbation resilience of projected scaled gradient methods

We investigate projected scaled gradient (PSG) methods for convex minimization problems. These methods perform a descent step along a diagonally scaled gradient direction followed by a feasibility regaining step via orthogonal projection onto the constraint set. This constitutes a generalized algorithmic structure that encompasses as special cases the gradient projection method, the projected Newton method, …
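
A minimal sketch of the iteration just described (a fixed diagonal scaling and a box constraint are chosen purely for illustration; names and data are hypothetical, and this is not the authors' implementation):

    import numpy as np

    def project_box(x, lower, upper):
        """Orthogonal projection onto the box {x : lower <= x <= upper}."""
        return np.clip(x, lower, upper)

    def psg(grad_f, x0, diag_scale, lower, upper, step=0.1, iters=200):
        """Projected scaled gradient: scaled descent step, then feasibility-regaining projection."""
        x = x0.copy()
        for _ in range(iters):
            x = project_box(x - step * diag_scale * grad_f(x), lower, upper)
        return x

    # Example: minimize ||x - c||^2 over the box [0, 1]^3; the iterates approach clip(c, 0, 1) = [0.2, 1.0, 0.0].
    c = np.array([0.2, 1.5, -0.3])
    grad_f = lambda x: 2.0 * (x - c)
    print(psg(grad_f, x0=np.full(3, 0.5), diag_scale=np.ones(3),
              lower=np.zeros(3), upper=np.ones(3)))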