A very simple analysis of higher order liftings for binary problems

Based on the observation that the max-cut polytope is the projection of a higher-dimensional regular simplex, and on the fact that this simplex coincides with the $n$-th semidefinite lifting, a simple proof is given that the $n$-th lifting of the max-cut polytope is exact. Two approaches to reducing the dimension of higher order liftings conclude this short … Read more
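For orientation, a brief restatement of the central object in the standard $\pm 1$ formulation (the simplex construction itself is the paper's contribution and is not reproduced here):

\[
\mathrm{MC}_n \;=\; \mathrm{conv}\{\, xx^T \mid x \in \{-1,1\}^n \,\} \subset \mathcal{S}^n,
\]

where $\mathcal{S}^n$ denotes the real symmetric $n\times n$ matrices; every vertex $xx^T$ has all-ones diagonal, and since $x$ and $-x$ generate the same matrix, $\mathrm{MC}_n$ has at most $2^{n-1}$ vertices.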

Variance Reduction of Stochastic Gradients Without Full Gradient Evaluation

A standard concept for reducing the variance of stochastic gradient approximations is based on occasional full gradient evaluations. In this paper an approach is considered that, while approximating a local minimizer of a sum of functions, also generates approximations of the gradient and the function values without relying on full … Read more
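The paper's own construction is not reproduced here; as a reference point, the following is a minimal sketch of SAGA (Defazio et al., 2014), a well-known variance-reduced method that likewise avoids periodic full gradient evaluations by maintaining a table of per-summand gradients:

    import numpy as np

    def saga(grad_i, x0, n, steps=1000, lr=0.1, seed=0):
        """Minimize (1/n) * sum_i f_i(x); grad_i(i, x) is the gradient of f_i."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        table = np.zeros((n, x.size))      # last stored gradient of each f_i
        avg = table.mean(axis=0)           # running mean of the stored gradients
        for _ in range(steps):
            i = rng.integers(n)
            g_new = grad_i(i, x)
            # variance-reduced step: new gradient minus stored one plus table mean
            x -= lr * (g_new - table[i] + avg)
            avg += (g_new - table[i]) / n  # keep the running mean consistent
            table[i] = g_new
        return x

    # toy usage: minimize (1/n) * sum_i (x - a_i)^2; the minimizer is mean(a) = 2.5
    a = np.array([1.0, 2.0, 3.0, 4.0])
    print(saga(lambda i, x: 2 * (x - a[i]), np.zeros(1), n=4, lr=0.05))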

Best case exponential running time of a branch-and-bound algorithm using an optimal semidefinite relaxation

Chvátal (1980) has given a simple example of a knapsack problem for which a branch-and-bound algorithm using domination and linear relaxations to eliminate subproblems uses an exponential number of steps even in the best case. In this short note it is shown that Chvátal's result remains true when the LP relaxation is replaced with a … Read more
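To fix ideas, here is a minimal branch-and-bound for the 0/1 knapsack problem that prunes with the LP (fractional) relaxation bound; a plain textbook sketch of the algorithm class in question, omitting the domination rule that Chvátal's analysis also covers:

    def knapsack_bb(values, weights, cap):
        # sort by value/weight ratio so the LP bound is a simple greedy fill
        items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
        best = 0

        def lp_bound(k, val, room):
            # value of the LP (fractional) relaxation over the items k..n-1
            for v, w in items[k:]:
                if w <= room:
                    val, room = val + v, room - w
                else:
                    return val + v * room / w          # take a fraction of this item
            return val

        def branch(k, val, room):
            nonlocal best
            best = max(best, val)                      # current packing is feasible
            if k == len(items) or lp_bound(k, val, room) <= best:
                return                                 # exhausted or pruned by bound
            v, w = items[k]
            if w <= room:
                branch(k + 1, val + v, room - w)       # branch: take item k
            branch(k + 1, val, room)                   # branch: skip item k

        branch(0, 0, cap)
        return best

    # toy usage: optimal 0/1 value is 220 (take the items of weight 20 and 30)
    print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))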

Set-Completely-Positive Representations and Cuts for the Max-Cut Polytope and the Unit Modulus Lifting

This paper considers a generalization of the “max-cut polytope” $\mathrm{conv}\{\, xx^T \mid x\in\mathbb{R}^n,\ |x_k| = 1 \ \text{for} \ 1\le k\le n \,\}$ in the space of real symmetric $n\times n$ matrices with all-ones diagonal to a complex “unit modulus lifting” $\mathrm{conv}\{\, xx^{\mathsf H} \mid x\in\mathbb{C}^n,\ |x_k| = 1 \ \text{for} \ 1\le k\le n \,\}$ in the space of … Read more
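For context, the standard semidefinite relaxations of both sets (not the paper's set-completely-positive representations) replace the rank-one matrices $xx^T$ and $xx^{\mathsf H}$ by positive semidefinite matrices with unit diagonal:

\[
\{\, X \in \mathcal{S}^n \mid X \succeq 0,\ X_{kk} = 1 \ \text{for} \ 1\le k\le n \,\}, \qquad
\{\, X \in \mathcal{H}^n \mid X \succeq 0,\ X_{kk} = 1 \ \text{for} \ 1\le k\le n \,\},
\]

with $\mathcal{H}^n$ the complex Hermitian $n\times n$ matrices; both relaxations contain the respective convex hulls, and the inclusions are strict in general.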

On Affine Invariant Descent Directions

This paper explores the existence of affine invariant descent directions for unconstrained minimization. While there may exist several affine invariant descent directions for a smooth function $f$ at a given point, it is shown that for quadratic functions there exists exactly one invariant descent direction in the strictly convex case and generally none in the nondegenerate … Read more
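The classical example of an affine invariant direction is the Newton direction; a short verification under an invertible change of variables $x = Ay$, writing $g(y) := f(Ay)$:

\[
\nabla g(y) = A^T \nabla f(Ay), \qquad \nabla^2 g(y) = A^T \nabla^2 f(Ay)\, A,
\]
\[
-\nabla^2 g(y)^{-1} \nabla g(y) \;=\; -A^{-1}\, \nabla^2 f(Ay)^{-1} \nabla f(Ay),
\]

so the Newton directions in the two coordinate systems are related by the same map $A^{-1}$ that relates the variables, which is precisely affine invariance.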

A Derivative-Free and Ready-to-Use NLP Solver for Matlab or Octave

This paper introduces a derivative-free and ready-to-use solver for nonlinear programs with nonlinear equality and inequality constraints (NLPs). Using finite differences and a sequential quadratic programming (SQP) approach, the algorithm aims at finding a local minimizer; no attempt is made to generate a globally optimal solution. Due to the use of finite differences, … Read more
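As an illustration of the finite-difference ingredient, a generic sketch (not the solver's actual code; the $\sqrt{\varepsilon_{\mathrm{mach}}}$ step rule is a common default, not necessarily the one used):

    import numpy as np

    def fd_gradient(f, x, h=None):
        """Forward-difference approximation of the gradient of f at x."""
        x = np.asarray(x, dtype=float)
        if h is None:
            # common choice: sqrt(machine epsilon), scaled by |x_i|
            h = np.sqrt(np.finfo(float).eps) * np.maximum(1.0, np.abs(x))
        else:
            h = np.full_like(x, h)
        fx = f(x)
        g = np.empty_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += h[i]                  # perturb one coordinate at a time
            g[i] = (f(xp) - fx) / h[i]
        return g

    # usage: the gradient of the Rosenbrock function at (0, 0) is (-2, 0)
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    print(fd_gradient(rosen, [0.0, 0.0]))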

The solution of Euclidean norm trust region SQP subproblems via second order cone programs: an overview and elementary introduction

It is well known that convex SQP subproblems with a Euclidean norm trust region constraint can be reduced to second order cone programs, for which the theory of Euclidean Jordan algebras leads to efficient interior-point algorithms. Here, a brief and self-contained outline of the principles of such an implementation is given. All identities relevant for the … Read more
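A hedged sketch of the reduction in its simplest form, for the trust region part alone and ignoring the linearized constraints of the full SQP subproblem: with a factorization $H = LL^T$ of the positive semidefinite Hessian, the problem

\[
\min_d \; g^T d + \tfrac12\, d^T H d \quad \text{s.t.} \quad \|d\|_2 \le \Delta
\]

is equivalent to the second order cone program

\[
\min_{d,\,t} \; g^T d + \tfrac12\, t \quad \text{s.t.} \quad \|d\|_2 \le \Delta, \quad \left\| \begin{pmatrix} 2L^T d \\ t-1 \end{pmatrix} \right\|_2 \le t+1,
\]

where the last constraint is the standard conic encoding of $t \ge \|L^T d\|_2^2 = d^T H d$.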

A priori bounds on the condition numbers in interior-point methods

Interior-point methods are known to be sensitive to ill-conditioning and to the scaling of the data. This paper presents new asymptotically sharp bounds on the condition numbers of the linear systems arising at each iteration of an interior-point method for solving linear or semidefinite programs, and it discusses a stopping test that leads to a problem-independent “a priori” … Read more
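The paper's bounds themselves are not reproduced here; for orientation, in the LP case the linear systems in question are the Newton systems of the primal-dual method, usually solved in the reduced “normal equations” form

\[
A\, X S^{-1} A^T\, \Delta y = r, \qquad X = \mathrm{Diag}(x), \quad S = \mathrm{Diag}(s),
\]

whose conditioning degrades as the ratios $x_i/s_i$ spread apart when the barrier parameter tends to zero.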

Calibration by Optimization Without Using Derivatives

Applications in engineering frequently require the adjustment of certain parameters. While the mathematical laws that determine these parameters are often well understood, time limitations in everyday industrial life typically make it infeasible to derive an explicit computational procedure for adjusting the parameters from given measurement data. This paper aims … Read more

Simple examples of the failure of Newton’s method with line search for strictly convex minimization

In this paper, two simple examples of a twice continuously differentiable, strictly convex function $f$ are presented for which Newton’s method with line search converges to a point where the gradient of $f$ is not zero. The first example uses a line search based on the Wolfe conditions. For the second example, some strictly convex … Read more
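For reference, a generic damped Newton iteration of the kind the examples target, here with a simple backtracking (Armijo) line search rather than the Wolfe-based search of the first example; a plain textbook sketch, not the constructions from the paper:

    import numpy as np

    def newton_armijo(f, grad, hess, x0, tol=1e-10, c=1e-4, max_iter=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break                                  # (near-)stationary point
            d = np.linalg.solve(hess(x), -g)           # Newton direction
            t = 1.0
            while f(x + t * d) > f(x) + c * t * (g @ d):
                t *= 0.5                               # backtrack until Armijo holds
            x = x + t * d
        return x

    # usage: on a strictly convex quadratic one full Newton step suffices
    print(newton_armijo(lambda z: z @ z, lambda z: 2 * z,
                        lambda z: 2 * np.eye(2), [1.0, 2.0]))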