On Relaxing the Mangasarian-Fromovitz Constraint Qualification

For the classical nonlinear program, two new relaxations of the Mangasarian-Fromovitz constraint qualification are discussed and their relationship with some standard constraint qualifications is examined. In particular, we establish the equivalence of one of these constraint qualifications with the constant rank of the subspace component constraint qualification recently suggested by Andreani et al. As an …
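As a small aside on what MFCQ asserts: at a feasible point with no equality constraints, it requires a direction that strictly decreases every active inequality constraint. The sketch below is only an illustrative certificate check on a toy problem (the unit disc), not the relaxed conditions studied in the paper; the function name and the example are our own.

```python
def mfcq_certificate(active_grads, d, tol=1e-10):
    """Check that direction d strictly decreases every active inequality
    gradient -- a certificate that MFCQ holds at the point (for a problem
    with no equality constraints; with equalities one would also require
    grad h_j . d = 0 and linear independence of the grad h_j)."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return all(dot(g, d) < -tol for g in active_grads)

# At (0, 1) on the disc x^2 + y^2 <= 1, the only active gradient is
# (0, 2), and d = (0, -1) points strictly into the feasible set:
print(mfcq_certificate([(0.0, 2.0)], (0.0, -1.0)))  # True
```

Checking a given direction is of course much easier than finding one; in general the search for such a direction is a small linear feasibility problem.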

Irreducible elements of the copositive cone

An element $A$ of the $n \times n$ copositive cone $\copos{n}$ is called irreducible with respect to the nonnegative cone~$\NNM{n}$ if it cannot be written as a nontrivial sum $A = C+N$ of a copositive matrix $C$ and an elementwise nonnegative matrix $N$. This property was studied by Baumert~\cite{Baumert65} who gave a characterisation of irreducible …

Global convergence and the Powell singular function

The Powell singular function was introduced in 1962 by M.J.D. Powell as an unconstrained optimization problem. The function is also used as a nonlinear least-squares problem and as a system of nonlinear equations. It is a classic test function included in collections of optimization test problems, as well as an example problem in textbooks. In …
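For reference, the function in its standard four-variable form, together with the starting point customarily used in test collections, is:

```python
def powell_singular(x1, x2, x3, x4):
    """Powell's singular function (1962): minimum value 0 at the origin,
    where the Hessian is singular -- the feature that makes it a hard
    test case for unconstrained optimization methods."""
    return ((x1 + 10.0 * x2) ** 2
            + 5.0 * (x3 - x4) ** 2
            + (x2 - 2.0 * x3) ** 4
            + 10.0 * (x1 - x4) ** 4)

# Standard starting point:
print(powell_singular(3.0, -1.0, 0.0, 1.0))   # 215.0
print(powell_singular(0.0, 0.0, 0.0, 0.0))    # 0.0
```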

Efficient Cardinality/Mean-Variance Portfolios

A number of variants of the classical Markowitz mean-variance optimization model for portfolio selection have been investigated to render it more realistic. Recently, the imposition of a cardinality constraint, which sets an upper bound on the number of active positions taken in the portfolio, has been studied in an attempt to improve its performance and …
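To make the cardinality constraint concrete, here is a minimal brute-force sketch (not the paper's method, and variance-only for brevity): enumerate every support of size at most K and, on each support, use the closed-form minimum-variance weights under the budget constraint. The covariance data in the test are illustrative.

```python
import itertools
import numpy as np

def min_variance_with_cardinality(Sigma, K):
    """Brute force over supports of size <= K; on each support solve
    min w' Sigma w  s.t.  sum(w) = 1, whose closed form is
    w proportional to Sigma^{-1} 1 restricted to the support."""
    n = Sigma.shape[0]
    best_var, best_w = np.inf, None
    for k in range(1, K + 1):
        for S in itertools.combinations(range(n), k):
            sub = Sigma[np.ix_(S, S)]
            ones = np.ones(k)
            y = np.linalg.solve(sub, ones)
            w_sub = y / (ones @ y)          # normalize to the budget
            w = np.zeros(n)
            w[list(S)] = w_sub
            var = w @ Sigma @ w
            if var < best_var:
                best_var, best_w = var, w
    return best_var, best_w
```

The enumeration is exponential in n, which is precisely why exact methods for the cardinality-constrained model are a research topic; adding the mean term would change only the per-support subproblem.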

Non-Convex Mixed-Integer Nonlinear Programming: A Survey

A wide range of problems arising in practical applications can be formulated as Mixed-Integer Nonlinear Programs (MINLPs). For the case in which the objective and constraint functions are convex, some quite effective exact and heuristic algorithms are available. When non-convexities are present, however, things become much more difficult, since then even the continuous relaxation is …
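A toy example (ours, not from the survey) shows why non-convexity hurts: the continuous relaxation below has two non-integral global minimizers, so its optimal value is a loose bound and rounding its solution does not certify anything; here one can simply enumerate the integer values.

```python
# Toy non-convex problem in one integer variable:
#   min f(x) = (x^2 - 2)^2   s.t.  x integer, -2 <= x <= 2
# The continuous relaxation is non-convex with global minima at
# x = +/- sqrt(2) (value 0); the best integer point is x = +/- 1
# with value 1, so there is a genuine relaxation gap.

def f(x):
    return (x * x - 2.0) ** 2

best_int = min(range(-2, 3), key=f)
print(best_int, f(best_int))   # an optimal integer point, value 1.0
```

Real MINLP solvers replace this enumeration with branch-and-bound over convex relaxations of the non-convex functions.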

An aggressive reduction scheme for the simple plant location problem

Pisinger et al. introduced the concept of 'aggressive reduction' for large-scale combinatorial optimisation problems. The idea is to spend much time and effort in reducing the size of the instance, in the hope that the reduced instance will then be small enough to be solved by an exact algorithm. We present an aggressive reduction scheme …

Sparse/Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory

Piecewise linear quadratic (PLQ) penalties play a crucial role in many applications, including machine learning, robust statistical inference, sparsity promotion, and inverse problems such as Kalman smoothing. Well-known examples of PLQ penalties include the l2, Huber, l1, and Vapnik losses. This paper builds on a dual representation for PLQ penalties known from convex analysis. …
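Two of the named penalties make the PLQ structure concrete (the parameter names below are ours):

```python
def huber(r, kappa=1.0):
    """Huber penalty: quadratic near zero, linear in the tails --
    a standard piecewise linear-quadratic (PLQ) penalty, robust to
    outliers because large residuals are penalized only linearly."""
    a = abs(r)
    return 0.5 * r * r if a <= kappa else kappa * a - 0.5 * kappa * kappa

def vapnik(r, eps=0.5):
    """Vapnik (epsilon-insensitive) penalty: zero inside a dead band,
    linear outside -- the PLQ penalty behind support vector regression,
    and a sparsity-promoting choice since small residuals cost nothing."""
    return max(0.0, abs(r) - eps)
```

The l2 and l1 losses are the limiting cases: huber with a very large kappa behaves like 0.5*r**2, and vapnik with eps=0 is |r|.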

A regularized simplex method

For a special class of problems, the simplex method can be implemented as a cutting-plane method that approximates a certain convex polyhedral objective function. In this paper we consider a regularized version of this cutting-plane method and interpret the resulting procedure as a regularized simplex method. (Regularization is performed in the dual space and …

Competitive location in networks with threshold-sensitive customer behaviour

We consider the (r|X_p)-medianoid problem in networks. Goods are assumed to be essential and the only decision criterion is the travel distance. The portion of demand captured by the competitors is modelled by a general capture function which includes the binary, partially binary and proportional customer choice rules as specific cases. We prove that, under …
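The abstract does not define its capture function, but a rough illustration of the binary and proportional rules, with a hypothetical sensitivity threshold theta standing in for the "threshold-sensitive" behaviour, might look like the following sketch (our own assumptions throughout, not the paper's exact definitions):

```python
def captured_demand(w, d_leader, d_follower, rule="binary", theta=0.0):
    """Illustrative share of a customer's demand w captured by the
    follower, given distances to the closest leader/follower facility.
    - binary: the follower captures all demand only if it is closer by
      more than theta; distances within theta of each other split the
      demand evenly.
    - proportional: demand is split in proportion to 1/distance."""
    if rule == "binary":
        if d_follower < d_leader - theta:
            return w
        if abs(d_follower - d_leader) <= theta:
            return 0.5 * w
        return 0.0
    if rule == "proportional":
        a_f, a_l = 1.0 / d_follower, 1.0 / d_leader
        return w * a_f / (a_f + a_l)
    raise ValueError(rule)
```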

A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. Numerical …
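The gradient-memory idea can be sketched on a toy strongly convex least-squares sum; this SAG-style loop, its step size, and the data are illustrative choices of ours, not the paper's algorithm or constants.

```python
import random

def sag_sketch(a, b, step=0.01, iters=5000, seed=0):
    """Minimize (1/n) * sum_i 0.5*(a[i]*x - b[i])**2 by a SAG-style
    iteration: refresh ONE stored gradient per step, but move along
    the average of ALL stored gradients (the 'memory'), which is what
    lifts the rate from sublinear to linear."""
    rng = random.Random(seed)
    n = len(a)
    x = 0.0
    memory = [0.0] * n          # last gradient seen for each f_i
    total = 0.0                 # running sum of the memory
    for _ in range(iters):
        i = rng.randrange(n)
        g = a[i] * (a[i] * x - b[i])
        total += g - memory[i]  # update the sum in O(1)
        memory[i] = g
        x -= step * total / n
    return x

a, b = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
x = sag_sketch(a, b)            # the exact minimizer here is x* = 1.0
```

Each iteration touches a single component gradient, so the per-step cost matches plain stochastic gradient descent; only the memory of size n is extra.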