A concave optimization-based approach for sparse portfolio selection

This paper considers a portfolio selection problem in which portfolios with a minimum number of active assets are sought. The problem is motivated by the need to induce sparsity in the selected portfolio in order to reduce transaction costs, the complexity of portfolio management, and the instability of the solution. The resulting problem is a difficult combinatorial problem. We propose … Read more
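To fix ideas, a sparsity-seeking model of the general kind described here can be written as follows (our illustrative formulation with a standard concave surrogate for the cardinality term; not necessarily the paper's exact model):

\[
\min_{x \ge 0} \ \|x\|_0 \quad \text{s.t.} \quad e^\top x = 1, \quad \mu^\top x \ge \rho,
\]

where $\|x\|_0$ counts the active assets, $e$ is the all-ones vector, $\mu$ the expected returns, and $\rho$ a target return. A common smooth concave surrogate replaces $\|x\|_0$ by $\sum_i \big(1 - e^{-\alpha x_i}\big)$ for some $\alpha > 0$, yielding a problem amenable to concave-programming techniques.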

$L^p$-norms, log-barriers and Cramer transform in optimization

We show that the Laplace approximation of a supremum by $L^p$-norms has interesting consequences in optimization. For instance, the logarithmic barrier functions (LBF) of a primal convex problem $P$ and its dual $P^*$ appear naturally when using this simple approximation technique for the value function $g$ of $P$ or its Legendre-Fenchel conjugate $g^*$. In addition, … Read more
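The approximation technique in question is easy to state; for a continuous $f \ge 0$ on a compact set $K \subset \mathbb{R}^n$ equal to the closure of its interior (our restatement of the classical fact):

\[
\sup_{x \in K} f(x) \;=\; \lim_{p \to \infty} \left( \int_K f(x)^p \, dx \right)^{1/p},
\]

so for large $p$ the smooth $L^p$-norm on the right-hand side acts as a differentiable surrogate for the supremum.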

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration

We consider sequential quadratic programming (SQP) methods for solving constrained nonlinear programming problems. It is generally believed that SQP methods are sensitive to the accuracy with which partial derivatives are provided. One reason is that differences of gradients of the Lagrangian function are used for updating a quasi-Newton matrix, e.g., by the BFGS formula. The … Read more
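The Lagrangian-gradient-difference update mentioned here is, in most SQP implementations, a damped BFGS update; a minimal sketch (ours, with hypothetical names), assuming the Powell damping commonly used to keep the update well defined when gradients are inaccurate:

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update of the Hessian approximation B.

    s = x_{k+1} - x_k
    y = grad_L(x_{k+1}, lam) - grad_L(x_k, lam)   (Lagrangian gradient difference)
    Damping blends y with B s so the curvature condition s^T y > 0 holds,
    keeping B positive definite even with inexact derivatives.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < theta_min * sBs:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```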

A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization

We consider optimization problems with objective and constraint functions that may be nonconvex and nonsmooth. Problems of this type arise in important applications, many having solutions at points of nondifferentiability of the problem functions. We present a line search algorithm for situations in which the objective and constraint functions are locally Lipschitz and continuously differentiable on … Read more
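A one-dimensional example (ours) of the phenomenon mentioned, a solution at a point of nondifferentiability:

\[
\min_{x \in \mathbb{R}} \ \max\{x^2, (x-2)^2\},
\]

whose unique minimizer is $x^* = 1$, precisely where the two branches cross and the objective has a kink.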

A sufficiently exact inexact Newton step based on reusing matrix information

Newton’s method is a classical method for solving a nonlinear equation $F(z)=0$. We derive inexact Newton steps that lead to an inexact Newton method, applicable near a solution. The method is based on solving with a fixed matrix $F'(z_{k'})$ during $p$ consecutive iterations $k=k',k'+1,\dots,k'+p-1$. One such $p$-cycle requires $2^p-1$ solves with the matrix $F'(z_{k'})$. If matrix … Read more
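For contrast with the authors' scheme, the simplest way to reuse matrix information is a chord-type iteration that factorizes $F'(z_{k'})$ once and reuses the LU factors for $p$ steps; a minimal sketch (ours, with hypothetical names; the paper's $2^p-1$-solve construction achieves faster local convergence than this):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def reuse_jacobian_newton(F, Jac, z, p=3, n_cycles=10, tol=1e-10):
    """Newton-like iteration that refreshes the Jacobian only every p steps.

    F:   callable, F(z) -> residual vector
    Jac: callable, Jac(z) -> Jacobian matrix F'(z)
    Each cycle factorizes F'(z) once and reuses the LU factors for p solves.
    """
    for _ in range(n_cycles):
        lu, piv = lu_factor(Jac(z))         # factorize F'(z_{k'}) once per cycle
        for _ in range(p):
            r = F(z)
            if np.linalg.norm(r) < tol:
                return z
            z = z - lu_solve((lu, piv), r)  # one solve per inner step
    return z
```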

MathOptimizer: A nonlinear optimization package for Mathematica users

Mathematica is an advanced software system that enables symbolic computing, numerics, program code development, model visualization and professional documentation in a unified framework. Our MathOptimizer software package serves to solve global and local optimization models developed using Mathematica. We introduce MathOptimizer’s key features and discuss its usage options that support a range of operational modes. … Read more

An Updated Set of 306 Test Problems for Nonlinear Programming with Validated Optimal Solutions

The availability of nonlinear programming test problems is extremely important for testing optimization codes and developing new algorithms. We describe the usage of the Fortran subroutines for all 306 test problems of the author's two previous collections; see Hock and Schittkowski (1981) and Schittkowski (1987). For each test example, we provide an optimal … Read more
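As a flavor of the collection, problem 71 of Hock and Schittkowski (1981) is perhaps its most widely reproduced member (restated here from standard references; consult the collection for the authoritative data):

\[
\begin{aligned}
\min_{x\in\mathbb{R}^4}\quad & x_1 x_4 (x_1 + x_2 + x_3) + x_3\\
\text{s.t.}\quad & x_1 x_2 x_3 x_4 \ge 25,\\
& x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40,\\
& 1 \le x_i \le 5,\ i=1,\dots,4,
\end{aligned}
\]

with optimal value approximately $17.014$ at $x^* \approx (1.000,\,4.743,\,3.821,\,1.379)$.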

An L1 Elastic Interior-Point Method for Mathematical Programs with Complementarity Constraints

We propose an interior-point algorithm based on an elastic formulation of the L1-penalty merit function for mathematical programs with complementarity constraints. The method generalizes that of Gould, Orban and Toint (2003) and naturally converges to a strongly stationary point or delivers a certificate of degeneracy without recourse to second-order intermediate solutions. Remarkably, the method allows … Read more
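The elastic $\ell_1$-penalty device, in its generic form (our schematic for a plain inequality-constrained problem; the paper's formulation for complementarity constraints is more involved):

\[
\min_x f(x) \ \text{ s.t. } \ c(x) \ge 0
\qquad\longrightarrow\qquad
\min_{x,\,s}\ f(x) + \nu\, e^\top s \ \text{ s.t. } \ c(x) + s \ge 0,\ s \ge 0.
\]

The elastic problem is always strictly feasible, so interior-point log barriers apply cleanly, and at a solution the term $\nu e^\top s$ reproduces the exact $\ell_1$ penalty $\nu \sum_i \max\{0, -c_i(x)\}$ of the constraint violation.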

PARNES: A rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals

In this article we propose an algorithm, NESTA-LASSO, for the LASSO problem (i.e., an underdetermined linear least-squares problem with a one-norm constraint on the solution) that exhibits linear convergence under the restricted isometry property (RIP) and some other reasonable assumptions. Inspired by the state-of-the-art sparse recovery method NESTA, we rely on an accelerated proximal gradient … Read more
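For orientation, a compact accelerated proximal gradient (FISTA-style) sketch for the closely related penalized problem $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$ (ours; the abstract's problem constrains the one-norm rather than penalizing it, and NESTA-LASSO itself is more elaborate):

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: componentwise shrinkage toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Accelerated proximal gradient for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - (A.T @ (A @ z - b)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```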

Nonmonotone Filter Method for Nonlinear Optimization

We propose a new nonmonotone filter method to promote global and fast local convergence for sequential quadratic programming algorithms. Our method uses two filters: a global g-filter for global convergence, and a local nonmonotone l-filter that allows us to establish fast local convergence. We show how to switch between the two filters efficiently, and we … Read more
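Both filters rest on the standard acceptance test of filter methods; a minimal sketch (ours, with hypothetical names; the paper's contribution is the g-/l-filter switching logic, which is not reproduced here):

```python
def acceptable(f, h, filter_entries, beta=0.99, gamma=1e-4):
    """Standard filter test: a trial point with objective f and constraint
    violation h is acceptable if, against every filter entry (f_j, h_j),
    it sufficiently reduces either the objective or the infeasibility."""
    return all(f <= f_j - gamma * h_j or h <= beta * h_j
               for f_j, h_j in filter_entries)
```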