An interior-point method for minimizing the sum of piecewise-linear convex functions

We consider the problem of minimizing a sum of piecewise-linear convex functions subject to both linear and nonnegativity constraints. We convert this piecewise-linear convex problem into a standard-form linear programming problem (LP) and apply a primal-dual interior-point method to the LP. From the solution of the converted problem, we can obtain the solution of the …
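The conversion itself falls in the truncated part of the abstract, but the standard device for such problems is the epigraph reformulation: each piecewise-linear term $\max_i (a_i^\top x + b_i)$ is replaced by an auxiliary variable bounded below by every affine piece. A minimal sketch with made-up data, solved here with scipy.optimize.linprog rather than the paper's interior-point method:

import numpy as np
from scipy.optimize import linprog

# Minimize max(x1 - 1, 1 - x1) + max(x2, 2 - x2) subject to x >= 0.
# Epigraph variables t1, t2 give the LP: min t1 + t2 with each affine
# piece <= its epigraph variable. Variable order: z = (x1, x2, t1, t2).
c = np.array([0.0, 0.0, 1.0, 1.0])
A_ub = np.array([
    [ 1.0,  0.0, -1.0,  0.0],   #  x1 - 1 <= t1
    [-1.0,  0.0, -1.0,  0.0],   #  1 - x1 <= t1
    [ 0.0,  1.0,  0.0, -1.0],   #  x2     <= t2
    [ 0.0, -1.0,  0.0, -1.0],   #  2 - x2 <= t2
])
b_ub = np.array([1.0, -1.0, 0.0, -2.0])
bounds = [(0, None), (0, None), (None, None), (None, None)]  # x >= 0, t free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)   # expected optimum: x = (1, 1), t = (0, 1)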

A concave optimization-based approach for sparse portfolio selection

This paper considers a portfolio selection problem in which portfolios with a minimum number of active assets are sought. The problem is motivated by the need to induce sparsity in the selected portfolio in order to reduce transaction costs, the complexity of portfolio management, and the instability of the solution. The resulting problem is a difficult combinatorial problem. We propose …
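For concreteness, one common way to state such a problem (our notation; the paper's precise formulation is in the truncated part) is
\[
\min_{x \in \mathbb{R}^n} \; \|x\|_0 \quad \text{s.t.} \quad \mu^\top x \ge \rho, \quad e^\top x = 1, \quad x \ge 0,
\]
where $\|x\|_0$ counts the active assets, $\mu$ is the vector of expected returns, $\rho$ a target return, and $e$ the all-ones vector; the combinatorial difficulty comes from the nonconvex $\|x\|_0$ term.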

$L^p$-norms, log-barriers and Cramer transform in optimization

We show that the Laplace approximation of a supremum by $L^p$-norms has interesting consequences in optimization. For instance, the logarithmic barrier functions (LBF) of a primal convex problem $P$ and its dual $P^*$ appear naturally when using this simple approximation technique for the value function $g$ of $P$ or its Legendre-Fenchel conjugate $g^*$. In addition, …
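A quick finite-dimensional illustration of the approximation in question: for a nonnegative vector, both the $L^p$-norm and the scaled log-sum-exp (a discrete Laplace-type approximation) converge to the maximum as $p \to \infty$.

import numpy as np

x = np.array([0.5, 1.2, 3.0, 2.7])                   # max is 3.0
for p in (1, 2, 8, 32, 128):
    lp_norm = np.linalg.norm(x, p)                   # (sum_i x_i^p)^(1/p)
    smooth_max = np.log(np.sum(np.exp(p * x))) / p   # (1/p) log sum_i e^(p x_i)
    print(p, lp_norm, smooth_max)                    # both tend to 3.0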

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration

We consider sequential quadratic programming (SQP) methods for solving constrained nonlinear programming problems. It is generally believed that SQP methods are sensitive to the accuracy with which partial derivatives are provided. One reason is that differences of gradients of the Lagrangian function are used for updating a quasi-Newton matrix, e.g., by the BFGS formula. The …
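For reference, the BFGS update alluded to above has the standard form sketched below, where $s$ is the step between iterates and $y$ the difference of Lagrangian gradients (the helper name is ours). Inaccurate derivatives can corrupt $y$ and violate the curvature condition $s^\top y > 0$, one source of the sensitivity just mentioned.

import numpy as np

def bfgs_update(B, s, y):
    # Standard BFGS formula: B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(s^T y).
    # B+ stays positive definite only if the curvature condition s^T y > 0 holds.
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

B = np.eye(2)                # current quasi-Newton matrix
s = np.array([0.1, -0.2])    # x_{k+1} - x_k
y = np.array([0.05, -0.12])  # grad L(x_{k+1}) - grad L(x_k); here s.y = 0.029 > 0
print(bfgs_update(B, s, y))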

A Feasible Directions Method for Nonsmooth Convex Optimization

We propose a new technique for minimizing convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained from cutting planes. At each iteration, a search direction and a step length are computed. If the step length is considered “non serious”, a cutting plane is added and …
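The polyhedral model that cutting-plane techniques of this kind minimize is typically the subgradient-based underestimator (standard notation, not taken from the truncated abstract)
\[
\check f_k(x) \;=\; \max_{1 \le i \le k} \bigl\{ f(x_i) + g_i^\top (x - x_i) \bigr\}, \qquad g_i \in \partial f(x_i),
\]
whose minimization over a polyhedron is a linear program; a “non serious” step then contributes the new cut $f(x_{k+1}) + g_{k+1}^\top (x - x_{k+1})$ to the model.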

On Duality Gap in Binary Quadratic Programming

We present in this paper new results on the duality gap between the binary quadratic optimization problem and its Lagrangian dual or semidefinite programming relaxation. We first derive a necessary and sufficient condition for a zero duality gap and discuss its relationship with the polynomial solvability of the primal problem. We then characterize the zeroness …
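For the homogeneous case $\min\{x^\top Q x : x \in \{-1,1\}^n\}$, the Lagrangian dual and the SDP relaxation referred to above take the standard forms (a sketch in our notation)
\[
\max_{\lambda \in \mathbb{R}^n} \bigl\{ e^\top \lambda : Q - \operatorname{Diag}(\lambda) \succeq 0 \bigr\}
\;=\; \min_{X \succeq 0,\; \operatorname{diag}(X) = e} \langle Q, X \rangle,
\]
and the duality gap is the difference between the binary optimum and this common value.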

On the convergence of a wide range of trust region methods for unconstrained optimization

We consider trust region methods for seeking the unconstrained minimum of an objective function $F(x)$, $x$ being the vector of variables, when the gradient $\nabla F$ is available. The methods are iterative, with a starting point $x_1$ being given. The new vector of variables $x_{k+1}$ is derived from a quadratic approximation to $F$ that interpolates …
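The interpolation conditions fall in the truncated part, but the trial step in a generic trust region iteration of this kind solves (our notation)
\[
\min_{s} \; Q_k(s) = F(x_k) + g_k^\top s + \tfrac12\, s^\top B_k s \quad \text{s.t.} \quad \|s\| \le \Delta_k,
\]
where $Q_k$ is the quadratic approximation to $F$ and the radius $\Delta_k$ is enlarged or reduced according to how well $Q_k$ predicted the actual change in $F$.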

On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

A class of trust-region methods for large-scale bound-constrained systems of nonlinear equations is presented. The methods in this class follow the so-called affine-scaling approach and can efficiently handle large-scale problems. At each iteration, a suitably scaled region around the current approximate solution is defined and, within such a region, the norm of …
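The truncated sentence presumably refers to the norm of the linearized residual; in affine-scaling trust-region methods for $F(x) = 0$ with bounds, the subproblem typically reads (notation ours)
\[
\min_{s} \; \|F(x_k) + F'(x_k)\, s\| \quad \text{s.t.} \quad \|D_k^{-1} s\| \le \Delta_k,
\]
where the scaling matrix $D_k$ shrinks the trust region in directions that approach an active bound.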

Constrained Dogleg Methods for nonlinear systems with simple bounds

We focus on the numerical solution of medium-scale bound-constrained systems of nonlinear equations. In this context, we consider an affine-scaling trust-region approach that allows great flexibility in choosing the scaling matrix used to handle the bounds. The method is based on a dogleg procedure tailored for constrained problems and so it is …
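For orientation, a plain (unconstrained) dogleg step for the linearized model $\min_s \|F(x_k) + F'(x_k)\, s\|$ is sketched below; the constrained tailoring that bends the path to respect the bounds, which is the point of the paper, is omitted, and the function name is ours.

import numpy as np

def dogleg_step(J, F, delta):
    # Combine the Cauchy (steepest-descent) point and the Gauss-Newton step
    # inside a trust region of radius delta.
    g = J.T @ F                                        # gradient of 0.5*||F + J s||^2 at s = 0
    sd = -(g @ g) / np.linalg.norm(J @ g) ** 2 * g     # Cauchy point
    sn = np.linalg.lstsq(J, -F, rcond=None)[0]         # Gauss-Newton step
    if np.linalg.norm(sn) <= delta:
        return sn                                      # full step fits in the region
    if np.linalg.norm(sd) >= delta:
        return delta * sd / np.linalg.norm(sd)         # truncated steepest descent
    d = sn - sd                                        # walk from sd toward sn ...
    a, b, c = d @ d, 2 * (sd @ d), sd @ sd - delta**2  # ... until the boundary is hit
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return sd + tau * d

J = np.array([[2.0, 0.0], [0.0, 1.0]])
F = np.array([4.0, -1.0])
print(dogleg_step(J, F, delta=1.0))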

A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization

We consider optimization problems with objective and constraint functions that may be nonconvex and nonsmooth. Problems of this type arise in important applications, many having solutions at points of nondifferentiability of the problem functions. We present a line search algorithm for situations in which the objective and constraint functions are locally Lipschitz and continuously differentiable on …
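As background, the smooth SQP subproblem that such algorithms adapt to the nonsmooth setting is (standard notation; the paper's actual subproblem lies in the truncated part)
\[
\min_{d} \; \nabla f(x_k)^\top d + \tfrac12\, d^\top H_k d \quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^\top d \le 0,
\]
with the step length along $d$ then chosen by a line search on a merit function.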