An Efficient Interior-Point Method for Convex Multicriteria Optimization Problems

In multicriteria optimization, several mutually conflicting objective functions must be minimized simultaneously. We propose a new efficient method for approximating the solution set of a multiobjective programming problem in which the objective functions are arbitrary convex functions and the set of feasible points is convex. The method is based on generating warm-start …
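The abstract is truncated, but the setting it describes — several convex objectives minimized jointly — can be illustrated with the classical weighted-sum scalarization (a standard device, not necessarily the paper's interior-point scheme): each positive weight vector yields one Pareto-optimal point. A minimal sketch with two hypothetical quadratic objectives $f_1(x) = (x-1)^2$ and $f_2(x) = (x+1)^2$:

```python
def pareto_point(w1, w2, lr=0.1, iters=2000):
    """Minimize w1*f1 + w2*f2 by gradient descent for the toy objectives
    f1(x) = (x-1)^2 and f2(x) = (x+1)^2 (illustrative, not the paper's method)."""
    x = 0.0
    for _ in range(iters):
        grad = w1 * 2.0 * (x - 1.0) + w2 * 2.0 * (x + 1.0)
        x -= lr * grad
    return x
```

Sweeping (w1, w2) over the unit simplex traces out points of the Pareto front; e.g. (1, 0) recovers the minimizer of f1 alone, while (0.5, 0.5) returns the compromise point 0.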

Convexification of Stochastic Ordering

We consider sets defined by the usual stochastic ordering relation and by the second-order stochastic dominance relation. Under fairly general assumptions we prove that, in the space of integrable random variables, the closed convex hull of the first set is equal to the second set.
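For discrete distributions the second-order relation is easy to check numerically. A minimal sketch, assuming equal-length, equally weighted samples (a simple special case, not the generality of the abstract): X dominates Y in the second order iff, for every k, the sum of the k smallest outcomes of X is at least the corresponding sum for Y.

```python
def dominates_ssd(x, y):
    """Second-order stochastic dominance of the empirical distribution of x
    over that of y, assuming equally weighted samples of the same length."""
    assert len(x) == len(y)
    xs, ys = sorted(x), sorted(y)
    # Compare cumulative sums of the k smallest outcomes for every k.
    return all(sum(xs[:k]) >= sum(ys[:k]) for k in range(1, len(xs) + 1))
```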

Portfolio Optimization with Stochastic Dominance Constraints

We consider the problem of constructing a portfolio of finitely many assets whose returns are described by a discrete joint distribution. We propose a new portfolio optimization model involving stochastic dominance constraints on the portfolio return. We develop optimality and duality theory for these models. We construct equivalent optimization models with utility functions. Numerical illustration …
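One convenient equivalent form of a second-order dominance constraint on discrete distributions — offered here as an illustration, not necessarily the paper's exact formulation — is a finite family of expected-shortfall inequalities, one per realization of the benchmark:

```python
def shortfall(sample, eta):
    """E[(eta - X)_+] for an equally weighted discrete sample of X."""
    return sum(max(eta - x, 0.0) for x in sample) / len(sample)

def ssd_feasible(portfolio_returns, benchmark_returns):
    """Check X >=_SSD Y via E[(eta - X)+] <= E[(eta - Y)+] for all eta;
    for a discrete benchmark Y it suffices to test eta at its realizations."""
    return all(shortfall(portfolio_returns, eta) <= shortfall(benchmark_returns, eta)
               for eta in benchmark_returns)
```

In a portfolio model these shortfall inequalities become linear constraints on the asset weights, since the portfolio return in each scenario is linear in the weights.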

Weak Stationarity: Eliminating the Gap between Necessary and Sufficient Conditions

Starting from known necessary extremality conditions in terms of strict subdifferentials and normals, the notion of weak stationarity is introduced. It is defined in terms of elements of the initial space. The necessary conditions become necessary and sufficient (for stationarity). Citation: School of Information Technology and Mathematical Sciences, Centre of Information and Applied Optimization, University of Ballarat, POB …

When LP is not a good idea – using structure in polyhedral optimization problems

It has been known for almost 50 years that the discrete l_1 approximation problem can be solved effectively by linear programming. However, improved algorithms involve a step which can be interpreted as a line search and which is not part of the standard LP solution procedures. l_1 provides the simplest example of a class of …
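The special structure alluded to is already visible in the simplest instance: fitting a single constant to data in the l_1 sense. The generic LP route would introduce residual bound variables t_i with -t_i <= c - b_i <= t_i and minimize sum t_i, yet the minimizer is just a median of the data, computable directly — a toy example, not the paper's algorithm:

```python
def l1_fit_constant(b):
    """Best constant c minimizing sum_i |c - b_i|: any median of b is optimal.
    Here we return the lower median rather than solving the equivalent LP."""
    s = sorted(b)
    return s[(len(s) - 1) // 2]
```

Contrast with least squares: the l_1 fit of [1, 2, 100] is 2, while the mean is dragged to roughly 34 by the outlier — the nonsmooth objective is what buys this robustness.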

Convergence rate estimates for the gradient differential inclusion

Let f be a lower semi-continuous convex function on a Euclidean space, finite- or infinite-dimensional. The gradient differential inclusion is a generalized differential equation which requires that -x'(t) lie in the subgradient of f at x(t). It is the continuous version of the gradient descent method for minimizing f in case f is differentiable, …
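For differentiable f the connection to gradient descent is immediate: discretizing x'(t) = -grad f(x(t)) by explicit Euler with step h gives exactly the gradient method. A sketch on the toy objective f(x) = x^2/2, whose exact flow is x(t) = x(0) e^{-t} (an illustration of the correspondence, not of the paper's rate estimates):

```python
import math

def euler_gradient_flow(x0, grad, h, steps):
    """Explicit Euler for x'(t) = -grad(x(t)): each step is x <- x - h*grad(x),
    i.e. one gradient-descent iteration with step size h."""
    x = x0
    for _ in range(steps):
        x -= h * grad(x)
    return x

# For f(x) = x^2/2 we have grad(x) = x, and the exact flow at time
# t = h*steps is x0 * exp(-t); small h makes the Euler path track it closely.
approx = euler_gradient_flow(1.0, lambda x: x, 0.001, 1000)
```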

Convergence of infeasible-interior-point methods for self-scaled conic programming

We present results on global and polynomial-time convergence of infeasible-interior-point methods for self-scaled conic programming, which includes linear and semidefinite programming. First, we establish global convergence for an algorithm using a wide neighborhood. Next, we prove polynomial complexity for the algorithm with a slightly narrower neighborhood. Both neighborhoods are related to the wide (minus infinity) …

Lift-and-project for 0–1 programming via algebraic geometry

Recently, tools from algebraic geometry have been successfully applied to develop solution schemes for new classes of optimization problems. A central idea in these constructions is to express a polynomial that is positive on a given domain in terms of polynomials of higher degree so that its positivity is readily revealed. This resembles the “lifting” …
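The idea of rewriting a polynomial so that its positivity is evident can be seen in a one-line instance of the sum-of-squares device (a generic illustration, not a construction from the paper): $x^2 - x + 1 = (x - \tfrac{1}{2})^2 + \tfrac{3}{4} \ge \tfrac{3}{4} > 0$ for all real $x$. The left-hand side does not display its sign; the right-hand side does, and lift-and-project relaxations play an analogous role over 0–1 points.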

A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization

Let $f$ be a continuous function on $\mathbb{R}^n$, and suppose $f$ is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which $f$ is not differentiable. Of particular interest is the case where $f$ is not convex, and perhaps not even locally Lipschitz, but …
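The gradient sampling idea can be sketched in one dimension (a toy version under the stated assumption that $f$ is differentiable on a dense set, not the authors' full algorithm): sample gradients at random points near the current iterate, form their convex hull — in 1-D simply an interval — and negate its minimum-norm element to get a search direction; if 0 lies in the hull, the point is approximately stationary.

```python
import random

def gradient_sampling_direction(grad, x, eps, m=20, seed=0):
    """One gradient-sampling step in 1-D (illustrative sketch).

    Sample gradients at m random points within eps of x.  The convex hull of
    the sampled gradients is the interval [min, max]; the minimum-norm element
    of that interval, negated, is the search direction.  Returning 0 signals
    approximate (eps-)stationarity."""
    rng = random.Random(seed)
    gs = [grad(x)] + [grad(x + rng.uniform(-eps, eps)) for _ in range(m)]
    lo, hi = min(gs), max(gs)
    if lo <= 0.0 <= hi:
        return 0.0                    # 0 is in the hull: eps-stationary
    g = lo if lo > 0.0 else hi        # hull endpoint closest to 0
    return -g
```

For $f(x) = |x|$ near the kink, the sampled gradients straddle zero and the step correctly reports stationarity, whereas a plain gradient step would oscillate between $\pm 1$.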