Lagrangian relaxation

Lagrangian relaxation is a tool for finding upper bounds on a given (arbitrary) maximization problem. Sometimes the bound is exact and an optimal solution is found. Our aim in this paper is to review this technique, the theory behind it, its numerical aspects, and its relation to other techniques such as column generation. Citation: Computational Combinatorial …
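
A minimal sketch of the idea, using an assumed 0-1 knapsack instance: the capacity constraint is dualized with a multiplier λ ≥ 0, the relaxed problem decomposes per item, and every value L(λ) is an upper bound on the maximization; the data and the grid search over λ are illustrative, not taken from the paper.

```python
import numpy as np

# Assumed example data: maximize c.x subject to w.x <= W, x in {0,1}^n.
c = np.array([10.0, 7.0, 4.0, 3.0])   # profits
w = np.array([6.0, 4.0, 3.0, 2.0])    # weights
W = 9.0                               # capacity

def lagrangian_bound(lam):
    # L(lam) = max_{x in {0,1}^n} c.x + lam*(W - w.x)
    #        = sum_i max(0, c_i - lam*w_i) + lam*W, an upper bound for any lam >= 0.
    return np.maximum(0.0, c - lam * w).sum() + lam * W

# Crude grid search over the multiplier; the smallest bound found approximates
# the Lagrangian dual value (a subgradient method would be used in practice).
best = min(lagrangian_bound(l) for l in np.linspace(0.0, 5.0, 501))
print("Lagrangian upper bound:", best)
```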

Dynamic Weighted Aggregation for Evolutionary Multiobjective Optimization

Weighted-sum-based approaches to multiobjective optimization are computationally very efficient. However, they have two main weaknesses: 1) only one Pareto solution can be obtained in one run, and 2) solutions in the concave part of the Pareto front cannot be obtained. This paper proposes a new theory of multiobjective optimization using the weighted aggregation approach. Based …
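
A minimal sketch of the weighted-aggregation idea, assuming the two-objective Schaffer problem (f1 = x^2, f2 = (x-2)^2), a sinusoidal weight schedule, and a simple (1+1) hill climber; these choices are illustrative and not the authors' exact evolutionary algorithm, but they show how dynamically changing weights let a single run collect many Pareto points.

```python
import math, random

def f1(x): return x * x               # assumed toy objective 1
def f2(x): return (x - 2.0) ** 2      # assumed toy objective 2

random.seed(0)
x = random.uniform(-4.0, 4.0)
archive = []                          # objective vectors visited during the run
for t in range(2000):
    w1 = abs(math.sin(2.0 * math.pi * t / 400.0))   # weights change during the run
    w2 = 1.0 - w1
    cand = x + random.gauss(0.0, 0.1)                # (1+1) mutation step
    if w1 * f1(cand) + w2 * f2(cand) <= w1 * f1(x) + w2 * f2(x):
        x = cand                                     # accept if the aggregated objective improves
    archive.append((f1(x), f2(x)))

# Keep only non-dominated points from the archive of a single run.
pts = set(archive)
pareto = [p for p in pts
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]
print(len(pareto), "approximate Pareto points from one run")
```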

A Nonlinear Programming Algorithm for Solving Semidefinite Programs via Low-rank Factorization

In this paper, we present a nonlinear programming algorithm for solving semidefinite programs (SDPs) in standard form. The algorithm’s distinguishing feature is a change of variables that replaces the symmetric, positive semidefinite variable X of the SDP with a rectangular variable R according to the factorization X = RR^T. The rank of the factorization, i.e., …
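
A minimal sketch of the change of variables for the special case diag(X) = 1 (the max-cut relaxation), with random data and a plain projected-gradient loop standing in for the paper's nonlinear programming method; only the substitution X = RR^T itself is taken from the abstract.

```python
import numpy as np

n, r = 8, 3                                           # assumed problem size and factorization rank
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2    # assumed symmetric cost matrix

# X = R R^T with unit-norm rows gives X >= 0 and diag(X) = 1 automatically.
R = rng.standard_normal((n, r))
R /= np.linalg.norm(R, axis=1, keepdims=True)

step = 0.05
for _ in range(500):
    grad = 2.0 * C @ R                                # gradient of <C, R R^T> with respect to R
    R -= step * grad
    R /= np.linalg.norm(R, axis=1, keepdims=True)     # re-project rows onto the unit sphere

X = R @ R.T
print("objective <C, X> =", np.trace(C @ X))
```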

Assessing the Potential of Interior Methods for Nonlinear Optimization

A series of numerical experiments with interior-point (LOQO, KNITRO) and active-set SQP codes (SNOPT, filterSQP) is reported and analyzed. The tests were performed on small, medium-size, and moderately large problems, and the results are examined by problem class. Detailed observations on the performance of the codes, and several suggestions on how to improve them, are presented. …

A Bundle Method to Solve Multivalued Variational Inequalities

In this paper we present a bundle method for solving a generalized variational inequality problem. This problem consists in finding a zero of the sum of two multivalued operators defined on a real Hilbert space. The first one is monotone and the second one is the subdifferential of a lower semicontinuous proper convex function. The …
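
For orientation, the problem class can be illustrated with a forward-backward splitting sketch (explicitly not the bundle method of the paper): find x with 0 ∈ T(x) + ∂f(x), where T is a monotone affine operator and f is the l1 norm; the data and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5)); M = M @ M.T + np.eye(5)   # positive definite => x -> Mx + q is monotone
q = rng.standard_normal(5)
T = lambda x: M @ x + q                                    # single-valued monotone operator

def prox_l1(v, t):
    # Resolvent of t * subdifferential of f(x) = ||x||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(5)
t = 0.5 / np.linalg.norm(M, 2)     # step size below 1/L for the Lipschitz part
for _ in range(500):
    x = prox_l1(x - t * T(x), t)   # forward step on T, backward step on the subdifferential

# At a fixed point, 0 lies in T(x) + subdifferential of ||x||_1.
print("fixed-point residual:", np.linalg.norm(x - prox_l1(x - t * T(x), t)))
```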

Improving complexity of structured convex optimization problems using self-concordant barriers

The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed in [2]. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of …
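
A minimal sketch of a short-step barrier scheme for a small linear program min c^T x s.t. Ax ≤ b, using the logarithmic barrier (self-concordant with parameter ν equal to the number of constraints); the data, the update factor for t, and the feasibility damping are illustrative assumptions rather than the optimized parameters derived in the paper.

```python
import numpy as np

# Assumed toy LP: min c^T x subject to A x <= b.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
c = np.array([-1.0, -2.0])
nu = A.shape[0]                       # barrier parameter of F(x) = -sum log(b - A x)

x = np.array([0.25, 0.25])            # strictly feasible starting point
t = 1.0
for _ in range(200):
    t *= 1.0 + 0.1 / np.sqrt(nu)      # short-step increase of the path parameter
    s = b - A @ x                     # slacks, kept strictly positive
    grad = t * c + A.T @ (1.0 / s)    # gradient of t*c^T x + F(x)
    hess = A.T @ np.diag(1.0 / s**2) @ A
    dx = np.linalg.solve(hess, grad)  # Newton direction
    alpha = 1.0
    while np.any(b - A @ (x - alpha * dx) <= 0):
        alpha *= 0.5                  # damp the step to remain strictly feasible
    x -= alpha * dx

print("approximate minimizer:", x)    # approaches the vertex (0, 1) as t grows
```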

A conic formulation for $l_p$-norm optimization

In this paper, we formulate the $l_p$-norm optimization problem as a conic optimization problem, derive its standard duality properties and show it can be solved in polynomial time. We first define an ad hoc closed convex cone, study its properties and derive its dual. This allows us to express the standard $l_p$-norm optimization primal problem …
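
As a generic illustration only (the paper constructs its own ad hoc cone with different structure), note that the epigraph of the p-norm is already a closed convex cone, so an $l_p$-norm minimization can be written with a linear objective over a conic constraint:

```latex
% Generic p-norm cone, p >= 1 (illustration, not the paper's ad hoc cone):
\[
  \mathcal{K}_p = \{ (y, t) \in \mathbb{R}^n \times \mathbb{R} : \|y\|_p \le t \},
  \qquad
  \min_x \; \|Ax - b\|_p
  \;\Longleftrightarrow\;
  \min_{x,\,t} \; t \quad \text{s.t.} \quad (Ax - b,\, t) \in \mathcal{K}_p .
\]
```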

Exploiting Sparsity in Semidefinite Programming via Matrix Completion II: Implementation and Numerical Results

In Part I of this series of articles, we introduced a general framework of exploiting the aggregate sparsity pattern over all data matrices of large scale and sparse semidefinite programs (SDPs) when solving them by primal-dual interior-point methods. This framework is based on some results about positive semidefinite matrix completion, and it can be embodied …
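
A minimal sketch of the first ingredient mentioned in the abstract, the aggregate sparsity pattern, computed as the union of the nonzero patterns of all data matrices of an SDP; the random data below are assumptions, and the chordal extension and matrix-completion steps of the framework are not shown.

```python
import numpy as np

n = 6
rng = np.random.default_rng(2)

def random_sparse_sym(density=0.2):
    # Assumed sparse symmetric data matrix for illustration.
    M = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    return (M + M.T) / 2

C = random_sparse_sym()
A = [random_sparse_sym() for _ in range(3)]

pattern = (C != 0)
for Ak in A:
    pattern |= (Ak != 0)              # union of the nonzero patterns of C, A_1, ..., A_m
np.fill_diagonal(pattern, True)       # the diagonal is always included in the pattern

print("aggregate sparsity pattern:\n", pattern.astype(int))
```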