Some remarks about the transformation of Charnes and Cooper

In this paper we extend in a simple way the transformation of Charnes and Cooper to the case where the functional ratios to be considered are of similar polynomial … Citation: Universidad de San Luis, Ejercito de Los Andes 950, San Luis (5700), Argentina.
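For reference, the classical Charnes–Cooper transformation in the linear-fractional case (the starting point here, not the polynomial extension treated in the paper) replaces a ratio objective by a linear one: assuming the denominator is positive on the feasible set, the substitution $t=1/(d^{T}x+\beta)$, $y=tx$ gives

\[
\min_{x}\ \frac{c^{T}x+\alpha}{d^{T}x+\beta}\ \ \text{s.t. } Ax\le b
\qquad\longrightarrow\qquad
\min_{y,\,t}\ c^{T}y+\alpha t\ \ \text{s.t. } Ay\le bt,\ \ d^{T}y+\beta t=1,\ \ t\ge 0.
\]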

Mosco stability of proximal mappings in reflexive Banach spaces

In this paper we establish criteria for the stability of the proximal mapping $\mathrm{Prox}_{\varphi}^{f}=(\partial \varphi +\partial f)^{-1}$ associated to the proper lower semicontinuous convex functions $\varphi$ and $f$ on a reflexive Banach space $X$. We prove that, under certain conditions, if the convex functions $\varphi_{n}$ converge in the sense of Mosco …
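For orientation (a standard special case, not a claim about the paper's Banach-space setting): when $X$ is a Hilbert space and $f=\tfrac{1}{2}\Vert\cdot\Vert^{2}$, so that $\partial f$ is the identity, this mapping reduces to the classical Moreau proximal mapping,

\[
\mathrm{Prox}_{\varphi}^{f}(x)=(\partial\varphi+I)^{-1}(x)
=\arg\min_{z\in X}\Big\{\varphi(z)+\tfrac{1}{2}\Vert z-x\Vert^{2}\Big\}.
\]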

Proximal Point Methods for Quasiconvex and Convex Functions With Bregman Distances

This paper generalizes the proximal point method using Bregman distances to solve convex and quasiconvex optimization problems on noncompact Hadamard manifolds. We prove that the sequence generated by our method is well defined and converges to an optimal solution of the problem. Also, we obtain the same convergence properties for the classical proximal method, …
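A minimal numerical sketch of a Bregman proximal point iteration in a Euclidean setting (not the Hadamard-manifold setting of the paper; the objective, the negative-entropy kernel and all parameter values below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Bregman distance induced by negative entropy h(x) = sum_i x_i*log(x_i)
# on the positive orthant: D_h(x, y) = sum_i x_i*log(x_i/y_i) - x_i + y_i.
def bregman_kl(x, y):
    return np.sum(x * np.log(x / y) - x + y)

# Illustrative convex objective (not from the paper).
def f(x):
    return np.sum((x - np.array([2.0, 0.5])) ** 2)

def bregman_proximal_point(f, x0, lam=1.0, iters=30):
    """Iterate x_{k+1} = argmin_x { f(x) + lam * D_h(x, x_k) },
    solving each subproblem numerically on the positive orthant."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        xk = x.copy()
        res = minimize(lambda z: f(z) + lam * bregman_kl(z, xk),
                       xk, bounds=[(1e-10, None)] * len(xk))
        x = res.x
    return x

print(bregman_proximal_point(f, x0=[1.0, 1.0]))  # approaches [2.0, 0.5]
```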

A Proximal Method for Identifying Active Manifolds

The minimization of an objective function over a constraint set can often be simplified if the “active manifold” of the constraint set can be correctly identified. In this work we present a simple subproblem, which can be used inside any (convergent) optimization algorithm, that will identify the active manifold of a “prox-regular partly smooth” …
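As a toy illustration of manifold identification (a convex $\ell_{1}$ special case, far simpler than the prox-regular partly smooth setting of the paper), the exact zeros produced by a proximal step reveal the active manifold; the problem data below are made up for the example:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal-gradient iteration for 0.5*||Ax - b||^2 + mu*||x||_1.
# The exact zeros produced by the prox step identify the active manifold
# {x : x_i = 0 on the identified index set} near the solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
b = A @ x_true
mu = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2

x = np.zeros(5)
for _ in range(200):
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * mu)

print("identified zero set:", np.where(x == 0.0)[0])
```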

Benchmark of Some Nonsmooth Optimization Solvers for Computing Nonconvex Proximal Points

The major focus of this work is to compare several methods for computing the proximal point of a nonconvex function via numerical testing. To do this, we introduce two techniques for randomly generating challenging nonconvex test functions, as well as two very specific test functions which should be of future interest to Nonconvex Optimization Benchmarking. …
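A crude way to approximate a nonconvex proximal point numerically, shown only to fix ideas (the test function, prox parameter and multi-start scheme below are illustrative assumptions, not the generators or solvers benchmarked in the paper):

```python
import numpy as np
from scipy.optimize import minimize

# A simple nonconvex test function (illustrative only; not one of the
# random generators or the two specific test functions from the paper).
def f(y):
    return np.sin(3.0 * y[0]) + 0.1 * y[0] ** 2

def prox_point(f, x, r=1.0, n_starts=20, seed=0):
    """Approximate the proximal point  argmin_y f(y) + ||y - x||^2 / (2r)
    by multi-start local minimization -- a crude stand-in for the
    specialized nonsmooth solvers compared in the benchmark."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        y0 = x + rng.standard_normal(x.shape)
        res = minimize(lambda y: f(y) + np.sum((y - x) ** 2) / (2.0 * r), y0)
        if best is None or res.fun < best.fun:
            best = res
    return best.x

print(prox_point(f, x=np.array([1.0])))
```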

Numerical Experiments with universal barrier functions for cones of Chebyshev systems

Based on previous explicit computations of universal barrier functions, we describe numerical experiments for solving certain classes of convex optimization problems. We compare the performance of the classical affine-scaling algorithm with that of a similar algorithm based on the universal barrier function. Citation: To appear in Computational Optimization and Applications.
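For readers unfamiliar with the baseline, here is a sketch of a single step of the classical primal affine-scaling method for a standard-form LP (an assumed simplification; the universal-barrier-based variant compared in the paper is not reproduced here):

```python
import numpy as np

def affine_scaling_step(A, c, x, gamma=0.9):
    """One primal affine-scaling step for  min c^T x  s.t.  Ax = b, x >= 0,
    from a strictly feasible interior point x > 0. Feasibility is preserved
    because the direction d satisfies A d = 0."""
    D2 = np.diag(x ** 2)                       # squared scaling matrix X^2
    w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)
    d = -D2 @ (c - A.T @ w)                    # projected, rescaled descent direction
    if np.all(d >= 0):                         # no blocking component
        return x + d
    alpha = gamma * np.min(-x[d < 0] / d[d < 0])  # stay strictly inside x > 0
    return x + alpha * d
```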

Smooth minimization of two-stage stochastic linear programs

This note presents an application of the smooth optimization technique of Nesterov for solving two-stage stochastic linear programs. It is shown that the original $O(1/\epsilon)$ bound of Nesterov on the number of main iterations required to obtain an $\epsilon$-optimal solution is retained. Citation: Technical Report, School of Industrial & Systems Engineering, Georgia Institute of Technology, 2006.
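The $O(1/\epsilon)$ count follows the standard smoothing argument (sketched here in Nesterov's generic setting, not in the specific two-stage stochastic LP formulation): a nonsmooth objective $f(x)=\max_{u\in Q}\langle Ax,u\rangle$ is replaced by

\[
f_{\mu}(x)=\max_{u\in Q}\big\{\langle Ax,u\rangle-\mu\,d(u)\big\},
\]

where $d$ is a strongly convex prox-function on $Q$, so that $\nabla f_{\mu}$ is Lipschitz with constant $L_{\mu}=O(\Vert A\Vert^{2}/\mu)$; choosing $\mu=\Theta(\epsilon)$ lets an accelerated gradient method reach an $\epsilon$-optimal point in $O\big(\sqrt{L_{\mu}/\epsilon}\big)=O(1/\epsilon)$ iterations.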

Generating and Measuring Instances of Hard Semidefinite Programs, SDP

Linear Programming, LP, problems with finite optimal value have a zero duality gap and a primal-dual strictly complementary optimal solution pair. On the other hand, there exist Semidefinite Programming, SDP, problems that have a nonzero duality gap (different primal and dual optimal values; not both infinite). The duality gap is assured to be zero if …
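A standard small instance exhibiting a finite nonzero gap (a textbook example, not necessarily one produced by the paper's generator) is

\[
\min_{x\in\mathbb{R}^{2}}\ x_{1}
\quad\text{s.t.}\quad
\begin{pmatrix}
0 & x_{1} & 0\\
x_{1} & x_{2} & 0\\
0 & 0 & x_{1}+1
\end{pmatrix}\succeq 0,
\]

whose primal optimal value is $0$ (positive semidefiniteness of the leading $2\times 2$ block forces $x_{1}=0$), while the dual optimal value is $-1$; both values are attained and the gap equals $1$.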

Exact regularization of linear programs

We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed; differentiability is not required. This is an extension of the result first described by Mangasarian and Meyer (SIAM J. Control Optim., 17(6), pp. …
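In the usual statement of this property (paraphrased, with standard-form notation assumed for illustration), the regularized problem is

\[
\min_{x}\ c^{T}x+\delta\,\phi(x)\quad\text{s.t. } Ax=b,\ x\ge 0,
\]

and exact regularization means there is a threshold $\bar{\delta}>0$ such that for every $\delta\in(0,\bar{\delta}\,]$ each solution of the regularized problem also solves the original LP (and, among the LP's solutions, minimizes $\phi$).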

Steplength Selection in Interior-Point Methods for Quadratic Programming

We present a new strategy for choosing primal and dual steplengths in a primal-dual interior-point algorithm for convex quadratic programming. Current implementations often scale steps equally to avoid increases in dual infeasibility between iterations. We argue that this approach can be too conservative, while a safeguarded, unequally scaled steplength approach will often require fewer steps toward …
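A minimal sketch of the standard ratio test behind both choices (the conventional fraction-to-boundary rule, not the paper's new strategy; function names and the damping factor are illustrative):

```python
import numpy as np

def frac_to_boundary(v, dv, tau=0.995):
    """Largest step alpha <= 1 keeping v + alpha*dv strictly positive,
    damped by the usual fraction-to-boundary factor tau."""
    neg = dv < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, tau * np.min(-v[neg] / dv[neg]))

def steplengths(x, dx, s, ds, equal=False, tau=0.995):
    """Separate (unequal) primal and dual steplengths; setting equal=True
    recovers the conservative equally-scaled choice discussed above."""
    alpha_p = frac_to_boundary(x, dx, tau)
    alpha_d = frac_to_boundary(s, ds, tau)
    if equal:
        alpha_p = alpha_d = min(alpha_p, alpha_d)
    return alpha_p, alpha_d
```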