Steepest descent method for quasiconvex minimization on Riemannian manifolds

This paper extends the full convergence of the steepest descent algorithm with a generalized Armijo search and a proximal regularization to quasiconvex minimization problems defined on complete Riemannian manifolds. Previous convergence results are obtained as particular cases of our approach, and some examples in non-Euclidean spaces are given. Citation: J. Math. Anal. Appl., 341, …
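In the Euclidean special case (the manifold R^n, where the exponential map reduces to x + t*d), the generalized Armijo search reduces to classical backtracking. A minimal sketch of that special case, not the paper's Riemannian method; the function name and the parameter values `beta`, `sigma` are illustrative assumptions:

```python
import numpy as np

def armijo_steepest_descent(f, grad, x0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=1000):
    """Steepest descent with a backtracking Armijo line search.

    Euclidean illustration only: the Riemannian version replaces
    x + t*d by the manifold's exponential map at x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g  # steepest descent direction
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):
            t *= beta
        x = x + t * d
    return x
```

For instance, on the quadratic f(x) = ||x - 2||^2 the iterates converge to the minimizer at 2.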

Discrete gradient method: a derivative-free method for nonsmooth optimization

In this paper, a new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can …

Some remarks about the transformation of Charnes and Cooper

In this paper we extend, in a simple way, the transformation of Charnes and Cooper to the case where the functions in the ratio under consideration are similar polynomials. Citation: Universidad de San Luis, Ejercito de Los Andes 950, San Luis (5700), Argentina.
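For reference, the classical Charnes–Cooper transformation (the linear case that the paper extends) converts a linear-fractional program into a linear program; a standard sketch with generic symbols:

```latex
% Linear-fractional program (assume d^{T}x+\beta > 0 on the feasible set):
\min_{x}\ \frac{c^{T}x+\alpha}{d^{T}x+\beta}
\quad \text{s.t.} \quad Ax \le b .
% Substituting t = 1/(d^{T}x+\beta) and y = t\,x yields the linear program
\min_{y,\,t}\ c^{T}y+\alpha t
\quad \text{s.t.} \quad Ay \le b\,t,\quad d^{T}y+\beta t = 1,\quad t \ge 0 .
```

An optimal $(y^{*},t^{*})$ with $t^{*}>0$ recovers an optimal $x^{*}=y^{*}/t^{*}$ of the fractional problem.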

Mosco stability of proximal mappings in reflexive Banach spaces

In this paper we establish criteria for the stability of the proximal mapping $\mathrm{Prox}_{\varphi }^{f}=(\partial \varphi +\partial f)^{-1}$ associated to the proper lower semicontinuous convex functions $\varphi $ and $f$ on a reflexive Banach space $X$. We prove that, under certain conditions, if the convex functions $\varphi _{n}$ converge in the sense of Mosco …
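For context, Mosco convergence combines a weak liminf inequality with a strong recovery sequence; the standard definition reads:

```latex
% \varphi_n Mosco-converges to \varphi on a reflexive Banach space X iff:
% (i) weak liminf: for every x \in X and every x_n \rightharpoonup x (weakly),
\liminf_{n\to\infty}\varphi_{n}(x_{n}) \ \ge\ \varphi(x);
% (ii) strong recovery: for every x \in X there exist x_n \to x (strongly) with
\limsup_{n\to\infty}\varphi_{n}(x_{n}) \ \le\ \varphi(x).
```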

Proximal Point Methods for Quasiconvex and Convex Functions With Bregman Distances

This paper generalizes the proximal point method using Bregman distances to solve convex and quasiconvex optimization problems on noncompact Hadamard manifolds. We prove that the sequence generated by our method is well defined and converges to an optimal solution of the problem. Also, we obtain the same convergence properties for the classical proximal method, …
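In the Euclidean model case, the Bregman proximal point iteration takes the following standard form (on a Hadamard manifold the linear structure is replaced by the manifold's geometry; symbols here are generic):

```latex
% Bregman distance induced by a strictly convex, differentiable h:
D_{h}(x,y) \;=\; h(x) - h(y) - \langle \nabla h(y),\, x-y \rangle .
% Proximal point iteration with stepsizes \lambda_k > 0:
x^{k+1} \in \operatorname*{arg\,min}_{x}\ \Big\{\, f(x) + \lambda_{k}\, D_{h}(x, x^{k}) \,\Big\}.
% With h(x) = \tfrac12\|x\|^{2} one gets D_h(x,y) = \tfrac12\|x-y\|^{2},
% recovering the classical proximal point method.
```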

A Proximal Method for Identifying Active Manifolds

The minimization of an objective function over a constraint set can often be simplified if the “active manifold” of the constraint set can be correctly identified. In this work we present a simple subproblem, which can be used inside any (convergent) optimization algorithm, that will identify the active manifold of a “prox-regular partly smooth” …

Benchmark of Some Nonsmooth Optimization Solvers for Computing Nonconvex Proximal Points

The major focus of this work is to compare several methods for computing the proximal point of a nonconvex function via numerical testing. To do this, we introduce two techniques for randomly generating challenging nonconvex test functions, as well as two very specific test functions that should be of future interest for nonconvex optimization benchmarking. …
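As a toy illustration of the quantity being benchmarked (not one of the paper's solvers or test functions), the proximal point of a one-dimensional nonconvex function can be approximated by brute-force sampling; `numeric_prox` and its grid bounds are assumptions made here:

```python
import numpy as np

def numeric_prox(f, center, lam, lo, hi, n=200001):
    """Approximate the proximal point
        argmin_x  f(x) + (1 / (2*lam)) * (x - center)**2
    of a vectorized 1-D function f by dense sampling on [lo, hi].
    Brute-force sketch for illustration only; real solvers (as
    benchmarked in the paper) exploit structure instead of a grid.
    """
    xs = np.linspace(lo, hi, n)
    vals = f(xs) + (xs - center) ** 2 / (2.0 * lam)
    return xs[np.argmin(vals)]
```

For the nonconvex f(x) = min((x+1)^2, (x-1)^2), center 0.9, and lam = 1, a first-order check on the active right branch gives the proximal point 29/30.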

Numerical Experiments with universal barrier functions for cones of Chebyshev systems

Based on previous explicit computations of universal barrier functions, we describe numerical experiments for solving certain classes of convex optimization problems. A comparison is given of the performance of the classical affine-scaling algorithm with a similar algorithm based upon the universal barrier function. Citation: To appear in Computational Optimization and Applications.

Smooth minimization of two-stage stochastic linear programs

This note presents an application of the smooth optimization technique of Nesterov for solving two-stage stochastic linear programs. It is shown that the original O(1/ε) bound of Nesterov on the number of main iterations required to obtain an ε-optimal solution is retained. Citation: Technical Report, School of Industrial & Systems Engineering, Georgia Institute of Technology, 2006.
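For context, Nesterov's smoothing technique replaces a max-structured nonsmooth objective by a smooth uniform approximation; a standard sketch with generic symbols (not the paper's notation):

```latex
% Max-structured nonsmooth objective and its smoothing, with parameter
% \mu > 0 and a prox-function d that is \sigma-strongly convex on Q:
f(x) = \max_{u \in Q} \big\{ \langle Ax, u \rangle - \phi(u) \big\},
\qquad
f_{\mu}(x) = \max_{u \in Q} \big\{ \langle Ax, u \rangle - \phi(u) - \mu\, d(u) \big\}.
% f_\mu has a Lipschitz-continuous gradient with constant \|A\|^{2}/(\mu\sigma),
% and  f_\mu \le f \le f_\mu + \mu D  with  D = \max_{u \in Q} d(u).
% Taking \mu = O(\varepsilon) and applying an optimal first-order method
% gives an \varepsilon-optimal solution in O(1/\varepsilon) iterations.
```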

Generating and Measuring Instances of Hard Semidefinite Programs, SDP

Linear Programming (LP) problems with finite optimal value have a zero duality gap and a primal-dual strictly complementary optimal solution pair. On the other hand, there exist Semidefinite Programming (SDP) problems which have a nonzero duality gap (different primal and dual optimal values; not both infinite). The duality gap is assured to be zero if …