Global convergence of an augmented Lagrangian method for nonlinear programming via Riemannian optimization

Considering a standard nonlinear programming problem, one may view a subset of the equality constraints as defining an embedded Riemannian manifold. In this paper we investigate the differences between the Euclidean and the Riemannian approaches to this problem. It is well known that the linear independence constraint qualifications for the two approaches are equivalent. However, when considering …

Constraint qualifications and strong global convergence properties of an augmented Lagrangian method on Riemannian manifolds

In recent years, augmented Lagrangian methods have been successfully applied to several classes of non-convex optimization problems, inspiring new developments in both theory and practice. In this paper we bring most of these recent developments from nonlinear programming to the context of optimization on Riemannian manifolds, including equality and inequality constraints. Much research has …
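
For orientation, here is a minimal sketch of the classical Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian in the Euclidean setting, on which such methods are typically built; the symbols f, h, g, rho, lambda, mu are generic background notation, not taken from this paper, and the Riemannian variant restricts x to the manifold:

```latex
% Problem: minimize f(x) subject to h(x) = 0, g(x) <= 0.
% PHR augmented Lagrangian with penalty parameter \rho > 0 and
% multiplier estimates \lambda (equalities), \mu \ge 0 (inequalities):
L_\rho(x,\lambda,\mu) \;=\; f(x)
  \;+\; \frac{\rho}{2}\sum_{i}\Big( h_i(x) + \frac{\lambda_i}{\rho} \Big)^{2}
  \;+\; \frac{\rho}{2}\sum_{j}\max\Big\{ 0,\; g_j(x) + \frac{\mu_j}{\rho} \Big\}^{2}
```

Each outer iteration approximately minimizes L_rho in x, updates the multiplier estimates, and increases rho if feasibility has not improved enough.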

Conditional gradient method for multiobjective optimization

We analyze the conditional gradient method, also known as the Frank-Wolfe method, for constrained multiobjective optimization. The constraint set is assumed to be convex and compact, and the objective functions are assumed to be continuously differentiable. The method is considered with different strategies for obtaining the step sizes. Asymptotic convergence properties and iteration-complexity bounds with and …
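
As background for the single-objective case, here is a minimal sketch of a Frank-Wolfe iteration over the probability simplex; the function names and the least-squares test problem are illustrative assumptions, not from the paper, whose multiobjective method replaces the single gradient by a common descent direction for all objectives:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=200):
    """Conditional gradient (Frank-Wolfe) sketch on the probability simplex.

    The linear minimization oracle over the simplex returns the vertex e_i
    whose gradient component is smallest; the classical open-loop step
    size 2/(k+2) is used.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        i = int(np.argmin(g))            # LMO: best vertex of the simplex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (k + 2.0)          # diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative problem: least squares over the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x0 = np.full(5, 0.2)
x = frank_wolfe_simplex(grad, x0)
```

Because every iterate is a convex combination of simplex vertices, the method is projection-free: feasibility is maintained by construction.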

How to project onto extended second order cones

The extended second order cones were introduced by S. Z. Németh and G. Zhang in [S. Z. Németh and G. Zhang. Extended Lorentz cones and variational inequalities on cylinders. J. Optim. Theory Appl., 168(3):756-768, 2016] for solving mixed complementarity problems and variational inequalities on cylinders. R. Sznajder in [R. Sznajder. The Lyapunov rank of extended …
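
For context, projection onto the ordinary (non-extended) second-order cone K = {(x, t) : ||x|| <= t} has a well-known closed form; the short sketch below is only this classical background, since the extended cones treated in the paper require a different analysis:

```python
import numpy as np

def project_soc(x, t):
    """Projection of (x, t) onto the second-order (Lorentz) cone
    K = {(x, t) : ||x|| <= t}, via the classical three-case formula."""
    nx = np.linalg.norm(x)
    if nx <= t:                          # already inside the cone
        return x.copy(), float(t)
    if nx <= -t:                         # inside the polar cone: project to 0
        return np.zeros_like(x), 0.0
    alpha = (nx + t) / 2.0               # otherwise: project to the boundary
    return alpha * x / nx, alpha

# Example: project (3, 4, 0) onto the Lorentz cone in R^3
# (lands on the boundary, where the norm equals the last coordinate).
y, s = project_soc(np.array([3.0, 4.0]), 0.0)
```
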

A robust Kantorovich’s theorem on inexact Newton method with relative residual error tolerance

We prove that, under semi-local assumptions, the inexact Newton method with a fixed relative residual error tolerance converges Q-linearly to a zero of the non-linear operator under consideration. Using this result, we show that Newton's method for minimizing a self-concordant function, or for finding a zero of an analytic function, can be implemented with a …
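
A minimal sketch of an inexact Newton iteration with a fixed relative residual tolerance theta, i.e. each step s_k need only satisfy ||F(x_k) + F'(x_k) s_k|| <= theta ||F(x_k)||; the capped steepest-descent inner solver and the two-dimensional test problem are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def inexact_newton(F, J, x0, theta=0.5, tol=1e-10, max_iter=50):
    """Inexact Newton: the linear system J(x) s = -F(x) is solved only to
    a relative residual of theta, here by a capped steepest-descent loop
    on the least-squares residual (a sketch, not a production solver)."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        Fx = F(x)
        normF = np.linalg.norm(Fx)
        if normF <= tol:
            break
        Jx = J(x)
        s = np.zeros_like(x)
        for _ in range(100):                      # capped inner solve
            r = Jx @ s + Fx                       # current linear residual
            if np.linalg.norm(r) <= theta * normF:
                break                             # relative tolerance met
            g = Jx.T @ r                          # gradient of 0.5*||r||^2
            Jg = Jx @ g
            s -= (g @ g) / (Jg @ Jg) * g          # exact line search step
        x = x + s
    return x

# Illustrative system: x0^2 + x1^2 = 4, x0 = x1 (root at (sqrt 2, sqrt 2)).
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
x = inexact_newton(F, J, np.array([2.0, 1.0]))
```

With a fixed theta in (0, 1) the outer iteration is only Q-linear, which is exactly the regime the theorem above addresses; driving theta to zero recovers faster rates.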

Dini Derivative and a Characterization for Lipschitz and Convex Functions on Riemannian Manifolds

The Dini derivative in the Riemannian manifold setting is studied in this paper. In addition, a characterization of Lipschitz and convex functions defined on Riemannian manifolds and sufficient optimality conditions for constrained optimization problems in terms of the Dini derivative are given.
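
For reference, one standard formulation of the upper Dini directional derivative on a Riemannian manifold, written via the exponential map; the notation is generic background, not necessarily the paper's:

```latex
% Upper Dini derivative of f at p \in M along v \in T_p M:
D^{+} f(p; v) \;=\; \limsup_{t \downarrow 0}
  \frac{f\big(\exp_p(t v)\big) - f(p)}{t}
```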

On the Convergence of the Entropy-Exponential Penalty Trajectories and Generalized Proximal Point Methods in Semidefinite Optimization

The convergence of the primal and dual central paths associated with entropy and exponential functions, respectively, for semidefinite programming problems is studied in this paper. As an application, the proximal point method with the Kullback-Leibler distance applied to semidefinite programming problems is considered, and the convergence of the primal and dual sequences is proved. Citation: Journal of …
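
As background, the Kullback-Leibler (relative entropy) distance between positive definite matrices, as commonly used in proximal point methods for semidefinite programming; a generic sketch, not necessarily the exact form used in the paper:

```latex
% Matrix Kullback--Leibler distance for X, Y \succ 0:
d_{\mathrm{KL}}(X, Y) \;=\;
  \operatorname{tr}\big( X(\log X - \log Y) - X + Y \big)
% Generic proximal point step with stepsize \lambda_k > 0:
X^{k+1} \in \operatorname*{arg\,min}_{X \succ 0}
  \Big\{ f(X) + \tfrac{1}{\lambda_k}\, d_{\mathrm{KL}}(X, X^{k}) \Big\}
```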

Central Paths in Semidefinite Programming, Generalized Proximal Point Method and Cauchy Trajectories in Riemannian Manifolds

The relationships among the central path in the context of semidefinite programming, the generalized proximal point method, and Cauchy trajectories in Riemannian manifolds are studied in this paper. First, it is proved that the central path associated with the general function is well defined. The convergence and characterization of its limit point are established for functions satisfying …

Kantorovich’s Majorants Principle for Newton’s Method

We prove Kantorovich’s theorem on Newton’s method using a convergence analysis that makes clear the relationship between the majorant function and the non-linear operator under consideration. This approach enables us to drop the assumption that the majorant function has a second root, while still guaranteeing the Q-quadratic convergence rate …
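
For reference, the classical Kantorovich setting and its scalar majorant function; these are the standard textbook hypotheses, sketched here as background rather than the paper's exact assumptions:

```latex
% With F'(x_0) invertible and, for a constant K > 0,
%   \| F'(x_0)^{-1} F(x_0) \| \le b,
%   \| F'(x_0)^{-1} ( F'(x) - F'(y) ) \| \le K \|x - y\|,
% the classical scalar majorant function is
f(t) \;=\; \frac{K}{2}\, t^{2} - t + b ,
% and convergence of Newton's method is guaranteed when 2 K b \le 1.
```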

Dual Convergence of the Proximal Point Method with Bregman Distances for Linear Programming

In this paper we consider the proximal point method with Bregman distance applied to linear programming problems, and study the dual sequence obtained from the optimal multipliers of the linear constraints of each subproblem. We establish the convergence of this dual sequence, as well as convergence rate results for the primal sequence, for a suitable …
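
As background, the generic Bregman proximal point step; the kernel phi and stepsizes lambda_k are standard notation, not necessarily the paper's choices (for the entropy kernel phi(x) = sum_i x_i log x_i on the positive orthant, D_phi is the Kullback-Leibler divergence):

```latex
% Bregman distance induced by a differentiable strictly convex kernel \varphi:
D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y)
  - \langle \nabla \varphi(y),\, x - y \rangle
% Proximal point step with stepsize \lambda_k > 0:
x^{k+1} \in \operatorname*{arg\,min}_{x}
  \Big\{ f(x) + \tfrac{1}{\lambda_k}\, D_\varphi(x, x^{k}) \Big\}
```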