DIRECT SEARCH ALGORITHMS OVER RIEMANNIAN MANIFOLDS

We generalize the Nelder-Mead simplex and LTMADS algorithms, as well as frame-based methods for function minimization, to Riemannian manifolds. Examples are given for functions defined on the special orthogonal Lie group $\mathcal{SO}(n)$ and the Grassmann manifold $\mathcal{G}(n,k)$. Our main examples are applying the generalized LTMADS algorithm to equality constrained optimization problems and to the Whitney …
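
The central primitive in such generalizations is replacing straight-line simplex operations with geodesic ones via the manifold's exponential and logarithm maps. The sketch below illustrates this on the unit sphere rather than $\mathcal{SO}(n)$ or $\mathcal{G}(n,k)$, purely for brevity; the geodesic reflection shown is an assumed, illustrative analogue of a Nelder-Mead reflection step, not the paper's exact construction.

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: follow the geodesic from p along v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * (v / nv)

def sphere_log(p, q):
    """Logarithm map: the tangent vector at p pointing toward q along a geodesic."""
    w = q - np.dot(p, q) * p          # project q onto the tangent space at p
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(p)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * (w / nw)

def geodesic_reflection(best, worst):
    """Reflect the worst vertex through the best one along their geodesic."""
    return sphere_exp(best, -sphere_log(best, worst))
```

On a general Riemannian manifold the same pattern applies with the appropriate exp/log maps (or a cheaper retraction); on the sphere they have the closed forms above.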

EQUALITY CONSTRAINTS, RIEMANNIAN MANIFOLDS AND DIRECT SEARCH METHODS

We present a general procedure for handling equality constraints in optimization problems that is of particular use in direct search methods. First, we provide the necessary background in differential geometry. In particular, we cover what a Riemannian manifold is, what a tangent space is, how to move over a manifold, and how to …
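
A minimal sketch of "how to move over a manifold" cut out by equality constraints $c(x) = 0$, assuming $c$ is smooth with a full-rank Jacobian: step within the tangent space (the null space of the Jacobian), then return to the manifold with a Gauss-Newton projection, which serves as a retraction. The function names and tolerances here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def tangent_step(x, d, c, jac, max_iters=20, tol=1e-10):
    """Move from x along direction d restricted to the tangent space
    {v : jac(x) @ v = 0}, then project back onto {y : c(y) = 0}."""
    J = jac(x)
    # orthogonal projection of d onto the null space of J
    d_t = d - J.T @ np.linalg.solve(J @ J.T, J @ d)
    y = x + d_t
    # Gauss-Newton correction: least-norm Newton steps toward c(y) = 0
    for _ in range(max_iters):
        r = c(y)
        if np.linalg.norm(r) < tol:
            break
        Jy = jac(y)
        y = y - Jy.T @ np.linalg.solve(Jy @ Jy.T, r)
    return y
```

For example, with `c(x) = np.array([x @ x - 1.0])` and `jac(x) = 2 * x.reshape(1, -1)`, this moves a point over the unit sphere.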

DIRECT SEARCH METHODS OVER LIPSCHITZ MANIFOLDS

We extend direct search methods to optimization problems that include equality constraints given by Lipschitz functions. The equality constraints are assumed to implicitly define a Lipschitz manifold. Numerically implementing the inverse (implicit) function theorem allows us to define a new problem on the tangent spaces of the manifold. We can then use a direct search …
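
The heart of this construction, implementing the implicit function theorem numerically, can be illustrated as follows: split the variables as $x = (u, v)$ with $c(u, v) = 0$, recover $v = g(u)$ by Newton's method, and obtain a reduced problem in the free coordinates $u$ alone. The sketch assumes enough smoothness for Newton's method to apply; names are illustrative.

```python
import numpy as np

def implicit_solve(c, jac_v, u, v0, max_iters=50, tol=1e-10):
    """Recover v = g(u) from c(u, v) = 0 by Newton's method, assuming the
    partial Jacobian jac_v = dc/dv is invertible near (u, v0)."""
    v = np.array(v0, dtype=float)
    for _ in range(max_iters):
        r = c(u, v)
        if np.linalg.norm(r) < tol:
            break
        v = v - np.linalg.solve(jac_v(u, v), r)
    return v

def reduced_objective(f, c, jac_v, v0):
    """The original objective f(u, v) seen only through the free coordinates u."""
    return lambda u: f(u, implicit_solve(c, jac_v, u, v0))
```

A direct search method can then be run on the reduced objective without any explicit constraint handling.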

A SIMPLICIAL CONTINUATION DIRECT SEARCH METHOD

A direct search method for the class of problems considered by Lewis and Torczon [\textit{SIAM J. Optim.}, 12 (2002), pp. 1075-1089] is developed. Instead of using an augmented Lagrangian method, a simplicial approximation method to the feasible set is implicitly employed. This allows the points our algorithm considers to conveniently remain within an \textit{a priori} …
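
Without the full text, one can still illustrate the basic device of keeping trial points inside an a priori region described by a simplex: containment testing via barycentric coordinates. This is a generic illustration, not the paper's simplicial continuation machinery.

```python
import numpy as np

def barycentric(point, vertices):
    """Barycentric coordinates of `point` with respect to a nondegenerate
    simplex; `vertices` holds the n + 1 vertices (rows) in R^n."""
    A = np.vstack([vertices.T, np.ones(len(vertices))])   # (n+1) x (n+1)
    b = np.append(point, 1.0)
    return np.linalg.solve(A, b)

def in_simplex(point, vertices, tol=1e-12):
    """A point lies in the simplex iff all barycentric coordinates are >= 0."""
    return bool(np.all(barycentric(point, vertices) >= -tol))
```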

A globally convergent trust-region SQP method without a penalty function for nonlinearly constrained optimization

In this paper, we propose a new trust-region SQP method, which uses no penalty function, for solving nonlinearly constrained optimization problems. Our method alternates between two phases: a feasibility restoration phase and an objective function minimization phase. The global convergence property of the proposed method is shown. Citation: Cooperative Research Report …
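
Schematically, the alternation the abstract describes looks like the loop below: restore feasibility when the constraint violation is large, otherwise take a step that decreases the objective. The `restore` and `minimize_step` subroutines stand in for the paper's trust-region SQP subproblems; this skeleton is an assumption about the structure, not the paper's algorithm.

```python
import numpy as np

def two_phase_method(x, c, restore, minimize_step, max_iters=100, feas_tol=1e-8):
    """Alternate a feasibility restoration phase (drive ||c(x)|| down) with an
    objective minimization phase; no penalty function is ever formed."""
    for _ in range(max_iters):
        if np.linalg.norm(c(x)) > feas_tol:
            x = restore(x)           # phase 1: reduce constraint violation
        else:
            x = minimize_step(x)     # phase 2: reduce the objective near feasibility
    return x
```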

A primal-dual interior point method for nonlinear semidefinite programming

In this paper, we consider a primal-dual interior point method for solving nonlinear semidefinite programming problems. By combining the primal barrier penalty function and the primal-dual barrier function, a new primal-dual merit function is proposed within the framework of a line search strategy. We show the global convergence property of our method. Finally, some numerical …
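
The abstract does not give the merit function explicitly, but one plausible shape, combining a primal log-det barrier with a weighted primal-dual barrier term, is sketched below; the exact terms and weights in the paper may differ.

```python
import numpy as np

def merit(f_val, X, Z, mu, nu):
    """Illustrative primal-dual merit function for nonlinear SDP:
    (f - mu*logdet X) + nu*(<X, Z> - mu*logdet(XZ)).
    Assumes X and Z are symmetric positive definite."""
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_XZ = np.linalg.slogdet(X @ Z)
    primal_barrier = f_val - mu * logdet_X
    primal_dual_barrier = np.trace(X @ Z) - mu * logdet_XZ
    return primal_barrier + nu * primal_dual_barrier
```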

Convergence Analysis of an Interior-Point Method for Nonconvex Nonlinear Programming

In this paper, we present global and local convergence results for an interior-point method for nonlinear programming. The algorithm uses an $\ell_1$ penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [6]. …
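
For equality constraints, the $\ell_1$ penalty relaxation the abstract refers to is typically made smooth by splitting the residual into nonnegative parts; one standard form, with penalty parameter $\rho$ and $e$ the all-ones vector, is:

```latex
\min_{x}\; f(x) + \rho\,\lVert c(x)\rVert_1
\quad\Longleftrightarrow\quad
\min_{x,\,p,\,q}\; f(x) + \rho\, e^{\mathsf{T}}(p + q)
\quad \text{s.t.}\quad c(x) = p - q,\;\; p \ge 0,\;\; q \ge 0.
```

Inequality constraints are handled analogously with a single slack per constraint, and the multipliers of the relaxed problem satisfy $\lVert \lambda \rVert_\infty \le \rho$, which is one sense in which the penalty "bounds the Lagrange multipliers."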

An implicit trust-region method on Riemannian manifolds

We propose and analyze an “implicit” trust-region method in the general setting of Riemannian manifolds. The method is implicit in that the trust-region is defined as a superlevel set of the ratio of the actual over predicted decrease in the objective function. Since this method potentially requires the evaluation of the objective function at each …
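
In a classical trust-region method, the ratio of actual to predicted decrease only decides acceptance; here the ratio itself delimits the trust region. A Euclidean caricature follows (the paper works on manifolds, moving along retractions rather than straight lines, so this sketch is an assumption about the mechanism, not the paper's algorithm):

```python
import numpy as np

def implicit_tr_step(f, model, x, s, eta=0.1, shrink=0.5, max_tries=30):
    """Accept the longest candidate step whose actual/predicted decrease ratio
    clears eta; the set of steps with ratio >= eta is the implicit trust region."""
    for _ in range(max_tries):
        predicted = model(np.zeros_like(s)) - model(s)
        if predicted > 0:
            actual = f(x) - f(x + s)
            if actual / predicted >= eta:
                return x + s
        s = shrink * s    # shrinking s stands in for shrinking the trust region
    return x
```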

Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem

We consider barrier problems associated with two- and multistage stochastic convex optimization problems. We show that the barrier recourse functions at any stage form a self-concordant family with respect to the barrier parameter. We also show that the complexity value of the first stage problem increases additively with the number of stages and scenarios. We …
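
For grounding, the standard self-concordance inequality underlying such results: a convex, three-times differentiable $f$ is self-concordant when, for all $x$ in its domain and all directions $h$,

```latex
\bigl|\, D^{3} f(x)[h, h, h] \,\bigr| \;\le\; 2 \left( D^{2} f(x)[h, h] \right)^{3/2}.
```

A self-concordant \textit{family} additionally requires controlled dependence on the barrier parameter; the precise parameterized conditions used in the paper are not reproduced here.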

A Coordinate Gradient Descent Method for Linearly Constrained Smooth Optimization and Support Vector Machines Training

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. …
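
For context, the classical two-coordinate (SMO-style) update for this QP moves along a direction that keeps the single equality constraint $y^{\mathsf{T}}\alpha$ fixed; the sketch below shows one such exact line-search step. It illustrates the decomposition methods the abstract compares against, not the paper's block coordinate gradient descent method itself; the dual is assumed in the standard form $\min \tfrac12 \alpha^{\mathsf{T}} Q \alpha - e^{\mathsf{T}}\alpha$ with $0 \le \alpha \le C$ and $y \in \{\pm 1\}^n$.

```python
import numpy as np

def pair_update(Q, a, g, y, i, j, C):
    """One SMO-style step along d = y_i e_i - y_j e_j, which leaves y^T a
    unchanged; g is the current gradient Q a - e and is refreshed in place."""
    # exact minimizer of the quadratic along d
    d_grad = y[i] * g[i] - y[j] * g[j]
    d_curv = Q[i, i] - 2.0 * y[i] * y[j] * Q[i, j] + Q[j, j]
    t = -d_grad / max(d_curv, 1e-12)

    def box(ai, s):
        # feasible range of t so that 0 <= ai + s*t <= C (s is +1 or -1)
        return (-ai, C - ai) if s > 0 else (ai - C, ai)

    lo_i, hi_i = box(a[i], y[i])
    lo_j, hi_j = box(a[j], -y[j])
    t = float(np.clip(t, max(lo_i, lo_j), min(hi_i, hi_j)))

    a[i] += y[i] * t
    a[j] -= y[j] * t
    g += t * (y[i] * Q[:, i] - y[j] * Q[:, j])   # rank-two gradient refresh
    return a, g
```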