Developments of NEWUOA for unconstrained minimization without derivatives

The NEWUOA software is described briefly, with some numerical results that show good efficiency and accuracy in the unconstrained minimization without derivatives of functions of up to 320 variables. Some preliminary work on an extension of NEWUOA that allows simple bounds on the variables is also described. It suggests a variation of a technique in …

Expressing Combinatorial Optimization Problems by Systems of Polynomial Equations and the Nullstellensatz

Systems of polynomial equations over the complex or real numbers can be used to model combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, Hamiltonian, etc.) if and only if a related system of polynomial equations has a solution. In the first part of this paper, we construct new …
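
As a concrete illustration, graph 3-colorability has a standard polynomial encoding over the complex numbers with one variable per vertex; this is one common choice and not necessarily the exact system constructed in the paper:
\[
x_i^3 - 1 = 0 \ \ \text{for every vertex } i,
\qquad
x_i^2 + x_i x_j + x_j^2 = 0 \ \ \text{for every edge } \{i,j\}.
\]
The vertex equations force each $x_i$ to be a cube root of unity (a "color"); since $x_i^3 - x_j^3 = (x_i - x_j)(x_i^2 + x_i x_j + x_j^2) = 0$ for such values, the edge equation holds precisely when $x_i \neq x_j$, so the system is solvable over $\mathbb{C}$ if and only if the graph is 3-colorable.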

Convergence Analysis of an Interior-Point Method for Nonconvex Nonlinear Programming

In this paper, we present global and local convergence results for an interior-point method for nonlinear programming. The algorithm uses an $\ell_1$ penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [6]. …
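
For orientation, one standard $\ell_1$ (elastic) relaxation of constraints $c(x) \ge 0$ with penalty parameter $\nu > 0$ introduces slacks $s$; this generic form is given only as an illustration and may differ in detail from the formulation in the paper:
\[
\min_{x,\,s}\; f(x) + \nu\, e^{T} s
\quad \text{subject to} \quad c(x) + s \ge 0,\ \ s \ge 0.
\]
Every $x$ is feasible for suitable $s$, which provides the regularization, and stationarity with respect to $s$ forces the multipliers of $c(x) + s \ge 0$ to lie between $0$ and $\nu$, which is the sense in which the penalty bounds the Lagrange multipliers.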

A view of algorithms for optimization without derivatives

Let the least value of a function of many variables be required. If its gradient is available, then one can tell whether search directions are downhill, and first order conditions help to identify the solution. It seems in practice, however, that the vast majority of unconstrained calculations do not employ any derivatives. A view of …

An implicit trust-region method on Riemannian manifolds

We propose and analyze an “implicit” trust-region method in the general setting of Riemannian manifolds. The method is implicit in that the trust-region is defined as a superlevel set of the ratio of the actual over predicted decrease in the objective function. Since this method potentially requires the evaluation of the objective function at each …
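
In standard trust-region notation, with a retraction $R_x$ mapping tangent vectors back to the manifold (the symbols here are illustrative rather than taken from the paper), the ratio and the implicitly defined trust region are
\[
\rho_x(\eta) \;=\; \frac{f(x) - f\!\left(R_x(\eta)\right)}{m_x(0) - m_x(\eta)},
\qquad
\Omega_x \;=\; \bigl\{\eta \in T_x\mathcal{M} : \rho_x(\eta) \ge \rho'\bigr\},
\]
where $m_x$ is the local model and $\rho' \in (0,1)$ is a fixed threshold. The trust region is thus a superlevel set of $\rho_x$ rather than a ball of prescribed radius, which is why checking whether a candidate step lies inside it requires an evaluation of the objective.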

A polynomial predictor-corrector trust-region algorithm for linear programming

In this paper we present a scaling-invariant interior-point predictor-corrector type algorithm for linear programming (LP) whose iteration-complexity is polynomially bounded by the dimension and the logarithm of a certain condition number of the LP constraint matrix. At the predictor stage, the algorithm either takes the step along the standard affine scaling direction or a new …
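
For reference, the standard affine scaling (predictor) direction for an LP in standard form $\min\{c^{T}x : Ax = b,\ x \ge 0\}$ with dual slack $s$ solves the primal-dual Newton system with centering parameter zero; this is the textbook form, not necessarily the scaling-invariant variant analyzed in the paper:
\[
A\,\Delta x = 0,
\qquad
A^{T}\Delta y + \Delta s = 0,
\qquad
S\,\Delta x + X\,\Delta s = -XSe,
\]
with $X = \operatorname{diag}(x)$, $S = \operatorname{diag}(s)$, and $e$ the all-ones vector.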

Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem

We consider barrier problems associated with two and multistage stochastic convex optimization problems. We show that the barrier recourse functions at any stage form a self-concordant family with respect to the barrier parameter. We also show that the complexity value of the first stage problem increases additively with the number of stages and scenarios. We …
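
Recall the standard definition underlying such analyses (stated here for orientation; the self-concordant-family conditions used in the paper also track the barrier parameter): a convex function $\varphi$ with open domain is self-concordant if, along every direction $h$,
\[
\bigl|\varphi'''(x)[h,h,h]\bigr| \;\le\; 2\,\bigl(\varphi''(x)[h,h]\bigr)^{3/2}.
\]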

A Coordinate Gradient Descent Method for Linearly Constrained Smooth Optimization and Support Vector Machines Training

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. …
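
Below is a minimal sketch of a two-coordinate (SMO-style) decomposition update for the SVM dual QP, in the spirit of the decomposition methods mentioned above rather than the authors' coordinate gradient descent method itself; the function name, the random pair selection, and the iteration limit are illustrative assumptions.

import numpy as np

def svm_dual_decomposition(Q, y, C, iters=1000, tol=1e-8, seed=0):
    """Two-coordinate (SMO-style) updates for the SVM dual
        min 0.5*a'Qa - sum(a)  s.t.  0 <= a <= C,  y'a = 0.
    A hedged sketch: random pair selection, exact one-dimensional line search."""
    rng = np.random.default_rng(seed)
    n = len(y)
    a = np.zeros(n)
    grad = -np.ones(n)                  # gradient of the dual objective: Q a - e
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        # Move along u = y_i e_i - y_j e_j, which keeps y'a fixed.
        g = y[i] * grad[i] - y[j] * grad[j]                   # directional derivative
        h = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]   # curvature along u
        if h <= 0:
            continue
        d = -g / h                                            # unconstrained minimizer
        # Clip d so that both coordinates stay in [0, C].
        lo_i, hi_i = ((-a[i], C - a[i]) if y[i] > 0 else (a[i] - C, a[i]))
        lo_j, hi_j = ((a[j] - C, a[j]) if y[j] > 0 else (-a[j], C - a[j]))
        d = np.clip(d, max(lo_i, lo_j), min(hi_i, hi_j))
        if abs(d) < tol:
            continue
        a[i] += y[i] * d
        a[j] -= y[j] * d
        grad += d * (y[i] * Q[:, i] - y[j] * Q[:, j])         # rank-two gradient update
    return a

With a kernel matrix K and labels y in {-1, +1}, one would take Q = np.outer(y, y) * K and recover the bias afterwards from the KKT conditions.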

The Speed of Shor’s R-Algorithm

Shor’s r-algorithm is an iterative method for unconstrained optimization, designed for minimizing nonsmooth functions, for which its reported success has been considerable. Although some limited convergence results are known, nothing seems to be known about the algorithm’s rate of convergence, even in the smooth case. We study how the method behaves on convex quadratics, proving …

A Coordinate Gradient Descent Method for Nonsmooth Separable Minimization

We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with $\ell_1$-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian …
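
As a small illustration of the single-coordinate case, the sketch below applies coordinate gradient steps with a soft-threshold (proximal) update to the $\ell_1$-regularized least-squares special case; the least-squares smooth part, the function names, and the cyclic sweep order are assumptions made for illustration, not details taken from the paper.

import numpy as np

def soft_threshold(z, t):
    """Proximal map of t*|.|: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cgd_lasso(A, b, lam, sweeps=100):
    """Coordinate gradient descent for  min 0.5*||Ax - b||^2 + lam*||x||_1,
    the l1-regularized special case of 'smooth + separable convex'."""
    n = A.shape[1]
    x = np.zeros(n)
    r = A @ x - b                        # residual of the smooth part, kept up to date
    col_norm2 = (A * A).sum(axis=0)      # per-coordinate curvature (Lipschitz constants)
    for _ in range(sweeps):
        for i in range(n):
            if col_norm2[i] == 0.0:
                continue
            g = A[:, i] @ r              # partial derivative of the smooth part
            # Gradient step on coordinate i, followed by the prox of lam*|.|.
            x_new = soft_threshold(x[i] - g / col_norm2[i], lam / col_norm2[i])
            if x_new != x[i]:
                r += (x_new - x[i]) * A[:, i]
                x[i] = x_new
    return x

Because the smooth part is quadratic here, each prox-gradient coordinate step is in fact the exact minimizer over that coordinate, which is why this simple loop already behaves well on lasso-type problems.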