LANCELOT_simple, a simple interface to LANCELOT B

We describe LANCELOT_simple, an interface to the LANCELOT B nonlinear optimization package within the GALAHAD library (Gould, Orban and Toint, 2003) which ignores problem structure. The result is an easy-to-use Fortran 90 subroutine with a small number of intuitively interpretable arguments. However, since structure is ignored, the means of presenting problems to the solver are limited and …

Multi-Standard Quadratic Optimization Problems

A Standard Quadratic Optimization Problem (StQP) consists of maximizing a (possibly indefinite) quadratic form over the standard simplex. Likewise, in a multi-StQP we have to maximize a (possibly indefinite) quadratic form over the Cartesian product of several standard simplices (of possibly different dimensions). Two convergent monotone interior-point methods are established. Further, we prove an …
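For reference, the generic StQP formulation (standard notation, not necessarily the authors') reads
$$\max_{x \in \Delta_n} \; x^T Q x, \qquad \Delta_n = \{x \in \mathbb{R}^n : e^T x = 1,\ x \ge 0\},$$
where $Q$ is a symmetric, possibly indefinite $n \times n$ matrix and $e$ is the all-ones vector; a multi-StQP replaces $\Delta_n$ by a Cartesian product $\Delta_{n_1} \times \cdots \times \Delta_{n_k}$.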

A First-Order Interior-Point Method for Linearly Constrained Smooth Optimization

We propose a first-order interior-point method for linearly constrained smooth optimization that unifies and extends the first-order affine-scaling method and the replicator dynamics method for standard quadratic programming. Global convergence and, in the case of quadratic programs, (sub)linear convergence rate and iterate convergence results are derived. Numerical experience on simplex-constrained problems with 1000 variables is reported. …
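As background (textbook form, not necessarily the paper's notation), the replicator dynamics iteration for the StQP $\max\{x^T Q x : e^T x = 1,\ x \ge 0\}$ is
$$x_i^{+} = x_i \, \frac{(Qx)_i}{x^T Q x}, \qquad i = 1, \dots, n;$$
for symmetric $Q$ with nonnegative entries this map keeps the iterates in the simplex and does not decrease the objective, which is what makes it a natural baseline for simplex-constrained quadratic programs.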

Iterative methods for finding a trust-region step

We consider the problem of finding an approximate minimizer of a general quadratic function subject to a two-norm constraint. The Steihaug-Toint method minimizes the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. The benefit of this approach is that an approximate solution …
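For concreteness, the subproblem in question is the standard trust-region subproblem (generic notation)
$$\min_{s \in \mathbb{R}^n} \; g^T s + \tfrac{1}{2}\, s^T H s \quad \text{subject to} \quad \|s\|_2 \le \Delta,$$
where $g$ is the current gradient, $H$ a symmetric (possibly indefinite) Hessian approximation, and $\Delta > 0$ the trust-region radius.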

Improved Approximation Bound for Quadratic Optimization Problems with Orthogonality Constraints

In this paper we consider approximation algorithms for a class of quadratic optimization problems that contain orthogonality constraints, i.e., constraints of the form $X^TX=I$, where $X \in {\mathbb R}^{m \times n}$ is the optimization variable. This class of problems, which we denote by (QP-OC), is quite general and captures several well-studied problems in the literature …
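A schematic member of this class (our notation, only an illustration) is
$$\max_{X \in \mathbb{R}^{m \times n}} \; q(X) \quad \text{subject to} \quad X^T X = I_n,$$
where $q$ is a quadratic function of the entries of $X$; the orthogonality constraint confines $X$ to the Stiefel manifold and is what makes the feasible set nonconvex.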

Multi-Secant Equations, Approximate Invariant Subspaces and Multigrid Optimization

New approximate secant equations are shown to result from the knowledge of (problem dependent) invariant subspace information, which in turn suggests improvements in quasi-Newton methods for unconstrained minimization. It is also shown that this type of information may often be extracted from the multigrid structure of discretized infinite dimensional problems. A new limited-memory BFGS using …
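As background (standard quasi-Newton notation, not specific to this paper), a secant equation asks the updated Hessian approximation to satisfy
$$B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),$$
while a multi-secant condition of the form $B_{k+1} S_k = Y_k$ collects several such pairs as the columns of $S_k$ and $Y_k$.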

Support Vector Regression for imprecise data

In this work, a regression problem is studied where the elements of the database are sets with certain geometrical properties. In particular, our model can be applied to handle data affected by some kind of noise or uncertainty and interval-valued data, and databases with missing values as well. The proposed formulation is based on the …
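For orientation, classical (point-valued, $\varepsilon$-insensitive) support vector regression solves the convex problem
$$\min_{w, b, \xi, \xi^*} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} (\xi_i + \xi_i^*) \quad \text{s.t.} \quad y_i - w^T x_i - b \le \varepsilon + \xi_i,\ \ w^T x_i + b - y_i \le \varepsilon + \xi_i^*,\ \ \xi_i, \xi_i^* \ge 0;$$
this is shown only as the textbook starting point that a set-valued formulation would generalize, and the notation is ours rather than the paper's.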

Two theoretical results for sequential semidefinite programming

We examine the local convergence of a sequential semidefinite programming approach for solving nonlinear programs with nonlinear semidefiniteness constraints. Known convergence results are extended to slightly weaker second-order sufficient conditions, and the resulting subproblems are shown to have local convexity properties that imply a weak form of self-concordance of the barrier subproblems.
Citation: Preprint, Mathematisches …
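For context (generic notation, not the paper's), a nonlinear semidefinite program can be written as
$$\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{subject to} \quad G(x) \succeq 0,$$
where $G$ maps $x$ to a symmetric matrix; a sequential SDP method solves a sequence of subproblems in which $G$ is linearized, analogously to the way SQP handles nonlinear programs.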

Adaptive Constraint Reduction for Training Support Vector Machines

A support vector machine (SVM) determines whether a given observed pattern lies in a particular class. The decision is based on prior training of the SVM on a set of patterns with known classification, and training is achieved by solving a convex quadratic programming problem. Since there are typically a large number of training patterns, …
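For orientation (textbook notation, not necessarily the paper's), training a linear soft-margin SVM on patterns $(x_i, y_i)$ with $y_i \in \{-1, +1\}$ means solving the convex QP
$$\min_{w, b, \xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \xi_i \quad \text{s.t.} \quad y_i (w^T x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, m,$$
whose constraint count grows with the number $m$ of training patterns; this is what makes constraint reduction attractive.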

A Retrospective Trust-Region Method for Unconstrained Optimization

We introduce a new trust-region method for unconstrained optimization in which the radius update is computed using the model information at the current iterate rather than at the preceding one. The update is then performed according to how well the current model retrospectively predicts the value of the objective function at the last iterate. Global convergence to …
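As a sketch of the idea (standard trust-region notation, not necessarily the authors' exact formula), a conventional radius update is driven by the ratio
$$\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)},$$
whereas a retrospective variant would instead measure how well the new model $m_{k+1}$, built at the accepted iterate, explains the step just taken, e.g. $\tilde{\rho}_{k+1} = [f(x_k) - f(x_{k+1})] / [m_{k+1}(x_k) - m_{k+1}(x_{k+1})]$, and set $\Delta_{k+1}$ from this quantity.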