Approximate Level Method

In this paper we propose and analyze a variant of the level method [4], which is an algorithm for minimizing nonsmooth convex functions. The main work per iteration is spent on 1) minimizing a piecewise-linear model of the objective function and on 2) projecting onto the intersection of the feasible region and a polyhedron arising …
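To make those two pieces of work concrete, here is a minimal per-iteration sketch of a level method over a box, assuming a bundle of cutting planes and using off-the-shelf scipy solvers as stand-ins for the model LP and the projection QP; the function names and the parameter lam are illustrative, not taken from the paper.

    import numpy as np
    from scipy.optimize import linprog, minimize

    def level_step(x, bundle, bounds, lam=0.5):
        """One level-method iteration for min f(x) over a box.
        bundle: list of (x_j, f_j, g_j) cutting planes; lam in (0, 1)."""
        n = len(x)
        # 1) Lower bound: minimize the piecewise-linear model over the box,
        #    an LP in (x, t): min t  s.t.  f_j + g_j.(x - x_j) <= t.
        c = np.r_[np.zeros(n), 1.0]
        A = np.array([np.r_[g, -1.0] for _, _, g in bundle])
        b = np.array([g @ xj - fj for xj, fj, g in bundle])
        lp = linprog(c, A_ub=A, b_ub=b, bounds=bounds + [(None, None)])
        f_low = lp.fun
        f_up = min(fj for _, fj, _ in bundle)      # best value seen so far
        level = f_low + lam * (f_up - f_low)
        # 2) Project x onto {model <= level} intersected with the box (a QP).
        cons = [{"type": "ineq",
                 "fun": lambda z, xj=xj, fj=fj, g=g: level - fj - g @ (z - xj)}
                for xj, fj, g in bundle]
        qp = minimize(lambda z: 0.5 * np.sum((z - x) ** 2), x,
                      jac=lambda z: z - x, bounds=bounds, constraints=cons)
        return qp.x, f_low, f_up

A side benefit visible in the sketch: the gap f_up - f_low provides a built-in stopping test, which is one of the attractions of level methods.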

A Matrix-free Algorithm for Equality Constrained Optimization Problems with Rank-deficient Jacobians

We present a line search algorithm for large-scale constrained optimization that is robust and efficient even for problems with (nearly) rank-deficient Jacobian matrices. The method is matrix-free (i.e., it does not require explicit storage or factorizations of derivative matrices), allows for inexact step computations, and is applicable to nonconvex problems. The main components of the …
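As a rough illustration of the matrix-free ingredient only (not the authors' algorithm), a step toward feasibility can be computed from Jacobian-vector products alone, with a damped, truncated least-squares solve that remains stable when the Jacobian is (nearly) rank-deficient; everything named below is our own sketch.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    def normal_step(c_fun, x, eps=1e-7, damp=1e-4):
        """Inexact step d ~ argmin ||c(x) + J d||^2 + damp^2 ||d||^2, with J
        available only through products. Damping plus a truncated iterative
        solve keeps the step bounded even when J is rank-deficient."""
        c0 = c_fun(x)
        m, n = len(c0), len(x)
        Jv = lambda v: (c_fun(x + eps * v) - c0) / eps   # forward product J v
        # Adjoint product (J^T w)_i = (J e_i) . w via finite differences; a
        # real implementation would use an exact adjoint or reverse-mode AD.
        JTw = lambda w: np.array([(c_fun(x + eps * e) - c0) @ w / eps
                                  for e in np.eye(n)])
        J = LinearOperator((m, n), matvec=Jv, rmatvec=JTw)
        return lsqr(J, -c0, damp=damp, atol=1e-6, btol=1e-6)[0]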

On mutual impact of numerical linear algebra and large-scale optimization with focus on interior point methods

The solution of KKT systems is ubiquitous in optimization methods and often dominates the computation time, especially when large-scale problems are considered. Thus, the efficient implementation of such methods depends heavily on the availability of effective linear algebra algorithms and software that are able, in turn, to take into account the specific needs of optimization. …
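To fix ideas, in the linear programming case the systems in question are the augmented (KKT) systems of the interior point iteration, shown here in standard notation as an illustration (not notation taken from the paper):

    \[
      \begin{bmatrix} -\Theta^{-1} & A^T \\ A & 0 \end{bmatrix}
      \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
      =
      \begin{bmatrix} r_d \\ r_p \end{bmatrix},
      \qquad \Theta = X Z^{-1},
    \]

with $X$ and $Z$ diagonal. Eliminating $\Delta x$ yields the normal equations $A \Theta A^T \Delta y = r_p + A \Theta r_d$, and the choice between the augmented and normal-equations formulations is exactly the kind of linear-algebra/optimization trade-off at issue here.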

Primal interior point method for minimization of generalized minimax functions

In this report, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method. Next we describe the basic algorithm and give more details concerning …
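For orientation, in the simplest special case $F(x) = \max_i f_i(x)$ the primal interior-point idea can be written as the standard reformulation below (the paper treats more general minimax compositions):

    \[
      \min_{x,z}\ z \quad \text{s.t.}\ f_i(x) \le z,\ i = 1,\dots,m,
      \qquad\longrightarrow\qquad
      \min_{x,z}\ z - \mu \sum_{i=1}^{m} \log\bigl(z - f_i(x)\bigr),
    \]

with the barrier parameter $\mu$ driven to zero and each barrier subproblem solved by the Newton iteration derived from the KKT conditions.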

An Inexact Newton Method for Nonconvex Equality Constrained Optimization

We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim., 19(1):351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the …
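A bare-bones sketch of what "inexact step computation" means here, assuming the Hessian and Jacobian actions are available only as products; the termination and regularization tests that make the real method globally convergent are omitted, and the names are ours.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, minres

    def inexact_kkt_step(Hv, Av, ATv, g, c, y, tol=1e-2):
        """One inexact Newton step on the KKT conditions of
        min f(x) s.t. c(x) = 0: solve [H A^T; A 0][dx; dy] = -[g + A^T y; c]
        only approximately, using matrix-vector products alone."""
        n, m = len(g), len(c)
        def kkt(v):
            dx, dy = v[:n], v[n:]
            return np.r_[Hv(dx) + ATv(dy), Av(dx)]
        K = LinearOperator((n + m, n + m), matvec=kkt)   # symmetric indefinite
        rhs = -np.r_[g + ATv(y), c]
        # Loose tolerance = inexact step (keyword is tol= in older scipy).
        sol = minres(K, rhs, rtol=tol)[0]
        return sol[:n], sol[n:]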

Exploiting separability in large-scale linear support vector machine training

Linear support vector machine training can be represented as a large quadratic program. We present an efficient and numerically stable interior point algorithm for this problem that requires only O(n) operations per iteration. By exploiting the separability of the Hessian, we provide a unified approach, from an optimization perspective, to 1-norm classification, 2-norm …
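The O(n) cost per iteration rests on the fact that, once separability is exploited, each interior point linear system has diagonal-plus-low-rank form. A sketch of the key solve via the Sherman-Morrison-Woodbury formula, with illustrative names, assuming m features and m << n:

    import numpy as np

    def solve_diag_plus_lowrank(d, V, r):
        """Solve (D + V V^T) u = r with D = diag(d) > 0 and V of shape (n, m).
        Woodbury: u = D^-1 r - D^-1 V (I_m + V^T D^-1 V)^-1 V^T D^-1 r."""
        Dinv_r = r / d
        Dinv_V = V / d[:, None]
        S = np.eye(V.shape[1]) + V.T @ Dinv_V      # small m x m system
        return Dinv_r - Dinv_V @ np.linalg.solve(S, V.T @ Dinv_r)

Each such solve costs O(n m^2), so for a fixed number of features the per-iteration work grows only linearly in the number of training points.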

A 2-BFGS updating in a trust region framework

We present a new matrix-free method for the trust region subproblem, assuming that the approximate Hessian is updated by the limited-memory BFGS formula with m = 2. The resulting updating scheme, called 2-BFGS, gives us the ability to determine, via simple formulas, the eigenvalues of the resulting approximation. Thus, at each iteration, we can …
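The kind of eigenvalue computation alluded to can be reproduced generically from the compact representation of the limited-memory BFGS matrix (Byrd, Nocedal and Schnabel); the sketch below is our own reconstruction, not the paper's 2-BFGS formulas.

    import numpy as np

    def lbfgs_eigenvalues(S, Y, sigma):
        """Eigenvalues of B = sigma*I - Psi M^{-1} Psi^T with Psi = [sigma*S, Y],
        M = [[sigma*S^T S, L], [L^T, -D]], L = strict lower triangle of S^T Y,
        D = diag(S^T Y) (compact L-BFGS form; assumes n >= 2m and s_j^T y_j > 0).
        B has eigenvalue sigma with multiplicity n - 2m, plus the 2m values
        sigma + eig(-R M^{-1} R^T) from a thin QR factorization Psi = Q R."""
        SY = S.T @ Y
        L = np.tril(SY, -1)
        D = np.diag(np.diag(SY))
        M = np.block([[sigma * (S.T @ S), L], [L.T, -D]])
        Psi = np.hstack([sigma * S, Y])
        _, R = np.linalg.qr(Psi)                   # thin QR, R is 2m x 2m
        small = -R @ np.linalg.solve(M, R.T)
        shifts = np.linalg.eigvalsh((small + small.T) / 2)  # symmetrize
        return sigma + shifts                      # plus sigma, n - 2m times

With m = 2 the small eigenproblem is only 4 x 4, so the spectral information needed for the trust region step is essentially free.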

ASTRAL: An Active Set $\ell_\infty$-Trust-Region Algorithm for Box Constrained Optimization

An algorithm for solving large-scale nonlinear optimization problems with simple bounds is described. The algorithm is an $\ell_\infty$-norm trust-region method that uses both active-set identification techniques and limited-memory BFGS updating for the Hessian approximation. The trust-region subproblems are solved using primal-dual interior point techniques that exploit the structure of the limited …
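The reason the $\ell_\infty$ norm is the natural choice here is that the trust region then combines with the simple bounds into another box, so the subproblem stays a bound-constrained QP; a two-line illustration (names ours):

    import numpy as np

    def box_tr_bounds(x, lower, upper, delta):
        """Feasible step set of an l_inf trust region intersected with bounds:
        { d : max(lower - x, -delta) <= d <= min(upper - x, delta) }."""
        return np.maximum(lower - x, -delta), np.minimum(upper - x, delta)

Minimizing the quadratic model over this box is exactly the structured subproblem that the primal-dual interior point solver is designed to exploit.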

New subroutines for large-scale optimization

We present fourteen basic FORTRAN subroutines for large-scale unconstrained and box constrained optimization and large-scale systems of nonlinear equations. Subroutines {\tt PLIS} and {\tt PLIP}, intended for dense general optimization problems, are based on limited-memory variable metric methods. Subroutine {\tt PNET}, also intended for dense general optimization problems, is based on an inexact truncated Newton …

New class of limited-memory variationally-derived variable metric methods

A new family of limited-memory variationally-derived variable metric (quasi-Newton) methods for unconstrained minimization is given. The methods have the quadratic termination property and use updates that are invariant under linear transformations. Some encouraging numerical experience is reported. Citation: Technical Report V-973, ICS AS CR, Prague, 2006.