A filter method with unified step computation for nonlinear optimization

We present a filter linesearch method for solving general nonlinear and nonconvex optimization problems. The method is of the filter variety, but uses a robust (always feasible) subproblem based on an exact penalty function to compute a search direction. This contrasts with traditional filter methods, which use a (separate) restoration phase designed to reduce infeasibility until … Read more
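
As background on the terminology, a standard example of an exact penalty function for $\min_x f(x)$ subject to $c(x) = 0$ is the $\ell_1$ penalty $\phi(x;\rho) = f(x) + \rho\,\|c(x)\|_1$: for all sufficiently large finite $\rho$, local solutions of the constrained problem are local minimizers of $\phi$. This is shown only for orientation; the specific penalty subproblem used in the paper may differ.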

Acceleration and Stabilization Techniques for Column Generation Applied to Capacitated Resource Management Problems

This research presents an efficient method for solving a broad class of large-scale capacitated resource management problems by introducing a new formulation and decomposition. A heuristic called Likelihood of Assignment is used not only to find high-quality initial integer feasible solutions, but also to guide the Branch-and-Price (B&P) Algorithm towards stabilization. Although Column … Read more

An Adaptive Augmented Lagrangian Method for Large-Scale Constrained Optimization

We propose an augmented Lagrangian algorithm for solving large-scale constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented … Read more
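
As orientation for readers, a minimal Python sketch of a basic augmented Lagrangian loop for $\min_x f(x)$ subject to $c(x) = 0$ is given below. The penalty increase used here is a simple textbook heuristic, not the adaptive update proposed in the paper, and all function and parameter names are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, lam0, rho=10.0, tol=1e-6, max_outer=50):
    # Generic augmented Lagrangian loop for: min f(x)  s.t.  c(x) = 0.
    # The penalty increase below is a simple textbook heuristic, NOT the
    # adaptive update proposed in the paper.
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    prev_viol = np.inf
    for _ in range(max_outer):
        def L_A(z):                      # augmented Lagrangian for fixed (lam, rho)
            cz = c(z)
            return f(z) - lam @ cz + 0.5 * rho * (cz @ cz)
        x = minimize(L_A, x, method="BFGS").x   # approximate inner minimization
        cx = c(x)
        viol = np.linalg.norm(cx)
        if viol <= tol:
            break
        lam = lam - rho * cx             # first-order multiplier update
        if viol > 0.25 * prev_viol:      # violation stalled: tighten the penalty
            rho *= 10.0
        prev_viol = viol
    return x, lam

The multiplier step lam - rho*c(x) is the usual first-order update obtained by matching the stationarity condition of the augmented Lagrangian with that of the original problem.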

A Globally Convergent Primal-Dual Active-Set Framework for Large-Scale Convex Quadratic Optimization

We present a primal-dual active-set framework for solving large-scale convex quadratic optimization problems (QPs). In contrast to classical active-set methods, our framework allows for multiple simultaneous changes in the active-set estimate, which often leads to rapid identification of the optimal active set regardless of the initial estimate. The iterates of our framework are the active-set … Read more
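
To make concrete how a primal-dual iteration can change many active-set entries at once, here is a minimal Python sketch of the classical primal-dual active-set scheme for the bound-constrained QP $\min_x \tfrac{1}{2}x^{T}Qx + c^{T}x$ subject to $x \ge 0$. This is the textbook Hintermüller-Ito-Kunisch iteration, not the framework of the paper, and all names are illustrative.

import numpy as np

def pdas_bound_qp(Q, c, max_iter=50):
    # Classical primal-dual active-set iteration for
    #     min 0.5 x'Qx + c'x   s.t.  x >= 0    (Q symmetric positive definite),
    # illustrating that many active-set entries may change in one iteration.
    # This is the textbook scheme, not the framework proposed in the paper.
    n = len(c)
    x, mu = np.zeros(n), np.zeros(n)
    active = (mu - x) > 0                # active-set estimate from (x, mu)
    for _ in range(max_iter):
        inactive = ~active
        x_new, mu_new = np.zeros(n), np.zeros(n)
        if inactive.any():
            # Free variables solve the reduced stationarity system.
            Qii = Q[np.ix_(inactive, inactive)]
            x_new[inactive] = np.linalg.solve(Qii, -c[inactive])
        # Multipliers on the active set from Qx + c - mu = 0.
        mu_new[active] = Q[np.ix_(active, inactive)] @ x_new[inactive] + c[active]
        new_active = (mu_new - x_new) > 0
        x, mu = x_new, mu_new
        if np.array_equal(new_active, active):
            break                        # active set settled: KKT point reached
        active = new_active
    return x, mu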

A randomized Mirror-Prox method for solving structured large-scale matrix saddle-point problems

In this paper, we derive a randomized version of the Mirror-Prox method for solving some structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov’s Smoothing Techniques or standard Mirror-Prox methods, require the exact computation of a matrix exponential at every iteration, limiting the size of the problems … Read more

Dynamic Graph Generation for Large Scale Operational Train Timetabling

The aim of operational train timetabling is to find a conflict-free timetable for a set of passenger and freight trains with predefined stopping time windows along given routes in an infrastructure network, so that station capacities and train-dependent running and headway times are observed. Typical models for this problem are based on time-discretized … Read more

A Note on the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations

This paper describes an implementation of an interior-point algorithm for large-scale nonlinear optimization. It is based on the algorithm proposed by Curtis et al. (SIAM J Sci Comput 32:3447–3475, 2010), a method that possesses global convergence guarantees to first-order stationary points with the novel feature that inexact search direction calculations are allowed in order to … Read more

A Perry Descent Conjugate Gradient Method with Restricted Spectrum

A new nonlinear conjugate gradient method, based on Perry’s idea, is presented. It is shown that its sufficient descent property is independent of any line search and that the eigenvalues of $P_{k+1}^{T}P_{k+1}$ are bounded above, where $P_{k+1}$ is the iteration matrix of the new method. Thus, global convergence is proven by the spectral analysis … Read more
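
The connection between the spectrum and descent can be sketched as follows (a generic observation, not the paper’s argument): if the search direction has the form $d_{k+1} = -P_{k+1} g_{k+1}$ and the symmetric part of $P_{k+1}$ has eigenvalues bounded below by some $c > 0$, then $g_{k+1}^{T} d_{k+1} = -g_{k+1}^{T} P_{k+1} g_{k+1} \le -c\,\|g_{k+1}\|^{2}$, i.e. sufficient descent holds independently of the line search; an upper bound on the eigenvalues of $P_{k+1}^{T} P_{k+1}$ keeps $\|d_{k+1}\|$ proportional to $\|g_{k+1}\|$, which is the ingredient typically used in global convergence analyses.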

Recursive formulation of limited memory variable metric methods

In this report we propose a new recursive matrix formulation of limited memory variable metric methods. This approach can be used for an arbitrary update from the Broyden class (and some other updates) and also for the approximation of both the Hessian matrix and its inverse. The new recursive formulation requires approximately $4 m n$ … Read more
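
For comparison, the classical two-loop recursion for the limited memory BFGS member of the Broyden class, which also costs on the order of $4 m n$ multiplications per product with the inverse Hessian approximation, can be sketched in Python as follows; this is the standard scheme, not the new recursive formulation of the report, and the names are illustrative.

import numpy as np

def lbfgs_two_loop(grad, s_list, y_list, gamma=None):
    # Standard L-BFGS two-loop recursion: returns H_k @ grad, where H_k is
    # the limited memory BFGS approximation of the INVERSE Hessian built from
    # the m most recent pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i.
    # Shown only for comparison; this is NOT the report's new recursion.
    q = np.array(grad, dtype=float)
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * (s @ q)            # first loop: newest pair to oldest
        alphas.append(alpha)
        q = q - alpha * y
    if gamma is None:                    # standard initial scaling H_0 = gamma*I
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    r = gamma * q
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ r)             # second loop: oldest pair to newest
        r = r + (alpha - beta) * s
    return r                             # approximates (inverse Hessian) @ grad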

Band preconditioners for the matrix-free truncated Newton method

This report is devoted to preconditioning techniques for the matrix-free truncated Newton method. After a review of basic known approaches, we propose new results concerning tridiagonal and pentadiagonal preconditioners based on the standard BFGS updates and on numerical differentiation. Furthermore, we present results of extensive numerical experiments that allow a careful comparison of suitable preconditioning … Read more
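
To make the role of such a preconditioner concrete, here is a minimal Python sketch, not the report’s construction, of applying a fixed tridiagonal preconditioner inside a conjugate gradient solve of the Newton system $H d = -g$ with Hessian-vector products supplied matrix-free; the banded solve uses scipy.linalg.solve_banded, and all other names are assumptions.

import numpy as np
from scipy.linalg import solve_banded

def pcg_tridiag(hess_vec, g, diag, off, tol=1e-8, max_iter=200):
    # Preconditioned CG for the Newton system H d = -g, with H available only
    # through matrix-vector products hess_vec(v).  The preconditioner is a fixed
    # tridiagonal matrix (main diagonal `diag`, sub/super-diagonal `off`),
    # applied through a banded solve.  Generic illustration only; not the
    # preconditioners constructed in the report.
    n = len(g)
    ab = np.zeros((3, n))                # banded storage expected by solve_banded
    ab[0, 1:] = off                      # super-diagonal
    ab[1, :] = diag                      # main diagonal
    ab[2, :-1] = off                     # sub-diagonal
    apply_Minv = lambda v: solve_banded((1, 1), ab, v)

    d = np.zeros(n)
    r = -np.asarray(g, float)            # residual of H d = -g at d = 0
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Hp = hess_vec(p)
        alpha = rz / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) <= tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return d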