On the Convergence of Successive Linear-Quadratic Programming Algorithms

The global convergence properties of a class of penalty methods for nonlinear programming are analyzed. These methods include successive linear programming approaches, and more specifically, the successive linear-quadratic programming approach presented by Byrd, Gould, Nocedal and Waltz (Math. Programming 100(1):27–48, 2004). Every iteration requires the solution of two trust-region subproblems involving piecewise linear and quadratic …

An Algorithm for Perturbed Second-order Cone Programs

The second-order cone programming problem is reformulated into several new systems of nonlinear equations. Assuming the perturbation of the data lies in a certain neighborhood of zero, and starting from a solution to the original problem, the semismooth Newton iterates converge Q-quadratically to a solution of the perturbed problem. The algorithm is globalized. Numerical examples …
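The warm-starting idea behind this result can be illustrated with a much simpler setting than the paper's: plain Newton's method on a smooth scalar equation, restarted at the old solution after the data is perturbed. This is only a sketch of the warm-start principle, not the semismooth method for second-order cone programs.

```python
# Illustrative sketch (not the paper's semismooth algorithm): Newton's method
# warm-started at the solution of an unperturbed scalar equation.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration; assumes f is smooth near the root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Unperturbed problem: x^2 - 2 = 0, solved from a cold start.
x_old = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)

# Perturbed problem: x^2 - 2.01 = 0; warm-start from the old solution,
# which already lies in a small neighborhood of the new root.
x_new = newton(lambda x: x * x - 2.01, lambda x: 2.0 * x, x_old)
```

Because the starting point is close to the perturbed root, only a couple of (locally quadratically convergent) iterations are needed.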

Convergence Analysis of the DIRECT Algorithm

The DIRECT algorithm is a deterministic sampling method for bound-constrained Lipschitz continuous optimization. We prove a subsequential convergence result for the DIRECT algorithm that quantifies some of the convergence observations in the literature. Our results apply to several variations on the original method, including one that handles general constraints. We use techniques from …
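The flavor of DIRECT's sampling can be conveyed with a deliberately crude one-dimensional sketch: keep a partition of the domain into intervals sampled at their centers, and repeatedly trisect both the interval with the best center value and the widest interval. This is a simplification of DIRECT's actual "potentially optimal rectangle" selection rule, not the algorithm analyzed in the paper.

```python
# Crude 1-D sketch in the spirit of DIRECT: trisect intervals, sampling f at
# centers. Splitting the best interval drives local refinement; splitting the
# widest interval preserves global exploration.
def direct_1d_sketch(f, lo, hi, n_rounds=30):
    intervals = [(lo, hi, f((lo + hi) / 2.0))]  # (left, right, f(center))
    for _ in range(n_rounds):
        best = min(range(len(intervals)), key=lambda i: intervals[i][2])
        widest = max(range(len(intervals)),
                     key=lambda i: intervals[i][1] - intervals[i][0])
        for idx in sorted({best, widest}, reverse=True):
            a, b, _ = intervals.pop(idx)
            third = (b - a) / 3.0
            for k in range(3):  # trisect, evaluating each new center
                l, r = a + k * third, a + (k + 1) * third
                intervals.append((l, r, f((l + r) / 2.0)))
    a, b, fc = min(intervals, key=lambda t: t[2])
    return (a + b) / 2.0, fc

# Minimize (x - 0.3)^2 on [0, 1]; no derivative or Lipschitz constant needed.
x_best, f_best = direct_1d_sketch(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
```

The subsequential convergence result in the abstract concerns the real multidimensional method; the sketch only shows why center sampling plus systematic subdivision eventually clusters samples near a minimizer.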

Dynamic Bundle Methods

Lagrangian relaxation is a popular technique to solve difficult optimization problems. However, the applicability of this technique depends on having a relatively low number of hard constraints to dualize. When there are many hard constraints, it may be preferable to relax them dynamically, according to some rule depending on which multipliers are active. From the …
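For readers unfamiliar with the basic technique being extended here, the following is a minimal sketch of static Lagrangian relaxation with a projected subgradient update on a single multiplier (illustrative only; the paper's dynamic bundle method manages the dualized constraints adaptively). The toy problem, chosen for this sketch, is min x1 + 2*x2 over x in {0,1}^2 subject to the dualized constraint x1 + x2 >= 1.

```python
# Hedged sketch of Lagrangian relaxation: dualize x1 + x2 >= 1 with
# multiplier lam >= 0 and maximize the dual by projected subgradient ascent.
def lagrangian_dual(c, n_iter=100):
    lam = 0.0
    best = float("-inf")
    for k in range(1, n_iter + 1):
        # Inner problem: min over x in {0,1}^n of c.x + lam * (1 - sum(x)),
        # separable, so each coordinate is set independently.
        x = [1 if ci - lam < 0 else 0 for ci in c]
        value = sum(ci * xi for ci, xi in zip(c, x)) + lam * (1 - sum(x))
        best = max(best, value)
        g = 1 - sum(x)                 # subgradient of the dual at lam
        lam = max(0.0, lam + g / k)    # diminishing-step projected ascent
    return best

dual_bound = lagrangian_dual([1.0, 2.0])
```

For this toy instance the dual bound equals the primal optimum (1.0, picking x1 = 1). With many hard constraints there is one multiplier per constraint, which is exactly the scaling problem that motivates relaxing constraints dynamically.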

Variational Analysis of Functions of the Roots of Polynomials

The Gauss-Lucas Theorem on the roots of polynomials nicely simplifies calculating the subderivative and regular subdifferential of the abscissa mapping on polynomials (the maximum of the real parts of the roots). This paper extends this approach to more general functions of the roots. By combining the Gauss-Lucas methodology with an analysis of the splitting behavior …

Stationarity and Regularity of Set Systems

Extremality, stationarity and regularity notions for a system of closed sets in a normed linear space are investigated. The equivalence of different abstract “extremal” settings in terms of set systems and multifunctions is proved. Dual necessary and sufficient conditions for weak stationarity (the Extended extremal principle) are presented for the case of an Asplund …

Solving large scale linear multicommodity flow problems with an active set strategy and Proximal-ACCPM

In this paper, we propose to solve the linear multicommodity flow problem using a partial Lagrangian relaxation. The relaxation is restricted to the set of arcs that are likely to be saturated at the optimum. This set is itself approximated by an active set strategy. The partial Lagrangian dual is solved with Proximal-ACCPM, a variant …

Mean-risk objectives in stochastic programming

Traditional stochastic programming is risk neutral in the sense that it is concerned with the optimization of an expectation criterion. A common approach to addressing risk in decision making problems is to consider a weighted mean-risk criterion, where some dispersion statistic is used as a measure of risk. We investigate the computational suitability of various …
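A weighted mean-risk criterion of the kind described can be sketched concretely: expected cost plus a risk weight times a dispersion statistic, here the mean absolute deviation over equiprobable scenarios. The choice of dispersion statistic is an assumption for illustration; the paper compares several.

```python
# Minimal sketch of a weighted mean-risk objective over equiprobable scenarios:
#   mean(costs) + risk_weight * dispersion(costs),
# using mean absolute deviation as the (assumed) dispersion statistic.
def mean_risk(costs, risk_weight):
    mean = sum(costs) / len(costs)
    mad = sum(abs(c - mean) for c in costs) / len(costs)
    return mean + risk_weight * mad

# Two candidate decisions with the same expected cost (10.0) but different
# dispersion; a risk-averse weight > 0 prefers the low-dispersion one.
risky = [0.0, 20.0]
safe = [9.0, 11.0]
```

With risk_weight = 0 the criterion reduces to the risk-neutral expectation, so the two decisions tie; any positive weight breaks the tie toward the safer decision.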

Weak Stationarity: Eliminating the Gap between Necessary and Sufficient Conditions

Starting from known necessary extremality conditions in terms of strict subdifferentials and normals, the notion of weak stationarity is introduced. It is defined in terms of initial space elements. The necessary conditions become necessary and sufficient (for stationarity). Citation: School of Information Technology and Mathematical Sciences, Centre of Information and Applied Optimization, University of Ballarat, POB …

Convergence rate estimates for the gradient differential inclusion

Let f be a lower semi-continuous convex function in a Euclidean space, finite or infinite dimensional. The gradient differential inclusion is a generalized differential equation which requires that -x'(t) be in the subgradient of f at x. It is the continuous version of the gradient descent method for minimizing f in case f is differentiable, …
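The connection to gradient descent mentioned in the abstract can be made explicit: discretizing the inclusion x'(t) = -∇f(x(t)) by forward Euler with step dt recovers the update x_{k+1} = x_k - dt·∇f(x_k). The following sketch assumes a differentiable f (here f(x) = x², so ∇f(x) = 2x), the case where the inclusion is an ordinary differential equation.

```python
# Sketch: forward Euler discretization of the gradient flow
#   x'(t) = -grad f(x(t)),
# which is exactly gradient descent with step size dt when f is differentiable.
def gradient_flow_euler(grad, x0, dt=0.01, t_end=5.0):
    x, t = x0, 0.0
    while t < t_end:
        x -= dt * grad(x)   # Euler step: x_{k+1} = x_k - dt * grad f(x_k)
        t += dt
    return x

# f(x) = x^2 has grad f(x) = 2x; the flow x(t) = x0 * exp(-2t) decays to the
# minimizer 0, and the discrete trajectory tracks it.
x_final = gradient_flow_euler(lambda x: 2.0 * x, 1.0)
```

For nondifferentiable convex f the subgradient may be set-valued, which is precisely why the continuous dynamics are stated as a differential inclusion rather than a differential equation.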