An incremental method for solving convex finite minmax problems

We introduce a new approach to minimizing a function defined as the pointwise maximum of finitely many convex real functions (hereafter referred to as the “component functions”), with the aim of working on the basis of “incomplete knowledge” of the objective function. Specifically, a descent algorithm is proposed which does not necessarily require at …
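As a rough illustration of the incremental idea (not the paper's algorithm, whose rules are only sketched in this excerpt), the toy Python code below minimizes $f(x) = \max_i f_i(x)$ for quadratic components while evaluating at most two components per iteration; the probing scheme, step sizes, and iteration budget are all assumptions made for the example.

```python
import numpy as np

# Heuristic sketch: minimize f(x) = max_i f_i(x) with
# f_i(x) = 0.5*||x - c_i||^2, evaluating only two components per step.
def incremental_minmax(centers, x0, steps=300):
    x = x0.copy()
    i = 0                                  # guess for the active component
    for k in range(steps):
        probe = k % len(centers)           # cycle through components as probes
        vals = {j: 0.5 * np.dot(x - centers[j], x - centers[j])
                for j in {i, probe}}
        i = max(vals, key=vals.get)        # keep the larger of the two
        g = x - centers[i]                 # gradient of f_i; a subgradient of f
        x = x - (1.0 / (k + 1)) * g        # diminishing step size (assumption)
    return x

centers = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([1.0, 2.0])]
x = incremental_minmax(centers, x0=np.array([5.0, 5.0]))
print(x)   # drifts toward the min-max point of the three quadratics
```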

Pattern Search Method for Discrete L_1-Approximation

We propose a pattern search method to solve a classical nonsmooth optimization problem. In close analogy with pattern search methods for linearly constrained optimization, the set of search directions at each iteration is defined in such a way that it conforms to the local geometry of the set of points of nondifferentiability near the …
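For orientation, here is a generic compass-search sketch for the discrete $L_1$-approximation problem $\min_x \|Ax - b\|_1$; it polls the plain coordinate directions, whereas the paper's method adapts the direction set to the local geometry of the nondifferentiability set. Mesh size, shrink factor, and tolerance are illustrative assumptions.

```python
import numpy as np

def l1_pattern_search(A, b, x0, mesh=1.0, tol=1e-6):
    f = lambda x: np.abs(A @ x - b).sum()   # the L1 objective
    x, fx = x0.copy(), f(x0)
    while mesh > tol:
        improved = False
        for i in range(len(x)):
            for s in (+1.0, -1.0):          # poll +/- each coordinate direction
                y = x.copy()
                y[i] += s * mesh
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            mesh *= 0.5                      # shrink the mesh on a failed poll
    return x, fx

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.5])
x, fval = l1_pattern_search(A, b, x0=np.zeros(2))
print(x, fval)
```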

Finding optimal algorithmic parameters using a mesh adaptive direct search

The objectives of this paper are twofold. First, we demonstrate the flexibility of the mesh adaptive direct search (MADS) algorithm in identifying locally optimal algorithmic parameters. This is done by devising a general framework for parameter tuning. The framework makes provision for surrogate objectives. Parameters are sought so as to minimize some measure of performance of …
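A minimal sketch of the black-box viewpoint: the target algorithm is wrapped as a function from parameters to a performance measure, which any direct-search method can then minimize. The inner algorithm (gradient descent with momentum on a quadratic) and the performance measure (iteration count) are stand-ins chosen for illustration, not the paper's framework; a serious implementation would hand the same black box to MADS (e.g. via the NOMAD solver) instead of the grid search used here.

```python
import numpy as np

def performance(params):
    """Black box: parameters -> iterations needed to converge."""
    step, momentum = params
    x, v = np.array([10.0, -7.0]), np.zeros(2)
    for k in range(10_000):
        g = 2.0 * x                      # gradient of ||x||^2
        v = momentum * v - step * g
        x = x + v
        if np.linalg.norm(g) < 1e-8:
            return k                     # fewer iterations = better parameters
    return 10_000                        # budget exhausted

# Stand-in for a direct-search solver: exhaustive search over a small grid.
best = min(((s, m) for s in (0.1, 0.3, 0.5) for m in (0.0, 0.5, 0.9)),
           key=performance)
print(best, performance(best))
```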

Convergence of a hybrid projection-proximal point algorithm coupled with approximation methods in convex optimization

In order to minimize a closed convex function that is approximated by a sequence of better-behaved functions, we investigate the global convergence of a generic diagonal hybrid algorithm, which consists of an inexact relaxed proximal point step followed by a suitable orthogonal projection onto a hyperplane. The latter makes it possible to consider a fixed relative …
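To make the two-phase structure concrete, the sketch below performs one proximal step on $f(x) = \|x\|_1$ (whose prox is soft-thresholding) followed by projection onto the separating hyperplane built from the resulting subgradient. With an exact prox the projection simply reproduces the prox point; in the inexact setting studied in the paper, the hyperplane step corrects an approximate prox step. The test function, step size, and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # prox of t*||.||_1 at x
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hybrid_step(x, t=1.0):
    y = soft_threshold(x, t)          # (here exact) proximal point step
    g = (x - y) / t                   # g lies in the subdifferential of f at y
    if np.dot(g, g) == 0.0:
        return y                      # x was already a minimizer
    # Project x onto H = {z : <g, z - y> = 0}, which separates x from
    # the solution set. With an exact prox this projection equals y.
    return x - (np.dot(g, x - y) / np.dot(g, g)) * g

x = np.array([3.0, -2.0, 0.5])
for _ in range(10):
    x = hybrid_step(x)
print(x)   # reaches the minimizer of ||x||_1, i.e. the origin
```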

Best approximation to common fixed points of a semigroup of nonexpansive operators

We study a sequential algorithm for finding the projection of a given point onto the common fixed point set of a semigroup of nonexpansive operators in Hilbert space. The convergence of such an algorithm was previously established only for finitely many nonexpansive operators. Algorithms of this kind have been applied to the best approximation and …
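The sketch below is not the semigroup algorithm of the paper but the classical Halpern-type iteration it extends: for a single nonexpansive operator $T$, the iterates $x_{k+1} = \lambda_k x_0 + (1 - \lambda_k) T x_k$ with suitable $\lambda_k \to 0$ converge to the projection of $x_0$ onto the fixed point set of $T$. The operator and step sequence below are illustrative assumptions.

```python
import numpy as np

def T(x):
    # Orthogonal projection onto the line {x1 = x2}: nonexpansive,
    # with fixed point set exactly that line.
    return 0.5 * (x + x[::-1])

x0 = np.array([4.0, 0.0])
x = x0.copy()
for k in range(5000):
    lam = 1.0 / (k + 2)               # lam -> 0, sum(lam) = infinity
    x = lam * x0 + (1.0 - lam) * T(x)
print(x)   # tends to (2, 2), the projection of x0 onto {x1 = x2}
```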

Proximal-ACCPM: a versatile oracle based optimization method

Oracle Based Optimization (OBO) conveniently designates an approach to handling a class of convex optimization problems in which the information pertaining to the function to be minimized and/or to the feasible set takes the form of a linear outer approximation revealed by an oracle. We show, through three representative examples, how difficult problems can be …
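As a minimal illustration of the oracle-based setting, the sketch below runs the basic Kelley cutting-plane loop (the unstabilized scheme that Proximal-ACCPM improves upon with a proximal term and analytic centers): the oracle returns a value and a subgradient, i.e. one linear cut, and the master problem minimizes the piecewise-linear outer approximation. The one-dimensional test function and interval are assumptions.

```python
import itertools

def oracle(x):
    # f(x) = x^2 with derivative 2x; yields the cut f(y) + g*(x - y) <= f(x)
    return x * x, 2.0 * x

lo, hi = -2.0, 3.0
cuts = []                              # (slope, intercept): cut(z) = a*z + b
x = hi                                 # arbitrary starting query point
for _ in range(30):
    fx, g = oracle(x)
    cuts.append((g, fx - g * x))
    # 1-D master problem: minimize max of the cuts over [lo, hi].
    # The minimizer is an endpoint or an intersection of two cuts.
    candidates = [lo, hi]
    for (a1, b1), (a2, b2) in itertools.combinations(cuts, 2):
        if a1 != a2:
            candidates.append((b2 - b1) / (a1 - a2))
    model = lambda z: max(a * z + b for a, b in cuts)
    x = min((c for c in candidates if lo <= c <= hi), key=model)
print(x)   # approaches 0, the minimizer of x^2
```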

On the Convergence of Successive Linear-Quadratic Programming Algorithms

The global convergence properties of a class of penalty methods for nonlinear programming are analyzed. These methods include successive linear programming approaches and, more specifically, the successive linear-quadratic programming approach presented by Byrd, Gould, Nocedal and Waltz (Math. Programming 100(1):27–48, 2004). Every iteration requires the solution of two trust-region subproblems involving piecewise linear and quadratic …
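For context, SLQP methods of this type are usually set in the exact-penalty framework (standard notation, assumed rather than quoted from the paper):

$\phi(x;\nu) \;=\; f(x) \;+\; \nu \sum_{i \in \mathcal{E}} |c_i(x)| \;+\; \nu \sum_{i \in \mathcal{I}} \max\bigl(0,\,-c_i(x)\bigr),$

where $\mathcal{E}$ and $\mathcal{I}$ index the equality and inequality constraints and $\nu > 0$ is the penalty parameter. Each iteration first minimizes a piecewise-linear model of $\phi$ within a trust region (a linear program, used to estimate the active set) and then solves an equality-constrained quadratic trust-region subproblem on that estimate.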

An Algorithm for Perturbed Second-order Cone Programs

The second-order cone programming problem is reformulated into several new systems of nonlinear equations. Assuming that the perturbation of the data lies in a certain neighborhood of zero, and starting from a solution to the original problem, the semismooth Newton iterates converge Q-quadratically to a solution of the perturbed problem. The algorithm is globalized. Numerical examples …
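For reference (standard facts, not the paper's specific reformulations): the second-order cone is $\mathcal{K}^n = \{(x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1} : x_0 \ge \|\bar{x}\|\}$, and a common way to obtain a semismooth system is to replace the complementarity condition $x \in \mathcal{K}^n$, $s \in \mathcal{K}^n$, $x^{T} s = 0$ by the projection equation $x - P_{\mathcal{K}^n}(x - s) = 0$, where $P_{\mathcal{K}^n}$ is the Euclidean projection onto the cone. This projection is strongly semismooth, which is what makes Q-quadratic convergence of Newton-type iterations possible despite the nonsmoothness.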

The Q Method for Symmetric Cone Programming

We extend the Q method to symmetric cone programming. An infeasible interior-point algorithm and a Newton-type algorithm are given. We give convergence results for the interior-point algorithm and prove that the Newton-type algorithm is good for …

Citation: AdvOl-Report #2004/18, Advanced Optimization Laboratory, McMaster University, Hamilton, Ontario, Canada, October 2004

A New Conjugate Gradient Algorithm Incorporating Adaptive Ellipsoid Preconditioning

The conjugate gradient (CG) algorithm is well known to have excellent theoretical properties for solving linear systems of equations $Ax = b$, where the $n\times n$ matrix $A$ is symmetric positive definite. However, for extremely ill-conditioned matrices the CG algorithm performs poorly in practice. In this paper, we discuss an adaptive preconditioning procedure which improves the …
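For reference, the sketch below is generic preconditioned CG with a fixed preconditioner applied as a solve; the adaptive ellipsoid update proposed in the paper is not reproduced here. The diagonal (Jacobi) preconditioner is an illustrative stand-in.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned CG for SPD A; M_inv applies the preconditioner solve."""
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    z = M_inv(r)                        # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update the search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
jacobi = lambda r: r / np.diag(A)       # M = diag(A)
print(pcg(A, b, jacobi))                # ~ [0.0909, 0.6364]
```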