$L^p$-norms, log-barriers and Cramer transform in optimization

We show that the Laplace approximation of a supremum by $L^p$-norms has interesting consequences in optimization. For instance, the logarithmic barrier functions (LBF) of a primal convex problem $P$ and its dual $P^*$ appear naturally when using this simple approximation technique for the value function $g$ of $P$ or its Legendre-Fenchel conjugate $g^*$. In addition, …
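The approximation behind this abstract can be observed numerically: the normalized $L^p$-norm of a function on a grid converges to its supremum as $p \to \infty$. A minimal sketch, with an illustrative choice of $f$ and grid that are not from the paper:

```python
import numpy as np

# f and the grid are illustrative choices, not taken from the paper.
f = np.abs(np.sin(np.linspace(0.0, np.pi, 1001)))
sup_f = f.max()                          # true supremum on the grid

for p in (2, 10, 100, 1000):
    # normalized L^p norm: ((1/n) * sum f^p)^(1/p) -> sup f as p -> infinity
    lp = np.mean(f ** p) ** (1.0 / p)
    print(p, lp)
```

Each printed value approaches `sup_f` from below as `p` grows, which is the smoothing of the supremum that the log-barrier connection builds on.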

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration

We consider sequential quadratic programming (SQP) methods for solving constrained nonlinear programming problems. It is generally believed that SQP methods are sensitive to the accuracy with which partial derivatives are provided. One reason is that differences of gradients of the Lagrangian function are used for updating a quasi-Newton matrix, e.g., by the BFGS formula. The …
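The BFGS update mentioned here can be sketched as follows; the curvature safeguard and the test data are illustrative assumptions, not the paper's successive error restoration scheme:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the quasi-Newton matrix B from the step s and the
    difference y of Lagrangian gradients. A plain sketch with a common
    curvature safeguard; it is not the paper's error-restoration scheme."""
    sy = s @ y
    sBs = s @ B @ s
    if sy <= 1e-8 * sBs:                 # skip when curvature information is poor
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

# illustrative data: after the update, B satisfies the secant equation B s = y
B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.5])
B = bfgs_update(B, s, y)
print(B @ s)                             # equals y
```

When `y` is formed from inexact Lagrangian gradients, `s @ y` can become small or negative, which is one way the derivative sensitivity discussed above enters the update.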

A Feasible Directions Method for Nonsmooth Convex Optimization

We propose a new technique for minimizing convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered “non-serious”, a cutting plane is added and …

On Duality Gap in Binary Quadratic Programming

We present in this paper new results on the duality gap between the binary quadratic optimization problem and its Lagrangian dual or semidefinite programming relaxation. We first derive a necessary and sufficient condition for the zero duality gap and discuss its relationship with the polynomial solvability of the primal problem. We then characterize the zeroness …

On the convergence of a wide range of trust region methods for unconstrained optimization

We consider trust region methods for seeking the unconstrained minimum of an objective function $F(x)$, $x$ being the vector of variables, when the gradient $\nabla F$ is available. The methods are iterative with a starting point $x_1$ being given. The new vector of variables $x_{k+1}$ is derived from a quadratic approximation to $F$ that interpolates …
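A minimal sketch of the basic trust-region iteration described above, with an illustrative quadratic objective and commonly used (assumed) acceptance thresholds rather than the paper's specific conditions:

```python
import numpy as np

# Illustrative objective F and its gradient; B is the model Hessian
# (exact here, so the quadratic model matches F and every step is accepted).
def F(x):    return x[0] ** 2 + 10 * x[1] ** 2
def grad(x): return np.array([2 * x[0], 20 * x[1]])

x, Delta = np.array([2.0, 1.0]), 1.0
B = np.diag([2.0, 20.0])

for _ in range(30):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    p = -np.linalg.solve(B, g)              # minimizer of the quadratic model
    if np.linalg.norm(p) > Delta:
        p *= Delta / np.linalg.norm(p)      # pull the step back onto the ball
    pred = -(g @ p + 0.5 * p @ B @ p)       # reduction predicted by the model
    rho = (F(x) - F(x + p)) / pred          # actual vs. predicted reduction
    if rho > 0.1:
        x = x + p                           # accept the step
        if rho > 0.75:
            Delta = min(2 * Delta, 10.0)    # good agreement: enlarge the region
    else:
        Delta *= 0.5                        # poor agreement: shrink the region
print(x)
```

The ratio `rho` is the standard device for deciding whether the quadratic approximation can be trusted over the current region.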

On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

A class of trust-region methods for large-scale bound-constrained systems of nonlinear equations is presented. The methods in this class follow the so-called affine-scaling approach and can efficiently handle large-scale problems. At each iteration, a suitably scaled region around the current approximate solution is defined and, within such a region, the norm of …

Constrained Dogleg Methods for nonlinear systems with simple bounds

We focus on the numerical solution of medium-scale bound-constrained systems of nonlinear equations. In this context, we consider an affine-scaling trust-region approach that allows great flexibility in choosing the scaling matrix used to handle the bounds. The method is based on a dogleg procedure tailored for constrained problems and so it is …
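The dogleg idea underlying the two abstracts above can be sketched in its classic unconstrained form; the papers additionally scale the region to respect the bounds, and the example data below are assumptions:

```python
import numpy as np

def dogleg_step(g, B, Delta):
    """Classic dogleg path: take the Newton point if it fits in the region,
    otherwise follow steepest descent to the Cauchy point and bend toward the
    Newton point, truncated at the trust-region boundary."""
    p_newton = -np.linalg.solve(B, g)
    if np.linalg.norm(p_newton) <= Delta:
        return p_newton                          # Newton point inside the region
    p_cauchy = -(g @ g) / (g @ B @ g) * g        # model minimizer along -g
    if np.linalg.norm(p_cauchy) >= Delta:
        return -Delta / np.linalg.norm(g) * g    # truncated steepest descent
    d = p_newton - p_cauchy                      # Cauchy -> Newton segment
    a, b = d @ d, 2.0 * (p_cauchy @ d)           # intersect it with the boundary
    c = p_cauchy @ p_cauchy - Delta ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + t * d

# illustrative data (not from the papers)
g = np.array([4.0, 20.0])
B = np.diag([2.0, 20.0])
p = dogleg_step(g, B, 1.5)
print(np.linalg.norm(p))                         # on the boundary: norm = Delta
```

The piecewise-linear path is what makes the step cheap: only one linear solve per iteration, with the boundary intersection reduced to a scalar quadratic.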

A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization

We consider optimization problems with objective and constraint functions that may be nonconvex and nonsmooth. Problems of this type arise in important applications, many having solutions at points of nondifferentiability of the problem functions. We present a line search algorithm for situations when the objective and constraint functions are locally Lipschitz and continuously differentiable on …

Metal Artefact Reduction by Least-Squares Penalized-Likelihood Reconstruction with a Fast Polychromatic Projection Model

We consider penalized-likelihood reconstruction for X-ray computed tomography of objects that contain small metal structures. To reduce the beam hardening artefacts induced by these structures, we derive the reconstruction algorithm from a projection model that takes into account the photon emission spectrum and the nonlinear variation of attenuation with photon energy. This algorithm requires excessively long …

A three-term conjugate gradient method with sufficient descent property for unconstrained optimization

Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods that always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. …
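One concrete member of the three-term class is the Zhang–Zhou–Li direction, sketched here purely as an illustration (the paper's general form may differ); for it, the identity $g_k^\top d_k = -\|g_k\|^2$ holds by construction, so sufficient descent is automatic:

```python
import numpy as np

def three_term_direction(g, g_prev, d_prev):
    """Zhang-Zhou-Li-type three-term CG direction, used as an example of the
    class: d = -g + beta*d_prev - theta*y, for which g @ d = -||g||^2 holds
    identically, i.e. sufficient descent with constant 1."""
    y = g - g_prev
    beta = (g @ y) / (g_prev @ g_prev)        # PRP-type parameter
    theta = (g @ d_prev) / (g_prev @ g_prev)
    return -g + beta * d_prev - theta * y

# illustrative gradients; the first direction is steepest descent
g_prev = np.array([1.0, 2.0])
d_prev = -g_prev
g = np.array([0.5, -1.0])
d = three_term_direction(g, g_prev, d_prev)
print(g @ d, -(g @ g))                        # equal: the descent identity
```

The third term is what distinguishes the class from two-term methods: it cancels the part of `beta * d_prev` that could destroy descent, without requiring any line search condition.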