A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization

Let $f$ be a continuous function on $\mathbb{R}^n$, and suppose $f$ is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which $f$ is not differentiable. Of particular interest is the case where $f$ is not convex, and perhaps not even locally Lipschitz, but …
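
As a concrete illustration, here is a minimal sketch of one gradient sampling step in Python, assuming `f` is the objective and `grad` returns a gradient at any point where $f$ is differentiable; the names, sample size, stationarity test, and line search below are illustrative choices, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_step(f, grad, x, eps=1e-2, m=None, beta=1e-4):
    """One iteration of a basic gradient sampling scheme (sketch)."""
    n = x.size
    m = m or 2 * n                        # sample size; theory asks for m >= n + 1
    # Gradients at x and at m points sampled from the eps-ball around x.
    pts = [x] + [x + eps * np.random.rand() ** (1.0 / n) * u / np.linalg.norm(u)
                 for u in np.random.randn(m, n)]
    G = np.array([grad(p) for p in pts])
    # Min-norm element of the convex hull of the sampled gradients:
    #   min ||lam @ G||^2  s.t.  lam >= 0, sum(lam) = 1.
    k = G.shape[0]
    res = minimize(lambda lam: (lam @ G) @ (lam @ G),
                   np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints={'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0})
    g = res.x @ G
    gnorm = np.linalg.norm(g)
    if gnorm < 1e-8:                      # (near-)eps-stationary: stop, or shrink eps
        return x
    d = -g / gnorm                        # normalized descent direction
    t = 1.0
    for _ in range(30):                   # Armijo backtracking line search
        if f(x + t * d) < f(x) - beta * t * gnorm:
            break
        t *= 0.5
    return x + t * d
```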

The Bundle Method in Combinatorial Optimization

We propose a dynamic version of the bundle method to obtain approximate solutions to semidefinite programs with a nearly arbitrary number of linear inequalities. Our approach is based on Lagrangian duality, in which the inequalities are dualized and only a basic set of constraints is maintained explicitly. This leads to function evaluations that require solving a …
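
Schematically, and in our notation rather than the paper's, the dualization can be pictured as follows: for a semidefinite program with a basic constraint set $\mathcal{A}(X) = a$ and a large family of inequalities $\mathcal{B}(X) \le b$,
\[
\max\;\{\langle C, X\rangle \;:\; \mathcal{A}(X) = a,\ \mathcal{B}(X) \le b,\ X \succeq 0\},
\]
the inequalities are moved into the objective with multipliers $\gamma \ge 0$, giving the dual function
\[
f(\gamma) \;=\; \max\;\{\langle C, X\rangle + \gamma^{\top}(b - \mathcal{B}(X)) \;:\; \mathcal{A}(X) = a,\ X \succeq 0\},
\]
so that each evaluation of $f$ amounts to solving a semidefinite program over the basic constraints alone.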

A masked spectral bound for maximum-entropy sampling

We introduce a new masked spectral bound for the maximum-entropy sampling problem. This bound is a continuous generalization of the very effective spectral partition bound. Optimization of the masked spectral bound requires the minimization of a nonconvex, nondifferentiable function over a semidefiniteness constraint. We describe a nonlinear affine scaling algorithm to approximately minimize the bound. …
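
As a rough numerical sketch (our reading, not the paper's definition): if the bound takes the form of the log-sum of the $s$ largest eigenvalues of a Hadamard product $C \circ X$, with $C$ the covariance matrix and $X$ a positive semidefinite mask with unit diagonal, it could be evaluated as follows. All names and the exact form of the bound are assumptions for illustration.

```python
import numpy as np

def masked_spectral_bound(C, X, s):
    """Evaluate an assumed masked spectral bound: the sum of the logs of
    the s largest eigenvalues of the Hadamard product C o X (sketch)."""
    M = C * X                        # Hadamard (entrywise) product
    eig = np.linalg.eigvalsh(M)      # eigenvalues of symmetric M, ascending
    return float(np.sum(np.log(eig[-s:])))
```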

On the block-structured distance to non-surjectivity of sublinear mappings

We show that the block-structured distance to non-surjectivity of a set-valued sublinear mapping equals the reciprocal of a suitable block-structured norm of its inverse. This gives a natural generalization of the classical Eckart and Young identity for square invertible matrices.

Citation: Mathematical Programming 103 (2005), pp. 561–573.
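
For reference, the classical identity being generalized states that, for an invertible matrix $A \in \mathbb{R}^{n \times n}$ and the spectral norm,
\[
\min\,\{\|E\| \;:\; A + E \text{ is singular}\} \;=\; \sigma_{\min}(A) \;=\; \frac{1}{\|A^{-1}\|};
\]
the paper's result replaces $A$ by a set-valued sublinear mapping, singularity by non-surjectivity, and the spectral norm by block-structured norms.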

Characterizations of error bounds for lower semicontinuous functions on metric spaces

By using a variational method based on Ekeland’s principle, we give characterizations of the existence of so-called global and local error bounds, for lower semicontinuous functions defined on complete metric spaces. We thus provide a systematic and synthetic approach to the subject, emphasizing the special case of convex functions defined on arbitrary Banach spaces, and …
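
To fix ideas (a standard definition, in our notation): for a lower semicontinuous $f : X \to \mathbb{R} \cup \{+\infty\}$ on a complete metric space $(X, d)$ with sublevel set $S = \{x \in X : f(x) \le 0\}$, a global error bound with constant $c > 0$ asserts that
\[
d(x, S) \;\le\; c\,[f(x)]_+ \qquad \text{for all } x \in X,
\]
where $[t]_+ = \max\{t, 0\}$; a local error bound requires the same inequality only on a neighborhood of a reference point of $S$.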

On the Convergence of Successive Linear Programming Algorithms

We analyze the global convergence properties of a class of penalty methods for nonlinear programming. These methods include successive linear programming approaches, and more specifically the SLP-EQP approach presented in \cite{ByrdGoulNoceWalt02}. Every iteration requires the solution of two trust region subproblems involving linear and quadratic models, respectively. The interaction between the trust regions of these …
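
A representative setup (our sketch of the standard SLP-EQP template, not necessarily the paper's exact formulation): for $\min f(x)$ subject to $c(x) = 0$, such methods work with the $\ell_1$ penalty function $\phi(x; \nu) = f(x) + \nu\,\|c(x)\|_1$, and the linear-model trust region subproblem reads
\[
\min_{d}\;\; \nabla f(x)^{\top} d \;+\; \nu\,\|c(x) + \nabla c(x)^{\top} d\|_1 \qquad \text{s.t.}\;\; \|d\|_{\infty} \le \Delta^{\mathrm{LP}},
\]
which is equivalent to a linear program; a second trust region governs the quadratic (EQP) step.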

The mathematics of eigenvalue optimization

Optimization problems involving the eigenvalues of symmetric and nonsymmetric matrices present a fascinating mathematical challenge. Such problems arise often in theory and practice, particularly in engineering design, and are amenable to a rich blend of classical mathematical techniques and contemporary optimization theory. This essay presents a personal choice of some central mathematical ideas, outlined for …

Nonsmooth Matrix Valued Functions Defined by Singular Values

A class of matrix valued functions defined by singular values of nonsymmetric matrices is shown to have many properties analogous to matrix valued functions defined by eigenvalues of symmetric matrices. In particular, the (smoothed) matrix valued Fischer-Burmeister function is proved to be strongly semismooth everywhere. This result is also used to show the strong semismoothness …
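
For orientation, the scalar Fischer-Burmeister function and its standard smoothing are (in our notation)
\[
\phi(a, b) \;=\; a + b - \sqrt{a^2 + b^2}, \qquad
\phi_{\mu}(a, b) \;=\; a + b - \sqrt{a^2 + b^2 + 2\mu^2},
\]
and a matrix-valued analogue replaces the square root by a matrix square root, e.g. $\Phi(A, B) = A + B - (A^2 + B^2)^{1/2}$ in the symmetric case; the paper's singular-value setting extends such constructions to nonsymmetric matrices.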

A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization

We introduce an algorithm to minimize a function of several variables with no convexity or smoothness assumptions. The main peculiarity of our approach is the use of an objective function model which is the difference of two piecewise affine convex functions. Bundling and trust region concepts are embedded into the algorithm. Convergence of the …
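
Concretely, a model of the type described, built from bundle information $(g_i, a_i)_{i \in I}$ and $(h_j, b_j)_{j \in J}$ (our notation), has the form
\[
m(d) \;=\; \max_{i \in I}\,\bigl(g_i^{\top} d + a_i\bigr) \;-\; \max_{j \in J}\,\bigl(h_j^{\top} d + b_j\bigr),
\]
a difference of two piecewise affine convex functions of the trial step $d$.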

Quadratic Convergence of a Squared Smoothing Newton Method for Nonsmooth Matrix Equations and Its Applications in Semidefinite Optimization Problems

We study a smoothing Newton method for solving a nonsmooth matrix equation that includes semidefinite programming and the semidefinite complementarity problem as special cases. This method, if specialized for solving semidefinite programs, needs to solve only one linear system per iteration and achieves quadratic convergence under strict complementarity. We also establish quadratic convergence of this …
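
As a toy one-dimensional illustration of the smoothing idea (not the paper's matrix-valued method): a nonsmooth complementarity equation $\min(x, F(x)) = 0$ can be replaced by a smoothed equation with parameter $\mu$, to which Newton's method applies while $\mu$ is driven to zero. The function $F$, the schedule for $\mu$, and all names below are illustrative assumptions.

```python
import numpy as np

def phi(mu, a, b):
    # CHKS smoothing of 2*min(a, b); exact when mu = 0
    return a + b - np.sqrt((a - b) ** 2 + 4 * mu ** 2)

def smoothing_newton(F, dF, x0, mu=1.0, tol=1e-10):
    x = x0
    for _ in range(100):
        g = phi(mu, x, F(x))
        if abs(g) < tol and mu < tol:
            break
        r = np.sqrt((x - F(x)) ** 2 + 4 * mu ** 2)
        # dg/dx by the chain rule applied to phi(mu, x, F(x))
        dg = 1 + dF(x) - (x - F(x)) * (1 - dF(x)) / r
        x -= g / dg                     # Newton step for the smoothed equation
        mu *= 0.1                       # drive the smoothing parameter to 0
    return x

# Example: the complementarity problem with F(x) = x - 1 has solution x = 1.
print(smoothing_newton(lambda x: x - 1.0, lambda x: 1.0, x0=5.0))
```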