The improvement function in branch-and-bound methods for complete global optimization

We present a new spatial branch-and-bound approach for treating optimization problems with nonconvex inequality constraints. It is able to approximate the set of all globally minimal points in the case of solvability, and to detect infeasibility otherwise. The new technique covers the nonconvex constraints by means of an improvement function which, although nonsmooth, can be treated …
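Setting the paper's improvement-function machinery aside, the basic spatial branch-and-bound loop it builds on can be sketched in one dimension. The sketch below uses a simple Lipschitz-constant lower bound; the objective, the constant `L`, and the tolerance are illustrative assumptions, not the paper's construction:

```python
import heapq

def branch_and_bound(f, a, b, L, tol=1e-3):
    """Minimal spatial branch-and-bound for a 1-D L-Lipschitz function.

    Lower bound on an interval [lo, hi]: f(mid) - L*(hi - lo)/2,
    valid because f cannot drop faster than L per unit distance.
    """
    best_x = (a + b) / 2
    best_val = f(best_x)
    heap = [(best_val - L * (b - a) / 2, a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_val - tol:
            continue  # prune: this box cannot beat the incumbent by more than tol
        mid = (lo + hi) / 2
        val = f(mid)
        if val < best_val:
            best_x, best_val = mid, val
        for l, h in ((lo, mid), (mid, hi)):
            m = (l + h) / 2
            child_lb = f(m) - L * (h - l) / 2
            if child_lb < best_val - tol:
                heapq.heappush(heap, (child_lb, l, h))
    return best_x, best_val

# Toy objective: (x - 0.3)^2 on [-1, 1]; |f'| <= 2.6 there, so L = 2.6.
x_star, v_star = branch_and_bound(lambda x: (x - 0.3) ** 2, -1.0, 1.0, L=2.6)
```

On termination the incumbent value is within `tol` of the global minimum, since every discarded box had a lower bound above `best_val - tol`.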

The improvement function reformulation for graphs of minimal point mappings

Graphs of minimal point mappings of parametric optimization problems appear in the definition of feasible sets of bilevel optimization problems and of semi-infinite optimization problems, and the intersection of multiple such graphs defines (generalized) Nash equilibria. This paper shows how minimal point graphs of nonconvex parametric optimization problems can be written with the help of …

Extending Exact Convex Relaxations of Quadratically Constrained Quadratic Programs

A convex relaxation of a quadratically constrained quadratic program (QCQP) is called exact if it has a rank-$1$ optimal solution that corresponds to an optimal solution of the QCQP. Given a QCQP whose convex relaxation is exact, this paper investigates the incorporation of additional quadratic inequality constraints under a non-intersecting quadratic constraint condition while maintaining …

New Algorithms for Maximizing the Difference of Convex Functions

Maximizing the difference of two convex functions over a convex feasible set (the so-called DC problem) is a hard problem, and a large number of publications address it. Many of them are variations of the widely used DCA algorithm [20]. The success of this algorithm in reaching a good approximation of a global …
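For context (this is the classical DCA iteration from the literature, not the new algorithms announced above), DCA minimizes $g(x) - h(x)$ by linearizing the concave part $-h$ at the current iterate and minimizing the resulting convex surrogate. A minimal one-dimensional sketch, with $g(x) = x^2$ and $h(x) = 2|x|$ chosen purely for illustration:

```python
def dca(grad_h, argmin_surrogate, x0, iters=50):
    """Classical DCA for min g(x) - h(x): at each step, replace h by its
    linearization at x_k and minimize the convex surrogate
    g(x) - <grad_h(x_k), x>."""
    x = x0
    for _ in range(iters):
        x = argmin_surrogate(grad_h(x))
    return x

# Toy instance: g(x) = x^2, h(x) = 2|x|, so f(x) = x^2 - 2|x| has
# global minima at x = +/-1 with value -1.
grad_h = lambda x: 2.0 if x >= 0 else -2.0   # a subgradient of h
argmin_surrogate = lambda y: y / 2.0         # argmin_x of x^2 - y*x
x_star = dca(grad_h, argmin_surrogate, x0=0.5)
```

Starting from 0.5 the iteration jumps to the global minimizer 1.0 and stays there; from a negative start it would converge to -1.0, illustrating that DCA finds a critical point that depends on the initialization.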

The Least Singular Value Function in Variational Analysis

Metric regularity is among the central concepts of nonlinear and variational analysis, constrained optimization, and their numerous applications. However, metric regularity can be elusive for some important ill-posed classes of problems including polynomial equations, parametric variational systems, smooth reformulations of complementarity systems with degenerate solutions, etc. The study of stability issues for such …
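For reference, the standard definition of the notion in question (the notation here is the usual one from the literature, not taken from this paper) reads: a set-valued mapping $F$ is metrically regular at a point $(\bar x, \bar y)$ of its graph if there exist $\kappa > 0$ and neighborhoods $U \ni \bar x$, $V \ni \bar y$ such that

```latex
\operatorname{dist}\bigl(x, F^{-1}(y)\bigr)
  \le \kappa \,\operatorname{dist}\bigl(y, F(x)\bigr)
  \qquad \text{for all } (x, y) \in U \times V .
```

The infimum of such moduli $\kappa$ quantifies how strongly solution sets $F^{-1}(y)$ respond to perturbations of $y$; its blow-up is what makes the ill-posed classes listed above delicate.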

New Sufficient and Necessary Conditions for Constrained and Unconstrained Lipschitzian Error Bounds

Local error bounds play a fundamental role in mathematical programming and variational analysis. They are used, e.g., as constraint qualifications in optimization and in developing calculus rules for generalized derivatives in nonsmooth and set-valued analysis, and they serve as a key ingredient in the design and convergence analysis of Newton-type methods for solving systems of possibly …
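As a reference point (this is the standard definition from the literature; the symbols are generic, not this paper's notation), a local Lipschitzian error bound for the set $C = \{x : g(x) \le 0\}$ with $g \colon \mathbb{R}^n \to \mathbb{R}$ holds at $\bar x \in C$ if there exist $c > 0$ and a neighborhood $U$ of $\bar x$ such that

```latex
\operatorname{dist}(x, C) \le c \, \max\{ g(x), 0 \}
  \qquad \text{for all } x \in U .
```

In words: the distance to the feasible set is controlled, up to the constant $c$, by the measurable constraint violation, which is exactly what Newton-type convergence analyses exploit.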

On Sum-Rules for Second-Order Contingent Derivatives

We are concerned with contingent derivatives and their second-order counterparts (introduced by Ngai et al.) of set-valued mappings. Special attention is given to the development of new sum-rules for second-order contingent derivatives. To be precise, we want to find conditions under which the second-order contingent derivative of the sum of a smooth and a set-valued …

Constructing QCQP Instances Equivalent to Their SDP Relaxations

General quadratically constrained quadratic programs (QCQPs) are challenging to solve, as they are known to be NP-hard. A popular approach to approximating QCQP solutions is to use semidefinite programming (SDP) relaxations. It is well known that the optimal value η of the SDP relaxation problem bounds the optimal value ζ of the QCQP from below, i.e., …
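The bound η ≤ ζ comes from the standard lifting $X = xx^\top$; in generic notation (assumed here for illustration, not taken from this paper), with QCQP data $Q_k$, $q_k$, $\gamma_k$:

```latex
\zeta = \min_{x}\; x^\top Q_0 x + 2 q_0^\top x
  \quad \text{s.t.}\quad x^\top Q_k x + 2 q_k^\top x + \gamma_k \le 0,
  \quad k = 1, \dots, m,

\eta = \min_{x,\, X}\; \langle Q_0, X \rangle + 2 q_0^\top x
  \quad \text{s.t.}\quad \langle Q_k, X \rangle + 2 q_k^\top x + \gamma_k \le 0,
  \quad
  \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0 .
```

Since $X = xx^\top$ is feasible for the relaxation whenever $x$ is feasible for the QCQP, $\eta \le \zeta$; the relaxation is exact precisely when some optimal $X$ has rank one, i.e., $X = xx^\top$.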

Variable metric proximal stochastic gradient methods with additional sampling

Regularized empirical risk minimization problems arise in a variety of applications, including machine learning, signal processing, and image processing. Proximal stochastic gradient algorithms are a standard approach to solving these problems due to their low computational cost per iteration and a relatively simple implementation. This paper introduces a class of proximal stochastic gradient methods built …
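As a hedged illustration of the baseline (this is the plain proximal stochastic gradient skeleton with a fixed step and identity metric, not the variable-metric, additional-sampling scheme introduced above), consider an $\ell_1$-regularized one-dimensional least-squares problem; the data and parameters below are toy assumptions:

```python
import random

def soft_threshold(v, t):
    """Proximal operator of t * |.| (the scalar l1 prox)."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def prox_sgd(data, lam, step, iters, x0=0.0, seed=0):
    """Proximal stochastic gradient for
        min_x  mean_i 0.5 * (a_i * x - b_i)^2 + lam * |x|.
    Each iteration: sample one pair (a, b), take a stochastic gradient step
    on the smooth part, then apply the prox of the nonsmooth regularizer."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        a, b = rng.choice(data)
        grad = a * (a * x - b)          # gradient of 0.5 * (a*x - b)^2
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy 1-D instance whose data are consistent with x = 2.
data = [(1.0, 2.0), (2.0, 4.0)]
x_hat = prox_sgd(data, lam=0.01, step=0.1, iters=2000)
```

With a small `lam` the iterates settle slightly below 2 (the $\ell_1$ prox shrinks each step), which shows why the prox must act on the regularizer rather than being folded into the gradient.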

Extended Triangle Inequalities for Nonconvex Box-Constrained Quadratic Programming

Let Box_n = {x in R^n : 0 <= x <= e}, and let QPB_n denote the convex hull of {(1, x')'(1, x') : x in Box_n}. The quadratic programming problem min{x'Qx + q'x : x in Box_n}, where Q is not positive semidefinite (PSD), is equivalent to a linear optimization problem over QPB_n …