Implicit Multifunction Theorems in complete metric spaces

In this paper, we establish some new characterizations of the metric regularity of implicit multifunctions in complete metric spaces by using the lower semicontinuous envelopes of the distance functions for set-valued mappings. Through these new characterizations it is possible to investigate implicit multifunction theorems based on coderivatives and on contingent derivatives as well as the …
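For context (these are the standard definitions, not the paper's exact statements): a set-valued mapping $F \colon X \rightrightarrows Y$ between metric spaces is metrically regular around $(\bar x, \bar y) \in \operatorname{gph} F$ with modulus $\kappa > 0$ when

```latex
d\bigl(x, F^{-1}(y)\bigr) \;\le\; \kappa \, d\bigl(y, F(x)\bigr)
\qquad \text{for all } (x, y) \text{ near } (\bar x, \bar y),
```

and the implicit multifunction attached to a parametric inclusion $0 \in F(x, p)$ is $G(p) = \{\, x : 0 \in F(x, p) \,\}$, whose local behaviour such regularity estimates control.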

Stability of error bounds for convex constraint systems in Banach spaces

This paper studies the stability of error bounds for convex constraint systems in Banach spaces. We show that certain known sufficient conditions for local and global error bounds actually ensure error bounds for the family of functions that are, in a sense, small perturbations of the given one. A single inequality as well as semi-infinite constraint systems …

Stability of error bounds for semi-infinite convex constraint systems

In this paper, we are concerned with the stability of the error bounds for semi-infinite convex constraint systems. Roughly speaking, the error bound of a system of inequalities is said to be stable if all its “small” perturbations admit a (local or global) error bound. We first establish subdifferential characterizations of the stability of error …
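The object under perturbation, in its standard form (not necessarily the paper's notation): for a convex function $f$ with nonempty solution set $S = \{x : f(x) \le 0\}$, a global error bound with constant $\tau > 0$ is

```latex
d(x, S) \;\le\; \tau \,[f(x)]_+ \qquad \text{for all } x,
```

where $[t]_+ = \max\{t, 0\}$; stability then asks that every suitably small perturbation $\tilde f$ of $f$ admit such a bound as well.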

Homogeneous Cone Complementarity Problems and $P$ Properties

We consider existence and uniqueness properties of a solution to the homogeneous cone complementarity problem (HCCP). Employing the $T$-algebraic characterization of homogeneous cones, we generalize the $P$, $P_0$, $R_0$ properties for a nonlinear function associated with the standard nonlinear complementarity problem to the setting of the HCCP. We prove that if a continuous function has either the …
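For orientation, in the classical finite-dimensional case the $P$ property of a linear map $x \mapsto Mx$ reduces to $M$ being a P-matrix (all principal minors strictly positive), which guarantees a unique solution of the linear complementarity problem for every right-hand side. A minimal sketch of that classical check (the paper's setting, $T$-algebras over homogeneous cones, is far more general):

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_P_matrix(M):
    """True iff every principal minor of M is strictly positive."""
    n = len(M)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[M[i][j] for j in idx] for i in idx]
            if det(sub) <= 0:
                return False
    return True
```

For example, `[[1, 2], [0, 1]]` passes (minors 1, 1, 1), while `[[0, 1], [-1, 0]]` fails because of the zero diagonal minors, even though its determinant is positive.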

Primal and dual linear decision rules in stochastic and robust optimization

Linear stochastic programming provides a flexible toolbox for analyzing real-life decision situations, but it can become computationally cumbersome when recourse decisions are involved. The latter are usually modelled as decision rules, i.e., functions of the uncertain problem data. It has recently been argued that stochastic programs can quite generally be made tractable by restricting the …
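Schematically, in generic two-stage notation (not the paper's): the recourse decision $y(\xi)$ in

```latex
\min_{x,\;y(\cdot)} \;\; c^\top x + \mathbb{E}\bigl[ q^\top y(\xi) \bigr]
\quad \text{s.t.} \quad A x + B\, y(\xi) \ge h(\xi) \;\; \forall \xi
```

is restricted to the affine form $y(\xi) = Y \xi$, which replaces the infinite-dimensional search over functions by a finite-dimensional one over the matrix $Y$; the "dual" linear decision rules of the title apply the analogous restriction on the dual side.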

A Coordinate Gradient Descent Method for Linearly Constrained Smooth Optimization and Support Vector Machines Training

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. …
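The QP in question is the SVM dual: maximize $\sum_i \alpha_i - \tfrac12 \sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$ subject to $0 \le \alpha_i \le C$ and $\sum_i \alpha_i y_i = 0$. The single equality constraint is why decomposition methods update at least two coordinates at a time. Below is a minimal SMO-style two-coordinate sketch on a toy 1-D dataset, not the paper's method: the pair selection is naive, and the bias term is omitted because this symmetric data has zero bias.

```python
def smo_pairs(X, y, C=10.0, sweeps=200):
    """Naive SMO-style solver: sweep over all coordinate pairs, updating each
    pair analytically while preserving the box and equality constraints."""
    n = len(X)
    K = [[X[i] * X[j] for j in range(n)] for i in range(n)]  # linear kernel, 1-D inputs
    alpha = [0.0] * n

    def f(k):  # decision value at training point k (bias omitted, see note above)
        return sum(alpha[m] * y[m] * K[m][k] for m in range(n))

    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                eta = K[i][i] + K[j][j] - 2.0 * K[i][j]  # curvature along the pair direction
                if eta < 1e-12:
                    continue
                Ei, Ej = f(i) - y[i], f(j) - y[j]
                # Feasible interval for alpha[j] keeping both box and equality constraints.
                if y[i] != y[j]:
                    L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
                else:
                    L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
                aj = min(H, max(L, alpha[j] + y[j] * (Ei - Ej) / eta))
                alpha[i] += y[i] * y[j] * (alpha[j] - aj)  # restore sum(alpha * y) = 0
                alpha[j] = aj
    return alpha

X = [-2.0, -1.0, 1.0, 2.0]
y = [-1, -1, 1, 1]
alpha = smo_pairs(X, y)
w = sum(alpha[m] * y[m] * X[m] for m in range(len(X)))
preds = [1 if w * x > 0 else -1 for x in X]
```

Each pair update is a one-dimensional exact minimization clipped to the feasible interval, which is exactly the two-coordinate block structure the equality constraint forces.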

A Coordinate Gradient Descent Method for Nonsmooth Separable Minimization

We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with $\ell_1$-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian …
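For the l_1 special case (the lasso, $\min_x \tfrac12\|Ax - b\|^2 + \lambda\|x\|_1$), the coordinate step has a closed form via soft-thresholding. A minimal single-coordinate sketch, not the paper's block method or its line search:

```python
def soft_threshold(z, t):
    """Proximal map of t*|.|: shrink z toward zero by t."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(A, b, lam, sweeps=200):
    """Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    col_sq = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # residual b - Ax
    for _ in range(sweeps):
        for j in range(n):
            # Correlation of column j with the partial residual (coordinate j's
            # own contribution added back in).
            rho = sum(A[i][j] * (r[i] + A[i][j] * x[j]) for i in range(m))
            new_xj = soft_threshold(rho, lam) / col_sq[j]
            delta = new_xj - x[j]
            if delta != 0.0:
                for i in range(m):
                    r[i] -= A[i][j] * delta
                x[j] = new_xj
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = lasso_cd(A, b, lam=0.5)
```

On this tiny instance both coordinates stay positive, so the solution solves $A^\top A x = A^\top b - \lambda \mathbf{1}$, giving $x \approx (0.8333, 1.8333)$.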

Sum of Squares Method for Sensor Network Localization

We formulate the sensor network localization problem as finding the global minimizer of a quartic polynomial. Then sum of squares (SOS) relaxations can be applied to solve it. However, the general SOS relaxations are too expensive to implement for large problems. Exploiting the special features of this polynomial, we propose a new structured SOS relaxation, …
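The formulation itself is easy to state: with anchors $a_k$ and measured distances $d_k$, a single sensor's position $s$ minimizes the quartic $f(s) = \sum_k (\|s - a_k\|^2 - d_k^2)^2$. The paper's contribution is a cheap structured SOS relaxation of this polynomial; as a stand-in, here is plain gradient descent on a three-anchor toy instance (it finds the global minimum here only because the instance is tiny and well-posed):

```python
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = (0.3, 0.4)
# Squared anchor-to-sensor distances, as would be measured.
d2 = [(true_pos[0] - ax) ** 2 + (true_pos[1] - ay) ** 2 for ax, ay in anchors]

def grad(s):
    """Gradient of f(s) = sum_k (||s - a_k||^2 - d_k^2)^2."""
    gx = gy = 0.0
    for (ax, ay), dk2 in zip(anchors, d2):
        res = (s[0] - ax) ** 2 + (s[1] - ay) ** 2 - dk2
        gx += 4.0 * res * (s[0] - ax)
        gy += 4.0 * res * (s[1] - ay)
    return gx, gy

s = [0.5, 0.5]  # initial guess
for _ in range(2000):
    gx, gy = grad(s)
    s[0] -= 0.05 * gx
    s[1] -= 0.05 * gy
```

With three non-collinear anchors the zero set of $f$ is a single point, so the iterate recovers `true_pos`; with two anchors a reflected solution would also have objective zero, which is exactly the kind of ambiguity that makes the general problem hard.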

Analyticity of weighted central path and error bound for semidefinite programming

The purpose of this paper is twofold. First, we show that every Cholesky-based weighted central path for semidefinite programming is analytic under strict complementarity. This result is applied to homogeneous cone programming to show that the central paths defined by the known class of optimal self-concordant barriers are analytic in the presence of strictly complementary …

Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems

We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the …
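A minimal numerical illustration of the claimed behaviour, on a toy instance ($\min \tfrac12\|x\|^2$ s.t. $x_1 + x_2 \ge 2$) rather than anything from the paper: the multiplier update drives the constraint violation to zero at a linear rate, and the rate improves as the augmentation parameter $r$ grows.

```python
def augmented_lagrangian(r=1.0, outer=15, inner=500, step=0.3):
    """Augmented Lagrangian for min 0.5*||x||^2  s.t.  g(x) = 2 - x1 - x2 <= 0.
    Inner subproblems are solved by gradient descent; returns the final iterate,
    the final multiplier, and the per-outer-iteration constraint violations."""
    x = [0.0, 0.0]
    lam = 0.0
    viols = []
    for _ in range(outer):
        for _ in range(inner):
            g = 2.0 - x[0] - x[1]
            m = max(0.0, lam + r * g)  # multiplier estimate in the AL gradient
            # grad of L_r(x, lam) = x + m * grad g, with grad g = (-1, -1)
            x[0] -= step * (x[0] - m)
            x[1] -= step * (x[1] - m)
        g = 2.0 - x[0] - x[1]
        viols.append(max(0.0, g))          # constraint violation at this outer step
        lam = max(0.0, lam + r * g)        # first-order multiplier update
    return x, lam, viols

x, lam, viols = augmented_lagrangian()
```

On this instance the violation contracts by exactly $1/(1 + 2r)$ per outer iteration (a factor $1/3$ for $r = 1$, about $1/21$ for $r = 10$), converging to the solution $x = (1, 1)$ with multiplier $\lambda = 1$; this matches the "large enough augmentation parameter" phrasing above.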