A Primal-Dual Lifting Scheme for Two-Stage Robust Optimization

Two-stage robust optimization problems, in which decisions are taken both in anticipation of and in response to the observation of an unknown parameter vector from within an uncertainty set, are notoriously challenging. In this paper, we develop convergent hierarchies of primal (conservative) and dual (progressive) bounds for these problems that trade off the competing goals …
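As a toy, brute-force illustration of the conservative/progressive bound idea above (not the paper's hierarchy; the sets and the cost function below are invented), a finite two-stage robust problem can be bracketed from above by restricting the recourse to be static and from below by hedging against only a subset of the uncertainty set:

```python
import itertools

# Invented toy instance: exact value is min_x max_xi min_y cost(x, xi, y).
X = [0.0, 0.5, 1.0]       # here-and-now decisions
U = [-1.0, 0.0, 1.0]      # finite uncertainty set (enables brute force)
Y = [-1.0, 0.0, 1.0]      # wait-and-see recourse decisions

def cost(x, xi, y):
    return (x - xi) ** 2 + 0.5 * (y - xi) ** 2

# Fully adaptive value: the recourse y may depend on the observed xi.
exact = min(max(min(cost(x, xi, y) for y in Y) for xi in U) for x in X)

# Conservative (primal) bound: force a static recourse y -> upper bound.
conservative = min(max(cost(x, xi, y) for xi in U)
                   for x, y in itertools.product(X, Y))

# Progressive (dual) bound: hedge against a subset of U only -> lower bound.
U_sub = [0.0]
progressive = min(max(min(cost(x, xi, y) for y in Y) for xi in U_sub) for x in X)

assert progressive <= exact <= conservative
```

The two bounds sandwich the exact adaptive value; refining the recourse restriction and the uncertainty subset tightens them from both sides.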

Inner Conditions for Error Bounds and Metric Subregularity of Multifunctions

We introduce a new class of sets, functions and multifunctions which is shown to be large and to share several useful properties with the convex setting. Error bounds for objects attached to this class are characterized in terms of inner conditions of Abadie’s type, that is, conditions bearing on normal cones and coderivatives at …

Discrete Approximation of Two-Stage Stochastic and Distributionally Robust Linear Complementarity Problems

In this paper, we propose a discretization scheme for the two-stage stochastic linear complementarity problem (LCP) where the underlying random data are continuously distributed. Under some moderate conditions, we derive qualitative and quantitative convergence for the solutions obtained from solving the discretized two-stage stochastic LCP (SLCP). We explain how the discretized two-stage SLCP may …
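A minimal sketch of the underlying idea on an invented scalar example (not the paper's two-stage scheme): replace the continuously distributed data by a discrete sample average and solve the resulting discretized LCP, whose solution approaches the true one as the sample grows.

```python
import random

# Scalar LCP(M, q): find z >= 0 with w = M*z + q >= 0 and z*w = 0.
# For M > 0 the solution is z = max(0, -q/M).  Here q = E[xi] with
# xi ~ Uniform(-2, 0), which we approximate by a sample average,
# mimicking the discretization of a continuously distributed SLCP.
def solve_scalar_lcp(M, q):
    return max(0.0, -q / M)

M = 2.0
true_q = -1.0                           # E[xi] for xi ~ Uniform(-2, 0)
z_true = solve_scalar_lcp(M, true_q)    # = 0.5

random.seed(0)
samples = [random.uniform(-2.0, 0.0) for _ in range(100000)]
q_hat = sum(samples) / len(samples)     # discrete approximation of E[xi]
z_hat = solve_scalar_lcp(M, q_hat)      # solution of the discretized LCP
```

As the sample size grows, `q_hat` converges to `true_q` and hence `z_hat` to `z_true`, a one-dimensional caricature of the quantitative convergence the abstract describes.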

Error bounds for rank constrained optimization problems and applications

This paper is concerned with the rank constrained optimization problem whose feasible set is the intersection of the rank constraint set $\mathcal{R}=\!\big\{X\in\mathbb{X}\ |\ {\rm rank}(X)\le \kappa\big\}$ and a closed convex set $\Omega$. We establish the local (global) Lipschitzian type error bounds for estimating the distance from any $X\in \Omega$ ($X\in\mathbb{X}$) to the feasible set and …
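For intuition about the distance being bounded: the Frobenius distance from any $X$ to the rank constraint set $\mathcal{R}$ alone is computable in closed form from the singular values (Eckart–Young–Mirsky), namely the norm of the trailing singular values beyond $\kappa$. A small sketch, with NumPy assumed available:

```python
import numpy as np

# Frobenius distance from X to {rank <= kappa}: norm of the trailing
# singular values; a nearest point is the truncated SVD of X.
def dist_to_rank_set(X, kappa):
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    return float(np.sqrt(np.sum(s[kappa:] ** 2)))

X = np.diag([3.0, 2.0, 1.0])
d = dist_to_rank_set(X, 2)   # only the trailing singular value 1.0 remains
```

The paper's error bounds concern the harder question of the distance to the *intersection* $\mathcal{R}\cap\Omega$, which this closed form does not cover.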

Robust Dual Dynamic Programming

Multi-stage robust optimization problems, where the decision maker can dynamically react to consecutively observed realizations of the uncertain problem parameters, pose formidable theoretical and computational challenges. As a result, the existing solution approaches for this problem class typically determine suboptimal solutions under restrictive assumptions. In this paper, we propose a robust dual dynamic programming …

Error bounds for nonlinear semidefinite optimization

In this paper, error bounds for the nonlinear semidefinite optimization problem are considered. We assume the second-order sufficient condition, the strict complementarity condition and the MFCQ at the KKT point. The nondegeneracy condition is not assumed in this paper. Therefore the Jacobian operator of the equality part of the KKT conditions is not assumed …

Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a …
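A minimal sketch of one such Taylor-like model step on an invented one-dimensional problem: the proximal gradient update minimizes the linearization of the smooth part plus a quadratic penalty plus the nonsmooth term, and the resulting step length serves as a natural termination measure, in the spirit of the step-size/slope relationship above.

```python
# x+ = argmin_y  f(x) + grad_f(x)*(y - x) + (y - x)^2/(2*t) + lam*|y|,
# which for g = lam*|.| is soft-thresholding in closed form.
def soft_threshold(v, tau):
    return max(abs(v) - tau, 0.0) * (1.0 if v > 0 else -1.0 if v < 0 else 0.0)

def prox_grad_step(x, grad_f, t, lam):
    return soft_threshold(x - t * grad_f(x), t * lam)

# Toy problem: minimize 0.5*(x - 2)^2 + |x|, whose minimizer is x* = 1.
grad_f = lambda x: x - 2.0
x, t, lam = 0.0, 1.0, 1.0
for _ in range(50):
    x_new = prox_grad_step(x, grad_f, t, lam)
    if abs(x_new - x) < 1e-12:   # tiny model step certifies near-stationarity
        break
    x = x_new
```

Terminating when the model step is small is justified precisely because the step length controls the slope at the new point.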

Tight global linear convergence rate bounds for operator splitting methods

In this paper we establish necessary and sufficient conditions for linear convergence of operator splitting methods for a general class of convex optimization problems where the associated fixed-point operator is averaged. We also provide a tight bound on the achievable convergence rate. Most existing results establishing linear convergence in such methods require restrictive assumptions regarding …
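As a toy illustration (much simpler than the paper's setting): a gradient step on a strongly convex quadratic is an averaged fixed-point operator, and its iteration contracts at an exactly computable linear rate, here $|1 - tm|$.

```python
# f(x) = 0.5*m*x^2, fixed-point operator T(x) = x - t*m*x with fixed point 0.
m, t = 2.0, 0.4                 # contraction factor |1 - t*m| = 0.2
T = lambda x: x - t * m * x

x = 1.0
rates = []
for _ in range(10):
    x_new = T(x)
    rates.append(abs(x_new) / abs(x))   # per-step contraction ratio
    x = x_new
# every ratio equals 0.2: linear convergence at the tight rate
```

For this one-dimensional quadratic the observed per-step ratio matches the worst-case rate exactly, which is the sense in which such rate bounds can be tight.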

Error bounds, quadratic growth, and linear convergence of proximal methods

We show that the error bound property, postulating that the step lengths of the proximal gradient method linearly bound the distance to the solution set, is equivalent to a standard quadratic growth condition. We exploit this equivalence in an analysis of asymptotic linear convergence of the proximal gradient algorithm for structured problems, which lack …
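A small numerical illustration, on an invented one-dimensional problem with quadratic growth, of the linear convergence the abstract refers to: the distance to the minimizer shrinks by a fixed factor at every proximal gradient step.

```python
# Minimize 0.5*(x - 2)^2 + |x|; the minimizer is x* = 1, and the objective
# grows quadratically away from it.  With step t = 0.5 the error contracts
# by the factor 1 - t = 0.5 at every iteration.
def soft_threshold(v, tau):
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

t, x = 0.5, 10.0
errors = []
for _ in range(20):
    x = soft_threshold(x - t * (x - 2.0), t * 1.0)   # prox-gradient step
    errors.append(abs(x - 1.0))                      # distance to solution
ratios = [e1 / e0 for e0, e1 in zip(errors, errors[1:]) if e0 > 0]
# every ratio is 0.5: geometric (linear) convergence
```

The geometric decay of `errors` is exactly the behavior that the error bound / quadratic growth equivalence is used to certify.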

Gap functions for quasi-equilibria

An approach for solving quasi-equilibrium problems (QEPs) is proposed relying on gap functions, which allow reformulating QEPs as global optimization problems. The (generalized) smoothness properties of a gap function are analysed and an upper estimate of its Clarke directional derivative is given. Monotonicity assumptions on both the equilibrium and constraining bifunctions are a key tool …
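A minimal sketch of a gap function in the simplest fixed-constraint special case (a variational inequality, not a full QEP with a moving constraint set): the problem "find x in C with F(x)(y - x) >= 0 for all y in C" is reformulated as minimizing a nonnegative function that vanishes exactly at solutions.

```python
# Gap function g(x) = max_{y in C} F(x)*(x - y) for the VI with operator F
# on the interval C; g >= 0 on C and g(x) = 0 iff x solves the VI.
# The max is evaluated here on a grid that includes both endpoints.
C = (0.0, 2.0)
F = lambda x: x - 1.0          # monotone operator; the VI solution is x* = 1

def gap(x, n=2001):
    lo, hi = C
    return max(F(x) * (x - (lo + i * (hi - lo) / (n - 1))) for i in range(n))
# gap(1.0) = 0 at the solution; gap(0.5) and gap(1.5) are positive
```

Minimizing `gap` over `C` then recovers the equilibrium point; the QEP case replaces the fixed `C` by a constraint set that itself depends on `x`, which is where the paper's monotonicity assumptions come in.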