On the Linear Convergence of Difference-of-convex Algorithms for Nonsmooth DC Programming

In this paper we consider the linear convergence of algorithms for minimizing difference-of-convex functions with convex constraints. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave component. Our focus is on algorithms that compute (weak and standard) d(irectional)-stationary points …
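A minimal sketch of the underlying DCA step for $f = g - h$ with a finite-max $h$ (the quadratic $g$, the data, and the subgradient selection below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Minimal DCA sketch for f(x) = g(x) - h(x) with
#   g(x) = 0.5 * ||x - a||^2        (smooth convex part)
#   h(x) = max_i <b_i, x>           (finite-max, nonsmooth convex part)
# Each step linearizes h at x_k via a subgradient v_k in dh(x_k) and
# minimizes the convex surrogate g(x) - <v_k, x>, here in closed form.

a = np.array([1.0, -2.0])
B = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])  # rows are the b_i

def dca(x, iters=100, tol=1e-10):
    for _ in range(iters):
        v = B[np.argmax(B @ x)]   # subgradient of h: gradient of an active piece
        x_new = a + v             # argmin_x 0.5*||x - a||^2 - <v, x>
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

print(dca(np.zeros(2)))           # a critical point of g - h
```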

Amenable cones: error bounds without constraint qualifications

We provide a framework for obtaining error bounds for linear conic problems without assuming constraint qualifications or regularity conditions. The key aspects of our approach are the notions of amenable cones and facial residual functions. For amenable cones, it is shown that error bounds can be expressed as a composition of facial residual functions. The …
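Schematically, and only as a rough rendering with constants suppressed (the precise statement, including the choice of the face chain $K = F_0 \supsetneq \cdots \supsetneq F_\ell$, is the paper's contribution), the composite error bound has the flavor
\[
  \operatorname{dist}(x, K \cap L) \;\lesssim\; (\psi_\ell \circ \cdots \circ \psi_1)\bigl(\operatorname{dist}(x, K) + \operatorname{dist}(x, L)\bigr)
\]
for $x$ in a bounded set, where $K$ is the amenable cone, $L$ an affine subspace, and $\psi_1,\dots,\psi_\ell$ are facial residual functions attached to the faces in the chain.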

A Primal-Dual Lifting Scheme for Two-Stage Robust Optimization

Two-stage robust optimization problems, in which decisions are taken both in anticipation of and in response to the observation of an unknown parameter vector from within an uncertainty set, are notoriously challenging. In this paper, we develop convergent hierarchies of primal (conservative) and dual (progressive) bounds for these problems that trade off the competing goals …
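For context, the generic two-stage robust problem has the min-max-min form (standard notation, which the paper may refine), with here-and-now decision $x$, uncertain parameter $\xi$ from the uncertainty set $\Xi$, and wait-and-see recourse $y$:
\[
  \min_{x \in X} \; c^\top x \;+\; \max_{\xi \in \Xi} \, \min_{y \in Y(x,\xi)} \; d^\top y.
\]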

Inner Conditions for Error Bounds and Metric Subregularity of Multifunctions

We introduce a new class of sets, functions and multifunctions which is shown to be large and to share several useful properties with the convex setting. Error bounds for objects attached to this class are characterized in terms of inner conditions of Abadie's type, that is, conditions bearing on normal cones and coderivatives at …
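For orientation, metric subregularity in its standard form (the paper's setting may refine this): a multifunction $F$ is metrically subregular at $(\bar x, \bar y) \in \operatorname{gph} F$ if, for some $\kappa > 0$ and all $x$ near $\bar x$,
\[
  \operatorname{dist}\bigl(x, F^{-1}(\bar y)\bigr) \;\le\; \kappa \, \operatorname{dist}\bigl(\bar y, F(x)\bigr).
\]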

Discrete Approximation of Two-Stage Stochastic and Distributionally Robust Linear Complementarity Problems

In this paper, we propose a discretization scheme for the two-stage stochastic linear complementarity problem (LCP) where the underlying random data are continuously distributed. Under some moderate conditions, we derive qualitative and quantitative convergence for the solutions obtained from solving the discretized two-stage stochastic LCP (SLCP). We explain how the discretized two-stage SLCP may …
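A minimal sketch of the discretization idea: sample the continuous distribution and solve one second-stage LCP per scenario (the PSOR solver and the data below are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def solve_lcp_psor(M, q, iters=500, tol=1e-10):
    """Projected Gauss-Seidel for LCP(q, M): find z >= 0 with
    Mz + q >= 0 and z'(Mz + q) = 0; converges for symmetric
    positive definite M."""
    z = np.zeros(len(q))
    for _ in range(iters):
        z_old = z.copy()
        for i in range(len(q)):
            z[i] = max(0.0, z[i] - (M[i] @ z + q[i]) / M[i, i])
        if np.linalg.norm(z - z_old) < tol:
            break
    return z

# Monte Carlo discretization of a continuously distributed q(xi):
# draw samples of xi and solve one second-stage LCP per scenario.
rng = np.random.default_rng(0)
M = np.array([[2.0, 0.5], [0.5, 2.0]])           # SPD second-stage matrix
xis = rng.normal(size=(1000, 2))                  # samples of the random vector
sols = np.array([solve_lcp_psor(M, 0.3 * xi - 1.0) for xi in xis])
print("sample-average second-stage solution:", sols.mean(axis=0))
```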

Error bounds for rank constrained optimization problems and applications

This paper is concerned with the rank constrained optimization problem whose feasible set is the intersection of the rank constraint set $\mathcal{R}=\!\big\{X\in\mathbb{X}\ |\ {\rm rank}(X)\le \kappa\big\}$ and a closed convex set $\Omega$. We establish the local (global) Lipschitzian-type error bounds for estimating the distance from any $X\in \Omega$ ($X\in\mathbb{X}$) to the feasible set and …
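For the rank constraint set $\mathcal{R}$ alone, the Eckart–Young theorem gives the Frobenius distance in closed form via singular values; a sketch (the interplay with $\Omega$ is the paper's subject and is not reproduced here):

```python
import numpy as np

def dist_to_rank_set(X, kappa):
    """Frobenius distance from X to {rank <= kappa}: by Eckart-Young it
    equals the l2 norm of the singular values beyond the kappa-th."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.linalg.norm(s[kappa:]))

X = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, -1.0]) + 0.1 * np.eye(3)
print(dist_to_rank_set(X, kappa=1))
```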

Robust Dual Dynamic Programming

Multi-stage robust optimization problems, where the decision maker can dynamically react to consecutively observed realizations of the uncertain problem parameters, pose formidable theoretical and computational challenges. As a result, the existing solution approaches for this problem class typically determine suboptimal solutions under restrictive assumptions. In this paper, we propose a robust dual dynamic programming …

Error bounds for nonlinear semidefinite optimization

In this paper, error bounds for the nonlinear semidefinite optimization problem are considered. We assume the second-order sufficient condition, the strict complementarity condition and the Mangasarian–Fromovitz constraint qualification (MFCQ) at the KKT point. The nondegeneracy condition is not assumed in this paper. Therefore the Jacobian operator of the equality part of the KKT conditions is not assumed …
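For reference, the KKT system in question, in a standard form for $\min f(x)$ subject to $h(x)=0$ and $G(x)\preceq 0$ (the paper's notation may differ):
\[
  \nabla f(x) + \nabla h(x)^{\top}\mu + G'(x)^{*}\Lambda = 0, \qquad h(x) = 0,
\]
\[
  G(x) \preceq 0, \qquad \Lambda \succeq 0, \qquad \langle \Lambda,\, G(x) \rangle = 0,
\]
where $\Lambda$ is the matrix multiplier and $G'(x)^{*}$ is the adjoint of the derivative of $G$.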

Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a …
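A toy instance of that relationship, assuming a proximal-gradient method on a composite objective (the problem data are illustrative): the step length itself bounds a stationarity measure, so it doubles as a termination criterion.

```python
import numpy as np

# Proximal gradient on f(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
# The scaled step G(x) = (x - prox_{t*lam}(x - t*grad)) / t vanishes
# exactly at stationary points, so a small step certifies
# near-stationarity and serves as the termination test.

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
lam = 0.1
t = 1.0 / np.linalg.norm(A.T @ A, 2)      # step 1/L, L = gradient Lipschitz constant

def soft_threshold(v, tau):               # prox of tau*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(2)
for k in range(1000):
    x_new = soft_threshold(x - t * (A.T @ (A @ x - b)), t * lam)
    if np.linalg.norm(x_new - x) / t < 1e-8:   # small step => near-stationary
        break
    x = x_new
print(f"terminated at iteration {k}: x = {x}")
```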

Tight global linear convergence rate bounds for operator splitting methods

In this paper we establish necessary and sufficient conditions for linear convergence of operator splitting methods for a general class of convex optimization problems where the associated fixed-point operator is averaged. We also provide a tight bound on the achievable convergence rate. Most existing results establishing linear convergence in such methods require restrictive assumptions regarding …
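A toy check of the phenomenon: gradient descent on a strongly convex quadratic is a contractive (hence averaged) fixed-point operator, and the per-step error ratio settles at the known linear rate (the data below are illustrative):

```python
import numpy as np

# Gradient descent x -> x - t*Q(x - x_star) on a strongly convex quadratic
# is a contractive fixed-point operator; the error decays at the linear
# rate max(|1 - t*mu|, |1 - t*L|), visible as a constant per-step ratio.

Q = np.diag([1.0, 10.0])                  # mu = 1, L = 10
x_star = np.array([2.0, -1.0])
t = 2.0 / (1.0 + 10.0)                    # classical step 2/(mu + L)
x = np.zeros(2)
prev = np.linalg.norm(x - x_star)
for k in range(15):
    x = x - t * Q @ (x - x_star)
    err = np.linalg.norm(x - x_star)
    print(f"iter {k}: ratio {err / prev:.4f}")  # ~ (L - mu)/(L + mu) = 9/11
    prev = err
```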