Residuals-based distributionally robust optimization with covariate information

We consider data-driven approaches that integrate a machine learning prediction model within distributionally robust optimization (DRO), given limited joint observations of uncertain parameters and covariates. Our framework is flexible: it accommodates a variety of regression setups and DRO ambiguity sets. We investigate asymptotic and finite-sample properties of solutions obtained … Read more
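
To fix ideas, here is a minimal sketch of the residuals-based idea in a newsvendor setting; the regression model, cost parameters, and function names are illustrative assumptions, not the paper's framework:

```python
import numpy as np

def residual_scenarios(X, Y, x_new):
    """Fit a regression of Y on covariates X, then re-center its in-sample
    residuals at the prediction for a new covariate x_new to get scenarios."""
    Xa = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(Xa, Y, rcond=None)   # least-squares stand-in
    resid = Y - Xa @ beta                           # in-sample residuals
    y_hat = np.concatenate([[1.0], np.atleast_1d(x_new)]) @ beta
    return y_hat + resid                            # scenarios for Y given x_new

def newsvendor_order(scenarios, price=2.0, cost=1.0):
    # SAA newsvendor: the optimal order is the critical-ratio quantile
    return np.quantile(scenarios, (price - cost) / price)
```

A DRO variant would replace the plain quantile step with a worst-case expectation over an ambiguity set centered at these residual-based scenarios.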

Convergence analysis under consistent error bounds

We introduce the notion of consistent error bound functions, which provides a unifying framework for error bounds for multiple convex sets. This framework goes beyond the classical Lipschitzian and Hölderian error bounds and includes the logarithmic and entropic error bounds found in the exponential cone. It also includes the error bounds obtainable under the theory of … Read more
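
For contrast, classical error bounds control the distance to an intersection by a power of the distances to the individual sets: for $x$ in a bounded set,
\[
\operatorname{dist}(x, C_1 \cap \dots \cap C_m) \le \tau \, \max_{1 \le i \le m} \operatorname{dist}(x, C_i)^{\gamma},
\]
with $\gamma = 1$ the Lipschitzian case and $\gamma \in (0,1)$ the Hölderian case. One plausible reading of the consistent framework (the paper's precise definition may differ) replaces the right-hand side by a function $\Phi\bigl(\max_i \operatorname{dist}(x, C_i), \|x\|\bigr)$ that is monotone and vanishes in its first argument, leaving room for logarithmic or entropic growth.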

Expected complexity analysis of stochastic direct-search

This work presents a convergence-rate analysis of stochastic variants of the broad class of direct-search methods of directional type. It introduces an algorithm designed to optimize differentiable objective functions $f$ whose values can only be computed through a stochastically noisy blackbox. The proposed stochastic directional direct-search (SDDS) algorithm accepts new iterates by imposing a … Read more
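
A minimal sketch of a direct-search step with noisy evaluations follows; the poll set, sample size, and sufficient-decrease constant are illustrative assumptions, and the actual SDDS acceptance test is the one given in the paper:

```python
import numpy as np

def noisy_direct_search(f_noisy, x0, delta=1.0, c=1e-2, n_samples=5, iters=200):
    """Poll coordinate directions, estimate f by averaging noisy evaluations,
    and accept a trial point only under sufficient decrease of order delta**2."""
    x = np.asarray(x0, float)
    est = lambda z: np.mean([f_noisy(z) for _ in range(n_samples)])
    fx, n = est(x), x.size
    for _ in range(iters):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):       # coordinate poll set
            trial = x + delta * d
            ft = est(trial)
            if ft < fx - c * delta**2:                     # sufficient decrease
                x, fx, improved = trial, ft, True
                break
        delta = 2.0 * delta if improved else 0.5 * delta   # expand or shrink
    return x
```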

A search direction inspired primal-dual method for saddle point problems

The primal-dual hybrid gradient algorithm (PDHG), which coincides with the classical Arrow-Hurwicz method, has been widely used in image processing. However, convergence of PDHG has been established in the literature only under rather restrictive conditions, and a proof is still missing for the case without extra constraints. In this paper, from a perspective of the variational … Read more
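
For reference, a generic PDHG iteration for the saddle problem $\min_x \max_y \, \langle Kx, y\rangle + g(x) - f^*(y)$, in a standard textbook form; `prox_g` and `prox_fstar` are user-supplied proximal maps, and the step sizes are assumed to satisfy $\tau\sigma\|K\|^2 < 1$:

```python
import numpy as np

def pdhg(K, prox_g, prox_fstar, x0, y0, tau, sigma, theta=0.0, iters=500):
    """One standard form of PDHG. With theta = 0 there is no primal
    extrapolation and the scheme reduces to the Arrow-Hurwicz method;
    theta = 1 gives the Chambolle-Pock variant."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    x_bar = x.copy()
    for _ in range(iters):
        y = prox_fstar(y + sigma * (K @ x_bar), sigma)   # dual ascent step
        x_new = prox_g(x - tau * (K.T @ y), tau)         # primal descent step
        x_bar = x_new + theta * (x_new - x)              # extrapolation
        x = x_new
    return x, y
```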

Variable smoothing for convex optimization problems using stochastic gradients

We aim to solve a structured convex optimization problem in which a nonsmooth function is composed with a linear operator. When full splitting schemes are desired, primal-dual type methods are usually employed, as they are effective and well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function which is composed with … Read more
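
A sketch of the smoothing idea for $\min_x F(x) + h(Kx)$: replace $h$ by its Moreau envelope $h_{\mu}$, whose gradient is available through the proximal map, and drive $\mu_k \to 0$. The schedule and step size below are illustrative assumptions, and `grad_F` may be a stochastic gradient oracle:

```python
import numpy as np

def variable_smoothing(grad_F, prox_h, K, x0, mu0=1.0, L_F=1.0, normK2=1.0, iters=500):
    """Gradient method on F + h_mu(K.), using the Moreau envelope identity
    grad h_mu(y) = (y - prox_{mu h}(y)) / mu and a decaying smoothing parameter."""
    x = np.asarray(x0, float)
    for k in range(1, iters + 1):
        mu = mu0 / np.sqrt(k)                           # decreasing smoothing
        Kx = K @ x
        grad_env = K.T @ ((Kx - prox_h(Kx, mu)) / mu)   # gradient of h_mu o K
        x = x - (grad_F(x) + grad_env) / (L_F + normK2 / mu)
    return x
```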

A gradient type algorithm with backward inertial steps for a nonconvex minimization

We investigate an algorithm of gradient type with a backward inertial step in connection with the minimization of a nonconvex differentiable function. We show that the generated sequences converge to a critical point of the objective function, if a regularization of the objective function satisfies the Kurdyka-Lojasiewicz property. Further, we provide convergence rates for the … Read more
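
An illustrative sketch only: "backward inertial" is read here as extrapolating toward the previous iterate before the gradient evaluation; the paper's exact recursion and parameter rules may differ:

```python
import numpy as np

def backward_inertial_gradient(grad_f, x0, step=1e-2, beta=0.3, iters=1000):
    """Gradient step at a point pulled back toward the previous iterate
    (assumed form of the backward inertial step)."""
    x_prev = x = np.asarray(x0, float)
    for _ in range(iters):
        y = x + beta * (x_prev - x)          # backward inertial extrapolation
        x_prev, x = x, x - step * grad_f(y)  # gradient evaluated at y
    return x
```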

The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems

In this work, we show that for linearly constrained optimization problems the primal-dual hybrid gradient algorithm, analyzed by Chambolle and Pock [3], can be written as an entirely primal algorithm. This allows us to prove convergence of the iterates even in the degenerate cases when the linear system is inconsistent or when the strong duality … Read more
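
To see why the dual variable can disappear, note that for $\min_x f(x)$ subject to $Ax = b$ the PDHG dual update is $y_{k+1} = y_k + \sigma(A\bar{x}_k - b)$, so $y_k$ is an accumulated sum of primal constraint residuals. A sketch under this reading (the paper's exact primal reformulation may differ; `prox_f` is a user-supplied proximal map):

```python
import numpy as np

def primal_pdhg_eq(prox_f, A, b, x0, tau, sigma, iters=500):
    """PDHG for min f(x) s.t. Ax = b, with the dual variable unrolled into a
    running sum of constraint residuals so only primal quantities appear."""
    x = np.asarray(x0, float)
    x_bar = x.copy()
    r = np.zeros_like(b, dtype=float)        # = y_k, accumulated residuals
    for _ in range(iters):
        r = r + sigma * (A @ x_bar - b)      # dual update in primal terms
        x_new = prox_f(x - tau * (A.T @ r), tau)
        x_bar = 2.0 * x_new - x              # over-relaxation
        x = x_new
    return x
```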

Convergence rates for an inertial algorithm of gradient type associated to a smooth nonconvex minimization

We investigate an inertial algorithm of gradient type in connection with the minimization of a nonconvex differentiable function. The algorithm is formulated in the spirit of Nesterov’s accelerated convex gradient method. We show that the generated sequences converge to a critical point of the objective function, if a regularization of the objective function satisfies the … Read more
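
For orientation, the standard Nesterov-style inertial template the abstract alludes to (the momentum schedule below is the textbook convex one, used purely for illustration; the paper's inertial parameters and regularization are its own):

```python
import numpy as np

def inertial_gradient(grad_f, x0, step=1e-2, iters=1000):
    """Extrapolate with a momentum term, then take a gradient step at the
    extrapolated point."""
    x_prev = x = np.asarray(x0, float)
    for k in range(1, iters + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)   # inertial extrapolation
        x_prev, x = x, y - step * grad_f(y)
    return x
```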

Convergence Rates for Projective Splitting

Projective splitting is a family of methods for solving inclusions involving sums of maximal monotone operators. First introduced by Eckstein and Svaiter in 2008, these methods have enjoyed significant innovation in recent years, becoming one of the most flexible operator splitting frameworks available. While weak convergence of the iterates to a solution has been established, … Read more
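
For concreteness, a sketch of the two-operator case, standard in the projective splitting literature (details vary across variants): to solve $0 \in A(z) + B(z)$, one tracks a pair $(z_k, w_k)$, computes resolvent points $(x_A, u_A) \in \operatorname{gra} A$ and $(x_B, u_B) \in \operatorname{gra} B$, and forms the affine function
\[
\varphi(z, w) = \langle z - x_A,\, u_A - w \rangle + \langle z - x_B,\, u_B + w \rangle,
\]
which is nonpositive on the extended solution set $\{(z, w) : w \in A(z),\ -w \in B(z)\}$; the next iterate is the projection of $(z_k, w_k)$ onto the halfspace $\{\varphi \le 0\}$.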

Efficient Optimization Algorithms for Robust Principal Component Analysis and Its Variants

Robust PCA has drawn significant attention in the last decade due to its success in numerous application domains, ranging from bioinformatics, statistics, and machine learning to image and video processing in computer vision. Robust PCA and its variants such as sparse PCA and stable PCA can be formulated as optimization problems with exploitable special structures. … Read more
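
For concreteness, the classical convex Robust PCA model is $\min_{L,S} \|L\|_* + \lambda\|S\|_1$ subject to $L + S = M$, and a standard augmented-Lagrangian scheme alternates singular-value and soft thresholding (a textbook baseline, not necessarily one of the algorithms developed in this work):

```python
import numpy as np

def rpca_admm(M, lam=None, mu=1.0, iters=100):
    """Inexact augmented Lagrangian for min ||L||_* + lam*||S||_1 s.t. L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    S, Y = np.zeros_like(M), np.zeros_like(M)
    shrink = lambda A, t: np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt   # singular value thresholding
        S = shrink(M - L + Y / mu, lam / mu)           # soft thresholding
        Y = Y + mu * (M - L - S)                       # multiplier update
    return L, S
```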