The Proximal Alternating Minimization Algorithm for two-block separable convex optimization problems with linear constraints

The Alternating Minimization Algorithm (AMA) was proposed by Tseng to solve convex programming problems with two-block separable linear constraints and objectives, where (at least) one of the objective components is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of AMA does … Read more
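For context, Tseng's AMA for the model problem $\min_{x,z} f(x)+g(z)$ subject to $Ax+Bz=b$, with $f$ strongly convex, can be sketched as follows (the standard statement, not the proximal variant studied in the paper):

$x^{k+1} \in \arg\min_x \left\{ f(x) + \langle y^k, Ax \rangle \right\}$

$z^{k+1} \in \arg\min_z \left\{ g(z) + \langle y^k, Bz \rangle + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - b\|^2 \right\}$

$y^{k+1} = y^k + \rho\,(Ax^{k+1} + Bz^{k+1} - b)$

The $x$-subproblem carries no augmented (quadratic) term, which is what distinguishes AMA from ADMM.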

Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator

We introduce a distributionally robust maximum likelihood estimation model with a Wasserstein ambiguity set to infer the inverse covariance matrix of a p-dimensional Gaussian random vector from n independent samples. The proposed model minimizes the worst case (maximum) of Stein’s loss across all normal reference distributions within a prescribed Wasserstein distance from the normal distribution … Read more
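For reference, one standard form of Stein's loss for an estimate $\hat\Sigma$ of a $p \times p$ covariance matrix $\Sigma$ is

$L(\hat\Sigma, \Sigma) = \operatorname{tr}(\hat\Sigma \Sigma^{-1}) - \log\det(\hat\Sigma \Sigma^{-1}) - p \ge 0,$

with equality if and only if $\hat\Sigma = \Sigma$; the model above takes the worst case of this loss over the Wasserstein ambiguity set.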

A family of spectral gradient methods for optimization

We propose a family of spectral gradient methods whose stepsize is a convex combination of the short Barzilai-Borwein (BB) stepsize and the long BB stepsize. It is shown that each member of the family satisfies a certain quasi-Newton property in a least-squares sense. The family also includes some other gradient methods as … Read more
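As a minimal sketch of the stepsize family (an illustration on an assumed test problem, not the paper's exact method), the short BB stepsize is $s^Ty / y^Ty$ and the long BB stepsize is $s^Ts / s^Ty$, where $s$ and $y$ are the iterate and gradient differences:

```python
import numpy as np

# Gradient descent on the convex quadratic f(x) = 0.5 x^T A x (a demo
# problem chosen for illustration), with the stepsize taken as a convex
# combination of the short and long Barzilai-Borwein stepsizes.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x

def bb_family_stepsize(s, y, gamma):
    """Convex combination of the short BB stepsize (s^T y / y^T y)
    and the long BB stepsize (s^T s / s^T y)."""
    long_bb = (s @ s) / (s @ y)
    short_bb = (s @ y) / (y @ y)
    return gamma * short_bb + (1.0 - gamma) * long_bb

x = np.array([1.0, 1.0])
g = grad(x)
alpha = 1.0  # no curvature information yet on the first step
for _ in range(50):
    x_new = x - alpha * g
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-12:   # stop before s, y degenerate to zero
        x, g = x_new, g_new
        break
    s, y = x_new - x, g_new - g         # iterate and gradient differences
    alpha = bb_family_stepsize(s, y, gamma=0.5)
    x, g = x_new, g_new

print(np.linalg.norm(grad(x)))  # gradient norm after the run
```

Setting `gamma=1` or `gamma=0` recovers the pure short or long BB method as special cases of the family.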

Robust-to-Dynamics Optimization

A robust-to-dynamics optimization (RDO) problem is an optimization problem specified by two pieces of input: (i) a mathematical program (an objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a feasible set $\Omega\subseteq\mathbb{R}^n$), and (ii) a dynamical system (a map $g:\mathbb{R}^n\rightarrow\mathbb{R}^n$). Its goal is to minimize $f$ over the set $\mathcal{S}\subseteq\Omega$ of initial conditions that forever remain in $\Omega$ under … Read more
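A brute-force toy sketch of the setup (all problem data here are assumptions for illustration, not the paper's algorithm): approximate $\mathcal{S}$ by keeping the grid points of $\Omega$ whose trajectories stay in $\Omega$ for $N$ steps, then minimize $f$ over them by enumeration:

```python
import numpy as np

# Demo data: f(x) = x1 + x2 over Omega = [-1,1]^2, linear dynamics g(x) = G x.
G = np.array([[1.2, 0.0], [0.0, 0.5]])   # expands x1, contracts x2
f = lambda x: x[0] + x[1]
N = 10                                    # finite-horizon surrogate for "forever"

def stays_in_omega(x0, steps=N):
    """Check that the trajectory x, Gx, G^2 x, ... stays in [-1,1]^2."""
    x = x0.copy()
    for _ in range(steps):
        x = G @ x
        if np.max(np.abs(x)) > 1.0:
            return False
    return True

grid = np.linspace(-1.0, 1.0, 21)        # 0.1-spaced grid on Omega
candidates = [np.array([a, b]) for a in grid for b in grid
              if stays_in_omega(np.array([a, b]))]
best = min(f(x) for x in candidates)
print(best)   # x2 = -1 survives (contracting direction); x1 is forced near 0
```

The expanding direction illustrates why $\mathcal{S}$ can be much smaller than $\Omega$: only points with small $x_1$ remain feasible under the dynamics.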

Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes, computed from two previous iterates as an approximation of the local Lipschitz constant, without running a linesearch. Thus, each iteration of the method requires only one evaluation of the monotone operator $F$ and one evaluation of the proximal mapping of $g$. … Read more
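A minimal sketch of a golden-ratio-type iteration with a *fixed* stepsize (the paper's contribution is the adaptive stepsize computed from two previous iterates; this fixed-step variant only shows the averaging structure, and the problem data are assumptions for the demo):

```python
import numpy as np

# Demo problem: F(x) = M x with M monotone (positive-definite symmetric
# part), g = 0, so the proximal step is the identity and x* = 0.
phi = (1 + np.sqrt(5)) / 2          # golden ratio
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
F = lambda x: M @ x
L = np.linalg.norm(M, 2)            # Lipschitz constant of F (spectral norm)
lam = phi / (2 * L)                 # fixed stepsize lam <= phi / (2L)

x = np.array([1.0, 1.0])
x_bar = x.copy()
for _ in range(200):
    x_bar = ((phi - 1) * x + x_bar) / phi   # golden-ratio averaging step
    x = x_bar - lam * F(x)                  # forward step; prox of g = 0 is identity
print(np.linalg.norm(x))  # distance to the solution x* = 0
```

The averaging sequence `x_bar` is what allows the method to stay fully explicit: no linesearch and no second operator evaluation per iteration.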

Lectures on Parametric Optimization: An Introduction

The report aims to provide an overview of results from Parametric Optimization that could be called classical results on the subject. Parametric Optimization considers optimization problems that depend on a parameter and describes how the feasible set, the value function, and the local or global minimizers of the program vary with the parameter. After … Read more
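A minimal unconstrained example of such parameter dependence (my illustration, not taken from the lectures): for the family

$P(t): \quad \min_x \; x^2 - 2tx,$

the minimizer is $x^*(t) = t$ and the value function is $v(t) = -t^2$, both smooth in $t$. In the presence of constraints this smoothness can fail, and only weaker continuity or stability properties of the feasible set, value function, and minimizers remain.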

An algorithm to compute the Hoffman constant of a system of linear constraints

We propose a combinatorial algorithm to compute the Hoffman constant of a system of linear equations and inequalities. The algorithm is based on a characterization of the Hoffman constant as the largest of a finite canonical collection of easy-to-compute Hoffman constants. Our algorithm and characterization extend to the more general context where some of the … Read more
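For context, the Hoffman constant $H(A)$ of a system $Ax \le b$ is the smallest constant such that

$\operatorname{dist}(x, P) \le H(A)\,\|(Ax - b)_+\|$ for all $x$, whenever $P = \{x : Ax \le b\}$ is nonempty,

where $(\cdot)_+$ denotes the componentwise positive part; it bounds the distance to the feasible set in terms of the constraint violation.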

Fast Multilevel Algorithms for Compressive Principal Component Pursuit

Recovering a low-rank matrix from highly corrupted measurements arises in compressed sensing of structured high-dimensional signals (e.g., videos and hyperspectral images among others). Robust principal component analysis (RPCA), solved via principal component pursuit (PCP), recovers a low-rank matrix from sparse corruptions that are of unknown value and support by decomposing the observation matrix into two … Read more
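The PCP objective $\min \|L\|_* + \lambda \|S\|_1$ subject to $L + S = M$ is typically attacked with splitting methods built from two proximal operators, sketched below as background (an illustration of the building blocks, not the paper's multilevel algorithm):

```python
import numpy as np

def soft_threshold(X, tau):
    """Prox of tau * ||.||_1: shrink each entry toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.array([[0.0, 2.0], [0.0, 0.0]])            # rank-1, singular value 2
print(soft_threshold(np.array([3.0, -0.5]), 1.0))  # entries shrunk by 1
print(singular_value_threshold(M, 1.0))            # singular value 2 -> 1
```

Alternating these two operators (on the sparse and low-rank blocks, respectively) is the core of standard ADMM-type solvers for PCP, whose per-iteration SVD cost is what multilevel schemes aim to reduce.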

Primal-Dual Interior-Point Methods for Domain-Driven Formulations: Algorithms

We study infeasible-start primal-dual interior-point methods for convex optimization problems given in a typically natural form that we call the Domain-Driven formulation. Our algorithms extend many advantages of primal-dual interior-point techniques available for conic formulations, such as the current best complexity bounds and more robust certificates of approximate optimality, unboundedness, and infeasibility, to Domain-Driven formulations. The … Read more

An algorithm for solving infinite horizon Markov dynamic programmes

We consider a general class of infinite horizon dynamic programmes where state and control sets are convex and compact subsets of Euclidean spaces and (convex) costs are discounted geometrically. The aim of this work is to provide a convergence result for these problems under as few restrictions as possible. Under certain assumptions on the cost … Read more
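A minimal finite illustration of the geometric-discounting structure (assumed demo data, not the paper's continuous convex setting): with discount factor $\beta < 1$, the Bellman operator is a $\beta$-contraction in the sup norm, so value iteration converges geometrically to its unique fixed point.

```python
import numpy as np

# Two states {0, 1}, two actions (stay, switch), deterministic transitions.
beta = 0.9
# cost[s][a]: a = 0 stays in s, a = 1 switches to the other state
cost = np.array([[1.0, 3.0],
                 [2.0, 0.0]])

def bellman(V):
    """T V(s) = min_a { cost(s, a) + beta * V(next state) }."""
    return np.array([min(cost[s, 0] + beta * V[s],
                         cost[s, 1] + beta * V[1 - s]) for s in (0, 1)])

V = np.zeros(2)
for _ in range(300):
    V = bellman(V)
print(V)   # fixed point: V(0) = 10 (keep paying 1), V(1) = 9 (switch once, then as above)
```

The convergence question the paper addresses is how much of this picture survives when the state and control sets are continuous (convex, compact) rather than finite.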