Minimizing convex quadratics with variable precision Krylov methods

Iterative algorithms for the solution of convex quadratic optimization problems that exploit inaccurate matrix-vector products are investigated. Theoretical bounds on the performance of a Conjugate Gradient and a Full-Orthogonalization method are derived, the necessary quantities occurring in these bounds are estimated, and new practical algorithms are obtained. Numerical experiments suggest that the new methods have significant … Read more
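
As a rough illustration of the idea (not the paper's method), the sketch below runs a standard conjugate gradient loop in which every matrix-vector product is perturbed by noise whose size follows a hypothetical precision schedule that loosens as the iteration proceeds; the paper's actual accuracy criteria and performance bounds differ.

```python
import numpy as np

def inexact_cg(A, b, matvec_noise, tol=1e-8, max_iter=200):
    """Conjugate gradients where each product A @ p is computed inexactly.

    matvec_noise(k) gives the perturbation size at iteration k
    (a hypothetical precision schedule, not the paper's criterion).
    """
    n = b.size
    x = np.zeros(n)
    r = b.copy()              # residual b - A @ x for x = 0
    p = r.copy()
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p + matvec_noise(k) * np.random.randn(n)  # inexact product
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, k

# Tiny demo on a well-conditioned SPD system; precision loosens with k.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 200 * np.eye(50)
b = rng.standard_normal(50)
x, iters = inexact_cg(A, b, matvec_noise=lambda k: 1e-12 * 2.0**k)
print(iters, np.linalg.norm(A @ x - b))
```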

The Cyclic Douglas-Rachford Algorithm with r-sets-Douglas-Rachford Operators

The Douglas-Rachford (DR) algorithm is an iterative procedure based on sequential reflections with respect to convex sets, which has become popular for convex feasibility problems. In this paper we propose a structural generalization that allows the use of r-sets-DR operators in a cyclic fashion. We prove convergence and present numerical illustrations of the potential advantage of such … Read more
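
For orientation, here is a minimal sketch (not the paper's construction) of the classical two-set DR operator, (I + R_B R_A)/2 with reflections R = 2P - I, applied cyclically over pairs of sets; the operator below composes reflections over any list of sets, but only the r = 2 case used in the demo is the classical one, and the paper's precise r-sets-DR operators should be taken from the paper itself.

```python
import numpy as np

def reflect(x, proj):
    """Reflection with respect to a convex set, via its projector: R = 2P - I."""
    return 2.0 * proj(x) - x

def dr_operator(x, proj_list):
    """(I + R_r ... R_1)/2: compose reflections over the listed sets, then
    average with the identity. For two sets this is the classical DR operator."""
    y = x
    for proj in proj_list:
        y = reflect(y, proj)
    return 0.5 * (x + y)

def proj_ball(center, radius):
    c, r = np.asarray(center, float), float(radius)
    def proj(x):
        d = np.linalg.norm(x - c)
        return x if d <= r else c + r * (x - c) / d
    return proj

# Three balls in the plane with nonempty common intersection.
projs = [proj_ball([1, 0], 1.2), proj_ball([-1, 0], 1.2), proj_ball([0, 1], 1.2)]
x = np.array([5.0, -3.0])
for _ in range(300):
    for pair in [(0, 1), (1, 2), (2, 0)]:   # cycle through pairwise DR operators
        x = dr_operator(x, [projs[i] for i in pair])
z = projs[0](x)                              # shadow point: feasibility candidate
print(z, max(np.linalg.norm(z - p(z)) for p in projs))
```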

Efficient Solution of Maximum-Entropy Sampling Problems

We consider a new approach for the maximum-entropy sampling problem (MESP) that is based on bounds obtained by maximizing a function of the form $\log\det M(x)$ over linear constraints, where $M(x)$ is linear in the $n$-vector $x$. These bounds can be computed very efficiently and are superior to all previously known bounds for MESP on most … Read more
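
Since $M(x)$ is linear in $x$, evaluating such a bounding function and its gradient is cheap: $\nabla_i \log\det M(x) = \mathrm{tr}\,(M(x)^{-1} A_i)$. The snippet below evaluates a function of this form for a hypothetical parametrization $M(x) = A_0 + \sum_i x_i A_i$ (the paper's actual construction for MESP differs), which is all a first-order method over linear constraints needs.

```python
import numpy as np

def make_ldet(A0, As):
    """log det M(x) with M(x) = A0 + sum_i x[i] * As[i], linear in x.

    Returns value and gradient; grad_i = trace(M(x)^{-1} @ As[i]).
    (Hypothetical M(x); not the paper's MESP construction.)
    """
    def f(x):
        M = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
        sign, ldet = np.linalg.slogdet(M)
        if sign <= 0:
            return -np.inf, None       # outside the domain of log det
        Minv = np.linalg.inv(M)
        grad = np.array([np.trace(Minv @ Ai) for Ai in As])
        return ldet, grad
    return f

# Tiny demo: 3 coordinates, 2x2 matrices.
rng = np.random.default_rng(1)
A0 = np.eye(2)
As = [np.diag(rng.uniform(0.1, 1.0, size=2)) for _ in range(3)]
f = make_ldet(A0, As)
val, grad = f(np.array([0.5, 0.5, 0.5]))
print(val, grad)
```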

On positive duality gaps in semidefinite programming

We study semidefinite programs (SDPs) with positive duality gaps, i.e., different optimal values in the primal and dual problems. These SDPs are considered extremely pathological: they are often unsolvable, and they also serve as models of more general pathological convex programs. We first fully characterize two-variable SDPs with … Read more
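
For readers unfamiliar with the phenomenon, a classical low-dimensional instance (the well-known example from Vandenberghe and Boyd's SDP survey, not one of the families characterized in the paper) already exhibits a gap of 1:

```latex
% Classical SDP with a positive duality gap (Vandenberghe & Boyd, 1996).
\[
p^\ast \;=\; \inf_{x \in \mathbb{R}^2} \; x_1
\quad \text{s.t.} \quad
\begin{pmatrix}
0   & x_1 & 0 \\
x_1 & x_2 & 0 \\
0   & 0   & x_1 + 1
\end{pmatrix} \succeq 0 .
\]
% The top-left 2x2 block is PSD only if x_1 = 0, so p* = 0. In the dual,
% the linear constraints force Z_22 = 0, hence Z_12 = 0 and Z_33 = 1,
% giving d* = -1. Both values are attained, yet p* - d* = 1 > 0.
```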

On the Complexity of Detecting Convexity over a Box

It has recently been shown that the problem of testing global convexity of polynomials of degree four is strongly NP-hard, answering an open question of N.Z. Shor. This result is minimal in the degree of the polynomial as far as global convexity is concerned. In a number of applications, however, one is interested in testing convexity … Read more
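
While certifying convexity over a box is hard in the sense discussed above, non-convexity can sometimes be disproved cheaply: a single sampled point of the box where the Hessian fails to be positive semidefinite is a certificate. A minimal sketch of that one-sided test, on a hypothetical quartic not taken from the paper:

```python
import numpy as np

def hessian_counterexample(hess, lo, hi, samples=1000, seed=0):
    """Search the box [lo, hi]^n for a point where the Hessian is not PSD.

    Finding one certifies non-convexity over the box; finding none proves
    nothing, since the decision problem itself is NP-hard (see entry above).
    """
    rng = np.random.default_rng(seed)
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        if np.linalg.eigvalsh(hess(x)).min() < -1e-12:
            return x
    return None

# Hypothetical quartic f(x, y) = x^4 + y^4 - 6 x^2 y^2; its Hessian is
# indefinite at, e.g., (1, 0), so it is not convex over [-1, 1]^2.
hess = lambda p: np.array([[12*p[0]**2 - 12*p[1]**2, -24*p[0]*p[1]],
                           [-24*p[0]*p[1], 12*p[1]**2 - 12*p[0]**2]])
print(hessian_counterexample(hess, lo=[-1, -1], hi=[1, 1]))
```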

The Proximal Alternating Minimization Algorithm for two-block separable convex optimization problems with linear constraints

The Alternating Minimization Algorithm (AMA) was proposed by Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of AMA does … Read more
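
A hedged sketch of Tseng's AMA scheme on a hypothetical instance (the concrete f, g, and matrices below are illustrative only): the distinctive feature is that the first subproblem carries no augmented quadratic term, which is exactly what strong convexity of f licenses.

```python
import numpy as np

# Tseng's AMA for  min f(x) + g(z)  s.t.  x - z = 0  (A = I, B = -I, b = 0),
# with f(x) = 0.5*||x - a||^2 (strongly convex) and g(z) = 0.5*||z||^2.
# The common minimizer of f(x) + g(x) is a/2.
a = np.array([2.0, -4.0])
rho = 1.0
y = np.zeros(2)
for _ in range(100):
    x = a - y                        # argmin_x f(x) + <y, x>   (no penalty term)
    z = (y + rho * x) / (1.0 + rho)  # argmin_z g(z) - <y, z> + (rho/2)||x - z||^2
    y = y + rho * (x - z)            # dual ascent step
print(x, z, a / 2)                   # both blocks approach a/2
```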

Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator

We introduce a distributionally robust maximum likelihood estimation model with a Wasserstein ambiguity set to infer the inverse covariance matrix of a p-dimensional Gaussian random vector from n independent samples. The proposed model minimizes the worst case (maximum) of Stein’s loss across all normal reference distributions within a prescribed Wasserstein distance from the normal distribution … Read more
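
The loss being robustified here is Stein's loss. As a hedged sketch, the snippet below computes one standard form of it for a precision-matrix estimate $X$ against a covariance $\Sigma$, namely $\mathrm{tr}(\Sigma X) - \log\det(\Sigma X) - p$; the exact convention is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def stein_loss(X, Sigma):
    """Stein's loss of a precision-matrix estimate X against covariance Sigma:
    tr(Sigma X) - log det(Sigma X) - p; it vanishes iff X = inv(Sigma)."""
    p = Sigma.shape[0]
    SX = Sigma @ X
    sign, ldet = np.linalg.slogdet(SX)
    assert sign > 0, "Sigma @ X must have positive determinant"
    return np.trace(SX) - ldet - p

# Sanity check: the loss is zero at the true precision matrix, positive elsewhere.
rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4))
Sigma = G @ G.T + 4 * np.eye(4)
print(stein_loss(np.linalg.inv(Sigma), Sigma))   # ~0
print(stein_loss(np.eye(4), Sigma))              # > 0
```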

A family of spectral gradient methods for optimization

We propose a family of spectral gradient methods, whose stepsize is determined by a convex combination of the short Barzilai-Borwein (BB) stepsize and the long BB stepsize. It is shown that each member of the family shares a certain quasi-Newton property in the sense of least squares. The family also includes some other gradient methods as … Read more
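
One literal reading of such a family, sketched below on a hypothetical quadratic (the paper's exact rule may differ): with $s = x_k - x_{k-1}$ and $y = g_k - g_{k-1}$, blend the long BB stepsize $s^\top s / s^\top y$ and the short BB stepsize $s^\top y / y^\top y$ through a convex-combination parameter.

```python
import numpy as np

def bb_family_gd(grad, x0, gamma=0.5, iters=100, alpha0=1e-3):
    """Gradient descent whose stepsize is a convex combination of the short
    and long Barzilai-Borwein stepsizes (gamma = 1: pure short BB;
    gamma = 0: pure long BB). A sketch of the idea, not the paper's method."""
    x, g = x0.copy(), grad(x0)
    x_new = x - alpha0 * g                    # bootstrap first step
    for _ in range(iters):
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha_long = (s @ s) / (s @ y)        # BB1 stepsize
        alpha_short = (s @ y) / (y @ y)       # BB2 stepsize
        alpha = gamma * alpha_short + (1 - gamma) * alpha_long
        x, g = x_new, g_new
        x_new = x - alpha * g
    return x_new

# Strongly convex quadratic demo: f(x) = 0.5 x^T A x - b^T x.
rng = np.random.default_rng(3)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = bb_family_gd(lambda x: A @ x - b, np.zeros(20), gamma=0.5)
print(np.linalg.norm(A @ x - b))
```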

Robust-to-Dynamics Optimization

A robust-to-dynamics optimization (RDO) problem is an optimization problem specified by two pieces of input: (i) a mathematical program (an objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a feasible set $\Omega\subseteq\mathbb{R}^n$), and (ii) a dynamical system (a map $g:\mathbb{R}^n\rightarrow\mathbb{R}^n$). Its goal is to minimize $f$ over the set $\mathcal{S}\subseteq\Omega$ of initial conditions that forever remain in $\Omega$ under … Read more
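
For a feel of the definition (a hedged sketch on a hypothetical instance, not the paper's algorithm): with linear dynamics $g(x) = Dx$ and a polytope $\Omega = \{x : Hx \le h\}$, the points that stay in $\Omega$ for the first $k$ steps satisfy $HD^j x \le h$ for $j = 0,\dots,k$. This outer-approximates $\mathcal{S}$ and tightens as $k$ grows, so minimizing a linear $f$ over it lower-bounds the RDO value.

```python
import numpy as np
from scipy.optimize import linprog

H = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])  # Omega = [-1, 1]^2
h = np.ones(4)
D = np.array([[0.8, 0.6], [-0.6, 0.8]])             # rotation dynamics g(x) = D x
c = np.array([-1.0, -1.0])                          # minimize f(x) = -x1 - x2

# Stack H D^j x <= h for j = 0..19: a truncated description of S.
rows, rhs, Dj = [], [], np.eye(2)
for _ in range(20):
    rows.append(H @ Dj)
    rhs.append(h)
    Dj = Dj @ D
res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
              bounds=[(None, None)] * 2)
print(res.x, res.fun)   # lower bound on the true RDO value (outer approximation)
```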

Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed using two previous iterates as an approximation of the local Lipschitz constant without running a linesearch. Thus, each iteration of the method requires only one evaluation of the monotone operator $F$ and of the proximal mapping of $g$. … Read more
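
Below is a hedged sketch of the fixed-stepsize Golden Ratio Algorithm (the entry's method additionally adapts the stepsize from two past iterates); the monotone operator, the box constraint, and the stepsize choice $\lambda \le \varphi/(2L)$ are assumptions of this sketch.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2          # golden ratio

def graal_fixed(F, prox, x0, lam, iters=2000):
    """Fixed-stepsize Golden Ratio Algorithm:
        x_bar <- ((PHI - 1) * x + x_bar) / PHI
        x     <- prox(x_bar - lam * F(x))
    One F-evaluation and one prox per iteration."""
    x, x_bar = x0.copy(), x0.copy()
    for _ in range(iters):
        x_bar = ((PHI - 1) * x + x_bar) / PHI
        x = prox(x_bar - lam * F(x))
    return x

# Hypothetical monotone VI: F(x) = M x + b with skew-symmetric M (L = 1),
# over the box [-1, 1]^2, whose indicator has a clip as proximal mapping.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.3, -0.5])
F = lambda x: M @ x + b
prox = lambda v: np.clip(v, -1.0, 1.0)
x = graal_fixed(F, prox, np.zeros(2), lam=PHI / 2)   # lam <= PHI/(2L) with L = 1
print(x, np.linalg.norm(x - prox(x - F(x))))         # natural residual ~ 0
```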