Efficient Computation of the Approximation Quality in Sandwiching Algorithms

Computing the approximation quality is a crucial step in every iteration of Sandwiching algorithms (also called Benson-type algorithms) used to approximate convex Pareto fronts, sets, or functions. Two quality indicators often used in these algorithms are the polyhedral gauge and the epsilon indicator. In this article, we develop an algorithm to compute the polyhedral gauge … Read more

Analysis of a Class of Minimization Problems Lacking Lower Semicontinuity

The minimization of non-lower semicontinuous functions is a difficult topic that has received little study. Among such functions is the Heaviside composite function: the composition of a Heaviside function with a possibly nonsmooth multivariate function. Unifying a statistical estimation problem with hierarchical selection of variables and a sample average approximation of composite chance … Read more
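As a concrete illustration (not taken from the paper), such a composition can be written in a few lines; the inner function `phi` below is a hypothetical nonsmooth max of affine pieces, chosen only to show why the composite fails to be lower semicontinuous:

```python
def heaviside(t):
    # Heaviside step with the convention H(0) = 1; with this convention
    # H is upper but not lower semicontinuous at t = 0.
    return 1.0 if t >= 0 else 0.0

def phi(x):
    # A possibly nonsmooth multivariate function: a max of affine pieces
    # (hypothetical example, not from the paper).
    return max(x[0] - 1.0, -x[1])

def heaviside_composite(x):
    # The Heaviside composite H(phi(x)): piecewise constant, and
    # non-lower semicontinuous on the boundary set {phi(x) = 0}.
    return heaviside(phi(x))
```

Approaching a point with `phi(x) = 0` from the region where `phi < 0` gives composite value 0, while the value at the point itself is 1, which is exactly the failure of lower semicontinuity the abstract refers to.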

Preconditioning for Generalized Jacobians with the ω-Condition Number

Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassical matrix condition number, the ω-condition number, in the context of optimal conditioning for low rank updating of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the … Read more
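A minimal sketch of this measure, under the assumption (the abstract is truncated here) that the arithmetic and geometric means are taken over the eigenvalues of the positive definite matrix:

```python
import numpy as np

def omega_condition(A):
    # omega-condition number of a symmetric positive definite matrix:
    # ratio of the arithmetic mean to the geometric mean of its
    # eigenvalues (assumption: the truncated abstract refers to
    # eigenvalue means).
    lam = np.linalg.eigvalsh(A)
    arithmetic = lam.mean()
    geometric = np.exp(np.log(lam).mean())  # log-sum form for stability
    return arithmetic / geometric

# By the AM-GM inequality, omega(A) >= 1, with equality exactly when
# all eigenvalues coincide (e.g. the identity matrix).
```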

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on … Read more
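For reference, the basic PDHG (Chambolle-Pock) iteration can be sketched on a toy strongly convex-concave instance; the problem data below are illustrative and not taken from the paper:

```python
import numpy as np

# Toy saddle-point instance (illustrative, not from the paper):
#   min_x max_y  <Kx, y> + (1/2)||x - c||^2 - (1/2)||y||^2,
# whose primal solution solves (I + K^T K) x = c.
K = np.array([[1.0, 2.0],
              [0.0, 1.0]])
c = np.array([1.0, -1.0])

L = np.linalg.norm(K, 2)        # spectral norm of K
tau = sigma = 0.9 / L           # step sizes with tau * sigma * ||K||^2 < 1

x = np.zeros(2)
y = np.zeros(2)
x_bar = x.copy()
for _ in range(2000):
    # dual step: prox of sigma * f*, with f*(y) = ||y||^2 / 2
    y = (y + sigma * (K @ x_bar)) / (1.0 + sigma)
    # primal step: prox of tau * g, with g(x) = ||x - c||^2 / 2
    x_new = (x - tau * (K.T @ y) + tau * c) / (1.0 + tau)
    x_bar = 2.0 * x_new - x     # extrapolation step (theta = 1)
    x = x_new

x_star = np.linalg.solve(np.eye(2) + K.T @ K, c)  # exact primal solution
```

On feasible, well-posed instances like this one the iterates converge; the paper's interest is in the displacement of this operator when the instance is infeasible or unbounded.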

Error estimate for regularized optimal transport problems via Bregman divergence

Regularization by the Shannon entropy enables us to efficiently and approximately solve optimal transport problems on a finite set. This paper is concerned with regularized optimal transport problems via Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error estimate becomes … Read more
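As a minimal sketch of the entropy-regularized case, Shannon-entropy regularization leads to Sinkhorn matrix scaling; the marginals, cost matrix, and regularization strength below are toy choices for illustration, not data from the paper:

```python
import numpy as np

# Toy entropy-regularized optimal transport on a 3-point set
# (illustrative data; eps is an assumed regularization strength).
mu = np.array([0.5, 0.3, 0.2])                 # source marginal
nu = np.array([0.4, 0.4, 0.2])                 # target marginal
grid = np.arange(3, dtype=float)
C = np.abs(grid[:, None] - grid[None, :])      # cost |i - j|
eps = 1.0                                      # entropic regularization

Kmat = np.exp(-C / eps)                        # Gibbs kernel
u = np.ones(3)
v = np.ones(3)
for _ in range(500):
    v = nu / (Kmat.T @ u)                      # scale to fit target marginal
    u = mu / (Kmat @ v)                        # scale to fit source marginal

P = u[:, None] * Kmat * v[None, :]             # regularized transport plan
cost = float((P * C).sum())                    # approximate transport cost
```

Smaller `eps` gives plans closer to the unregularized optimum at the price of slower convergence; quantifying that error, for Bregman divergences beyond the Shannon entropy, is the subject of the paper.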

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that for smooth convex optimization, by periodically utilizing longer steps, gradient descent’s state-of-the-art O(1/T) convergence guarantees can be improved by constant factors, conjecturing that an accelerated rate strictly faster than O(1/T) could be possible. Here we prove such a big-O gain, establishing gradient descent’s first accelerated convergence rate in this setting. Namely, we … Read more
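To illustrate the idea only, here is gradient descent with an occasional long step on a smooth convex quadratic; the simple periodic schedule below is a toy stand-in, not the certified step-size patterns analyzed in [1]:

```python
import numpy as np

# Gradient descent on the L-smooth convex quadratic
# f(x) = (1/2) x^T A x, with a long step every 8th iteration
# (illustrative schedule, not the certified pattern from [1]).
A = np.diag([1.0, 10.0])
L = 10.0                                    # smoothness constant

def grad(x):
    return A @ x

x = np.array([1.0, 1.0])
for k in range(100):
    # occasionally exceed the classical 1/L step size
    h = 3.0 / L if (k + 1) % 8 == 0 else 1.0 / L
    x = x - h * grad(x)

f_val = 0.5 * x @ A @ x                     # final objective value
```

Individual long steps can temporarily increase the objective along stiff directions; the analysis in [1] shows that, averaged over a whole pattern, the net progress beats constant-step gradient descent.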

Self-concordant Smoothing for Large-Scale Convex Composite Optimization

We introduce a notion of self-concordant smoothing for minimizing the sum of two convex functions, one of which is smooth and the other of which may be nonsmooth. The key highlight of our approach lies in a natural property of the resulting problem’s structure, which provides us with a variable-metric selection method and a step-length selection … Read more

Fast convergence of inertial primal-dual dynamics and algorithms for a bilinearly coupled saddle point problem

This paper is devoted to studying the convergence rates of a second-order dynamical system, and of its corresponding discretization, associated with a continuously differentiable, bilinearly coupled convex-concave saddle point problem. First, we consider the second-order dynamical system with an asymptotically vanishing damping term and show the existence and uniqueness of the trajectories as global twice continuously differentiable … Read more

Affine FR: an effective facial reduction algorithm for semidefinite relaxations of combinatorial problems

We develop a new method called affine FR for recovering Slater’s condition for semidefinite programming (SDP) relaxations of combinatorial optimization (CO) problems. Affine FR is a user-friendly method, as it is fully automatic and only requires a description of the problem. We provide a rigorous analysis of differences between affine FR and the existing … Read more

A generalized asymmetric forward-backward-adjoint algorithm for convex-concave saddle-point problem

The convex-concave minimax problem, also known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity. In this paper, we propose a generalized asymmetric forward-backward-adjoint (G-AFBA) algorithm to solve such problems by utilizing both proximal techniques and the interactive information of primal-dual updates. Besides enjoying … Read more