Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. While, to this day, optimal convergence rates are achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov’s algorithm), we show that our system is the first to feature closed-loop damping while exhibiting a rate arbitrarily close to the optimal one. … Read more
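To convey the flavor of the distinction, here is a minimal numerical sketch (not the paper's system): an inertial gradient flow \( \ddot{x} + \gamma\,\dot{x} + \nabla f(x) = 0 \) integrated with open-loop damping \( \gamma(t) = \alpha/t \) versus a state-dependent, Lyapunov-style closed-loop damping \( \gamma(x) = \sqrt{\|\nabla f(x)\|} \); both the damping law and the test problem are illustrative assumptions.

```python
import numpy as np

# Toy comparison on f(x) = 0.5 * ||A x||^2 (NOT the paper's system):
#   open-loop damping:   gamma(t) = alpha / t      (time-dependent only)
#   closed-loop damping: gamma(x) = sqrt(||grad f(x)||)  (state-dependent)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
grad = lambda x: A.T @ (A @ x)
f = lambda x: 0.5 * np.linalg.norm(A @ x) ** 2

def simulate(damping, T=40.0, h=1e-3):
    x, v, t = np.ones(10), np.zeros(10), 1.0   # start the clock at t = 1
    while t < T:
        g = grad(x)
        v += h * (-damping(t, x, g) * v - g)   # semi-implicit Euler step
        x += h * v
        t += h
    return f(x)

open_loop   = simulate(lambda t, x, g: 3.0 / t)                  # Nesterov-like
closed_loop = simulate(lambda t, x, g: np.sqrt(np.linalg.norm(g)))
print(f"f after open-loop damping:   {open_loop:.3e}")
print(f"f after closed-loop damping: {closed_loop:.3e}")
```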

Distributionally robust optimization through the lens of submodularity

Distributionally robust optimization is used to solve decision-making problems under adversarial uncertainty, where the distribution of the uncertainty is itself ambiguous. In this paper, we identify a class of these instances that is solvable in polynomial time by viewing it through the lens of submodularity. We show that the sharpest upper bound on the … Read more
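As a generic illustration of the worst-case-expectation bounds such models produce (the scenario support and moment-box ambiguity set below are assumptions, not the paper's model), the bound can be computed as a small linear program:

```python
import cvxpy as cp
import numpy as np

# Worst-case expected loss over all distributions on a fixed scenario
# support whose mean lies within eps of an empirical mean (illustrative
# moment-based DRO; the paper's ambiguity sets and submodular structure
# are not reproduced here).
z = np.linspace(-2.0, 2.0, 41)            # scenario support
loss = np.maximum(z, 0.3 * z + 1.0)       # piecewise-linear loss
mu_hat, eps = 0.1, 0.05                   # empirical mean, ambiguity radius

p = cp.Variable(len(z), nonneg=True)      # worst-case distribution
constraints = [cp.sum(p) == 1,
               cp.abs(z @ p - mu_hat) <= eps]
bound = cp.Problem(cp.Maximize(loss @ p), constraints).solve()
print(f"worst-case expected loss: {bound:.4f}")
```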

From Optimization to Control: Quasi Policy Iteration

Recent control algorithms for Markov decision processes (MDPs) have been designed using an implicit analogy with well-established optimization algorithms. In this paper, we adopt the quasi-Newton method (QNM) from convex optimization to introduce a novel control algorithm coined quasi-policy iteration (QPI). In particular, QPI is based on a novel approximation of the “Hessian” matrix … Read more
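The optimization analogy at work here is classical: policy iteration is Newton's method applied to the Bellman optimality residual. The sketch below shows that Newton view on a random MDP; QPI's actual quasi-Newton "Hessian" approximation is given in the paper and not reproduced here.

```python
import numpy as np

# Policy iteration as Newton's method: the Bellman residual
#   F(V) = max_a (r_a + gamma * P_a V) - V
# has "Jacobian" gamma * P_pi - I at the greedy policy pi, so the Newton
# step V <- V - J^{-1} F(V) is exactly policy evaluation. QPI (per the
# abstract) replaces this exact linear solve with an approximation.
rng = np.random.default_rng(1)
nS, nA, gamma = 30, 4, 0.95
P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # P[a, s, :] transitions
r = rng.random((nA, nS))                        # r[a, s] rewards

V = np.zeros(nS)
for it in range(100):
    Q = r + gamma * (P @ V)                     # Q[a, s]
    pi = Q.argmax(axis=0)                       # greedy policy
    P_pi = P[pi, np.arange(nS)]                 # (nS, nS) induced chain
    r_pi = r[pi, np.arange(nS)]
    V_new = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)  # Newton step
    if np.allclose(V_new, V, atol=1e-10):
        break
    V = V_new
print(f"policy iteration converged in {it} Newton steps")
```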

Exact Matrix Completion via High-Rank Matrices in Sum-of-Squares Relaxations

We study exact matrix completion from partially available data with hidden connectivity patterns. Cosse and Demanet showed in 2021 that exact matrix completion is possible with Lasserre’s relaxation, using the trace of the variable matrix as the objective function, when the given data are structured in a chain format. In this study, we introduce … Read more
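For orientation, the first level of this relaxation family is the classical trace-minimization (nuclear-norm) SDP for completion, sketched below; the paper's higher-rank Lasserre/sum-of-squares machinery is substantially stronger and not reproduced here.

```python
import cvxpy as cp
import numpy as np

# Classical trace-minimization SDP for low-rank matrix completion:
# minimize (tr W1 + tr W2)/2 subject to [[W1, X], [X^T, W2]] >> 0 and
# agreement with the observed entries (equivalent to nuclear-norm
# minimization; illustrative baseline only).
rng = np.random.default_rng(2)
n, rank = 8, 2
M = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
mask = (rng.random((n, n)) < 0.7).astype(float)   # observed entries

X = cp.Variable((n, n))
W1 = cp.Variable((n, n), symmetric=True)
W2 = cp.Variable((n, n), symmetric=True)
lifted = cp.bmat([[W1, X], [X.T, W2]])
prob = cp.Problem(cp.Minimize(0.5 * (cp.trace(W1) + cp.trace(W2))),
                  [lifted >> 0, cp.multiply(mask, X - M) == 0])
prob.solve()
print("recovery error:", np.linalg.norm(X.value - M))
```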

Higher-Order Newton Methods with Polynomial Work per Iteration

We present generalizations of Newton’s method that incorporate derivatives of an arbitrary order \(d\) but maintain a polynomial dependence on dimension in their cost per iteration. At each step, our \(d^{\text{th}}\)-order method uses semidefinite programming to construct and minimize a sum-of-squares-convex approximation to the \(d^{\text{th}}\)-order Taylor expansion of the function we wish to … Read more
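In formulas, the update the abstract describes has the following shape (the exact construction of the approximating polynomial is in the paper; the form below is a sketch):

\[
x_{k+1} \in \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \; q_k(x), \qquad
q_k \;\approx\; T_d(x; x_k) := \sum_{|\alpha| \le d} \frac{\partial^{\alpha} f(x_k)}{\alpha!}\,(x - x_k)^{\alpha},
\]

with \(q_k\) constrained to be sos-convex, so that both constructing \(q_k\) and minimizing it reduce to semidefinite programs whose sizes grow polynomially in the dimension \(n\).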

Efficient Approximation Quality Computation for Sandwiching Algorithms for Convex Multicriteria Optimization

Computing the approximation quality is a crucial step in every iteration of Sandwiching algorithms (also called Benson-type algorithms), which are used for the approximation of convex Pareto fronts, sets, or functions. Two quality indicators often used in these algorithms are the polyhedral gauge and the epsilon indicator. In this article, we develop an algorithm to compute the polyhedral gauge … Read more
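To fix ideas, the additive epsilon indicator between finite point sets has a one-line formula, sketched below for minimization; the polyhedral gauge computation studied in the article is more involved and not reproduced here.

```python
import numpy as np

# Additive epsilon indicator (minimization): the smallest shift eps such
# that every reference point r in R is weakly dominated by some point a
# in the approximation A after subtracting eps from all coordinates:
#   I_eps(A, R) = max_{r in R} min_{a in A} max_i (a_i - r_i).
def eps_indicator(A, R):
    diffs = A[:, None, :] - R[None, :, :]          # shape (m, n, k)
    return diffs.max(axis=2).min(axis=0).max()

A = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])  # inner approximation
R = np.array([[0.0, 0.9], [0.4, 0.4], [0.9, 0.0]])  # reference points
print(f"epsilon indicator: {eps_indicator(A, R):.2f}")  # 0.10
```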

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that for smooth convex optimization, by periodically utilizing longer steps, gradient descent’s state-of-the-art O(1/T) convergence guarantee can be improved by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) could be possible. Here we prove such a big-O gain, establishing gradient descent’s first accelerated convergence rate in this setting. Namely, we … Read more
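The mechanism is easy to try out: run gradient descent with a repeating nonconstant step-size pattern whose average exceeds the textbook 1/L step. The pattern below is a toy choice for illustration only; the certified patterns and rates come from the papers' computer-assisted analyses.

```python
import numpy as np

# Gradient descent with a periodic long-step pattern vs. constant 1/L
# on least squares. Individual steps may overshoot (4.9/L), but the
# pattern as a whole still contracts and averages more progress.
rng = np.random.default_rng(3)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
L = np.linalg.norm(A.T @ A, 2)                  # smoothness constant
grad = lambda x: A.T @ (A @ x - b)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])

def run(steps, T=300):
    x = np.zeros(20)
    for k in range(T):
        x -= (steps[k % len(steps)] / L) * grad(x)
    return f(x) - f_star

print(f"constant 1/L:       {run([1.0]):.3e}")
print(f"periodic long-step: {run([1.5, 4.9, 1.5, 1.5]):.3e}")
```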

Self-concordant Smoothing for Large-Scale Convex Composite Optimization

We introduce a notion of self-concordant smoothing for minimizing the sum of two convex functions, one of which is smooth and the other of which may be nonsmooth. The key highlight of our approach is a natural property of the resulting problem’s structure, which provides us with a variable-metric selection method and a step-length selection rule … Read more
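A minimal sketch of the smoothing idea, assuming \(g(x) = \lambda\|x\|_1\) and the standard pseudo-Huber surrogate \(g_\mu(x) = \lambda\sum_i \big(\sqrt{\mu^2 + x_i^2} - \mu\big)\) as the smooth stand-in (the paper's smoothing framework, variable metric, and step-length rule are richer than this):

```python
import numpy as np

# Plain gradient descent on the smoothed composite f + g_mu, where
# f(x) = 0.5 * ||A x - b||^2 and g_mu is a pseudo-Huber smoothing of
# lambda * ||x||_1 (illustrative surrogate, not the paper's method).
rng = np.random.default_rng(4)
A, b = rng.standard_normal((50, 30)), rng.standard_normal(50)
lam, mu = 0.1, 1e-3

def grad_smoothed(x):
    data = A.T @ (A @ x - b)                 # gradient of the smooth part
    reg = lam * x / np.sqrt(mu**2 + x**2)    # gradient of g_mu
    return data + reg

L = np.linalg.norm(A.T @ A, 2) + lam / mu    # crude smoothness bound
x = np.zeros(30)
for _ in range(2000):
    x -= grad_smoothed(x) / L
print("nonzeros (|x_i| > 1e-3):", int((np.abs(x) > 1e-3).sum()))
```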

Fast convergence of inertial primal-dual dynamics and algorithms for a bilinearly coupled saddle point problem

This paper is devoted to studying the convergence rates of a second-order dynamical system and its corresponding discretization associated with a continuously differentiable, bilinearly coupled convex-concave saddle point problem. First, we consider the second-order dynamical system with an asymptotically vanishing damping term and show the existence and uniqueness of the trajectories as global twice continuously differentiable … Read more
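For a bilinearly coupled saddle function \(\mathcal{L}(x, y) = f(x) + \langle Kx, y\rangle - g(y)\), one common form of such a second-order system with asymptotically vanishing damping \(\alpha/t\) reads as follows; the paper's system may carry additional extrapolation or correction terms:

\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) = -\nabla f(x(t)) - K^{\top} y(t),
\qquad
\ddot{y}(t) + \frac{\alpha}{t}\,\dot{y}(t) = K x(t) - \nabla g(y(t)).
\]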

Affine FR: an effective facial reduction algorithm for semidefinite relaxations of combinatorial problems

We develop a new method called \emph{affine FR} for recovering Slater’s condition for semidefinite programming (SDP) relaxations of combinatorial optimization (CO) problems. Affine FR is a user-friendly method, as it is fully automatic and requires only a description of the problem. We provide a rigorous analysis of the differences between affine FR and existing methods. … Read more
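To see what one facial reduction step looks like in practice (a hand-rolled illustration; affine FR automates and refines this), the sketch below solves the standard auxiliary SDP: a certificate \(W = \sum_i y_i A_i \succeq 0\) with \(b^{\top}y = 0\) and \(W \neq 0\) proves Slater fails, since every feasible \(X\) then satisfies \(\langle W, X\rangle = 0\) and hence lies in the smaller face with \(\mathrm{range}(X) \subseteq \mathrm{null}(W)\).

```python
import cvxpy as cp
import numpy as np

# Auxiliary facial reduction SDP for {X >> 0 : <A_i, X> = b_i}.
# Example: X_11 = 0 together with X >> 0 forces the first row/column of
# X to vanish, so Slater fails; the certificate is W = e1 e1^T.
n = 3
A1 = np.zeros((n, n)); A1[0, 0] = 1.0            # constraint X_11 = 0
A2 = np.eye(n)                                   # constraint trace(X) = 1
Amats, b = [A1, A2], np.array([0.0, 1.0])

y = cp.Variable(len(Amats))
W = cp.Variable((n, n), PSD=True)
constraints = [W == sum(y[i] * Amats[i] for i in range(len(Amats))),
               b @ y == 0,
               cp.trace(W) == 1]                 # normalization: W != 0
cp.Problem(cp.Minimize(0), constraints).solve()
print("reducing certificate W:\n", np.round(W.value, 3))
```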