The convergence rate of the Sandwiching algorithm for convex bounded multiobjective optimization

Sandwiching algorithms, also known as Benson-type algorithms, approximate the nondominated set of convex bounded multiobjective optimization problems by constructing and iteratively improving polyhedral inner and outer approximations. The approximation quality is estimated as the distance between the inner and outer approximations, measured with a set-valued metric. The convergence of the algorithm is evaluated … Read more
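
As a rough illustration of the algorithm whose convergence rate is analyzed, here is a minimal sketch of a generic sandwiching loop for a convex bi-objective (minimization) problem. It is not the paper's algorithm: the `scalarize(w)` oracle is an assumption of this sketch, returning the image point that minimizes the weighted sum \(w^\top f(x)\) over the feasible set, and the per-facet gap below is only a crude stand-in for the set-valued quality estimate mentioned above.

```python
import numpy as np

def sandwich(scalarize, tol=1e-4, max_iter=100):
    # start from the two corner points of the Pareto front
    # (minimizers of each objective separately)
    pts = [scalarize(np.array([1.0, 0.0])), scalarize(np.array([0.0, 1.0]))]
    for _ in range(max_iter):
        pts.sort(key=lambda y: y[0])
        best_gap, best_pt = 0.0, None
        # each segment between consecutive points is a facet of the inner approximation
        for p, q in zip(pts[:-1], pts[1:]):
            w = np.array([p[1] - q[1], q[0] - p[0]])   # inward-pointing facet normal
            if np.linalg.norm(w) == 0.0:
                continue
            w = w / np.linalg.norm(w)
            y = scalarize(w)              # nondominated point supporting the outer approximation
            gap = w @ p - w @ y           # distance between inner facet and outer support along w
            if gap > best_gap:
                best_gap, best_pt = gap, y
        if best_pt is None or best_gap <= tol:
            return pts                    # inner and outer approximations are tol-close
        pts.append(best_pt)               # refine the inner approximation with the new point
    return pts
```

Each accepted point simultaneously tightens the inner approximation (the convex hull of the computed points) and, via the supporting half-space \(\{y : w^\top y \ge w^\top y^*\}\), the outer approximation, which this sketch does not store explicitly.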

Accelerated Gradient Dynamics on Riemannian Manifolds: Faster Rate and Trajectory Convergence

In order to minimize a differentiable geodesically convex function, we study a second-order dynamical system on Riemannian manifolds with an asymptotically vanishing damping term of the form \(\alpha/t\). For positive values of \(\alpha\), we derive convergence rates for the objective values and convergence of the trajectory. We emphasize the crucial role of the curvature of the … Read more
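
By analogy with the Euclidean vanishing-damping ODE of Su, Boyd, and Candès, the system studied is presumably of the following form (a sketch of the formulation; the paper's exact statement may differ):

\[
\frac{D}{\mathrm{d}t}\,\dot{x}(t) \;+\; \frac{\alpha}{t}\,\dot{x}(t) \;+\; \operatorname{grad} f\bigl(x(t)\bigr) \;=\; 0,
\]

where \(D/\mathrm{d}t\) denotes the covariant derivative along the trajectory and \(\operatorname{grad} f\) the Riemannian gradient. In the Euclidean case this reduces to \(\ddot{x}(t) + (\alpha/t)\,\dot{x}(t) + \nabla f(x(t)) = 0\), for which the objective values are known to decay at the rate \(O(1/t^2)\) when \(\alpha \ge 3\).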

Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. While, to this day, optimal convergence rates are achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov’s algorithm), we show that our system is the first to feature closed-loop damping while exhibiting a rate arbitrarily close to the optimal one. … Read more
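
To make the open-loop versus closed-loop distinction concrete (a schematic comparison, not the paper's exact system): in an open-loop method the damping coefficient is prescribed as an explicit function of time, as in

\[
\ddot{x}(t) \;+\; \frac{\alpha}{t}\,\dot{x}(t) \;+\; \nabla f(x(t)) \;=\; 0,
\]

whereas closed-loop damping is a feedback of the current state,

\[
\ddot{x}(t) \;+\; \gamma\bigl(x(t), \dot{x}(t)\bigr)\,\dot{x}(t) \;+\; \nabla f(x(t)) \;=\; 0,
\]

with the coefficient \(\gamma\) determined, as the title suggests, through a Lyapunov function of the system rather than through the clock.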

Distributionally robust optimization through the lens of submodularity

Distributionally robust optimization is used to solve decision-making problems under adversarial uncertainty, where the distribution of the uncertainty is itself ambiguous. In this paper, we identify a class of these instances that is solvable in polynomial time by viewing it through the lens of submodularity. We show that the sharpest upper bound on the … Read more
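
Schematically (and hedging on the exact ambiguity set the paper considers), the quantity of interest is a worst-case expectation of the form

\[
\sup_{P \in \mathcal{P}} \; \mathbb{E}_{\tilde{\xi} \sim P}\bigl[f(x, \tilde{\xi})\bigr],
\]

where \(\mathcal{P}\) collects all distributions consistent with the available information (for example, prescribed marginals or moments); the submodularity viewpoint concerns when this sharpest upper bound can be computed in polynomial time.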

From Optimization to Control: Quasi Policy Iteration

Recent control algorithms for Markov decision processes (MDPs) have been designed using an implicit analogy with well-established optimization algorithms. In this paper, we adopt the quasi-Newton method (QNM) from convex optimization to introduce a novel control algorithm coined quasi-policy iteration (QPI). In particular, QPI is based on a novel approximation of the “Hessian” matrix … Read more
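
For context, the sketch below is plain policy iteration for a finite MDP. The exact policy-evaluation solve is the step that plays the role of the Newton step in the optimization analogy and, per the abstract, the step that QPI replaces with a cheaper quasi-Newton-style approximation; the QPI approximation itself is not reproduced here.

```python
import numpy as np

def policy_iteration(P, R, gamma, tol=1e-10):
    """Standard policy iteration for a finite MDP.

    P: transition tensor of shape (A, S, S); R: reward matrix of shape (S, A).
    The linear solve with (I - gamma * P_pi) is the "Newton step" in the
    analogy with convex optimization; QPI would approximate this matrix.
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    while True:
        q = R.T + gamma * (P @ v)          # action values, shape (A, S)
        pi = q.argmax(axis=0)              # greedy policy, shape (S,)
        P_pi = P[pi, np.arange(S), :]      # transition matrix under pi
        r_pi = R[np.arange(S), pi]         # rewards under pi
        v_new = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)  # exact policy evaluation
        if np.max(np.abs(v_new - v)) < tol:
            return pi, v_new
        v = v_new
```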

Exact Matrix Completion via High-Rank Matrices in Sum-of-Squares Relaxations

We study exact matrix completion from partially available data with hidden connectivity patterns. Cosse and Demanet recently (2021) showed that exact matrix completion is possible with Lasserre’s relaxation, using the trace of the variable matrix as the objective function, when the given data is structured in a chain format. In this study, we introduce … Read more
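
Schematically (a rough sketch only, hedging on the precise moment constraints), the trace-objective relaxation referred to above takes the form

\[
\min_{Y \succeq 0} \ \operatorname{tr}(Y)
\quad \text{s.t.} \quad Y \ \text{obeys the structural constraints of Lasserre’s relaxation and the entries of } Y \text{ corresponding to observed data equal the given values,}
\]

so that minimizing the trace of a positive semidefinite matrix acts as a low-rank surrogate while the constraints encode the partially available, chain-structured data.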

Higher-Order Newton Methods with Polynomial Work per Iteration

We present generalizations of Newton’s method that incorporate derivatives of an arbitrary order \(d\) but maintain a polynomial dependence on dimension in their cost per iteration. At each step, our \(d^{\text{th}}\)-order method uses semidefinite programming to construct and minimize a sum-of-squares-convex (sos-convex) approximation to the \(d^{\text{th}}\)-order Taylor expansion of the function we wish to … Read more
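
In symbols (a schematic rendering of the step just described): at the current iterate \(x_k\), the \(d^{\text{th}}\)-order Taylor expansion of \(f\) is

\[
T_d(x) \;=\; \sum_{i=0}^{d} \frac{1}{i!}\,\nabla^i f(x_k)[x - x_k, \ldots, x - x_k],
\]

where the \(i^{\text{th}}\) derivative tensor is applied to \(i\) copies of \(x - x_k\). The next iterate then minimizes an sos-convex polynomial model built from \(T_d\), a problem that can be posed and solved as a semidefinite program; the precise construction of that model, and the regularization that keeps the per-iteration cost polynomial in the dimension, are given in the paper.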

Efficient Approximation Quality Computation for Sandwiching Algorithms for Convex Multicriteria Optimization

Computing the approximation quality is a crucial step in every iteration of Sandwiching algorithms (also called Benson-type algorithms) used for the approximation of convex Pareto fronts, sets, or functions. Two quality indicators often used in these algorithms are the polyhedral gauge and the epsilon indicator. In this article, we develop an algorithm to compute the polyhedral gauge … Read more
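
For reference, the standard definitions of the two indicators are recalled below (the paper may work with problem-specific variants): the gauge (Minkowski functional) of a polyhedral set \(C\) containing the origin, and the additive epsilon indicator of an approximation \(A\) relative to a reference set \(B\), for minimization problems:

\[
\gamma_C(y) \;=\; \inf\{\lambda > 0 \,:\, y \in \lambda C\},
\qquad
\varepsilon(A, B) \;=\; \sup_{b \in B}\, \inf_{a \in A}\, \max_{i}\,(a_i - b_i).
\]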

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that, for smooth convex optimization, periodically taking longer steps improves gradient descent’s state-of-the-art O(1/T) convergence guarantees by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) could be possible. Here we prove such a big-O gain, establishing gradient descent’s first accelerated convergence rate in this setting. Namely, we … Read more
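
The mechanism being analyzed can be sketched as follows: gradient descent run with a repeating pattern of normalized steplengths, most of them short and a few of them long. The pattern used below is arbitrary (chosen only so that the toy example converges) and is not one of the certified patterns from the paper.

```python
import numpy as np

def gd_with_long_steps(grad_f, x0, L, pattern, n_cycles):
    """Gradient descent with a cyclic stepsize pattern (a sketch).

    `pattern` lists normalized steplengths h; the actual stepsize is h / L,
    where L is the gradient Lipschitz constant.  The certified long-step
    patterns and the resulting accelerated rate are the subject of the paper
    and are not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        for h in pattern:
            x = x - (h / L) * grad_f(x)
    return x

# toy illustration on f(x) = ||x||^2 / 2 (so grad f(x) = x and L = 1);
# the pattern below is illustrative only, not taken from the paper
x_final = gd_with_long_steps(lambda x: x, np.ones(5), L=1.0,
                             pattern=[1.5, 1.5, 2.9], n_cycles=25)
```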

Self-concordant smoothing in proximal quasi-Newton algorithms for large-scale convex composite optimization

We introduce a notion of self-concordant smoothing for minimizing the sum of two convex functions, one of which is smooth and the other nonsmooth. The key highlight is a natural property of the resulting problem’s structure that yields a variable metric selection method and a step length rule especially suited to proximal quasi-Newton algorithms. Also, … Read more
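
A bare-bones rendering of the kind of method the abstract refers to: a generic proximal quasi-Newton loop for \(f(x) + \lambda\|x\|_1\) with a diagonal variable metric. The self-concordant-smoothing-based metric and step-length rule of the paper are not reproduced here; the diagonal BB-style curvature estimate below is an illustrative stand-in.

```python
import numpy as np

def prox_l1_scaled(z, lam, d):
    # prox of lam*||.||_1 in the metric diag(d): per-coordinate soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)

def proximal_quasi_newton(grad_f, x0, lam, n_iter=100):
    """Generic proximal quasi-Newton loop for f(x) + lam*||x||_1 (a sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = np.ones_like(x)                     # initial diagonal metric H = diag(d)
    for _ in range(n_iter):
        z = x - g / d                       # variable-metric gradient step
        x_new = prox_l1_scaled(z, lam, d)   # scaled proximal step
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if s @ s > 0:                       # diagonal curvature estimate (illustrative, not the paper's rule)
            d = np.full_like(x, max(1e-8, (s @ y) / (s @ s)))
        x, g = x_new, g_new
    return x
```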