Solving separable convex optimization problems: Faster prediction-correction framework

He and Yuan’s prediction-correction framework [SIAM J. Numer. Anal. 50: 700-709, 2012] provides convergent algorithms for solving separable convex optimization problems at a rate of $O(1/t)$ (where $t$ denotes the iteration count) in both the ergodic (averaged-iterate) and pointwise senses. This paper presents a faster prediction-correction framework at a rate of $O(1/t)$ …
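For orientation, the prediction-correction template of He and Yuan can be sketched as follows (a simplified rendering; the matrices $Q$ and $M$ and the step parameter $\alpha$ stand in for the conditions imposed in the framework). The prediction step produces $\tilde{w}^k \in \Omega$ satisfying

\[
\theta(u) - \theta(\tilde{u}^k) + (w - \tilde{w}^k)^\top F(\tilde{w}^k) \;\ge\; (w - \tilde{w}^k)^\top Q\,(w^k - \tilde{w}^k) \qquad \forall\, w \in \Omega,
\]

and the correction step then updates

\[
w^{k+1} = w^k - \alpha M\,(w^k - \tilde{w}^k),
\]

with the ergodic rate stated for the average $\bar{w}_t = \tfrac{1}{t}\sum_{k=1}^{t} \tilde{w}^k$.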

Computational Guarantees for Restarted PDHG for LP based on “Limiting Error Ratios” and LP Sharpness

In recent years, there has been growing interest in solving linear optimization problems – or more simply “LP” – using first-order methods, in order to avoid the costly matrix factorizations that traditional methods require on huge-scale LP instances. The restarted primal-dual hybrid gradient method (PDHG) – together with some heuristic techniques – has emerged as a …
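As a concrete reference point, here is a minimal sketch (not the paper's implementation) of restarted PDHG on a standard-form LP; the fixed restart frequency and the step sizes `tau` and `sigma` are illustrative assumptions, with adaptive restarting and step-size tuning left to the paper:

```python
import numpy as np

def restarted_pdhg(c, A, b, tau, sigma, n_epochs=20, inner_iters=200):
    """Sketch of restarted PDHG for the standard-form LP
        min c^T x   s.t.   A x = b,  x >= 0,
    via the saddle-point form  min_{x >= 0} max_y  c^T x + y^T (b - A x).
    Restarts fire at a fixed frequency and reset to the epoch's ergodic
    average. Step sizes should satisfy tau * sigma * ||A||_2^2 <= 1.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_epochs):
        x_sum, y_sum = np.zeros(n), np.zeros(m)
        for _ in range(inner_iters):
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # projected primal step
            y = y + sigma * (b - A @ (2.0 * x_new - x))       # dual step with extrapolated primal
            x = x_new
            x_sum += x
            y_sum += y
        x, y = x_sum / inner_iters, y_sum / inner_iters       # restart at the averaged iterate
    return x, y
```

The extrapolated point `2.0 * x_new - x` in the dual step is what distinguishes PDHG from a plain Arrow-Hurwicz primal-dual gradient step.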

On the Relation Between LP Sharpness and Limiting Error Ratio and Complexity Implications for Restarted PDHG

There has been a recent surge in the development of first-order methods (FOMs) for solving huge-scale linear programming (LP) problems. The attractiveness of FOMs for LP stems in part from the fact that they avoid costly matrix factorization computations. However, the efficiency of FOMs is significantly influenced – both in theory and in practice – by …
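For orientation, LP sharpness is commonly stated as follows (a hedged sketch of the standard notion, not necessarily the paper's exact definition): with $X^\star$ the optimal solution set and $z^\star$ the optimal value, the LP is $\mu$-sharp if the objective gap grows at least linearly in the distance to $X^\star$,

\[
c^\top x - z^\star \;\ge\; \mu \,\operatorname{dist}(x, X^\star) \qquad \text{for all feasible } x .
\]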

Convergence Rate of Projected Subgradient Method with Time-varying Step-sizes

We establish the optimal ergodic convergence rate for the classical projected subgradient method with time-varying step-sizes. This convergence rate remains the same even if we place slightly larger weights on the most recent iterates, thereby relaxing the ergodic sense.
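For reference, a minimal sketch of the method under discussion (the weighting scheme here is the classical step-size-weighted ergodic average; the paper's modified weights are not reproduced):

```python
import numpy as np

def projected_subgradient(subgrad, proj, x0, steps):
    """Projected subgradient method with time-varying step-sizes:
        x_{k+1} = proj_C(x_k - t_k * g_k),  g_k a subgradient of f at x_k.
    Returns the step-size-weighted ergodic average, the point to which the
    classical nonsmooth convergence guarantees apply.
    """
    x = x0.copy()
    weighted_sum, total_weight = np.zeros_like(x0), 0.0
    for t_k in steps:
        x = proj(x - t_k * subgrad(x))
        weighted_sum += t_k * x
        total_weight += t_k
    return weighted_sum / total_weight

# Usage: minimize f(x) = ||x - a||_1 over the Euclidean unit ball.
a = np.array([2.0, -1.0])
subgrad = lambda x: np.sign(x - a)                    # a subgradient of f at x
proj = lambda x: x / max(1.0, np.linalg.norm(x))      # projection onto the unit ball
x_bar = projected_subgradient(subgrad, proj, np.zeros(2),
                              [1.0 / np.sqrt(k + 1) for k in range(1000)])
```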

Doubly stochastic primal-dual splitting algorithm with variance reduction for saddle point problems

Structured saddle-point problems involving infimal convolutions in real Hilbert spaces arise in many applied mathematics disciplines. We develop a stochastic primal-dual splitting algorithm with loopless variance reduction for solving this generic problem. We first prove the weak almost sure convergence of the iterates. We then demonstrate that our algorithm achieves …
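For context, the "loopless" variance-reduction device generally works as sketched below (an SVRG-style estimator in its generic form; the paper's doubly stochastic primal-dual setting is richer than this single-oracle sketch):

```python
import numpy as np

def loopless_vr_step(grads, x, ref, full_grad_ref, p, rng):
    """One step of a loopless (probabilistically restarted) variance-reduced
    gradient estimator for f(x) = (1/n) * sum_i f_i(x):
        g = grad_i(x) - grad_i(ref) + full_grad(ref),
    which is an unbiased estimate of grad f(x). Instead of refreshing the
    reference point in an outer loop, it is refreshed with probability p.
    """
    n = len(grads)
    i = rng.integers(n)
    g = grads[i](x) - grads[i](ref) + full_grad_ref
    if rng.random() < p:                              # loopless refresh
        ref = x.copy()
        full_grad_ref = sum(gr(ref) for gr in grads) / n
    return g, ref, full_grad_ref
```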

The convergence rate of the Sandwiching algorithm for convex bounded multiobjective optimization

Sandwiching algorithms, also known as Benson-type algorithms, approximate the nondominated set of convex, bounded multiobjective optimization problems by constructing and iteratively improving polyhedral inner and outer approximations. Using a set-valued metric, the approximation quality is estimated as the distance between the inner and outer approximations. The convergence of the algorithm is evaluated …
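A natural candidate for such a set-valued metric (an assumption on our part, as the abstract does not specify it) is the Hausdorff distance between the inner approximation $I_k$ and the outer approximation $O_k$,

\[
d_H(I_k, O_k) \;=\; \max\Bigl\{ \sup_{a \in I_k} \inf_{b \in O_k} \|a - b\|,\; \sup_{b \in O_k} \inf_{a \in I_k} \|a - b\| \Bigr\},
\]

which gives a computable quality estimate because the nondominated set is sandwiched between the two approximations.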

Accelerated Gradient Dynamics on Riemannian Manifolds: Faster Rate and Trajectory Convergence

In order to minimize a differentiable geodesically convex function, we study a second-order dynamical system on Riemannian manifolds with an asymptotically vanishing damping term of the form \(\alpha/t\). For positive values of \(\alpha\), convergence rates for the objective values and convergence of the trajectory are derived. We emphasize the crucial role of the curvature of the …
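Concretely, the Euclidean prototype of such a system is the asymptotically vanishing damping ODE of Su, Boyd, and Candès, and its Riemannian analogue replaces the acceleration by the covariant derivative and the gradient by the Riemannian gradient (a sketch of the standard form, which may differ in detail from the paper's system):

\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0
\qquad \longrightarrow \qquad
\nabla_{\dot{x}}\dot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \operatorname{grad} f(x(t)) = 0 .
\]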

Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. While, to this day, optimal convergence rates are achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov’s algorithm), we show that our system is the first to feature closed-loop damping while exhibiting a rate arbitrarily close to the optimal one. …
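Schematically, the distinction is the following: an open-loop method prescribes the damping as an explicit function of time, making the system non-autonomous, whereas a closed-loop method computes it from the current state, keeping the system autonomous (generic forms, not the paper's specific Lyapunov construction):

\[
\underbrace{\ddot{x} + \tfrac{\alpha}{t}\,\dot{x} + \nabla f(x) = 0}_{\text{open-loop (non-autonomous)}}
\qquad \text{vs.} \qquad
\underbrace{\ddot{x} + \gamma(x, \dot{x})\,\dot{x} + \nabla f(x) = 0}_{\text{closed-loop (autonomous)}} .
\]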

Distributionally robust optimization through the lens of submodularity

Distributionally robust optimization is used to solve decision-making problems under adversarial uncertainty, where the distribution of the uncertainty is itself ambiguous. In this paper, we identify a class of these instances that is solvable in polynomial time by viewing it through the lens of submodularity. We show that the sharpest upper bound on the …
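For reference, the generic distributionally robust problem has the minimax form (standard notation, not specific to this paper)

\[
\min_{x \in \mathcal{X}} \; \sup_{P \in \mathcal{P}} \; \mathbb{E}_{\xi \sim P}\bigl[ f(x, \xi) \bigr],
\]

where the ambiguity set $\mathcal{P}$ collects the distributions consistent with the available information.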

From Optimization to Control: Quasi Policy Iteration

Recent control algorithms for Markov decision processes (MDPs) have been designed using an implicit analogy with well-established optimization algorithms. In this paper, we adopt the quasi-Newton method (QNM) from convex optimization to introduce a novel control algorithm coined quasi-policy iteration (QPI). In particular, QPI is based on a novel approximation of the “Hessian” matrix …
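The analogy can be made explicit through the well-known identity that policy iteration is Newton's method applied to the Bellman residual $F(V) = TV - V$: with $\pi_k$ greedy with respect to $V_k$ and discount factor $\gamma$, the Newton step reads

\[
V_{k+1} \;=\; V_k + \bigl(I - \gamma P_{\pi_k}\bigr)^{-1}\bigl(T V_k - V_k\bigr),
\]

so a quasi-Newton variant would replace $(I - \gamma P_{\pi_k})^{-1}$ by a cheaper approximation (our schematic reading; the paper's construction of the approximate “Hessian” is its own).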