Bregman Regularized Proximal Point Methods for Computing Projected Solutions of Quasi-equilibrium Problems

In this paper, we propose two Bregman regularized proximal point methods that provide flexibility in computing projected solutions of quasi-equilibrium problems. Each iteration of either method requires one Bregman projection onto the feasible set and the solution of a regularized equilibrium problem. Under standard assumptions, we prove that the methods are well-defined and that the sequences they generate converge to a …
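To make the Bregman projection step concrete, here is a minimal sketch using the negative-entropy kernel on the probability simplex, for which the projection has a closed form. This is a generic illustration of Bregman projections and entropic proximal steps, not the paper's method; the function names and the linear test objective are assumptions for the example.

```python
import numpy as np

def kl_bregman_projection_simplex(y):
    """Bregman projection of a positive vector y onto the probability simplex
    under the negative-entropy kernel h(x) = sum_i x_i log x_i.
    For this kernel the projection reduces to simple normalization."""
    return y / y.sum()

def bregman_prox_step(grad_f, x, lam):
    """One entropic Bregman proximal step on the simplex:
    x_new = argmin_{u in simplex} lam * <grad_f(x), u> + D_h(u, x),
    which for the entropy kernel is a multiplicative update followed by
    a Bregman projection back onto the simplex."""
    z = x * np.exp(-lam * grad_f(x))
    return kl_bregman_projection_simplex(z)

# example (hypothetical): minimize the linear function <c, x> over the simplex
c = np.array([1.0, 0.5, 2.0])
x = np.ones(3) / 3
for _ in range(100):
    x = bregman_prox_step(lambda u: c, x, 0.5)
# the iterates concentrate on the coordinate with the smallest cost c_i
```

The entropy kernel is a standard choice because it keeps iterates strictly positive and turns the projection into a cheap normalization.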

Unifying restart accelerated gradient and proximal bundle methods

This paper presents a novel restarted version of Nesterov’s accelerated gradient method and establishes its optimal iteration-complexity for solving convex smooth composite optimization problems. The proposed restart accelerated gradient method is shown to be a specific instance of the accelerated inexact proximal point framework introduced in “An accelerated hybrid proximal extragradient method for convex optimization …
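A minimal sketch of the underlying idea of restarting an accelerated gradient method: run Nesterov's method for a fixed number of inner iterations, then reset the momentum sequence and restart from the current iterate. This uses a simple fixed restart schedule on a smooth (non-composite) quadratic, and is not the specific scheme analyzed in the paper; the function names are assumptions for the example.

```python
import numpy as np

def agd_restart(grad, x0, L, n_restarts=5, inner_iters=50):
    """Nesterov's accelerated gradient method with fixed-schedule restarts.

    grad : gradient oracle of a convex, L-smooth function
    L    : Lipschitz constant of the gradient (step size 1/L)
    After each inner run the momentum parameters are reset, which is the
    'restart' that the convergence analysis exploits.
    """
    x = x0.copy()
    for _ in range(n_restarts):
        y, x_prev, t = x.copy(), x.copy(), 1.0
        for _ in range(inner_iters):
            x_new = y - grad(y) / L                        # gradient step at extrapolated point
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x_prev)  # momentum extrapolation
            x_prev, t = x_new, t_new
        x = x_prev                                         # restart from the last iterate
    return x

# toy smooth convex problem: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2                              # Lipschitz constant of the gradient
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
x_hat = agd_restart(lambda x: A.T @ (A @ x - b), np.zeros(5), L)
```

Adaptive restart rules (e.g. restarting when the objective or the momentum direction degrades) are common in practice; the fixed schedule above is the simplest variant.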

Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster …
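To illustrate the implicit proximal step that drives the variance reduction, here is a sketch of a stochastic proximal point method for least squares, where each subproblem (one sampled squared residual plus a quadratic proximity term) has a closed-form solution. This is a generic instance of the technique, not the paper's solver or its probability-boosting construction; names and parameters are assumptions for the example.

```python
import numpy as np

def stochastic_proximal_point_lsq(A, b, lam=10.0, n_iters=2000, seed=1):
    """Stochastic proximal point method for f(x) = mean_i 0.5*(a_i^T x - b_i)^2.

    Each iteration samples one row i and solves the proximal subproblem exactly:
        x+ = argmin_x 0.5*(a_i^T x - b_i)^2 + (1/(2*lam)) * ||x - x_k||^2
           = x_k - lam * a_i * (a_i^T x_k - b_i) / (1 + lam * ||a_i||^2)
    The implicit step automatically damps large residuals, which is the
    variance-reducing effect of solving the subproblem rather than taking
    an explicit stochastic gradient step.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)
        x = x - (lam / (1.0 + lam * A[i] @ A[i])) * A[i] * (A[i] @ x - b[i])
    return x

# consistent toy system so the exact minimizer is known
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
x_hat = stochastic_proximal_point_lsq(A, A @ x_true)
```

Note that as `lam` grows the update approaches the randomized Kaczmarz projection step, while small `lam` recovers a damped stochastic gradient step.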