A line search based proximal stochastic gradient algorithm with dynamical variance reduction

Many optimization problems arising from machine learning applications can be cast as the minimization of the sum of two functions: the first typically represents the expected risk, which in practice is replaced by the empirical risk, while the second imposes a priori information on the solution. Since in general the first term … Read more
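For reference, the composite model alluded to above is the standard one in this literature (the formulation below is a gloss on the abstract, not text from it):

\[
\min_{x \in \mathbb{R}^d} \; F(x) := f(x) + g(x), \qquad f(x) = \frac{1}{N}\sum_{i=1}^{N} f_i(x),
\]

where $f$ is the empirical risk averaged over $N$ samples and $g$, possibly nonsmooth, encodes the a priori information (e.g., a regularizer). A proximal stochastic gradient method then iterates

\[
x^{(k+1)} = \operatorname{prox}_{\alpha_k g}\!\left(x^{(k)} - \alpha_k \nabla f_{i_k}\bigl(x^{(k)}\bigr)\right),
\]

with the index $i_k$ (or a mini-batch) drawn at random; variance-reduction schemes replace $\nabla f_{i_k}$ with a corrected gradient estimator whose variance shrinks as the iterates approach a solution.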

Spectral properties of Barzilai-Borwein rules in solving singly linearly constrained optimization problems subject to lower and upper bounds

In 1988, Barzilai and Borwein published a pioneering paper which opened the way to inexpensively accelerating first-order methods. In more detail, in the framework of unconstrained optimization, Barzilai and Borwein developed two strategies for selecting the steplength in gradient descent methods, with the aim of encoding some second-order information about the problem without computing and/or … Read more
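For the reader's convenience, the two rules in question are standard and read

\[
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}}, \qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\]

where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$. They arise from least-squares secant conditions: BB1 minimizes $\|\alpha^{-1} s_{k-1} - y_{k-1}\|$, so that $\alpha^{-1} I$ mimics the Hessian, while BB2 minimizes $\|s_{k-1} - \alpha\, y_{k-1}\|$, so that $\alpha I$ mimics the inverse Hessian.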

Steplength selection in gradient projection methods for box-constrained quadratic programs

The role of steplength selection strategies in gradient methods has been widely investigated over the last decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed, which have contributed to making gradient approaches an effective tool for the large-scale optimization problems arising in important real-world applications. Most … Read more
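To make the setting concrete, here is a minimal sketch (an illustration of the generic scheme, not the specific method studied in the paper; the function name, safeguards, and stopping rule are assumptions) of a gradient projection iteration with a BB1 steplength for the box-constrained QP min ½xᵀAx − bᵀx subject to lo ≤ x ≤ hi:

```python
import numpy as np

def box_qp_gradient_projection(A, b, lo, hi, x0, max_iter=500, tol=1e-8):
    """Projected gradient for min 0.5*x'Ax - b'x  s.t. lo <= x <= hi,
    with a safeguarded BB1 steplength. Illustrative sketch only."""
    x = np.clip(x0, lo, hi)
    g = A @ x - b                    # gradient of the quadratic objective
    alpha = 1.0                      # initial steplength
    for _ in range(max_iter):
        # gradient step followed by projection onto the box
        x_new = np.clip(x - alpha * g, lo, hi)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g  # differences used by the BB rules
        sy = s @ y
        # BB1 steplength, clamped to a positive safeguard interval
        alpha = (s @ s) / sy if sy > 0 else 1.0
        alpha = min(max(alpha, 1e-10), 1e10)
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    print(box_qp_gradient_projection(A, b, lo=0.0, hi=1.0, x0=np.zeros(2)))
```

Note that this bare iteration takes the BB steplength as is; practical solvers typically add a (possibly nonmonotone) line search for globalization, which is exactly where the choice of steplength rule interacts with the overall efficiency of the method.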

On the steplength selection in gradient methods for unconstrained optimization

The seminal paper by Barzilai and Borwein [IMA J. Numer. Anal. 8 (1988)] has given rise to an extensive investigation aimed at developing effective gradient methods able to deal with large-scale optimization problems. Several steplength rules were first designed for unconstrained quadratic problems and then extended to general nonlinear problems; these rules share the … Read more
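To see what kind of second-order information is at stake, consider the quadratic case $f(x) = \frac{1}{2} x^{\top} A x - b^{\top} x$ with $A$ symmetric positive definite: then $y_{k-1} = A s_{k-1}$, so the inverse of the BB1 steplength is a Rayleigh quotient of the Hessian,

\[
\frac{1}{\alpha_k^{\mathrm{BB1}}} = \frac{s_{k-1}^{\top} A\, s_{k-1}}{s_{k-1}^{\top} s_{k-1}} \in \bigl[\lambda_{\min}(A), \lambda_{\max}(A)\bigr],
\]

i.e., each BB steplength is the reciprocal of an approximate eigenvalue of $A$. This well-known observation (recalled here as context, not taken from the truncated abstract) underlies the spectral analyses that such steplength rules have in common.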