Variable metric proximal stochastic gradient methods with additional sampling

Regularized empirical risk minimization problems arise in a variety of applications, including machine learning, signal processing, and image processing. Proximal stochastic gradient algorithms are a standard approach for solving these problems, owing to their low per-iteration computational cost and relatively simple implementation. This paper introduces a class of proximal stochastic gradient methods built …
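As a rough illustration of the basic scheme such methods build on (the paper's variable-metric and additional-sampling ingredients are not shown here), the following is a minimal sketch of a plain proximal stochastic gradient iteration for an l1-regularized least-squares problem. All names, constants, and the synthetic data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(A, b, lam, alpha=1e-2, batch=32, iters=2000, seed=0):
    """Proximal SGD sketch for min_x (1/2n)||Ax - b||^2 + lam * ||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        # mini-batch gradient of the smooth (empirical risk) term
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch
        # forward (gradient) step followed by the proximal step on the regularizer
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

# Synthetic usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(500)
x_hat = prox_sgd(A, b, lam=0.1)
```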

On the first order optimization methods in Deep Image Prior

Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness largely depends on the size of the dataset used for training. Deep Image Prior (DIP) is an energy-based framework that eliminates the dependency on a training set by treating the structure of a neural network as a handcrafted prior …
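For readers unfamiliar with DIP, here is a minimal PyTorch sketch of the idea described above: a randomly initialized network is fitted to a single degraded image, with the architecture itself acting as the prior. The tiny network, image size, learning rate, and iteration count are placeholder assumptions, not the configurations studied in the paper.

```python
import torch
import torch.nn as nn

# Tiny CNN whose architecture plays the role of the handcrafted prior
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

y = torch.rand(1, 1, 64, 64)   # noisy/degraded observation (placeholder data)
z = torch.randn(1, 8, 64, 64)  # fixed random input, kept constant throughout

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = ((net(z) - y) ** 2).mean()  # data-fidelity energy E(f_theta(z), y)
    loss.backward()
    opt.step()

# early stopping is what prevents the network from eventually fitting the noise
restored = net(z).detach()
```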

A line search based proximal stochastic gradient algorithm with dynamical variance reduction

Many optimization problems arising in machine learning applications can be cast as the minimization of the sum of two functions: the first typically represents the expected risk, which in practice is replaced by the empirical risk, while the second imposes a priori information on the solution. Since in general the first term …
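Below is a generic sketch of a proximal stochastic gradient step equipped with an Armijo-type backtracking line search on the sampled composite objective. It illustrates the general mechanism only, not the specific line-search and dynamical variance-reduction rules proposed in the paper; all names and constants are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ls_prox_sgd(A, b, lam, alpha0=1.0, eta=0.5, c=1e-4, batch=32, iters=500, seed=0):
    """Proximal stochastic gradient with backtracking on the sampled objective."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        Ai, bi = A[idx], b[idx]
        f = lambda u: 0.5 * np.mean((Ai @ u - bi) ** 2)  # sampled smooth term
        grad = Ai.T @ (Ai @ x - bi) / batch
        alpha = alpha0
        while alpha > 1e-10:
            x_trial = soft_threshold(x - alpha * grad, alpha * lam)
            step = x_trial - x
            # predicted composite decrease along the proximal-gradient direction
            delta = grad @ step + lam * (np.abs(x_trial).sum() - np.abs(x).sum())
            if (f(x_trial) + lam * np.abs(x_trial).sum()
                    <= f(x) + lam * np.abs(x).sum() + c * delta):
                break  # sufficient decrease achieved on the mini-batch objective
            alpha *= eta  # otherwise shrink the steplength and retry
        x = x_trial
    return x
```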

Spectral properties of Barzilai-Borwein rules in solving singly linearly constrained optimization problems subject to lower and upper bounds

In 1988, Barzilai and Borwein published a pioneering paper that opened the way to inexpensively accelerating first-order methods. More precisely, in the framework of unconstrained optimization, Barzilai and Borwein developed two strategies for selecting the steplength in gradient descent methods, with the aim of encoding some second-order information about the problem without computing and/or …
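The two Barzilai-Borwein rules are alpha_BB1 = s's / s'y and alpha_BB2 = s'y / y'y, where s and y are the differences between consecutive iterates and consecutive gradients. A minimal unconstrained sketch alternating the two rules is given below; the constrained, bound-aware setting analyzed in the paper is not reproduced here, and the safeguard and test problem are assumptions.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=200, alpha0=1e-4):
    """Gradient descent with alternating Barzilai-Borwein (BB1/BB2) steplengths."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for k in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 0:  # safeguard: the BB rules are well defined only when s'y > 0
            alpha = (s @ s) / sy if k % 2 == 0 else sy / (y @ y)
        x, g = x_new, g_new
    return x

# Example: strongly convex quadratic 0.5 x'Qx - c'x, whose gradient is Qx - c
Q = np.diag(np.linspace(1.0, 100.0, 50))
c = np.ones(50)
x_star = bb_gradient_descent(lambda x: Q @ x - c, np.zeros(50))
```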